Wednesday, June 04, 2008
Testing a new distro...
Eventually, I had to give up on the Freespire experiment on this laptop (Acer 5315). It's too easy to brick Freespire, leaving me in a position from which the only escape is re-installation.
For example: when I connected a second screen to the machine, Freespire detected it straight away. I couldn't get the second screen to behave like a second desktop, but something's better than nothing, right?
Except that in auto-configuring the second screen, something (no, I don't know what or why) happened, and thereafter the machine declined to boot unless the second screen was connected.
Facing the nth reinstall, I decided that Freespire had had its day, and am now an OpenSuSE user.
But getting things bedded down and stable was non-trivial. OpenSuSE is just as breakable as Freespire was if you're trying to install anything unusual.
The great paradox of Linux is this combination of stability and fragility. Once you get things set up and working, the golden rule is "touch nothing", because if you do anything rash, you'll get into trouble.
On OpenSuSE, what got me into trouble was trying to get the trackpad to behave itself. The 5315 has a hyper-sensitive trackpad and a badly-behaved cursor, and the only way I can make the machine usable is to attach an external mouse and switch the trackpad off.
To fix this, I tried to get the Synaptics driver working; this broke the Xorg configuration, which meant the machine would only boot to the command line. Thankfully, I was able to recover the original configuration without a complete re-install.
I have, however, found to my satisfaction that OpenSuSE has a much better understanding of the wireless configuration. Ndiswrapper with the right driver worked completely without a hitch.
Now I just have to win the battle with the machine's sound card, learn how to run two screens properly, and I'll have an almost functional machine.
Tuesday, May 06, 2008
About the ITIF's Broadband Rankings
It's nice to see a broadband measure that gives Australia a reasonable score. It's a pity there are so many flaws in the data's assumptions. I'm referring to the ITIF study published here.
Since last year, when I looked at the then-inadequate OECD data with Market Clarity's Shara Evans (my employer at the time), I have approached this debate with some trepidation. (The OECD data has improved immeasurably since, but it still rests on some unfortunate measures and assumptions that I'll get to in a minute.) Broadband measures are a religious thing: they seem to exist primarily so that policy-makers and advocates can verify the opinions they already hold, rather than to inform the debate and spark new ideas. Saying “the broadband league tables don't mean that much” is a good way to get screamed at from every direction.
Hence any measure that says “(insert your country name here) is (leading the world) / (trailing the world)” will be accepted uncritically by whichever side agrees with the figures, and rejected by the other side. The figures themselves are irrelevant to the debate.
Well, here's what the ITIF has found: Australia is placed 12th in the OECD according to a measure of penetration, average download speed, and price per megabit per second.
Only one of those measures – penetration – is valid as a measure. As for the other two:
Average download speed – measured, according to ITIF, by “averaging the speeds of the incumbent DSL, cable and fiber offerings [sourced from the OECD's April 2006 figures, so 12 months out of date – RC] with each assigned a weight according to that technology's relative percentage of the nation's overall broadband subscribership”.
The ITIF admits that “average plan speeds” is a flawed measure, and tries to compensate by weighting average plan speeds by each technology's share of subscribers. But that's silly: the core measure (average plan speed) is meaningless, and in any case all OECD nations publish statistics placing subscribers in speed bands (similar to the ABS's ongoing Internet survey in this country). Why not use a direct measure rather than massaging an indirect one?
Lowest price per Mbps – This measure is even worse than the last, because the measure gives the best score to the worst service. The provider that most oversubscribes its infrastructure will yield the lowest price per Mbps, but at the cost of the real throughput delivered to the customer. A better measure would be to analyse price on a per-Gigabyte, per-month basis, and the only reason I haven't done such an analysis on an international basis is that nobody's paying me to do so.
Another point: international price comparison should be based not on a “raw” translation to US dollars, but on a proportion of average income. That way, if an Australian Gigabyte per month costs X% of income and a South Korean Gigabyte costs Y%, we can confidently identify which consumer is paying more.
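To make the comparison concrete, here's a minimal sketch of the kind of calculation I have in mind. The prices and incomes below are invented placeholders, not real market data; the point is the method, not the numbers.

```python
# Hypothetical comparison of broadband pricing as a share of average income.
# All figures below are placeholders for illustration only.

markets = {
    # country: (price per gigabyte per month, average monthly income), both in local currency
    "Australia":   (2.50, 4800.0),
    "South Korea": (1200.0, 2600000.0),
}

for country, (price_per_gb, monthly_income) in markets.items():
    share = price_per_gb / monthly_income * 100
    print(f"{country}: a gigabyte per month costs {share:.3f}% of average monthly income")
```

Whatever the numbers turn out to be in practice, the output is a like-for-like comparison that a raw translation to US dollars can't give you.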
Still, these flaws could arguably be forgiven if the authors could show that they don't affect the relative outcomes between countries.
What's interesting is the overall conclusion from ITIF that “non-policy factors account for roughly three-fourths of a nation's broadband performance”.
In other words – as I have argued for years – government policy is not what broadband is all about. Broadband is a consumer product, just like 4WD vehicles, plasma TVs and the rest, and beyond the single question “is it available?” there's not much the government can do.
And why, anyway, is there some magic about broadband which makes it deserving of government funds, but not an LCD TV or a new fridge?
So what are the factors, according to ITIF, which most influence a country's broadband scores?
Urbanicity, age, and Internet usage.
The last is a no-brainer: people don't buy broadband if they're not connecting to the Internet at all. The age of a population can be discussed at length, so I'll leave that for another time.
But urbanicity caught my eye, because (again with Market Clarity) I ran this analysis in 2006. At that time, we concluded that urbanicity was a huge factor in broadband adoption – simply because concentrations of people attract telecommunications infrastructure investment. It's nice to see ITIF agreeing with that analysis.
Thursday, May 01, 2008
Digital Dividend or Free Spectrum?
The question arises because that silly expression is so ineradicable in the political lexicon. A "digital dividend" is out there somewhere, we just have to (as Senator Conroy put it) "put in the hard work" and we'll reap the rewards.
The commercial data that led to America's recent spectrum auctions raising $19 billion aren't on the public record. We don't know how many people Verizon and AT&T consider to be the addressable population of their spectrum. But the price tag provides a hint: the premium paid for the spectrum tells us that the spectrum will be used to deploy broad-based services. The "Internet socialist" ideal, that TV spectrum could deliver broadband to those without fast wired services, won't come true in America.
If we assume that the spectrum is destined for broad-based services, then population is a good way to look at the value of spectrum from the outside. The auctions raised $19 billion; America has just over 300 million people; so the per-person value of the spectrum is about $63.
And if that figure is applicable to Australia, the digital dividend would be just over $1.2 billion - or three years' interest on the Communications Fund.
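For what it's worth, the arithmetic is simple enough to sketch. The population figures below are round numbers of the sort used above, not official statistics; a round 20 million for Australia reproduces the figure in the text.

```python
# Back-of-the-envelope: scale the US auction proceeds by population.
us_auction_proceeds = 19e9   # USD raised in the recent US spectrum auctions
us_population = 300e6        # "just over 300 million people"
au_population = 20e6         # round assumption for Australia

value_per_person = us_auction_proceeds / us_population
au_dividend = value_per_person * au_population

print(f"Per-person value of the spectrum: ${value_per_person:.0f}")              # about $63
print(f"Implied Australian digital dividend: ${au_dividend / 1e9:.2f} billion")  # about $1.27 billion
```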
While $1.2 billion is a lot of money, it's not much in terms of the total economy, which is close to $650 billion - and there's no guarantee that investors in Australia would put the same value on new spectrum anyhow.
There was, however, a very interesting nugget in Senator Conroy's speech to the ACMA RadComms conference last week: the Ofcom estimate that radio spectrum contributed £42 billion to the UK economy. It's not actually news (the report was published in 2006), but still interesting.
Instead of focussing on the price tag, though, the employment impact is worth noting: Ofcom claimed that spectrum use contributes to 240,000 jobs in Britain, or 0.7% of the workforce. That would be about 70,000 jobs in Australia.
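The jobs figure scales the same way. The Australian workforce number below is a round assumption, included only to show how an estimate of roughly 70,000 falls out of Ofcom's 0.7% figure.

```python
# Applying Ofcom's 0.7%-of-the-workforce figure to a round Australian workforce number.
uk_workforce_share = 0.007    # spectrum use contributes to 0.7% of UK jobs, per Ofcom
au_workforce = 10_000_000     # round assumption for Australia's workforce

print(f"Implied Australian jobs: {au_workforce * uk_workforce_share:,.0f}")  # about 70,000
```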
The way to maximise jobs growth based on spectrum would be to make spectrum applications irresistibly attractive.
If you accept that the likely "digital dividend" is going to be small (and $1.2 billion only sounds big), perhaps the idea that the spectrum should be opened up for free (or close to it) isn't so silly. Those who entered the fray would be able to focus on infrastructure and services rather than having to design the business plan around recovering money over-spent at auction.
Look again at the US experience. What are Verizon and AT&T going to do with their expensive spectrum? Verizon is talking 4G cellular services, while AT&T is talking about a network that can support 3G iPhones.
Somehow, it's hard to wax lyrical about spectrum bought to support a network-locked toy phone (with no apology to Apple fanboys) in a new cellular network. Whatever is claimed for new mobile networks, they remain focussed on the urban market, because that's where the people are.
The widespread belief that the digital dividend will bring new rural and remote services is a delusion. Unless political thinking changes - and unless somebody pops the wishful expectation of a huge payoff to general revenue - the digital dividend will end up with the same urban focus as is clearly emerging in the US.
Monday, April 28, 2008
Farewell CDMA...
As the CDMA switch-off approaches, I can say one thing that most of the Sydney-centric IT press can't: I have seen first-hand that there are places where CDMA works and NextG doesn't.
Last week, I took a trip down to Bendigo to see some relatives, some of whom live to the south somewhere near a town called Harcourt.
Mostly, phone coverage is a pretty dull topic, hardly in the “barbecue-stopper” category, but head out to the world of patchy coverage and people get quite heated on the subject. So it was that I found myself looking at a CDMA phone and a NextG phone side by side. Guess which one couldn't get a signal?
Telstra confidently asserts that NextG has more coverage than CDMA. This may well be true: what has irritated the country users is that it's quite possible to cover more area with the new technology while still leaving out parts of the country.
Telstra is big on telling us that NextG covers two million square kilometres – which is still only a bit over a quarter of the landmass. Most of the uncovered area is pretty empty of people, but to a degree this is sleight of hand. Country users aren't asking Telstra whether NextG is “bigger” than CDMA: they're asking whether it replicates the service they currently receive.
The answer is “probably, nearly”, but there are still gaps out there.
Monday, April 14, 2008
How to Build an Insecure Environment
Today's manifestation of "because we can" comes from Cisco, which is introducing application server capability to routers. The idea is that since all your traffic's going to pass through a router, why not put the application right there in the router, so that the application is where the traffic is?
It's a very simple idea, and very obvious. So simple and obvious that it has, in fact, been tried before. Back a decade or more, I recall looking at enterprise network equipment with modules designed to host Novell NetWare servers.
It didn't take off then, and I hope it doesn't take off now.
In case you hadn't noticed, Cisco's name makes regular appearances in CERT advisories. Now, this isn't to say "Cisco's been slack", but rather to point out that its equipment, like all the equipment on the Internet, will be subject to vulnerabilities.
Only fools put all their eggs in one basket.
Where I will criticise Cisco is that in this case, the marketing pitch is at odds with the customers' interests. The customer's interest is to have a properly segregated environment; Cisco's interest is to be in command of as much of the customer's infrastructure as possible.
In previous iterations, "servers in network infrastructure" failed not because of security concerns - we all lived in a more innocent world 15 years ago. Rather, customers were concerned about tying their applications to a single-vendor execution platform, and expected that server vendors would run a faster upgrade cycle than network platform vendors (for whom servers were a sideline).
This time around, security should be the deal-breaker - if the customers are paying attention.
Thursday, April 10, 2008
Telemetry and 3G
It's an interesting thought, although not quite bleeding-edge. After all, telemetry over cellular networks has been with us for a while - even if it's hard to find in carriers' financial reports.
The reason it's hard to locate in the financials is, of course, because the telemetry market is so small that it doesn't warrant a break-out line item of its own.
There's a good reason for this: telemetry is a misfit with the way carriers like to market their mobile networks. The best mobile subscriber is the one that generates lots of billable events (calls, SMSs, MMSs, browsing and so on).
Why do you think prepaid dollars have a use-by date? The last thing any carrier wants is to have someone buy $50 of prepaid minutes and leave them sitting there for a year. Among other things, the expiry of pre-paid minutes is designed to encourage people to make calls. Bundled free calls with post-paid mobiles share the same aim (in social engineering terms, anyhow): if you habituate the user to make lots of calls, there's a good chance they'll keep happily racking up billable calls after their free calls are consumed.
The telemetry SIM might stay quiet for days - which is why, for all the modest hoopla Telstra generated with its "telemetry over 3G" story, this kind of application remains "white noise" in carrier financials.
That's quite reasonable. Take a weather station, for example: it's sufficient for the device to collect its data and return it to base by making a few extremely brief calls each day. There may only be a hundred kilobytes to transfer with each call, after all (temperature samples, humidity samples, rainfall totals).
It's the same with many telemetry applications. Even with continuous data collection, you don't need continuous data transfer, unless there's an exception. If you're monitoring remote equipment, you don't need "up to the second" updates that simply say "two o'clock and all's well".
What you want is something which says "here's the data" at a specified period, but which can call for help if something goes wrong. And that's what most cellular telemetry does today - which means that compared to a human user, telemetry is very frugal in its use of the communications channel.
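As a sketch of the pattern I mean (scheduled brief reports, plus an exception path that calls for help), and nothing more than a sketch: the sensor, the threshold and the reporting interval below are all invented for illustration.

```python
import random
import time

REPORT_INTERVAL_S = 6 * 60 * 60   # a few brief calls per day (invented interval)
ALARM_THRESHOLD = 45.0            # invented exception threshold (say, a temperature)

def send_to_base(payload):
    # Stand-in for the brief cellular call back to base.
    print("brief call to base:", str(payload)[:60])

buffer = []                       # samples collected between reports
last_report = time.monotonic()

while True:
    sample = random.uniform(10.0, 50.0)   # stand-in for a real sensor reading
    buffer.append(sample)

    if sample > ALARM_THRESHOLD:
        # Exception path: call for help straight away rather than waiting.
        send_to_base({"alarm": sample, "recent": buffer[-10:]})

    if time.monotonic() - last_report >= REPORT_INTERVAL_S:
        # Routine path: one brief "here's the data" call per interval.
        send_to_base(buffer)
        buffer = []
        last_report = time.monotonic()

    time.sleep(60)                # sample once a minute; collection is continuous, transfer isn't
```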
What Telstra wants is to create demand for telemetry applications that are more talkative: hence the focus on video streaming and live remote control in its telemetry press release. The ideal is to encourage applications that use the data channel as much as possible.
Another small point. Telstra made much of the usefulness of telemetry in remote locations. This is perfectly true: if you're monitoring a device that's 50 km from the homestead, you want to avoid travelling to the device wherever possible.
But who is Telstra kidding? For all of its claims that the mobile network covers 98% of Australia's population, the geographical coverage is a different matter.
So here's your quiz starter for ten points: is remote equipment likely to be close to the population centre?
As an idea, remote telemetry for the rural sector is wonderful - but only if the device is in the mobile footprint.
Tuesday, April 08, 2008
Who will save us from bad software?
What's on my mind here is the more general issue of software quality. More precisely, what's going to be the impact of crap software on the world at large?
My washing machine is a case in point. Unlike the old world of washing machines, where a complex and breakable machine (a timer / controller) ran the wash cycles, the modern machine is computer-controlled. That means its functions are controlled by software.
And that means the behaviour of the machine is at the mercy of the brain that conceived the software. In this particular case, whenever something like an unbalanced load upsets the washer, the software designer has decided that instead of going into a "hold" state and simply resuming the cycle once you re-balance it (as the old electro-mechanical timer would have done), the machine should revert to the beginning of the cycle in which it unbalanced. If it unbalances at the end of a rinse cycle, it returns to the beginning of the rinse cycle - unless I or my wife runs over to the machine and resets its timer to a more appropriate spot.
But wait, it gets worse: its rinse cycle is actually multiple rinses. So if it unbalances when emptying from "first rinse" to begin "second rinse", it often decides to reset itself - it returns to the beginning of "first rinse" automatically. It's quite capable of spending hours getting half-way through its rinse cycle and returning to the start (which means you can't leave it home alone).
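To put the complaint in code terms, here's a rough sketch contrasting the behaviour I'd expect (hold, then resume) with the behaviour this machine exhibits (restart the sub-cycle). The cycle names and the unbalance events are stand-ins, not the machine's real firmware.

```python
CYCLE = ["wash", "first rinse", "second rinse", "spin"]

def run_hold_and_resume(events):
    """What the old electro-mechanical timer effectively did: pause, then carry on."""
    step = 0
    while step < len(CYCLE):
        if events.get(step) == "unbalanced":
            print(f"{CYCLE[step]}: unbalanced - holding until re-balanced, then resuming")
            events.pop(step)                    # pretend someone re-balanced the load
            continue                            # resume the same step, not the whole cycle
        print(f"{CYCLE[step]}: done")
        step += 1

def run_restart_sub_cycle(events):
    """What this machine's software does: go back to the start of the rinse block."""
    step = 0
    while step < len(CYCLE):
        if events.get(step) == "unbalanced":
            print(f"{CYCLE[step]}: unbalanced - restarting from 'first rinse'")
            events.pop(step)
            step = CYCLE.index("first rinse")   # hours of re-rinsing if this keeps happening
            continue
        print(f"{CYCLE[step]}: done")
        step += 1

run_hold_and_resume({2: "unbalanced"})          # unbalance during the second rinse
run_restart_sub_cycle({2: "unbalanced"})
```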
The result of bad software in the washing machine is a machine that is a relentless waster of water and electricity, and which needs constant supervision - neither of which would or should be the intention of the designer.
Since software increasingly underpins everyday life, software quality is important.
Last week, I was at a press lunch hosted by Websense. That company is putting a lot of effort into new technologies designed to protect companies against the damage caused by botnets, trojans, drive-by Websites planting malicious code in browsers, and so on.
All of this is laudable and important - but like the water-wasting washer, the dangerous Web application has its roots in bad software: insecure operating systems, very poorly coded Web applications built in a rush and hyped to the max, and software that's simply a dangerous idea in the first place (why, oh why, do people think it's a good idea to host your desktop search and office applications out in Google?).
Websense was quite happy to agree that bad software is the starting point of bad security. Happily for that company, there's no immediate prospect of things improving...
The Old “Peak Speeds” Game Again
Part of the ongoing muddying of the waters in Australia's WiMAX-versus-mobile debate comes from confusion about the intended applications of the technologies (for example, the claim that fixed WiMAX is older than mobile WiMAX and is therefore obsolete).
But the misuse of speed specifications also plays a role here. Hence we're told by breathless journalists that the next generation of LTE-based mobile cellular technologies will deliver speeds of 40 Mbps.
So if anyone's listening – it has to happen eventually – here's a three-second primer.
1) Peak speed is meaningless
The peak speed game is played over and over again: years ago, someone discovered that 802.11g (peak speed 54 Mbps) didn't deliver those speeds to end users. But the press still gets very excited at peak speeds. Whatever the 40 Mbps represents in LTE, it doesn't represent the user data download speed from a base station to a device.
2) Users share channels
This is not unique to cellular environments. In Ethernet, users eventually share the total available bandwidth (even if it's the backplane of the switch). In WiFi, users share the base station bandwidth. In DSL, users share the capacity of the DSLAM. In LTE, as in all cellular environments, users will share the data capacity of the cell. I'm still doing the research to put a number to this, but I can state categorically that a user will only get the base station's top speed if that user is the only one running data through the base station at any given moment.
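A back-of-the-envelope way to put a number on the sharing; the cell capacity and user counts below are arbitrary illustrations, not measured LTE figures.

```python
# Shared channel: each active user gets, at best, a fraction of the cell's capacity.
cell_capacity_mbps = 40.0    # the headline figure, taken at face value for argument's sake
for active_users in (1, 5, 20):
    per_user = cell_capacity_mbps / active_users   # ideal, equal-share case
    print(f"{active_users:2d} active users -> at most {per_user:.1f} Mbps each")
```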
3) Remember Shannon's Law
The 40 Mbps now carelessly cited all over the place for LTE is a best-case test result. Any white paper from the industry makes it clear that in LTE, as in every communications technology, Shannon's Law still holds true. In the case of cellular data, the practical upshot is that as you move away from the base station, your speeds drop.
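For reference, Shannon's Law puts a hard ceiling on capacity: C = B·log2(1 + S/N). The bandwidth and signal-to-noise figures below are arbitrary, chosen only to illustrate how the ceiling falls as the signal degrades with distance; they aren't LTE measurements.

```python
import math

def shannon_capacity_mbps(bandwidth_mhz, snr_db):
    """Channel capacity C = B * log2(1 + S/N), with SNR converted from dB."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_mhz * math.log2(1 + snr_linear)   # MHz in, Mbps out

# Arbitrary figures: a 10 MHz channel, with SNR falling as you move away from the cell.
for snr_db in (30, 20, 10, 0):
    print(f"SNR {snr_db:>2} dB -> ceiling of {shannon_capacity_mbps(10, snr_db):.1f} Mbps")
```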
4) Carriers provision services
The service speeds carriers eventually offer over LTE won't just depend on what the cells can do. They'll be constructed according to the different plans the carrier wants to offer. I'll bet a dollar, here and now, that carriers won't be rushing to sell users $29 mobile broadband plans with 40 Mbps download speeds.
So the next time someone says "the next generation of cellular wireless will give us download speeds of 40 Mbps", the correct answer is "bulldust".
Monday, April 07, 2008
Could the New Zealand Institute's Fibre Co Work?
The problem is that when they're bent on a privatisation, governments are happy to take the biggest cheque they can get – something like structural reform of an industry might reduce the sale price, so it gets left on the shelf. Afterwards is too late: any government that halved the value of a private company would be in line for lawsuits. So it's too late to easily restructure the Telstra monolith.
New Zealand is grappling with a similar problem, which is why the New Zealand Institute has suggested an independent monopoly (the so-called Fibre Co) be put in charge of the last mile. That way, the NZI argues, the upgrade of NZ's customer access network can proceed without tensions between the incumbent and its competitors getting in the way.
A long time ago now – back when Australian Communications was in publication, and therefore in the late 1990s – I advocated a very similar structure for Australia, with the wholesaler existing as a monopoly whose shares were owned by the carriers using its infrastructure.
Since I'm an advocate of such a model – well, since I was prepared to consider such a model as viable – I'm going instead to ask the opposite question: what's wrong with the idea of a wholesale infrastructure monopoly?
1) Bureaucracy
The biggest problem is that there are more than 150 carriers in Australia. Some of them might not need or want membership of a Fibre Co, but even 50 companies might prove unwieldy, with different agendas, commercial imperatives, and geographies to be serviced.
Without offending people on the other side of the Tasman, New Zealand has a much smaller market. There are fewer carriers to participate, and there's less geography to cover; so a Fibre Co might well be manageable.
2) Clout
A simple “shareholding by size” model has this against it: the Fibre Co would be at risk of being under the control of its largest shareholder (which is also its largest customer). So there's a legitimate question about the degree to which one carrier's interests should dominate the process, versus the degree to which minor carriers should influence decisions reaching far beyond their own sphere.
3) Flexibility
Some commentators in New Zealand have criticised the Fibre Co model as a “one size fits all” proposal. It may be so – I haven't yet read the entire New Zealand Institute report – but that doesn't mean any implementation of Fibre Co has to be inflexibly chartered to deliver “this solution and only this solution”.
For some reason, people treat consumer telecommunications as a different creature to business telecommunications. Any sensible business starts building its network by defining its requirements: work out the applications you need to support, and build the network to support those applications. But in the consumer world, people insist on defining the technology first, without reference to requirements.
If a government were to charter the establishment of a Fibre Co, it should do so without reference to a particular technology. Rather, it should mandate a simple charter of requirements, and leave the infrastructure owner to meet those requirements with whatever technology best fits the circumstances.
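To make that concrete, here's a rough sketch of what a technology-neutral charter might look like as data: requirements only, with no technology named. The field names and thresholds are mine, invented for illustration, and aren't anything proposed in the NZI report.

```python
from dataclasses import dataclass

@dataclass
class AccessRequirement:
    """One line of a hypothetical, technology-neutral charter for a wholesale access network."""
    area: str                    # geography the requirement applies to
    min_downstream_mbps: float   # minimum sustained downstream per premises
    min_upstream_mbps: float     # minimum sustained upstream per premises
    availability_pct: float      # service availability target
    open_access: bool = True     # wholesaled to all retail carriers on equal terms

charter = [
    AccessRequirement("metro",    min_downstream_mbps=20.0, min_upstream_mbps=5.0, availability_pct=99.9),
    AccessRequirement("regional", min_downstream_mbps=10.0, min_upstream_mbps=2.0, availability_pct=99.5),
]

# Nothing above says "fibre", "DSL" or "wireless": the infrastructure owner picks
# whatever technology meets the numbers in each area.
for requirement in charter:
    print(requirement)
```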
But there is a second “flexibility” issue – one that goes with the structure of a Fibre Co. If the structure is too difficult to manage, bureaucratic, or too dominated by narrow interests, then inflexibility may be the result.
Can it be done?
The answer is “yes”. A Fibre Co could be made to work, and we can even see an Australian example of a co-operatively managed carrier, one with dominant participants and minor participants, but with a mandate clear enough that it stands as one of the industry's success stories.
It's called AARNET.
Saturday, April 05, 2008
Where on Earth is my Microphone?
Philip K Dick once described psychosis as being typified by an inability to see the easy way out. Dammit, I can see the easy way out, but I've already put too much of my stuff into the Linux box.
During one of my sessions trawling around to get the USB headset working, I managed to do something (heaven knows what) to stop Freespire booting: it hung while loading the desktop. Eventually, having exhausted every other possibility, I connected the external backup hard drive, learned how to mount it at the command line, copied most of my data (forgetting the mailbox, dammit) to the drive, and re-installed.
At this point, I gave up on the USB headset and reverted to an ordinary headset. This, at least, functions. Next, I tell myself, I need to get a softphone.
None of the softphones I've tried work with Freespire's sound system (ALSA) on the Acer 5315. All of them produce output, but none of them respond to the microphone. To save time, I'll quickly list them: KPhone, Gizmo, X-Lite and Twinkle. KPhone crashes when I try to make a call; Gizmo can't see the microphone; X-Lite barely launches; and Twinkle won't install properly (yes, I added the unbelievable host of libraries it needs).
So I decided to see whether any other application would see the microphone.
KRecord is happy with the microphone, as is Audacity. GNUSound sort of works, but it suffers the kind of catastrophic crashes that boosters tell you don't happen under Linux. So the problem has to be the interaction between the softphones and the sound system.
But it's not just Linux that gives pain.
Somewhere in the world, there's a designer of such unrestrained malice or stupidity that he or she has made the touchpad of the Acer 5315 almost completely unusable.
The design decision was this: "Wouldn't it be great if, instead of having to click, we made a self-clicking cursor? That way, you only need to hover the cursor over something, and it will act as if you clicked on it."
It's the worst idea I have encountered in a long time. While you're typing, the cursor regularly activates other parts of the screen, and the user has no input into this whatever. The only way to get a usable machine is to connect an external mouse and de-activate the touchpad completely.
Thursday, April 03, 2008
Opel Coverage: Why is WiMAX Obsolete?
The decision to unplug the Opel contract received plenty of media attention, as you might expect. And, as might also be expected, the mainstream media has managed some bloopers in its coverage of the affair. I thought I'd pick out a few highlights.
From Malcolm Maiden of Fairfax: “The technology Opel was relying on was also questionable, because it relied, in part, on outdated WiMAX fixed wireless microwave signal propagation...”
WiMAX outdated? Nonsense: there is legitimate debate about whether the technology is the best one for a particular application, but that has nothing to do with the alleged obsolescence of WiMAX. WiMAX is, for all its faults, a current and ongoing technology development.
I don't mean to take the stick to Fairfax, but its commentators seem to be hunting in a pack on the question of WiMAX obsolescence: Jesse Hogan of The Age also pursued this line: “The problem was that Coonan wanted to use an obsolete fixed (non-portable) version of WiMAX instead of the newer mobile version...”
Hogan simply doesn't fully understand the technology. Yes, mobile WiMAX is a newer standard than fixed WiMAX. But that doesn't mean the requirement for fixed wireless applications will disappear. Again, we have a confusion between two questions: “Does the world have a use for fixed wireless broadband data services?”, which anyone familiar with this industry would answer “Yes”; and “Is fixed WiMAX the best technology for fixed wireless broadband data services?” for which the answer is “Some people say yes, others say no”.
Elizabeth Knight, also of Fairfax, had the good sense to stay away from technology commentary, but couldn't resist repeating the government's line about network duplication: “The risk of overlap is real ... remember the Telstra and Optus overbuild of cable in metropolitan areas. It was a multibillion dollar commercial gaffe.”
I agree that it would be silly for the government to fund two networks, and I also agree that the way HFC was rolled out was a disaster. But neither of these makes a general rule about infrastructure duplication. Mobile networks are duplicated, yet they're a commercial success – and as we see in most telecommunication markets, infrastructure duplication (another word is “competition”) drives prices downwards, which is why it's cheaper to buy a data link from Sydney to Melbourne than from Brisbane to Cloncurry.
If “WiMAX obsolescence” is the pack-line from Fairfax, News Limited's favourite line is that Optus is a “lazy competitor”: this came from both John Durie and Terry McCrann. The Australian, however, has had the good sense to stay away from the WiMAX technology debate!
Tuesday, April 01, 2008
The Life of the Linux Newbie
OK: it's not a proper blog. For a start, there's no link-farming. Second, no plagiarism (bloggers often don't dig ideas like "fair use" and "attribution"; for them it's about hit rates, and whatever gets clicks is good). And third, I promise to try to offer moderately readable articles.
Mostly I'll be talking about telecommunications, sometimes computing, sometimes life in general.
But first, I want to talk about my life with Linux. Since I've just set up on my own, I had to find a low-cost way of doing what I wanted to do, and that meant finding a laptop that could support Grass-GIS, my favourite mapping and geographic analysis system. Having tried and failed to get it running on Vista, and having run it on Windows only at worryingly low speed, I decided to give Linux another spin (my third attempt since about 2000).
I fear my lessons can't be taught. You have to feel the pain yourself. Still...
Lesson 1: Research
All smug Linux experts, please leave the room now. Yes, I know you have to find out in advance whether your laptop will run with Linux. That's easier said than done, since models turn over faster than pancakes, and different models get different monikers in different countries, and even the claim that “this laptop will run Linux” can't always be taken at face value.
So I bought an Acer Aspire 5315: cheap and low-powered, but most of what I do doesn't need muscle, and I didn't buy this little box expecting a multimedia powerhouse.
The problem is that the Acer Aspire 5315 has an Atheros chipset for which Linux doesn't really have decent driver support yet, so I have no wireless networking. And its sound system doesn't seem to like Linux, so I have a silent machine. More on this in a minute...
Lesson 2: The Distribution
It might sound easy: choose your distribution carefully. But there are still far too many distros, and Linux distribution research is difficult. You may as well say “buy a dictionary and choose the word that suits your needs”. Choosing words comes naturally if you're a native of the language you're using; choosing a Linux distribution comes easily if you're intimate with Linux. For second-language people, in computers as in conversation, the right choice is not second nature (and the fact that every Linux distro believes itself to be the best is no help at all. Could we leave the marketing-speak over in the world of commercial software, please?).
After trying and failing to make a sensible choice of distribution, I decided to waste some CDs and take a more active approach. The end result is that I'm using Freespire, but there were some burns along the way.
My favourite distribution is actually Puppy Linux (http://www.puppylinux.org). The problem is that Puppy would not find any networking of any sort – not even the wired Ethernet would come alive. I wandered around somewhat, searched for drivers, downloaded various extra packages to try and install them, and achieved nothing. Perhaps I'll give it another swing some other day, but this time around I had to give up.
Next, I investigated Knoppix, and quickly gave up on that line of investigation. The problem with Puppy, I reasoned, was that nobody had caught up with the laptop's Ethernet hardware yet, and the latest build of Puppy was only a few weeks old. As far as I could tell, all the Knoppix distributions I could locate were more than a year old (this was just before the 5.8 release was posted, as I have just discovered). It seemed unlikely that older software would ship with drivers for newer hardware, so I moved on.
Next came Ubuntu. I decided to go for an install to the hard drive, intending to use the boot loader to twin-boot the box, but I was never offered a Windows boot again. The Windows directories and partitions were still in place (as I discovered later), but the machine simply skipped through to booting Ubuntu without asking my opinion on the matter. This may not have mattered, but for one thing: Ubuntu would not boot.
Not at all.
It just hung during the boot cycle.
Overnight is long enough to wait, so when I came out in the morning to a machine that still hadn't booted, I decided that Ubuntu wasn't what I wanted. So I downloaded a Freespire Live CD ISO to test that one – and somewhere around this time, I noticed the aforementioned no-Windows issue. This, I must say, brought me out in a bit of a sweat. I didn't relish the idea of reinstalling Vista from cold, but I was worried that whatever had happened to Ubuntu might also happen to other Linuxes.
Thankfully, Freespire did boot, and it booted with Ethernet if not with wireless, so it's been Freespire that I've stayed with.
If you're just a user – and yes, I am just a user – you shouldn't have to burn midnight oil just to get to the starting gate.
Lesson 3: The Application Question
Thankfully, Freespire will run Grass-GIS (given a certain amount of bloody-mindedness. Fortunately, I found some lying around in the shed...). But getting the installation happening was
very
very
painful.
You see, Grass-GIS is not offered on Freespire's preferred software installation site, CNR.com (which, by the way, needs more consistency: what's the point of offering software for installation, only to give the user the message “not available for your distribution”?).
I started by asking myself “what's the underlying distribution beneath Freespire?” and, since the answer seemed to be Debian, I tried installing the Debian port of Grass-GIS. This went very, very badly, because Grass-GIS needs lots of supporting libraries, some of which were difficult to locate “the Debian way”, and others of which seemed to dislike the versions of themselves installed by Freespire ...
So I broke the installation and had to start again from scratch. When things were working again on a clean install, I made a list of the libraries Grass-GIS needed, installed as many as possible from CNR, got others direct from their source rather than via the Debian repositories, and ended up breaking everything again.
Oh.
Next, I picked the computer up off the road and glued it back together – OK, that's exaggeration – I re-installed Freespire again, and decided that I didn't want the Debian port after all. Would the Ubuntu port work?
Yes, it would. Not only that, but once I'd let the computer know where to download libraries from, I managed to get the libraries installed in only a couple of steps, Grass-GIS in a couple more steps, and away we went.
Other things are less forgiving – by which I mean that instead of a painful process with success at the end, some software declines to run at all under Freespire, even when the brain-dead CNR.com installer thinks (a) the software is available and (b) the installation went just wonderfully well, thank you.
Things that don't work include Google Earth (which induces a complete and catastrophic crash – all the way back to the user login); the MySQL query browser (thankfully I can run MySQL from the command line – and before you ask, yes, I have the database sitting behind the geographic stuff); the Navicat database query application; the Seamonkey browser; and various HTML editors offered on the CNR.com Website.
So I can't really recommend Freespire on any basis other than “at least it boots”, which has to be the lamest recommendation I can imagine.
Lesson 4: Some Things Work Well
I'm glad I don't really care about sound, because I would be really upset if I had a sound-crippled laptop when sound really mattered to me. I'm a little less pleased about the wireless, but thankfully I was able to pull an Ethernet cable to the right part of the house, and with any luck it won't get rat-bitten.
But there are upsides, and the best one of all is in Grass-GIS.
Geographic information systems can be pretty demanding applications. Back in an office environment, before I struck out on my own, I was accustomed to running Grass-GIS on a very high-powered MacBook Pro carrying as much memory as you could install. I was also accustomed to Grass-GIS being so demanding that some maps might take half an hour to display on the screen (which made me handle the mouse with care: accidentally reloading a map could mean “we're staying late at the office tonight”).
Grass-GIS on Windows machines could be even worse, because it had to run under the Cygwin libraries. So you have an operating system running an emulation layer for another operating system, and on top of the emulation layer you are running Grass-GIS itself. Doing things this way requires very serious grunt.
But Grass-GIS on Linux is so fast that my first response was “what happened?”. Here is a box which, compared with its predecessor, is little better than a cripple in both processing power and memory; moreover, it's running both the GIS and the MySQL database feeding it, yet it cruises along.
So after the pain, there has been a payoff. All I need next is to find out why there's no sound and wireless...
On the Loose Again
Now I'm freelance again, and so with any luck I will get to blog again. I wish I could find some way to move all the old posts into an archive directory, but either I'm stupid or I just haven't worked things out yet. Time, time ...
I'd also like to mention that I've acquired some skills over the last few years which may be of use to the industry I work in. In particular, I have had great fun learning how to operate the Grass-GIS open source geographical analysis and mapping software. At the same time, I've been learning how to run Linux on a laptop (Freespire, not because I love it but because it did the best job of finding my LAN).
And I've got a couple of quick items for other people wanting to use Grass-GIS on Freespire.
The first is that the Grass-GIS build that works best on Freespire seems to be the compile for Ubuntu, here: http://www.les-ejk.cz/ubuntu/
Second: If you want to experiment with QGIS (http://www.qgis.org) as a front-end for Grass-GIS, do not install QGIS from the CNR service. The CNR installation wiped my Grass-GIS 6.2 and replaced it with 6.0, which I didn't really need.