Thursday, April 10, 2008

Telemetry and 3G

Telstra is spruiking the wonders of telemetry over the 3G network.

It's an interesting thought, although not quite bleeding-edge. After all, telemetry over cellular networks has been with us for a while - even if it's hard to find in carriers' financial reports.

The reason it's hard to locate in the financials is, of course, that the telemetry market is so small it doesn't warrant a break-out line item of its own.

There's a good reason for this: telemetry is a misfit with the way carriers like to market their mobile networks. The best mobile subscriber is the one that generates lots of billable events (calls, SMSs, MMSs, browsing and so on).

Why do you think prepaid dollars have a use-by date? The last thing any carrier wants is to have someone buy $50 of prepaid minutes and leave them sitting there for a year. Among other things, the expiry of pre-paid minutes is designed to encourage people to make calls. Bundled free calls with post-paid mobiles share the same aim (in social engineering terms, anyhow): if you habituate the user to make lots of calls, there's a good chance they'll keep happily racking up billable calls after their free calls are consumed.

The telemetry SIM might stay quiet for days - which is why, for all the hoopla Telstra generated with its "telemetry over 3G" story, this kind of application remains "white noise" in carrier financials.

That's quite reasonable. Take a weather station, for example: it's sufficient for the device to collect its data and return it to base in a few extremely brief calls each day. There may only be a hundred kilobytes to transfer with each call, after all (temperature samples, humidity samples, rainfall totals).

It's the same with many telemetry applications. Even with continuous data collection, you don't need continuous data transfer, unless there's an exception. If you're monitoring remote equipment, you don't need "up to the second" updates that simply say "two o'clock and all's well".

What you want is something which says "here's the data" at specified intervals, but which can call for help if something goes wrong. And that's what most cellular telemetry does today - which means that compared to a human user, telemetry is very frugal in its use of the communications channel.
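As a purely illustrative sketch of that pattern (the sensor readings, thresholds and the report_to_base() call are all hypothetical, not anything from Telstra's announcement), the reporting loop of such a device might look like this:

```python
import time

SAMPLE_INTERVAL = 60            # sample the sensors once a minute
REPORT_INTERVAL = 6 * 60 * 60   # routine report to base every six hours
ALARM_TEMP = 60.0               # hypothetical "call for help" threshold

def read_sensors():
    """Hypothetical sensor read: temperature, humidity, rainfall."""
    return {"temp": 21.5, "humidity": 48.0, "rain_mm": 0.0}

def report_to_base(samples, urgent=False):
    """Bring up the cellular data channel briefly, send the batch, go quiet again."""
    pass  # a few kilobytes over 3G, a few times a day

def run():
    samples, last_report = [], time.time()
    while True:
        reading = read_sensors()
        samples.append(reading)

        if reading["temp"] > ALARM_TEMP:
            # the exception case: call for help immediately
            report_to_base(samples, urgent=True)
            samples, last_report = [], time.time()
        elif time.time() - last_report >= REPORT_INTERVAL:
            # the routine case: batch everything into one brief call
            report_to_base(samples)
            samples, last_report = [], time.time()

        time.sleep(SAMPLE_INTERVAL)
```

The point of the sketch is the ratio: thousands of samples collected locally, a handful of brief data calls a day.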

What Telstra wants is to create demand for telemetry applications that are more talkative: hence the focus on video streaming and live remote control in its telemetry press release. The ideal is to encourage applications that use the data channel as much as possible.

Another small point. Telstra made much of the usefulness of telemetry in remote locations. This is perfectly true: if you're monitoring a device that's 50 km from the homestead, you want to avoid travelling to the device wherever possible.

But who is Telstra kidding? For all of its claims that the mobile network covers 98% of Australia's population, the geographical coverage is a different matter.

So here's your quiz starter for ten points: is remote equipment likely to be close to the population centre?

As an idea, remote telemetry for the rural sector is wonderful - but only if the device is in the mobile footprint.

Tuesday, April 08, 2008

Who will save us from bad software?

No, this is not part of my ongoing battles with Linux (the sound's died again, by the way. Damned if I'm reinstalling...).

What's on my mind here is the more general issue of software quality. More precisely, what's going to be the impact of crap software on the world at large?

My washing machine is a case in point. Unlike the old world of washing machines, where a complex and breakable machine (a timer / controller) ran the wash cycles, the modern machine is computer-controlled. That means its functions are controlled by software.

And that means the behaviour of the machine is at the mercy of the brain that conceived the software. In this particular case, whenever something like an unbalanced load upsets the washer, instead of going into a "hold" state and simply resuming its cycle once you re-balance it (as the old electro-mechanical timer would have done), the software designer has decided the machine should revert to the beginning of the cycle in which it unbalanced. If it unbalances at the end of a rinse cycle, it returns to the beginning of the rinse cycle - unless I or my wife runs over to the machine and resets its timer to a more appropriate spot.

But wait, it gets worse: its rinse cycle is actually multiple rinses. So if it unbalances when emptying from "first rinse" to begin "second rinse", it often decides to reset itself - it returns to the beginning of "first rinse" automatically. It's quite capable of spending hours getting half-way through its rinse cycle and returning to the start (which means you can't leave it home alone).
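To make the difference concrete, here is a minimal sketch of the two control strategies, in entirely hypothetical Python rather than the machine's actual firmware (the machine object and its methods are invented for illustration):

```python
CYCLE = ["wash", "first rinse", "second rinse", "spin"]

def run_hold_and_resume(machine):
    """What the old electro-mechanical timer did: pause, then pick up where it left off."""
    for stage in CYCLE:
        machine.start(stage)
        while not machine.finished(stage):
            if machine.unbalanced():
                machine.pause()
                machine.wait_until_rebalanced()
                machine.resume()

def run_restart_on_imbalance(machine):
    """What the software apparently does: throw the stage away and start it again."""
    for stage in CYCLE:
        machine.start(stage)
        while not machine.finished(stage):
            if machine.unbalanced():
                machine.start(stage)  # back to the top of the stage: more water, more hours
```

Same hardware, same sensors; the only difference is which branch the designer chose.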

The result of bad software in the washing machine is a machine that is a relentless waster of water and electricity, and which needs constant supervision - neither of which would or should be the intention of the designer.

Since software increasingly underpins everyday life, software quality matters.

Last week, I was at a press lunch hosted by Websense. That company is putting a lot of effort into new technologies designed to protect companies against the damage caused by botnets, trojans, drive-by Websites planting malicious code in browsers, and so on.

All of this is laudable and important - but like the water-wasting washer, the dangerous Web application has its roots in bad software: insecure operating systems, very poorly coded Web applications built in a rush and hyped to the max, and software that's simply a dangerous idea in the first place (why, oh why, do people think it's a good idea to host your desktop search and office applications out in Google?).

Websense was quite happy to agree that bad software is the starting point of bad security. Happily for that company, there's no immediate prospect of things improving...

The Old “Peak Speeds” Game Again

I'll have more to say on this in Communications Day next week – I need a little time for research – but the short version might as well be blogged now.

Part of the ongoing muddying of the waters in Australia's WiMAX-versus-mobile debate comes from confusion about the intended applications of the technologies (for example, the assumption that because fixed WiMAX predates mobile WiMAX, it must be obsolete).

But the misuse of speed specifications also plays a role here. Hence we're told by breathless journalists that the next generation of LTE-based mobile cellular technologies will deliver speeds of 40 Mbps.

So if anyone's listening – it has to happen eventually – here's a three-second primer.


1) Peak speed is meaningless


The peak speed game is played over and over again: years ago, someone discovered that 802.11g (peak speed 54 Mbps) didn't deliver those speeds to end users. But the press still gets very excited at peak speeds. Whatever the 40 Mbps represents in LTE, it doesn't represent the user data download speed from a base station to a device.


2) Users share channels


This is not unique to cellular environments. In Ethernet, users eventually share the total available bandwidth (even if it's the backplane of the switch). In WiFi, users share the base station bandwidth. In DSL, users share the capacity of the DSLAM. In LTE, as in all cellular environments, users will share the data capacity of the cell. I'm still doing the research to put a number to this, but I can state categorically that a user will only get the base station's top speed if that user is the only one running data through the base station at any given moment.
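To put illustrative numbers on the sharing point (the 40 Mbps cell capacity here simply reuses the headline figure for the arithmetic, it is not a measured LTE number):

```python
def per_user_throughput(cell_capacity_mbps, active_users):
    """Rough shared-channel arithmetic: active users in a cell split its capacity."""
    return cell_capacity_mbps / max(active_users, 1)

# If a cell really could move 40 Mbps of user data, ten simultaneously
# active users would each see roughly 4 Mbps - before any other overheads.
print(per_user_throughput(40, 1))   # 40.0 - the headline case: one lonely user
print(per_user_throughput(40, 10))  # 4.0  - a busier, more realistic cell
```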


3) Remember Shannon's Law


The 40 Mbps now carelessly cited all over the place for LTE is a best-case test result. Any white paper from the industry makes it clear that in LTE, as in every communications technology, Shannon's Law still holds true. In the case of cellular data, the practical upshot is that as you move away from the base station, your speeds drop.
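For reference, the Shannon limit is C = B log2(1 + S/N): capacity falls as the signal-to-noise ratio falls, which is exactly what happens as you walk away from the tower. The bandwidth and SNR figures below are illustrative assumptions, not LTE link-budget values:

```python
import math

def shannon_capacity_mbps(bandwidth_mhz, snr_db):
    """Shannon limit C = B * log2(1 + S/N), with SNR converted from decibels."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_mhz * math.log2(1 + snr_linear)

# A 10 MHz channel: a strong signal near the tower versus a weak one at the edge.
print(round(shannon_capacity_mbps(10, 20), 1))  # ~66.6 Mbps close to the base station
print(round(shannon_capacity_mbps(10, 3), 1))   # ~15.8 Mbps out towards the cell edge
```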


4) Carriers provision services


The service speeds carriers eventually offer over LTE won't just depend on what the cells can do. They'll be constructed according to the different plans the carrier wants to offer. I'll bet a dollar, here and now, that carriers won't be rushing to sell users $29 mobile broadband plans with 40 Mbps download speeds.


So the next time someone says "the next generation of cellular wireless will give us download speeds of 40 Mbps", the correct answer is "bulldust".

Monday, April 07, 2008

Could the New Zealand Institute's Fibre Co Work?

For some time, I have believed that the structure of Australia's telecommunications industry and networks should have been sorted out before privatisation of Telstra went ahead, rather than after.

The problem is that when they're bent on a privatisation, governments are happy to take the biggest cheque they can get – something like structural reform of an industry might reduce the sale price, so it gets left on the shelf. Afterwards is too late: any government that halved the value of a private company would be in line for lawsuits. So it's too late to easily restructure the Telstra monolith.

New Zealand is grappling with a similar problem, which is why the New Zealand Institute has suggested an independent monopoly (the so-called Fibre Co) be put in charge of the last mile. That way, the NZI argues, the upgrade of NZ's customer access network can proceed without tensions between the incumbent and its competitors getting in the way.


A long time ago now – back when Australian Communications was in publication, and therefore in the late 1990s – I advocated a very similar structure for Australia, with the wholesaler existing as a monopoly whose shares were owned by the carriers using its infrastructure.


Since I'm an advocate of such a model – well, since I was prepared to consider such a model as viable – I'm going instead to ask the opposite question: what's wrong with the idea of a wholesale infrastructure monopoly?

1) Bureaucracy

The biggest problem is that there are more than 150 carriers in Australia. Some of them might not need or want membership of a Fibre Co, but even 50 companies might prove unwieldy, with different agendas, commercial imperatives, and geographies to be serviced.

Without meaning to offend people on the other side of the Tasman, New Zealand has a much smaller market. There are fewer carriers to participate, and there's less geography to cover; so a Fibre Co might well be manageable.


2) Clout

A simple “shareholding by size” model has this against it: the Fibre Co would be at risk of being under the control of its largest shareholder (which is also its largest customer). So there's a legitimate question about the degree to which one carrier's interests should dominate the process, versus the degree to which minor carriers should influence decisions reaching far beyond their own sphere.


3) Flexibility

Some commentators in New Zealand have criticised the Fibre Co model as a “one size fits all” proposal. It may be so – I haven't yet read the entire New Zealand Institute report – but that doesn't mean any implementation of Fibre Co has to be inflexibly chartered to deliver “this solution and only this solution”.


For some reason, people treat consumer telecommunications as a different creature to business telecommunications. Any sensible business starts building its network by defining its requirements: work out the applications you need to support, and build the network to support those applications. But in the consumer world, people insist on defining the technology first, without reference to requirements.


If a government were to charter the establishment of a Fibre Co, it should do so without reference to a particular technology. Rather, it should mandate a simple charter of requirements, and leave the infrastructure owner to meet those requirements with whatever technology best fits the circumstances.


But there is a second “flexibility” issue – one that goes with the structure of a Fibre Co. If the structure is too difficult to manage, too bureaucratic, or too dominated by narrow interests, then inflexibility may be the result.


Can it be done?

The answer is “yes”. A Fibre Co could be made to work, and we can even see an example in Australia of a co-operatively managed carrier, one with both dominant and minor participants, but with a mandate clear enough that it stands as a success story for the industry.


It's called AARNET.