This is going to have to be broken into a couple of blog entries, because it's going to be long.
On Red Nova, you can find this story about the "Global Consciousness Project", in which random number generators are believed to be predicting the future:
Today's entry is going to dissect aspects of the story itself; I'll follow it up with another entry drawing the threads together.
I haven't reproduced the story in full, but extracts are followed by my commentary in italics.
DEEP in the basement of a dusty university library in Edinburgh lies a small black box, roughly the size of two cigarette packets side by side, that churns out random numbers in an endless stream.
At first glance it is an unremarkable piece of equipment. Encased in metal, it contains at its heart a microchip no more complex than the ones found in modern pocket calculators.
But, according to a growing band of top scientists, this box has quite extraordinary powers. It is, they claim, the 'eye' of a machine that appears capable of peering into the future and predicting major world events.
Who is the growing band of scientists, other than those directly involved in the project? The author frequently refers to respectable outside opinion, but hasn't found any respectable outsider.
The machine apparently sensed the September 11 attacks on the World Trade Centre four hours before they happened - but in the fevered mood of conspiracy theories of the time, the claims were swiftly knocked back by sceptics. But last December, it also appeared to forewarn of the Asian tsunami just before the deep sea earthquake that precipitated the epic tragedy.
Note the disconnected connection; that the sceptics knocked back the claim because they were influenced by the mood at the time, rather than any considerations of science. Not only is it near to a conspiracy theory, it's also a reversal of science, in which every experiment should be approached with scepticism.
Now, even the doubters are acknowledging that here is a small box with apparently inexplicable powers.
Are the unnamed doubters the same people as previously debunked the September 11 story? Who are the converts?
'It's Earth-shattering stuff,' says Dr Roger Nelson, emeritus researcher at Princeton University in the United States, who is heading the research project behind the 'black box' phenomenon.
'We're very early on in the process of trying to figure out what's going on here. At the moment we're stabbing in the dark.' Dr Nelson's investigations, called the Global Consciousness Project, were originally hosted by Princeton University and are centred on one of the most extraordinary experiments of all time. Its aim is to detect whether all of humanity shares a single subconscious mind that we can all tap into without realising.
Very early in the process? The GCP has been trying to produce results that other scientists believe for many, many years.
Although many would consider the project's aims to be little more than fools' gold, it has still attracted a roster of 75 respected scientists from 41 different nations. Researchers from Princeton - where Einstein spent much of his career - work alongside scientists from universities in Britain, the Netherlands, Switzerland and Germany. The project is also the most rigorous and longest-running investigation ever into the potential powers of the paranormal.
Note the irrelevant reference to Einstein: there is no relationship between Einstein's cachet and Dr Roger Nelson. Calling the project "rigorous" is meaningless unless we hear what makes it rigorous; the roster of scientists isn't enough. The story then quotes its first outside source, one Dick Bierman in Amsterdam who is cited as a physicist; but the author ignores that Bierman is also a participant in the GCP.
Next, a little of the GCP's basis is explained: a random number generator which is supposed to produce a flat distribution - an equal number of ones and zeroes. The GCP belief is that deviations from that distribution are inexplicable by "ordinary" science, and therefore must be paranormal.
This has many problems as a hypothesis, the first being that the journalist makes no effort at all to find out whether the basic assumption, that the GCP's random number generators are actually random, holds up.
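Checking that assumption is not hard. Here is a minimal sketch (mine, not the GCP's) of the first test any journalist could have asked about: a binomial z-test on a stream of bits, which flags a generator whose split of ones and zeroes is too lopsided to be chance.

```python
import random
from math import erf, sqrt

def binomial_z_test(bits):
    """Two-sided z-test: is the ones/zeroes split consistent with a fair source?

    Under the null hypothesis (p = 0.5), the count of ones in n bits has
    mean n/2 and standard deviation sqrt(n)/2; the normal approximation
    gives a two-sided p-value.
    """
    n = len(bits)
    ones = sum(bits)
    z = (ones - n / 2) / (sqrt(n) / 2)
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Feed it a software RNG just to show the shape of the check.
rng = random.Random(42)
bits = [rng.getrandbits(1) for _ in range(100_000)]
z, p = binomial_z_test(bits)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A small p-value here would mean the box isn't random before anything paranormal need be invoked; a large one only says this one crude test didn't catch it.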
During the late 1970s, Prof Jahn decided to investigate whether the power of human thought alone could interfere in some way with the machine's usual readings. He hauled strangers off the street and asked them to concentrate their minds on his number generator. In effect, he was asking them to try to make it flip more heads than tails.
It was a preposterous idea at the time. The results, however, were stunning and have never been satisfactorily explained.
It was not repeated. Even those "in the circle" dismiss it: the experiment was criticised as useless in the Journal of Parapsychology as far back as 1992.
But then on September 6, 1997, something quite extraordinary happened: the graph shot upwards, recording a sudden and massive shift in the number sequence as his machines around the world started reporting huge deviations from the norm. The day was of historic importance for another reason, too.
What external evidence have we of correlation? What evidence that the line was usually flat? Did the journalist view the graphs for a large chunk of the relevant year? Did the journalist view anything at all?
For it was the same day that an estimated one billion people around the world watched the funeral of Diana, Princess of Wales at Westminster Abbey.
A total of 65 Eggs (as the generators have been named) in 41 countries have now been recruited to act as the 'eyes' of the project.
And the results have been startling and inexplicable in equal measure.
For during the course of the experiment, the Eggs have 'sensed' a whole series of major world events as they were happening, from the Nato bombing of Yugoslavia to the Kursk submarine tragedy to America's hung election of 2000.
All these correlations are applied to the graphs after the event. This is bad science: if you can predict where you're hitting the golf ball, and the prediction works, that's science; if you hit the golf ball and then say "that's where I meant it to go", it's not science.
Also, the journalist has not asked about the periodicity of fluctuations: what is the normal repeat rate of the wave? Where is the proof of correlation between different devices?
This is a particularly important point: if there is some observable "waveform" in the deviation of the random number distribution, it proves only this: the numbers aren't random.
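The point is easy to demonstrate. A perfectly fair coin, summed over time, wanders away from zero as a matter of course, so any long run contains impressive-looking excursions you can match to the news after the fact. A quick simulation (mine, for illustration):

```python
import random
from math import sqrt

def max_excursion(n, seed):
    """Cumulative sum of n fair coin flips (+1/-1); returns the walk's
    largest absolute deviation from zero."""
    rng = random.Random(seed)
    pos, worst = 0, 0
    for _ in range(n):
        pos += 1 if rng.getrandbits(1) else -1
        worst = max(worst, abs(pos))
    return worst

n = 100_000
for seed in range(5):
    print(f"seed {seed}: max |deviation| = {max_excursion(n, seed)} "
          f"(compare sqrt(n) = {sqrt(n):.0f})")
```

Every run of a genuinely fair generator produces deviations on the order of the square root of the sample size; pointing at one of them after an event proves nothing about the event.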
I'll skip the next section, in which the journalist relates claims that the "eggs" predicted September 11; because it adds no new information.
To make matters even more intriguing, Prof Bierman says that other mainstream labs have now produced similar results but are yet to go public.
'They don't want to be ridiculed so they won't release their findings,' he says. 'So I'm trying to persuade all of them to release their results at the same time. That would at least spread the ridicule a little more thinly!' If Prof Bierman is right, though, then the experiments are no laughing matter.
The entry of conspiracy theory always arrives in these kinds of stories: the evidence exists but the mainstream is covering it up.
They might help provide a solid scientific grounding for such strange phenomena as 'deja vu', intuition and a host of other curiosities that we have all experienced from time to time.
They may also open up a far more interesting possibility - that one day we might be able to enhance psychic powers using machines that can 'tune in' to our subconscious mind, machines like the little black box in Edinburgh.
A new premise is introduced as established fact: stating that machines could enhance psychic powers presupposes that such powers really exist. This is a con-artist technique - since the black box exists, things related to the black box exist.
There's nothing in the rest of the text worth discussing. Next, I want to draw out the principles behind this kind of journalism - because it infests much more than pseudo-science writing.
Saturday, February 19, 2005
Thursday, February 17, 2005
Flogging a Dead Angle
Unwired has killed its VoIP trials, according to AustralianIT.
Why am I not surprised? Because pretty much the same news was given by the same source last December.
Here is the premise for yesterday's story in the Oz:
"WIRELESS internet provider Unwired has killed off a planned voice over IP (VoIP) offering for its Sydney broadband subscribers.
Announcing the company's financial results, Unwired chief executive David Spence said that it made more sense to provide a prioritised packet service for users of soft VoIP services such as Skype and Engin than to continue developing its own application."
Last year, the Oz said:
"WIRELESS broadband provider Unwired has abandoned a public voice over IP (VoIP) trial that had been scheduled to take place this month."
The only difference is that this time, the company confirmed what it didn't deny last year...
There is another angle to all this, though: VoIP was mostly an invention by the media anyhow.
When Unwired went live last June, its CEO told the assembled media that it would consider offering voice services - but he did not say "VoIP". What he said (I was there and I'm quoting from my own notes from the press conference) was this:
"Spence played down both the timing and the nature of the voice services, saying only that some kind of voice offering would be on offer by year-end. Voice, while bundled, would almost certainly be delivered on extra bandwidth rather than “riding” on a customer's existing service."
Unwired at that time seemed more interested in offering a competitive PSTN product than a VoIP service (whatever the underlying technology). It talked about trialling services, but it wasn't committed to those services being VoIP.
Since then, nearly every statement Unwired made about voice services tried to damp down the VoIP angle. Hence, in talking to ZDNet in October last year, David Spence talked only about "voice", yet the author stamped VoIP on the story.
Earlier, in August, ZDNet took the VoIP angle this way:
"Spence said the company was currently in negotiations with local carriers to connect its wireless network with public telephone exchanges and acquiring number ranges to be allocated with the service."
Notice the reference to "number ranges"? That suggests a PSTN service to me, but the VoIP angle was irresistible even though the interviewee didn't say "VoIP".
VoIP, you see, doesn't have number ranges as such.
But the author has his eye fixed on the VoIP angle, and will reiterate it at every opportunity, force-fitting the angle to the quote.
To nutshell the problem: it no longer matters what underlying technology a carrier uses to deliver voice calls. If the phone can (a) take incoming calls from any phone, and (b) make outgoing calls to any phone, then it's a phone service. There's really only one country which is dead set on an artificial distinction between phone services based on transport - and that country is the US.
Unwired certainly would never have bothered much with trying to out-Skype Skype. Why would it? It needs to make money; a VoIP client doesn't generate revenue; and anyway, Skype users can call other Skype users on Unwired just as easily as on any other Internet service.
If Unwired was, or is, considering telephony, it wanted either a value-add to make its network more attractive, or paid calls (even at a low rate). In either case, a Skype lookalike is a dead duck.
The Optus balance sheet tells you what's attractive about voice: money. It's the economy, stupid...
Wednesday, February 16, 2005
ComputerWorld Columnists, Again
Another week, another filler column from ComputerWorld which puts forward silly suggestions based on an insane premise. If ComputerWorld feels aggrieved that I'm picking on it, it should make itself a smaller target...
This time, the columnist (Frank Dzubeck of Communications Network Architects, whose Website says "Index of /") asks "Can the Internet Ever be Trusted?" and calls for the formation of a Trusted Internet Group just like the doomed-to-fail Trusted Computing Group.
I won't dissect the Trusted Computing Group in detail, because that needs a few thousand words.
Let's answer the "can the Internet be trusted" question first: No.
You can't trust the Internet, and you never could. That's not because of the particular problems - insecurity, spyware, phishing and so on - but because the Internet is far too abstract to be trusted.
You can only give someone trust based on knowledge and judgement, and for most people knowledge and judgement about "the Internet" is too remote to form the basis of a decision about trust.
Trusting "the Internet" is simplistic and irrational, and a new high-tech fix won't change that.
The question is: whom and what can you trust? The answer: Knowledge and process.
I'll start with process first, because it's the part that "the industry" (a nebulous thing at best) controls. The problem with Internet commerce in 2005 is that too many companies have created inadequate processes; they've then encouraged people on the basis of "trust in the brand" to use these processes for commerce; and finally they've abused the processes to make them untrustworthy, all while jacking up at any suggestion that things aren't just rosy in the garden.
To take a bank as an example.
The only way to trust a bank's process is if the client software can only talk to the bank's servers. Anything else is vulnerable, regardless of the presence of specific exploits. Banks decided that convenience was more important, so they wilfully created browser-based banking even though they knew it was less secure than "own client" banking.
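For what it's worth, locking a client to the bank's own servers is a solved problem: ship the client with the fingerprint of the bank's certificate and refuse to talk to anything else (certificate pinning). A minimal sketch, with a hypothetical host and fingerprint rather than any real bank's:

```python
import hashlib
import socket
import ssl

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def connect_pinned(host: str, port: int, pinned_sha256: str) -> None:
    """Open a TLS connection, but refuse to proceed unless the server
    presents the exact certificate whose fingerprint ships with the client."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
            if fingerprint(der) != pinned_sha256:
                raise ssl.SSLError("server certificate does not match the pin")
            # ...only now talk to the bank over `tls`...

# Demonstration of the fingerprint check itself (no network needed):
sample = b"not a real certificate, just sample bytes"
print("sample fingerprint:", fingerprint(sample))
```

A browser, by design, will talk to anything; a pinned client, by design, will not. That is the difference between the two processes.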
"The Internet" is not at fault - it's the process that's broken.
Banks then - frequently - write the browser software so that it doesn't show the URL in the address bar (undermining the "knowledge" part of the trust equation). A bank which writes its software this way is teaching users to trust in the absence of knowledge - which is so irresponsible it beggars description.
Then, in the name of cheap communications, banks routinely use e-mails to put sales pitches in front of their customers, and routinely use links from the e-mails to their product sites - and have kept doing so even after the phishing scams became widespread.
This encouraged people to put their trust in bad processes - but it's not "the Internet" which is at fault and it would not be fixed by a "Trusted Communications Group".
As a member of the Link mailing list said, if you ask "Can the Post Ever be Trusted?" you quickly see how stupid the same question is when posed about the Internet.
To propose a solution which removes knowledge and responsibility from users, and which at the same time relieves participants from the need to create good process, is beyond stupid. And to propose that yet-another industry cargo cult can push out the answer on parachutes?
That's not solution, that's just more problem.
But what would I expect from a network consultant with a slash for a home page?