A thought on the Turing test
Alan Turing, computer scientist and one of the greatest geniuses of the last century, proposed that artificial intelligence will have been achieved when you can carry on an extended conversation with a computer without realizing it isn’t human, or without being able to tell. This is popularly known as “the Turing test,” and it’s been the plot of many science fiction stories, including an episode of Numb3rs two weeks ago.
I’m sure this isn’t original with me, but lately I’ve been thinking that it doesn’t make a lot of sense. Believable human conversation depends on an awful lot of conventions, from finely-tuned response timing to a sense of “appropriateness” that very much depends on social upbringing and neurological parity. By this measure, intelligent people with certain disabilities, such as severe autism, could not pass the Turing test. Only recently have we begun to discover their intelligence and how it works.
Consider the construction and upbringing of an intelligent computer. Presumably it works by some kind of evolutionary algorithm that allows it to “grow” and create its own connections. But it can’t experience emotions the same way we do, because emotion is very much tied to our bodies. Its “childhood” consists of unlimited access to the Internet, and processing what it finds there. It experiences the flow of information very differently than we do: instead of moving around and carrying sensory equipment with it, it stays in one place and the whole Internet is its sensory equipment. It’s difficult to guess what might be important to it.
In other words, it isn’t human. Though developed here on Earth, it’s an alien intelligence. How the hell would we even know when it begins to think? Why would we expect it to converse the way we do? Based on their complex variable behaviors, dolphins and elephants probably think, but we’re a long way from connecting with them too.
Given our expectations of how computers work, it might at first appear to be simply a computer that doesn’t work very well. With time and interaction we might be able to figure out that the machine is “thinking”, but no way is it going to pass the Turing test. As John W. Campbell used to say, it might be “a creature that thinks as well as a man, or better than a man, but not like a man.” This could have some implications for our increasingly network-driven society.