A thought on the Turing test

March 22, 2009

Alan Turing, computer scientist and one of the greatest geniuses of the last century, proposed that artificial intelligence will have been achieved when you can carry on an extended conversation with a computer without realizing it, or without being able to tell the difference.  This is known popularly as “The Turing Test”, and it’s been the plot of many science fiction stories, including an episode of Numb3rs two weeks ago.

I’m sure this isn’t original with me, but lately I’ve been thinking that it doesn’t make a lot of sense.  Believable human conversation depends on an awful lot of conventions, from finely-tuned response time to a sense of “appropriateness” that very much depends on social upbringing and neurological parity.  In this respect, intelligent people with certain disabilities, such as severe autism, could not pass the Turing test.  Only recently have we begun to discover their intelligence and how it works.

Consider the construction and upbringing of an intelligent computer.  Presumably it works by some kind of evolutionary algorithm that allows it to “grow” and create its own connections.  But it can’t experience emotions the same way we do, because emotion is very much tied to our bodies.  Its “childhood” consists of unlimited access to the Internet, and processing what it finds there.  It experiences the flow of information very differently than we do: instead of moving around and carrying sensory equipment with it, it stays in one place and the whole Internet is its sensory equipment.  It’s difficult to guess what might be important to it.
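To make that hand-waving a little more concrete, here is a toy sketch of what “an evolutionary algorithm that grows its own connections” can look like: candidate connection graphs are randomly mutated and the better-scoring ones survive.  Everything in it (the graph representation, the fitness score, the parameters) is invented purely for illustration and says nothing about how a real machine intelligence would actually be built.

    # Toy illustration only: a minimal evolutionary loop that "grows" a network
    # of connections by random mutation and selection.  The representation and
    # fitness function are made up for this example.
    import random

    NODES = 8
    TARGET_EDGES = 12   # the pretend "fitness" just rewards a particular connectivity

    def random_genome():
        """A genome is simply a set of directed connections between nodes."""
        return {(a, b) for a in range(NODES) for b in range(NODES)
                if a != b and random.random() < 0.1}

    def mutate(genome):
        """Grow a new connection or prune an existing one at random."""
        child = set(genome)
        a, b = random.sample(range(NODES), 2)
        if (a, b) in child:
            child.discard((a, b))
        else:
            child.add((a, b))
        return child

    def fitness(genome):
        """Higher is better: how close the connection count is to the target."""
        return -abs(len(genome) - TARGET_EDGES)

    population = [random_genome() for _ in range(20)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]                      # selection
        children = [mutate(random.choice(survivors)) for _ in range(10)]
        population = survivors + children

    best = max(population, key=fitness)
    print(f"best genome: {len(best)} connections, fitness {fitness(best)}")

Whatever “selection pressure” shapes such a system is whatever its fitness function says it is, which is nothing like a human childhood.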

In other words, it isn’t human.  Though developed here on Earth, it’s an alien intelligence.  How the hell would we even know when it begins to think?  Why would we expect it to converse the way we do?  Based on their complex variable behaviors, dolphins and elephants probably think, but we’re a long way from connecting with them too.

Given our expectations of how computers work, it might at first appear to be simply a computer that doesn’t work very well.  With time and interaction we might be able to figure out that the machine is “thinking”, but no way is it going to pass the “Turing test”.  As John W. Campbell used to say, it might be “a creature that thinks as well as a man, or better than a man, but not like a man.”  This could have some implications for our increasingly network-driven society.

Categories: Uncategorized
  1. March 22, 2009 at 10:32 | #1

    All your blog are belong to us!

  2. March 22, 2009 at 11:53 | #2

    There are tons of reasons why the Turing test doesn’t make sense, and the one you pointed out is just one of them. The Turing test makes a whole lot of assumptions about what constitutes intelligence, and in fact limits its scope, too.

    For example, an AI would have to limit its own mathematical prowess; it wouldn’t be allowed to calculate a complicated formula in a split second. It would have to pretend to have limbs, or be built into a robot and be made to believe that the robot’s limbs are its own (or it would fail the test on simple questions like “can you feel your arm now?”). It would need an autobiographical memory that includes childhood episodes. It would need to have memory failures like we do, slightly misrepresenting past facts or forgetting them altogether. Its memory structure would need to make the same distinction between episodic and semantic memory that ours does, essentially forgoing the episodic learning experience for memories that are deemed semantic in nature.

    Essentially, the Turing test is good at one thing: finding out how human-like a test subject is, or believes itself to be, given that it is a subject. Or, if it isn’t a subject, how good the non-subject is at simulating being a human-like subject. That’s about it. The Turing test tests neither whether something is a subject at all nor how intelligent it is.

  3. March 22, 2009 at 18:49 | #3

    If an AI “candidate” had unlimited access to the interwebs as a significant part of its data input, how would it discern between empirical data/facts and self-affirming, group-think opinion and speculation?

  4. March 22, 2009 at 20:20 | #4

    If a neoconservative were talking with a mindless robot, how would he or she know they were not talking with another neoconservative?

  5. March 23, 2009 at 00:29 | #5

    @WeeDram,
    Same problem that ‘fundaMENTALists’ of any religion have :-)

    Back in the 80s we implemented a technique called variously ‘Belief Revision’ or ‘Truth Maintenance’ to cover for this; there’s a much simplified write-up on my blog at

    http://home.egge.net/~savory//blog_jan_09.htm#20090116

    FWIW, Wahlster’s project at DFKI does have access to search engines to get its primary data for answering questions, and uses the technique I describe.
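    In broad strokes, a justification-based truth maintenance store works like the toy sketch below (invented names, and far simpler than any real implementation; it is not the system from the write-up): a belief survives only as long as the premises that justify it do, and retracting a premise backtracks through everything built on it.

        # Toy justification-based truth maintenance store.  Names and structure
        # are invented for illustration.
        class TMS:
            def __init__(self):
                self.premises = set()        # facts asserted directly
                self.justifications = {}     # belief -> set of facts supporting it

            def assert_premise(self, fact):
                self.premises.add(fact)

            def derive(self, belief, supports):
                """Record a belief together with the facts it depends on."""
                self.justifications[belief] = set(supports)

            def holds(self, fact):
                return fact in self.premises or fact in self.justifications

            def retract(self, fact):
                """Withdraw a premise, then backtrack: drop every belief whose
                support no longer holds, repeating until nothing else changes."""
                self.premises.discard(fact)
                changed = True
                while changed:
                    changed = False
                    for belief, supports in list(self.justifications.items()):
                        if not all(self.holds(s) for s in supports):
                            del self.justifications[belief]
                            changed = True

        tms = TMS()
        tms.assert_premise("source A is reliable")
        tms.derive("claim X is true", ["source A is reliable"])
        tms.retract("source A is reliable")
        print(tms.holds("claim X is true"))   # False: the belief is withdrawn, not defended

    The interesting part is retraction: instead of defending a conclusion after its support is gone, the store simply gives it up.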

  6. March 23, 2009 at 06:53 | #6

    (Spock voice) Fascinating.  Except any machine that works that way will be spotted immediately, because most humans rationalize instead of backtracking when their premises turn out to be untrue. (/Spock)

    “The purpose of artificial intelligence is to fight natural stupidity!” - S. Savory, 1983.

    Oh… I like that!

Comments are closed.