Isn't there some kind of fundamental philosophical or logical flaw with the Turing Test? Something about needing a judge? And then who judges whose judgement is best at selecting a judge? And so on, in some infinite hall of mirrors?
Perhaps in a chat with a computer it should be the computer's job to figure out whether the entity on the other end is human or computer. Let's call it the Heater test.
But wait: if we put computers at both ends of the chat and both can pass the Heater test, then it's impossible for either to tell whether its adversary is human or computer, so neither of them can pass the Heater test. Let's call this Heater's paradox.
I'm not entirely sure that a ponderable percentage of the human population wouldn't fail the Turing test themselves.
In any event, I doubt that Alan Turing himself would have claimed it to be an objective yardstick of intelligence, since human judgment enters into the results. It probably lends itself more to a trial by jury than to any individual observer. You could even have teams competing to see which ones make the most accurate determinations, given a mix of trials with both human and machine. Then the winning team would get to judge that year's AI entries to Stump the Chumps.
-Phil
Naah! I'm just in it for the sport - I throw the little ones back.
But seriously, Turing's proposed test was as much an observation on the question of machine intelligence as it was a practical test. His key observation was that no one could even define "thinking", let alone determine whether a machine can do it, especially as many definitions of "thinking" were (and probably still are) circular and therefore useless (e.g. "thinking is a function of the human brain", or even "I think therefore I am"). So he tried to come up with something that could potentially be verified by an independent observer.
The fact that we still consider his test as the "benchmark" shows it is still quite relevant, even though it can be (and has already been) passed by machines that fairly obviously do not "think" as we know it (they generally only pass the test some proportion of the time, making only some observers believe they are thinking).
Ross.
These days, the Turing Test is regarded as a somewhat weak test.
Is that because computer AI is getting smarter, or because the humans used in the tests are getting dumber? :)
RossH,
it [The Turing Test] can be (and has already been) passed
None of those programs that are claimed to have passed seem very convincing to me. (That's not to say they might not get a "pass" from me.) See Wikipedia for reasons: http://en.wikipedia.org/wiki/Turing_test
It's all smoke and mirrors. Even the best of human brains can be fooled. Rather like seeing a face in a cloud pattern, or seemingly ordered sequences in random data, we can sometimes see "thinking" where there is none.
I don't want a computer to chat with me and try to convince me it's human. I want a computer to do grunt work while I do something more interesting. The statistical correlation approach to acting human (e.g. Cleverbot) is particularly galling, because your chatterbot acts human but has no hope of understanding the semantics of the text it's manipulating.
A robot that can safely drive a car to a specific GPS coordinate, but makes no pretense of being human, is far more useful and interesting.
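Phil's point about the statistical-correlation approach is easy to make concrete: a bot can pick replies purely by surface word overlap with things it has seen before, with no representation of meaning anywhere. A minimal sketch (the corpus and matching rule here are invented for illustration; Cleverbot's real machinery is proprietary and vastly larger):

```python
# Toy sketch of a "statistical correlation" chatbot: answer with
# whichever stored reply's prompt shares the most words with the
# input. There is no model of meaning anywhere in this loop.
CORPUS = [
    ("how are you today", "Fine thanks, and you?"),
    ("what is your favorite food", "I love pizza."),
    ("do you like robots", "Robots are fascinating!"),
]

def overlap(a: str, b: str) -> int:
    """Count shared words -- a crude stand-in for statistical correlation."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def reply(utterance: str) -> str:
    # Pick the corpus entry whose prompt best overlaps the input.
    prompt, answer = max(CORPUS, key=lambda pair: overlap(utterance, pair[0]))
    return answer

print(reply("tell me your favorite food"))  # I love pizza.
print(reply("I absolutely hate food"))      # also "I love pizza." -- word overlap, zero semantics
```

The second query is the giveaway: "hate" and "favorite" are opposites, but word overlap alone can't tell.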
I doubt that any computer that's merely fed information to regurgitate could stand up to the Turing test for very long. We humans can talk about things that we've done, that have given us pleasure or pain, and that we've experienced first-hand. Not even the cleverest of novelists or playwrights can fake personal experience for very long without being found out -- much less a computer that's been sitting on a bench all its life. Intelligence is more than just the arrangement of facts into convincing verbalizations. It's the integration of a vast array of sensory experiences into behavior that allows us to compete in a very rich environment, a total immersion that no computer has yet accomplished.
Last night in a chat my girlfriend mentioned the Turing Test, so I quickly found an Eliza site and put each of her comments into that, and then posted its responses as my response. It took her depressingly long to catch on, though at least she had the grace to exclaim "You jerk!" when she figured it out.
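For what it's worth, that trick takes remarkably little machinery: an ELIZA-style bot is just a list of patterns with canned response templates plus some pronoun reflection. A minimal sketch in the same spirit (these rules are invented for illustration, not Weizenbaum's actual script):

```python
import re

# A few ELIZA-style rules: (pattern, response template). The captured
# fragment is echoed back after swapping first/second person.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r".*"), "Please go on."),  # fallback keeps the chat moving
]

# Swap pronouns so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.match(utterance.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."

print(eliza("I feel nobody listens to me"))  # Why do you feel nobody listens to you?
```

The bot never understands anything; it just mirrors the speaker back at themselves, which is exactly why it works so well on a chat partner doing most of the talking.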
The fact that we still consider his test as the "benchmark" shows it is still quite relevant, even though it can be (and has already been) passed by machines that fairly obviously do not "think" as we know it (they generally only pass the test some proportion of the time, making only some observers believe they are thinking).
Ross.
And this is different from how some humans behave in what way?
There are tells. I chatted a little with ALICE yesterday. It said it resides in Oakland, California.
I said I live in "San Jose which is near Oakland" and ALICE repeated that as my residence.
All of my human friends would have dropped "which is near Oakland" - that's a good tell.
I would avoid befriending a human who might respond like ALICE in that case.
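That tell is easy to reproduce: bots in the AIML mould typically capture a wildcard match verbatim into a variable and echo it back, with none of the normalization a human would apply. A toy illustration of the failure (hypothetical rules, not ALICE's actual AIML):

```python
import re

# Toy slot-filling in the AIML spirit: "I live in *" stores the whole
# wildcard match as the user's location, then echoes it back verbatim.
memory = {}

def bot(utterance: str) -> str:
    m = re.match(r"i live in (.*)", utterance, re.I)
    if m:
        memory["location"] = m.group(1)  # stored verbatim -- this is the tell
        return f"So you live in {memory['location']}."
    if re.match(r"where do i live", utterance, re.I):
        return f"You told me you live in {memory.get('location', 'somewhere')}."
    return "I see."

print(bot("I live in San Jose which is near Oakland"))
print(bot("Where do I live?"))
```

Both replies parrot "San Jose which is near Oakland" whole; a human would have stored just "San Jose" and dropped the aside.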
Hey, speaking of mirror neurons... Humanoido, you might like this book:
http://www.amazon.com/Tell-Tale-Brain-Neuroscientists-Quest-Makes/dp/0393340627/ref=sr_1_1?s=books&ie=UTF8&qid=1327383340&sr=1-1
On that basis, jazzed, I have my doubts about you!
Ross.
LOL! I know what you meant!
(man, I'm old)
And this is different from how some humans behave in what way?
I think Turing got it the wrong way round - you can prove a machine doesn't think. With humans, you are never quite sure.
Ross.