Conversing with Computers
There has been plenty of commentary on both the Turing Test and the Loebner prize competition over the years, and I don't have much to add to it. I did, however, want to make a few comments on this article by one of this year's Loebner prize judges.
First, the author/judge asks both chat partners whether they are human or a computer. Both reply ambiguously. This is just silly. In Turing's conception of the game, the human is meant to try to help the judge, while the computer is meant to try to trick the judge. So what is the human doing here? Well, one explanation is that the rules for this version of the test are not the same as the version Turing envisioned, having been relaxed a bit to make some success possible.1 This is a reasonable approach to stimulating research and interest, since an impossible task might simply drive away potential entrants. But if this is the case, I wish the news reports would be more upfront about it, because to do otherwise is to mislead the non-technical reading public.
The second possible explanation for the odd human response is a lack of understanding of the game. In that case, there is a simple solution -- offer the human chat partner incentives for properly helping the judge discriminate the machine from the human. Of course, this changes the entire complexion of the game. Now the computer's task becomes not just chatting in a passably human-like manner, but also deceiving people convincingly!
While the science and technology of deception are very interesting from a research perspective, from a practical perspective I can understand why simply being able to chat in a human-like way is of more immediate interest.
1 As of this writing, the rules for this year's competition have not been posted.