
The Turing test and muds
By webmaster
On Sun May 23, 1999 11:36 AM
I've always wondered if the perfect medium to perform the turing test would be in a MUD (multi-user dungeon) environment. What do you think?


6 Replies to The Turing test and muds

re: The Turing test and muds
By BarMoCorp
On Mon Aug 16, 1999 08:23 AM
By Peter (unregistered )
On Thu Sep 02, 1999 08:16 AM
Turing testing in a MUD is a cool idea, but there are some difficulties one would have to overcome. If the other participants did not know that the test was taking place, they could not give their opinion about whether the AI is an AI or a human. And if they did know, people might start behaving unnaturally in the MUD, conversing in a manner that is out-of-character for it and intended specifically to test the humanity of the entities they meet. It would still be nice to develop an AI that can act naturally in a MUD (a somewhat limited domain). Nevertheless, it would be an interesting project for a graduate student to program a MUD-AI and see.
Peter Henningsen
Besides which...
By Dragonflame (unregistered )
On Mon Dec 06, 1999 11:48 PM
I remember hearing about some of these MUD-AIs years ago, except they were called bots, if I remember correctly. They'd hold long conversations with people, responding to keywords in the other person's sentences and warding off unprogrammed-for questions with vague answers, subject changes, or other questions. And they'd have plenty of responses to choose from, to avoid suspicious repetition. Basically, writing something that passes the Turing test takes someone clever with sentence generators and stock responses more than a genius at artificial intelligence (it really is a bad indicator of self-awareness).

Daniel Tittle
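The keyword/stock-response trick described in the post above can be sketched in a few lines of Python. This is a toy illustration, not any real MUD bot; all the keywords and canned replies are made up.

```python
import random

# Illustrative rules: a keyword triggers one of several stock replies.
RULES = [
    ("quest", ["Which quest are you on?", "I finished that quest ages ago."]),
    ("sword", ["A sword is only as good as its wielder.", "I traded mine for a staff."]),
    ("hello", ["Greetings, traveler.", "Well met!"]),
]
# Vague answers for questions the bot was never programmed for.
FALLBACKS = ["Hmm, hard to say.", "Why do you ask?", "Let's talk about something else."]

_used = set()  # remember past replies to avoid suspicious repetition

def reply(line):
    """Answer a line of chat by keyword match, falling back to vagueness."""
    text = line.lower()
    for keyword, responses in RULES:
        if keyword in text:
            # Prefer replies we haven't used yet; reuse only when exhausted.
            fresh = [r for r in responses if r not in _used] or responses
            choice = random.choice(fresh)
            _used.add(choice)
            return choice
    return random.choice(FALLBACKS)
```

Even this toy version shows the point: there is no understanding at all, just pattern matching plus enough variety to avoid obvious repetition.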
Turing test
By Zemox (unregistered )
On Fri Dec 17, 1999 12:29 PM
I have to disagree with the previous post. Yes, an individual can be fooled by a well crafted bot. Many such bots have been created since people were wowed by Eliza nearly 30 years ago.
However, a Turing test used to determine intelligence would be based on results from skilled examiners. (I would choose a combination of AI researchers and psychologists, as well as people from a few other fields.) Such expert examiners would be able to see through most tricks, especially if they were skilled at such programming themselves. These scripts depend on the examiner to "play along" and "stay in character". If the person moves out of character or does bizarre things during the conversation, the responses either fall back to a handful of generic replies or get erratic if a key word is present. It does not look like a human response at all.

Also, most scripts have short memories. If you say the same thing twice, the script might ask why you are repeating yourself. If you repeat the same thing on every third line, it couldn't care less.
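The short-memory weakness can be sketched the same way: with a one-line memory window, an immediate repeat gets caught, but the same phrase every third line slips through unnoticed. The class name, window size, and wording below are made up for illustration.

```python
from collections import deque

class ShortMemoryBot:
    """Toy script that only remembers the last few lines it was told."""

    def __init__(self, window=1):
        # deque with maxlen silently discards lines older than the window
        self.recent = deque(maxlen=window)

    def respond(self, line):
        if line in self.recent:
            reply = "Why are you repeating yourself?"
        else:
            reply = "Interesting, tell me more."
        self.recent.append(line)
        return reply
```

Saying "hello" twice in a row triggers the complaint, but "hello", "filler", "hello" does not: by the third line, "hello" has already fallen out of the one-line window.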

Programs that actually parse natural language sentences usually fail on grammatically correct but ambiguous sentences that few people would misinterpret. Humans are good at picking the real meaning out of sentences like these, but computers are not. And delving into the meaning of any sentence would flunk any of today's smartest machines.

I've often speculated that in order for a machine to think well enough to pass the Turing test, it would either have to live like a human from birth (as the COG project at MIT is trying to work toward) or be many times smarter than a human in order to fake it.
Turing Test Judges
By KingCrutch
On Tue Mar 14, 2000 09:57 PM
The Turing Test has been a much-debated test, and these posts hit on a couple of key points. First, let me say that passing the Turing Test is not trivial. It's not easy to fool a human, even an unsuspecting 'average' human, through computer-generated conversation, and considering the scope of the Turing Test, the task becomes even more difficult.

The previous post brings up a key issue: Turing's idea of the "average interrogator". In his watershed paper he says that an intelligent computer program would have to fool the "average interrogator", but he never defines that term. Some people, like Zemox, think that a program would have to fool scientists trained in things like psychology, sociology, and linguistics in order to be deemed intelligent. Others argue that the average interrogator can be any human being, and that if a program can fool the average human it can be deemed intelligent.

There's no question that these are two very different tasks. There is no right answer to the question of which is the correct interpretation, and that is probably just as Turing wanted it.
How close are we to creating an intelligence that passes the Turing Test?
By Jules (unregistered )
On Sat Apr 21, 2001 07:29 AM
The AI/CS community needs a new test to determine intelligence.
I find the Turing test flawed mainly because:

1. It's ambiguous, to say the least. Have you ever had a conversation with an "average" person on AOL or IRC? How intelligent do they come across as?

2. The test defines the existence of intelligence as a yes/no value. This is plain stupid. I believe that intelligence needs to be graded or classified.

If you know of any research that's been done on benchmarking intelligence, I'd appreciate a link.



Powered by XP Experience Server.
Copyright ©1999-2018 XP.COM, LLC. All Rights Reserved.