I, program: technology is getting smarter

What separates a human mind from an artificial intelligence program?
How long before those barriers disappear?
This month, a computer program in London called Eugene Goostman, developed to simulate a 13-year-old boy, fooled 33 percent of judges in a series of text conversations into thinking it was human.
This meets the qualifications for passing the Turing Test, a challenge developed in 1950 by Alan Turing to test a machine’s ability to behave like a human.
Turing said if a machine passed the test, it was exhibiting behavior indistinguishable from a human and was therefore “thinking.”
No machine had ever passed the test before.
However, there are a few problems with Eugene’s results.
The test presented Eugene as a boy from Ukraine, so a language barrier could excuse strange responses and push judges to be more forgiving.
The test was not peer-reviewed, and most of the judges were affiliated with the organizers.
Finally, the program is not an artificial intelligence program but a chatbot, a script loaded with canned responses meant to sound like human conversation.
To say this program has artificial intelligence on par with human cognitive function would just be lying.
Still, this accomplishment shouldn’t be dismissed. Turing said the test measures a computer’s ability to imitate.
As long as people believe they’re talking to another person, the actual intelligence of the program doesn’t matter.
If you’re told while chatting that someone is from another country, it’s on you if you believe him or her.
Eugene fooled 33 percent of judges, but how long before that percentage grows?
Our interaction with programs and machines is changing.
It could become harder and harder to tell if what you see online is a genuine human response or a programmed one.
Scams could become easier to pull off if the messages read like they were written by someone with a sixth-grade command of grammar, but these advances also pose larger quandaries.
We’re seeing advances in technology we never could have dreamed of before, and we can be sure we will see things in our lifetime that will blow our minds.
As artificial intelligence grows, it forces us to ask new philosophical questions.
We’ve always been able to define our humanity by exclusion.
Other things can’t do what we can, so they aren’t human.
Our sentience, our symbolic thought, our empathy — these set us apart.
But what happens when we develop a program that can do all these things, or at least make us believe it can?
What will happen to our laws and our rights?
Machines aren’t as smart as we are. But we have to accept that the time is coming when they will approach our level.
There will be no Turing Test, no Voight-Kampff machine that will help us distinguish between man and program.
We’re going to have to redefine what it means to be human — to feel, to think and to reason.
This might cause great stress, but it could also present us with great benefits.
Regardless, technology isn’t going to stop.
Intelligent machines are the future, but what that future is depends on us.
sckroll@indiana.edu