With recent advances in self-driving cars, people have wondered how well computers can think like people.
But Hamid Ekbia, professor of Informatics, Cognitive Science and International Studies, explains this can be misleading.
Ekbia researches artificial intelligence by exploring the metaphors and language surrounding it.
Ekbia said describing the computer like a brain can cause people to overestimate the potential of artificial intelligence, whether it’s self-driving cars, chess-playing computers or human-like robots.
“If the brains are like this, then our memories are storage devices,” Ekbia said. “The brain is a computer. Memory is storage. What we process is information.”
But this brain-computer metaphor is complicated.
“When we make these statements about machines, that changes our own perceptions about what it means to be intelligent,” Ekbia said. “That is where my concern is.”
Through mediation, the process by which meanings transform through interaction, the meaning of words changes as humans use technology, Ekbia said.
Becoming friends with a robot might change the meaning of friendship itself, Ekbia said.
This means the metaphors of artificial intelligence change as well, Ekbia said.
When it was first used, this brain-computer metaphor gave the wrong impression that computers work the exact same way human brains do, Ekbia said.
“Yes, there are these wonderful things that machines can do, but one of the things that has repeatedly happened over the past 50 to 60 years is that metaphors have been used to hype up these machines,” Ekbia said.
These brain-computer metaphors can make the promises of artificial intelligence misleading, Ekbia said.
“The next thing we say is computers can interact with human beings, can socialize, can have ethics,” Ekbia said.
Despite the brain-computer metaphor, self-driving cars make decisions in very different ways than humans do, Ekbia said.
These cars need to account for unpredictable events such as animals running in front of them, Ekbia said.
“People are too quick to promise things that are still not ready,” Ekbia said.
David Crandall, associate professor in the School of Informatics and Computing, directs the IU Computer Vision Lab, which researches artificial intelligence.
“The problem isn’t too much of simple things like staying in your lane,” Crandall said of the challenges of self-driving cars. “But a problem of a trillion things that could happen in five seconds.”
Though the brain-computer metaphor might not be entirely accurate, Ekbia doesn’t completely reject the use of metaphors.
“What’s wrong with metaphors in artificial intelligence?” Ekbia said. “There’s nothing wrong with the metaphor itself, but the ways they’re used are getting us into trouble.”
Ekbia wants people to be careful with metaphors lest they slide down what he calls the “anthropomorphic slope.”
Ekbia goes so far as to call metaphor a social practice, much more than just a linguistic device.
“It’s a social practice in the sense that metaphors allow us to deal with the complexities of the world,” Ekbia said. “It’s a way for us to mobilize resources and communicate projects and goals to others.”
In understanding the brain-computer metaphor, Ekbia digs deep into the precise ways computers process information.
By studying the ways computers interpret words, Ekbia finds connections between how humans and computers process language.
Ekbia explores the abilities of computers to perform complicated functions like writing stories.
And through that, Ekbia understands what the “intelligence” of artificial intelligence really is.
The way these perceptions change, Ekbia said, is key to understanding the power of computers.
“We change along with our technology,” Ekbia said. “We are who we are because of what we have created.”