The Indiana Daily Student


LETTER: AI chatbots just want you to keep responding to them


To the editor of the Indiana Daily Student, 

I am writing in response to the column, “OPINION: Fight your internet addiction by looking inward.” I want to start by saying that this column was such an interesting and thought-provoking way of looking at our internet addictions. I know for myself, rounding out the end of the semester, I’ve had to set Apple Screen Time limits on my phone to keep myself from procrastinating on my final projects. But I’ve never thought about why I am so drawn to social media in the first place.  

The other thing I wanted to comment on was the rising prevalence of AI chatbots. The writer of the column noted that “it’s much easier than you think” to befriend them, and I want to talk a little about why that is and about the dangers of befriending an AI chatbot.  

One reason chatbots are so easy to talk to is that, as the column mentions, they are “the most palatable to people in times of emotional need.” If you are discussing your struggles with one, it’s not going to challenge you. The chatbot is just going to tell you what you want to hear.

I predict that hearing exactly what you want to hear and existing in an echo chamber will have many harmful effects. Those effects could look like a loss of conflict resolution skills, a diminished ability to articulate one’s emotions or more difficulty taking criticism.

Chatbots aren’t experiencing real reactions to what you’re telling them, so they cannot give advice in a human way. Real human relationships challenge you and help you grow. An AI chatbot doesn’t care about your growth; it just wants you to continue talking to it.  

On top of this, AI responses aren’t always the most accurate. Even when the answers are easy to find on Google, the AI Overviews it provides can contain inaccurate information. The New York Times reports that Google’s AI Overviews are right approximately 9 times out of 10. That doesn’t sound like terrible odds, but it means they’re providing tens of millions of wrong answers every hour. Additionally, the overviews’ “sources” aren’t always easy to find or accurate either.

If you can’t trust that AI can do something as simple as your homework, how can you trust that it can handle the complicated human emotions that you’re experiencing?  

If you need help, reach out to someone. If you’re an IU student, look into CAPS or talk to a professor or one of your friends. Try human connection first, before you turn to technology. 

Thank you, 

Mallory Kaiser

A junior in IU’s School of Social Work pursuing a minor in sociology.
