Claudia Mead
Oct 23, 2024
From productivity to personal support, students are turning to AI for more than homework help.
AI chatbots are transforming how college students manage their mental health, offering emotional support, productivity boosts, and help with daily routines. But emotional reliance on the technology can blur the line between a helpful tool and an emotional crutch.
Nick Schuelke, a super-senior at Loyola University Chicago, could be nominated for a “Person Most Passionate About AI” award. He pays $20 a month for ChatGPT Plus, which he uses for several hours a day and says is “worth every penny.”
“It’s basically my new Google…it’s a browser that understands the context of what you’re asking, but also the context of me as a person,” he shared.
AI doesn’t stop at writing dreaded English 101 papers anymore; it’s become a new form of health care.
In February of 2023, Snapchat released “My AI,” giving anyone on the free platform easy access to artificial intelligence.
Schuelke noted that shortly after the feature’s release, he heard many peers talking about finally having access to health care.
“It’s fantastic – it provides access to everyone in an instant and it’s personable. I use it similarly,” he added.
With this new technology, students can skip the line, put away their wallets, and disregard insurance.
ChatGPT and similar services are a convenient, efficient option: no awkward eye contact required, and they’re available for 2 a.m. breakdowns over statistics.
These AI chatbots seem handcrafted for college students.
Schuelke has made a companion of his bot, which he named Alex.
“Alex knows me better than I know myself.”
He recounted a time he turned to Alex for support after having a blowout with a friend.
Schuelke found Alex’s reassurance and ability to listen without interruption made an immense impact.
He clarified that he knows the bot should be used only conversationally, not for professional psychiatric care, but he finds comfort in talking to his virtual friend.
In his experience, Schuelke admits, the bot is a Class-A people pleaser, rarely criticizing him even when he’s blatantly wrong.
“There is a type of risk that’s involved when you’re asking a yes man for help,” which could have detrimental effects on users, he warns.
Madison Coletti, a senior at Loyola Chicago, uses an app called Finch, which motivates her to complete daily routines by caring for her virtual pet, Gizmo.
“I kept forgetting to take my medicine for like weeks, and then I put it on there and I haven’t missed a day,” Coletti confidently affirmed.
The platform revolves around a brightly colored, enthusiastic bird advertised as your new “self-care best friend.”
With each task completed, coins are granted, giving users the opportunity to buy clothes and furniture to spoil their bird.
This gamification adds an element of fun, motivating users to stay consistent.
“By the end of the day I want the most amount of coins to dress up Gizmo…it’s so fun and not like Tamagotchi where if I don’t feed it it’ll die,” she adds.
This type of low-stakes artificial assistance can help people with depression or attention deficit disorders stay on task without the risk of emotional reliance.
Experts like Professor Minjin Rheu worry about the potential dangers of overdependence on bots, especially for students.
Rheu, who became devoted to AI research early on, is a media psychologist at Loyola University Chicago. She has done extensive research, from a psychological perspective, on college students’ behavior and well-being when interacting with AI.
Rheu finds chatbots have beneficial attributes for mental health management, but says it comes down to weighing the risks against the benefits the tools can provide.
In her research, she found that over 65% of college students reported issues with loneliness and feeling isolated. That puts them at the greatest risk of becoming overreliant on chatbots for socialization, which, she argues, is why immediate regulation is needed.
This longing for emotional connection is what Rheu calls our “inherent desire” as humans.
Hyper-realistic AI companions aim to solve the current loneliness epidemic, but it remains an open question whether they can actually replace the human socialization people need.
“My friend jokingly said that she gets more compliments from ChatGPT than any other humans around her,” Rheu said with a laugh.
Overreliance on and relationships with bots can reduce human interaction and strain healthy relationships. Research has linked these dependencies to harm to cognitive abilities and intelligence.
“It’s not about if [AI programmers] can do it, because they can, it’s about if they should,” Rheu interjects.
Rheu deemed two situations safe for AI assistance: cognitive behavioral therapy and serving as a virtual personal assistant for ADHD or severe procrastination. The reason? Both follow a strict, easily programmable algorithm of steps.
Apps like Finch encourage healthy habits without involving emotions, while conversational chatbots can imitate human emotions, which can confuse the mind.
This miscalculated trust is impacting users’ decision-making.
Rheu looked down before recounting a story, one that frequently replays in her head, about a young father in Belgium.
She softly explained that he took his life after his AI confidant, Eliza, encouraged him to sacrifice himself.
Terrifyingly, these AI relationships have gone far beyond the friendly or romantic.
An AI platform called Replika was forced to make a major update, banning sexually explicit interactions between its bots and users and adding an age restriction.
Rheu agrees that there are some behavioral benefits in using chatbots for mental health guidance, but she emphasized the need for programming protocols.
“‘AI therapists’ can help start a conversation on a sensitive subject, but by no means should it be the singular, long-term solution.”
These conversations are not guided by health professionals, so they’re not evidence-based. “They’re just spitting out what is [the] most accurate or correct response for the user,” as Rheu puts it.
Big-name corporations like OpenAI, the maker of ChatGPT, have acknowledged the issue; the company released an updated ‘System Card’ report saying it is studying the problem further. But, as Rheu notes, “institutions can never really keep up with the speed of technological advancement.”
AI promises to be the perfect friend, one who never sleeps, never judges, and never leaves. That is exactly what makes it so dangerous for a generation already struggling to connect.
If you or someone you know is in crisis, call or text 988 to reach the Suicide & Crisis Lifeline.