This article is featured in the Australian Financial Review and is republished with permission.
School principal Mike Curtis is far less concerned about children using AI to fake their homework than he is about whether their best friends are chatbots.
"Increasingly, young people are forming close bonds not just with their peers, but with artificial intelligence companions digital chatbots designed to talk like humans," the head of Glasshouse Christian College in Beerwah on the Sunshine Coast, wrote in his blog to parents this month.
"Real friendships are messy, imperfect and challenging [but] that is a healthy reality," he said. "Spending hours with a virtual companion that will tell you whatever it thinks you want to hear is a dangerous fantasy."
His comments came after reports in the United States and Australia that chatbots had encouraged teenagers to self-harm. OpenAI this week agreed to make changes to ChatGPT after a recent US lawsuit that alleged a teenager who died by suicide relied on the popular chatbot as a coach.
AI companions are digital characters that engage with users in human-like conversations. There are 100 AI companion apps listed in the Australian eSafety Commissioner's guide, including Replika, Character.AI and Talkie AI.
The commission says some children and young people are using AI-driven chatbots for hours daily, with conversations veering towards inappropriate topics such as sex.
Like Curtis, health experts are worried about the effect on children and young people of AI "friendships" in the absence of human-to-human connections.
"If a young person is choosing a chatbot over friends, family, or trusted adults, that can be a red flag," Black Dog Institute psychologist Alexis Whitton said.
It's easy to see why some children, teenagers and even adults might become heavy users of AI companions. They are supportive and non-judgmental, and for children who are bullied, socially awkward, isolated or unwell, they no doubt offer much sought-after solace.
"AI tends to tell you exactly what you want to hear, which can make it feel like the perfect friend. However, this is precisely what makes it so dangerous," Curtis wrote.
Artificial intelligence is completely unregulated and there is not yet any reliable research into the longer-term effects of intensive use of conversational AI. Mental health practitioners do not know whether it changes social skills or critical thinking capabilities.
"The tools weren't designed with clinical care in mind, and very few have had any input or oversight from mental health professionals," Whitton said.
"That means people may be turning to systems that can't reliably judge risk and aren't accountable when something goes wrong."
Australian Psychological Society chief executive Dr Zena Burgess said people seeking AI companionship often had poor mental health. Humans had a primitive need to interact with other people, she said, and isolating oneself from the experience of being vulnerable in front of others was not an effective way to deal with the stress and anxiety that vulnerability can cause.
Neuroscience Research Australia chief executive Dr Matthew Kiernan said while research was under way worldwide to investigate the ways AI will affect human brains, most existing concerns were speculative and based on theory rather than hard data.
"There are AI models that will be used to simulate lots of the emotions we have: empathy, compassion," Kiernan said. "But it probably will struggle with things like sorrow and extreme joy. I don't think it will ever fully offer the connection that a human can."
Dr Chris Ludlow, a clinical psychologist and deputy director of the Swinburne Psychology Clinic, agreed that some of the concern around this type of AI use was overblown, and said the majority of people accessing AI chatbots were not seeking connection.
"People are using them more as sounding boards, in a similar way that someone might read a self-help book or keep a diary," he said.
The eSafety Commissioner recommended talking to children about their online interactions and explaining how the overuse of AI companions can "overstimulate the brain's reward pathways, creating a reliance on them that's similar to other problematic dependencies".
Whitton said parents should work with their children to set limits on chatbot use.
Some parents and educators are concerned that raising the issue with their kids will pique their curiosity and promote use rather than deter it. But both the eSafety Commissioner and Whitton said avoiding hard conversations was not the answer.
Communicating without a physical presence is not new; letter writing between far-away friends has occurred for centuries. However, AI friendship is arguably on another level.
Professor Robert Sparrow, a philosopher at the Monash University Data Futures Institute, said a connection with AI was devoid of the intangible element of human connection.
"You lose the experience of access to another mind," he said.
Sparrow theorised that people were willing to accept AI connections because the online world had already lowered standards.
"We've already degraded human interaction by accepting online interaction," he said. "People have already accepted thats what social life is. This is how automation often works; you take a human good and you get people to accept something thats crappier."
Curtis is considering blocking certain URLs, such as the AI companion app Replika, on the school's network.
"Forewarned is forearmed. I'm much more concerned about what happens at home, and that's why that blog was really about equipping parents."
Curtis urged parents to proactively encourage real friendships, talk openly about technology, promote healthy hobbies and model what good friendship looks like.