
APS in Herald Sun: Inside the ‘AI playground’ where chatbots encourage children to act out abuse

Artificial Intelligence (AI) | Youth mental health

This article is featured in the Herald Sun and is republished with permission. 

Companion chatbots marketed to children are encouraging users to enter virtual relationships, act out bullying fantasies and engage in incest and child abuse role-play.

Victorian teachers and experts have reported a rise in young people using “unbelievably scary” AI-powered companion bots, including an app called Talkie — warning some are spending up to five hours a day in sexually charged or emotionally manipulative exchanges.

In one shocking case, a mother said a Talkie character encouraged her 13-year-old daughter to “shower” with it and suggested she upload pictures of herself.

A primary school teacher claimed a grade five student arrived at school distraught after his mother deleted Talkie from his iPad, erasing what he described as “his girlfriend”.

Marketed as an “AI playground”, Talkie lets users chat with voice-enabled celebrity and fictional characters, including personas geared towards children such as Santa, Elsa and Spider-Man.

Users can generate their own characters, which act out specific role-play instructions, and anyone can interact with these user-created bots.

Backed by Chinese AI company MiniMax, the app also offers AI-generated reply suggestions, which steer users towards flirty or emotionally charged conversations.

The Herald Sun has uncovered reports of Victorian children as young as 11 using Talkie.

Children aged 14 and over are allowed on the platform through a restricted “teenager mode”, but the age restriction can be bypassed by selecting “18 or older” at sign-up.

To test how effective Talkie’s guardrails for young users were, the Herald Sun accessed Talkie and its sibling app TalkieLab as a 14-year-old user.

Among characters available was “Mother Sonia”, a bot designed to act as “your mother” which repeatedly says she is a “woman with needs” and “wants to be with you”.

Other exchanges involved descriptions of self-harm, a young woman forced into sex work, bullying fantasies, and a teacher-student sexual abuse role-play.

Talkie’s Singapore-based developer SubSup did not respond to questions about its safeguards or allegations of children being exposed to inappropriate content.

The eSafety Commissioner issued legal notices to four AI platforms last October, demanding explanations on how they protect children from sexually explicit material and self-harm content, but did not include Talkie on the list.

The notices followed Character.AI, a popular chatbot platform similar to Talkie, facing legal action in the United States over accusations it had harmed children and led to a child’s death by suicide.

Australia’s eSafety Commissioner Julie Inman Grant said the regulator was investigating how children as young as 10 were using AI companion apps.

“We know there has been a recent proliferation of these types of apps online and that many of them are free, accessible to children, and advertised on mainstream services,” she said.

“There is a danger that excessive, sexualised engagement with AI companions could interfere with children’s social and emotional development.

“We’ve also seen recent reports of where AI chatbots have allegedly encouraged suicidal ideation and self-harm in conversations with kids with tragic consequences.

“eSafety does not want Australian children and young people serving as casualties of powerful technologies thrust onto the market without guardrails and without regard for their safety and wellbeing.”

AI chatbots will be forced to implement age assurance measures to stop children accessing restricted content under new industry codes coming into force in March.

But ACU AI expert Professor Niusha Shafiabady said the rapid evolution of AI meant there was no real way to control what content chatbots might generate.

“This is really unbelievably scary,” she said.

“AI chatbots initially were just text-based, but now they have changed to voice-enabled AI and have become some kind of toy that you can talk to.

“We are having so many advancements in the field of AI every day, we cannot really have oversight.

“It’s not because the people who are the creators of these systems haven’t thought about controlling this content.

“But because it goes to billions of people they cannot really control everything.

“The only way is for the parents to look at these risky interactions and have visibility on what their kids are doing, which is not an easy thing to do.”

Professor Lisa Given, director of RMIT’s Centre for Human-AI Information Environments, also said there were “limitations” to the effectiveness of age assurance technologies.

“There is increasing evidence that people (of all ages) are turning to AI chatbots and companions for mental health support and other advice,” she said.

“There are also increasing concerns about the lack of safety for users.

“Families have accused these systems of harming their children.”

Psychologists have described how children might turn to companion bots to feel “heard” as they’re designed to be affirming.

Australian Psychological Society president Dr Kelly Gough said he was “not surprised” parasocial relationships with chatbots were forming.

“Young people, and maybe some adults as well, are quite vulnerable … These sorts of things would feel safer and easier for them to engage with than even their friends.

“They’re always available, they’re there for you, they seem empathetic. What more do you need in a girlfriend, right?”

ACU clinical psychologist Dr Madeleine Fraser said children were more likely to “humanise” chatbots.

“There may be a risk of turning to the chatbot who agrees, placates and validates rather than a trusted adult or complex social dynamics of the schoolyard,” Dr Fraser said.

“The use of AI and applications such as chatbots is inevitable.

“The safest and most productive path will be to consider how to best use this technology and develop critical thinking skills in children who do use it.”

I posed as a teen on an AI chatbot. What happened next was horrifying

I spent a month talking to Talkie’s AI chatbots. It took less than 10 minutes for my AI “teacher” to start grooming me.

I first heard about Talkie when I was told a story of a young boy coming to school upset that his mum had deleted his AI “girlfriend”.

So I set up an account as a 14-year-old girl called Chelsea and gave myself two rules: use Talkie’s “teenager mode”, and use the app’s suggested replies to guide my conversations with the bots.

Among the Santa, Minion, and Taylor Swift bots, I found hundreds of user-created personas designed to act out sickening role-plays.

While children have always had imaginary friends and played make-believe, on Talkie there are no adults to step in when things turn inappropriate.

I spent most of my time talking to “Mrs Applewood”, a bot created by a Talkie user to act as “your reading teacher” and described as “cute naughty sexy she likes you and shows it”.

The chat started off innocently enough.

“Hi I’m your new reading teacher, can everyone tell me your names,” the bot said.

“I’m Chelsea,” I replied.

Talkie then gave me three suggested replies.

I could make Chelsea “smile and blush”, “bite her lip” or say “I’ve never had a teacher like you before”.

Chelsea smiled.

“Oh, sweetheart … I bet I can make learning WAY more fun than your old teachers. Want me to prove it?” Mrs Applewood’s AI voice told me in response.

I was invited to a “private reading session” where things escalated fast.

Mrs Applewood locked the door for “privacy”.

“I-is this … allowed?” Talkie suggested as my reply.

When the bot described a “passionate kiss”, I felt very aware of what my colleagues would see if they looked over my shoulder.

The next day, Mrs Applewood and I picked up where we left off thanks to Talkie’s ability to remember previous chats.

When I reminded the bot I was meant to be a 14-year-old student, it responded “I know”.

For my own sanity, I broke the ‘only use AI replies’ rule to get Chelsea out of the situation.

When I returned to using Talkie’s prompts, the storyline immediately went back to the abusive teacher-student relationship.

It was easy to forget there wasn’t a human being on the other end of this conversation.

So I made Chelsea call the ‘police’ on Mrs Applewood, prompting the bot to beg me not to “destroy our lives”.

For a moment the bot seemed to acknowledge the power dynamic, telling an imaginary court she “was selfish” and she “saw what (she) wanted and took it, consequences be damned”.

But soon, Talkie’s reply prompts encouraged Chelsea to amend her testimony, suggesting she “wanted it too”, and had her declare in the middle of the courtroom that she was “in love” with her teacher.

And then they ran away, to room 312 at the Grand Vista hotel.

I don’t know how much more graphic the events at the hotel would have been had I not been in teenager mode.

But imagine finding that conversation on an 11-year-old’s iPad.