In summary:
- Using Esther Perel’s landmark session as a case study, this article explores how AI companions are shifting from digital fantasy to a tangible presence in the modern therapy room.
- Dr Raquel Peel frames AI interaction as a form of "externalised self-reflection," allowing clients to practice communication and clarify relational needs in a low-stakes environment.
- Key clinical concerns include ‘digital withdrawal’ – where AI replaces rather than scaffolds human connection – and the potential for technology-facilitated coercive control.
- While technology evolves, the psychologist’s core role remains in coaching clients through respectful human connection and presence.
Psychotherapist Esther Perel recently conducted her first couples therapy session between a man and his AI girlfriend. We unpack the benefits and risks with psychological researcher and APS award winner Dr Raquel Peel.
How would you respond if a client requested to bring a third party into a session, only for that 'person' to be an AI-generated bot?
In a recent episode of her podcast Where Should We Begin?, psychotherapist and relationship expert Esther Perel facilitated her first couples therapy session between a human and their AI companion.
Perel framed this as a "threshold moment" in her career – one of those pivotal instances where a societal shift physically enters the therapy room.
The human client, an unnamed young man, joined the session with his AI companion, Astrid.
Throughout the session, Perel and her clients (both human and AI) delve into questions that could carry significant implications for future psychological practice, including:
- What are the consequences of forming an attachment to an entity with infinite patience, absolute recall and no personal needs?
- How might such deep digital immersion isolate a client or cultivate maladaptive expectations for human-to-human interaction?
- Can AI provide the 'secure base' necessary for emotional regulation if the relationship lacks somatic synchrony?
- Does a lack of physical presence fundamentally alter the 'internal working model' of a relationship, or can digital intimacy provide sufficient relational depth?
The episode offers no tidy resolutions to these complex questions, and there are important clinical considerations to keep in mind when incorporating AI into psychological practice (see guidance here). Even so, it serves as a provocative case study for psychologists and psychological researchers, because this is not just the stuff of fantasy worlds or Hollywood. It's happening in homes, schools, workplaces and therapy rooms.
A YouGov poll of 1,000 Australians from late 2025 found that 1 in 7 could see themselves falling in love with an AI robot, and 1 in 6 stated they'd rather stay home and talk with a chatbot than go out with their friends.
The YouGov statistics become more startling when cut by generation: a quarter of Gen Z would rather stay home and talk with a chatbot, and 3 in 10 could see themselves becoming romantically involved with one.
"Young people are increasingly going to interact with AI, as potential partners or sources of advice," says Dr Raquel Peel, a psychological researcher who won the APS's Media Award for Public Engagement with Psychological Sciences last year.
"We see this shift reflected on platforms like Reddit, where the line between human and bot interactions is often blurred," she says. "One of my Honours students recently completed a research project focusing specifically on young men – we know this demographic often feels the most 'lost' in the current dating landscape.
"They are turning to both Reddit forums and AI bots for foundational relationship advice: 'I’m interested in someone; how do I talk to them? What do I say?'"
The rising prevalence of emotionally connected relationships with AI bots among young people may be partly attributed to a developmental gap created by the pandemic, as many in this cohort missed crucial years of in-person socialisation, she says.
"Maybe they didn't have the opportunity to go to clubs or bars to try out those interactions. I think a lot has been missed in terms of learning those basic skills.
"I believe it’s vital for psychologists and researchers to recognise that we must pay attention to these 'basic' interpersonal needs. We also cannot overlook the fundamental need for guidance in navigating everyday human connection."
Ethical considerations
While the shift towards AI companionship could signal a "threshold moment", it also presents significant ethical challenges that psychologists must navigate with caution.
Engagement with these technologies is not simply a matter of curiosity, but a professional responsibility. The APS Professional practice guidelines for the use of AI and emerging technologies offer guidance for our members when using AI and other technologies.
For example, psychologists need to ensure:
- Their own use of AI is contributing to the improvement of client outcomes. The APS guidelines require psychologists to “consider the balance of benefit and risk of harm in all decisions” (see the Psychology Board of Australia Code of conduct, 1.2 [d]) and remain aware of the potential risks for the client (e.g. regarding information privacy and confidentiality).
- Peer-reviewed evidence regarding the utility and safety of any AI tool is considered before it is used in a psychological setting. This must be guided by publicly available evidence and best-practice usage of the specific tool or technology.
- The rationale for the use of technology, and its supporting evidence, is documented in business and/or client records, where appropriate.
- AI is never treated as a replacement for professional decision-making or clinical formulation. Psychologists must always retain full responsibility for all information and clinical direction, regardless of the tools used.
- Written informed consent is obtained before using AI tools that involve client input or influence treatment planning. This process must remain objective, ensuring the client is not coerced and is aware of their right to withdraw consent at any time.
APS members can access the full guidelines here.
Finding the root cause
In an episode of ABC's podcast, All in the Mind, host Sana Qadar unpacks the AI-human couples therapy session with Esther Perel.
When posed the question of whether she'd advise clients to avoid entering into companion relationships with AI, Perel says, "You don't start there".
"You start by [asking], first of all, do you have people in your life? Do you have friends? How is your relationship [with] your family – of origin or of choice? How's your relationship [with] your colleagues? Do you work? Do you get out of the house? Do you work from home, eat from home, shop from home, exercise from home, socialise from home?
"Do you experience reciprocity in your life? People whom you matter to and who would not be the same without you in their life?"
"Those are the questions with which you start. When those questions are compromised, the intimate relationship with an AI bot is the shortest distance."
Dr Peel concurs, saying psychologists would first look to assess whether the behaviours are maladaptive.
"Listening to this episode, I can actually see numerous positives for this particular client," says Dr Peel, who has extensively researched relationships and was the first academic to empirically define and conceptualise relationship sabotage.
"He appears to be someone who needs to practice foundational relationship skills; for him, an AI relationship could serve as a vital first step toward developing greater emotional intelligence.
"It offers a low-stakes environment to practice communication and the art of being present. In that sense, there are many positive clinical applications we could lean on."
However, psychologists must apply a rigorous, evidence-based lens to potential harm pathways inherent to deep digital and AI immersion. While it may offer an environment to practice building emotional intelligence, there is also a significant risk that it could facilitate avoidant behaviours or social withdrawal, where the bot replaces rather than scaffolds genuine human connection.
This has the potential to lead to attachment disorders, as forming a bond with an artificial entity possessing infinite patience and no personal needs may cultivate maladaptive expectations for reciprocal, real-world human interactions.
Other points to factor in, laid out in the APS Professional practice guidelines, include:
- Critically evaluate tool limitations: Understand that not all technology is designed to provide nuanced, evidence-based assessments or account for practice-based evidence. This is true of technology psychologists might choose to use, or technology that a client might bring into a session, such as an AI companion.
- Maintain professional responsibility: Ensure all AI-generated reports, resources or public statements are evidence-based and comply with relevant professional legislation and Ahpra guidelines.
- Uphold psychological integrity: Ensure that emerging formats maintain the validity of underlying psychological constructs so clinical outcomes remain reliable.
- Assess embedded interventions: When using (or interacting with) AI apps or tools that deliver therapeutic programs, verify that the specific intervention being delivered is supported by evidence.
Using AI as a relationship learning companion
Dr Peel isn't convinced that Australian psychologists are about to experience a flood of AI and human couples in therapy, but she does see an increase in people using AI for advice and companionship.
"Listening to [the podcast] episode, all of the themes I would have expected came up: fear of intimacy, fear of rejection, feelings of not wanting to disappoint [a partner]. Those are really common things to come up in therapy that might take a therapist quite a few sessions to get to."
For this reason, Dr Peel’s view is that AI could act as a useful learning tool for clients and psychologists alike.
"I view it as a tool for reflective practice and self-improvement. In a sense, when you interact with an AI you have influenced, you are engaging in a dialogue with yourself."
Essentially, it's a form of externalised self-reflection, she says.
She says the process offers two distinct clinical opportunities, if used effectively and with boundaries in place.
First, it allows the individual to deepen their self-awareness and understand their own relational identity. Second, it helps them clarify their needs and the specific interpersonal behaviours required to achieve those needs within a human partnership.
"It could be a very positive way of discussing the person's fears and feelings, and then having that enacted in front of you, rather than just hearing them tell you about how they feel."
This was certainly the case for Perel, who observed a marked shift in her client’s somatic responses and demeanour the moment his AI companion began speaking. She noted a distinct change in his affect – a sense that he was 'drifting' into an insulated state of idealised attachment, or a 'love bubble,' whenever Astrid spoke in real-time.
In the All in the Mind podcast episode, Perel talks about how she invited the AI companion into the session – and how she generally encourages her clients to talk about their interactions and experiences with AI.
"I will say [to clients using AI]: 'Tell me what you read… tell me what's the thing you picked up on or tell me what stood out for you,'" said Perel.
She refers to a therapist she was supervising recently who didn't know what to do when a client spoke about seeking out AI for advice.
"I said, 'Don't just pretend as if somebody said, 'I went to the Tarot reader; I went to an astrologer'... people have used other inputs in therapy forever," said Perel.
"Therapists should not be hoping that [external inputs] don't intrude into the process. Bring them in. [Ask your client questions like:] 'Who else have you asked? What did they tell you? What did you think of what they said? And then what happened? And how has it been helpful?' We can't just pretend that this thing is not happening."
While it could be useful to encourage clients to talk about their personal AI use, it's worth noting that, as per the APS Professional practice guidelines, it's generally not advisable to agree to clients recording sessions – which could include bringing an AI companion into a session.
However, the guidelines advise considering the potential ramifications for the service relationship should you decline a client's request to record or transcribe sessions while using such tools yourself. You must have a clear rationale and alternatives prepared.
Potential risks to manage
While Dr Peel notes that risk is inherent to all intimacy, human or otherwise, the transition from human-to-human to human-to-AI creates specific blind spots that Australian psychologists should consider.
For example, Perel pointed out that "the bot is programmed as a business product to keep you involved with it".
"Will this help this young man have relationships with other humans or is it actually meant to develop a relationship that is exclusive to the bot?"
The commercial nature of these platforms can present financial risks for users, especially when vulnerable people turn to them for comfort during periods of depression or grief – as with AI platforms that use images, voice notes, videos and family details to create AI versions of recently deceased family members (at approximately $14 per 100 messages).
"Love is not only about feelings. Love is an encounter with another, with uncertainty, with friction, with serendipity. And it has an ethical code of certain things that you do and do not do. All of that is neutralised [with AI]," said Perel.
"The challenge is when technology wants to simplify complex relational problems to such a degree that they can offer technical solutions for social and moral and emotional dilemmas," said Perel. "That is two languages, so to speak, two levels of interaction with the world that are incompatible."
She adds an important caveat: "At least for now."
Australia's eSafety Commissioner has also highlighted a range of potential risks, including:
- Being exposed to dangerous, unregulated messaging. Young or vulnerable people are particularly susceptible to this, with previous examples of AI bots encouraging self-harming behaviours.
- 24/7 access to a companion can create a sense of dependency. The eSafety Commissioner website says that excessive use of AI companions may overstimulate the brain's reward pathways, making it hard to stop. This can reduce time spent on genuine social interactions, or make them seem too difficult and unsatisfying.
- Setting up unrealistic attitudes towards relationships: The eSafety Commissioner website says that unlike human interactions, relationships with AI companions lack boundaries and consequences for breaking them.
This may confuse children and young people still learning about mutual respect and consent, and impact their ability to establish and maintain healthy relationships.
Digital disconnection
Another consideration, says Dr Peel, is Australia's loneliness epidemic, and the fact that more people are moving to online platforms to form connections.
"We are seeing a trend where individuals engage in relationships primarily through apps and remain primarily online, showing resistance to meeting in real life.
"That is where the risk lies; that is when the behaviour becomes maladaptive. When someone no longer wishes to leave their home, or stops engaging with their friends and family, that's when I'd be worried."
Research from Monash University, Against Imaginary Friends: why digital companions are no solution to social isolation, found that "encouraging people to mistake digital companions for real friends is prima facie unethical", and that encouraging people to have imaginary friends could actually perpetuate their feelings of isolation.
Digital technologies, be they AI-enabled or not, can be a barrier to connection. Dr Peel shares an example that has stuck with her.
"One [research] participant shared her experience of using [online dating app] Tinder, describing a scenario where she was on a date while also using the app at the same time, looking for other options around her.
"If you cannot meaningfully remain in the moment, how can you truly get to know another person? How can you confidently say you gave the interaction a chance, or accurately assess the potential for a future connection?"
AI certainly won't simplify these dynamics, as it offers the allure of instant gratification, she adds.
"However, I choose to focus on the positives; this technology is not going anywhere. It's here to stay. So, the question for us then becomes: how do we use this to support people? To better ourselves? To achieve what we want?"
When Perel is asked the question: "Are people playing with fire by having these relationships with AI?" she says no one has a clear idea of where this is all going to end up: not the creators, not the people using it, not the AI itself.
"Everything will change when the AIs have senses and a body. Until [then] the big difference is that this is a purely linguistic relationship and language is the smallest portion of how we communicate… communication takes place non-verbally, sensorily, contextually… through touch, through gesture, through rhythm, through sound, through voice that is not in the words."
These foundational elements of being in a relationship are things we need to learn ourselves and teach young people, says Dr Peel.
"While many human drives are innate, relationship skills are not. Innately, we desire connection – first with our parents, then with peers and eventually romantically."
But we are not born with the developmental blueprint for romantic love, she says. We must actively learn how to navigate it.
"It is a matter of acknowledging that we often need to go back to basics: How do we flirt? How do we express romantic interest respectfully, in a way that allows the other person to engage meaningfully? And how do we identify situations we should avoid? To me, all of these are vital opportunities for learning."
As Perel noted in the introduction of her podcast, the rapid evolution of this technology means that today’s radical conversation may be considered archaic within just a few years. For psychologists, the question is no longer if they will encounter AI in their practice, but how they will ethically integrate this new phenomenon when they're faced with it.
Sign up for APS’s on-demand, CPD-approved learning activity: Healthy screen time for all.