In summary:
- Clinical accountability: Psychologists remain solely responsible for all outcomes; AI is an assistant, not a replacement for professional judgement.
- Scientist-practitioner rigour: Tools must meet evidence-based standards; evaluate accuracy and 'hallucinations' before use.
- Written, specific consent: Generic 'AI' disclosures are insufficient; the APS Professional practice guidelines require explicit consent for each specific tool and data use.
- Data security: AI outputs are part of the clinical record; ensure compliance with the Privacy Act and prevent data being used for training.
- Bias vigilance: Actively monitor for cultural or demographic biases in AI training data that may disadvantage clients.
- Digital literacy: Maintaining competence in how tools function and evolve is a mandatory professional responsibility.
The APS's recently released AI and emerging technologies professional practice guidelines offer a useful framework for psychologists who are navigating the complexities of AI in practice.
Artificial intelligence and emerging technologies are now a visible part of the psychological practice landscape – from AI-assisted notetaking to digital tools that support assessment, monitoring and client engagement.
While these technologies offer potential efficiencies and new ways of working, they also raise complex ethical, professional and clinical questions for psychologists and clients alike.
The APS Professional practice guidelines for the use of AI and emerging technologies, which are exclusive to APS members, provide psychologists with a clear, ethically aligned framework for navigating these challenges.
The guidelines emphasise that while technologies may evolve rapidly, psychologists’ professional obligations remain constant.
Below, Professor Gemma Sharp, who served as the lead expert in the creation of the guidelines, unpacks some of their practical implications for psychologists in day-to-day practice.
Professional integrity comes first – technology must benefit clients
A central message of the guidelines is that AI tools should only be used when they contribute to improved outcomes for clients, or, at minimum, do not adversely affect outcomes. Efficiency alone is not sufficient justification.
The guidelines ask psychologists to reflect on their obligation to act in clients’ best interests and to carefully consider the balance between potential benefit and potential harm – including risks to privacy, confidentiality and clinical quality.
Professor Sharp notes that much of the early uptake of AI in psychology has focused on administrative tools, particularly AI scribes.
“We have seen the use of AI scribes explode. That’s sort of the low-hanging fruit – the lowest technology complexity, comparatively speaking,” says Professor Sharp, who is an NHMRC Investigator Fellow in the School of Psychology at Adelaide University.
However, she cautions that even seemingly low-risk tools require clear boundaries.
“Some AI scribes now suggest ‘next steps’ for a client. The risk isn’t just potentially suboptimal treatment. It’s that clinicians might start to lean on the technology instead of their own clinical judgement.
"This can potentially narrow thinking, give false authority to generic suggestions and blur accountability, even though responsibility always remains with the psychologist.”
The guidelines reinforce that AI must not substitute for professional decision-making, formulation or clinical judgement. Psychologists remain fully responsible for all aspects of service delivery, including any outputs generated by technology.
Evidence-based practice still applies
The APS guidelines make it explicit that the same evidence-based standards must apply to AI and emerging technologies as to any other aspect of practice.
Psychologists are expected to understand a tool’s intended use and limitations, verify the accuracy of outputs, identify any privacy concerns, and critically evaluate the quality of evidence supporting its use.
This is particularly relevant in a fast-moving commercial environment where claims may outpace research, and where sales professionals may not fully grasp the nuances required for evidence-based care.
“Just because you can do something with AI doesn’t mean you should. I think there can be a rush to develop tools quickly, which haven’t been designed in an evidence-based manner and haven’t been properly evaluated.”
Deep scrutiny and questioning of the digital and AI tools that are incorporated into your practice are critical, says Professor Sharp.
“We’re all trained in the scientist-practitioner model. Don’t forget the scientist part.”
The guidelines also highlight risks associated with generative AI tools trained on broad, unverified data sources, and the importance of checking cited references and underlying data.
Competence and digital literacy are professional responsibilities
The guidelines are clear that psychologists should only use AI tools within areas of their competence, and that AI does not extend or replace professional expertise.
Maintaining digital competence – including understanding how specific tools work, how they are updated, and where bias or ‘drift’ may occur – is an ongoing professional responsibility.
“I always see it as an assistant – never a replacement. And that’s why these critical skills are more necessary than ever,” says Professor Sharp.
She also points to the risk of over-reliance, particularly when clinicians are tired or under time pressure.
“There could be a tendency to just copy and paste the AI-generated output. But psychologists remain responsible for whether that’s an accurate reflection of what actually happened.”
The guidelines recommend regular review and auditing of tools, alongside consultation with experienced practitioners where needed.
Informed consent must be specific, transparent and ongoing
One of the most detailed sections of the guidelines addresses informed consent. Psychologists are required to obtain written informed consent before using AI or emerging technologies that involve client data, influence clinical decision-making, or contribute to client records.
Importantly, consent must be tool-specific. Blanket consent for ‘AI use’ is not sufficient, says Professor Sharp.
“An AI scribe is different from a therapeutic assistant, which is different from a digitised mood diary,” she says. “They’re very different tools with very different risks.”
As part of gaining consent, the guidelines require psychologists to clearly explain:
- what information will be used and in what format
- how data will be stored and accessed
- any privacy risks and safeguards
- clients’ right to withdraw consent
Professor Sharp notes that psychologists may need to slow the consent process rather than assume familiarity.
“We’re actually finding that clients are almost too willing to give consent. It’s our role to go through everything and say, ‘This is where your data is stored. This is how it’s being used. Are you still okay with this?’”
Consent should also be revisited over time, particularly if circumstances change.
“It’s an ongoing discussion, not a once-off tick-box. Someone might come back the following week feeling completely differently after thinking through the discussion further.”
Refer to the APS Professional practice guidelines on informed consent.
Confidentiality, privacy and record keeping still apply
The guidelines reaffirm that AI-generated content (such as transcripts or clinical records) forms part of the clinical record and must be stored securely in line with the Privacy Act and APS record-keeping guidelines.
Psychologists remain responsible for reviewing, editing and clearly identifying AI-produced content.
The guidelines also highlight risks associated with platforms that store or reuse data, even when information is de-identified.
“If I’m paying for an AI-enabled service, my data should not be the training data,” says Professor Sharp. “You really want to know what’s happening ‘under the hood’ of the technology you engage with.”
The guidelines also address situations where clients request to record or transcribe sessions using their own technology, recommending careful discussion and a clear rationale where psychologists decline such requests. There are many reasons why agreeing to clients recording sessions with their own AI tools is generally not advisable. For example, their technology might not meet the same rigorous standards as your own platform.
Bias, equity and cultural considerations cannot be overlooked
AI tools reflect the data they are trained on, which may embed cultural, demographic or linguistic bias. The guidelines urge psychologists to critically assess whether outputs are appropriate for the individuals and communities they serve.
“An LLM [large language model] is only as good as the data it’s trained on. If that data doesn’t represent certain groups, the LLM will likely make incorrect assumptions about these groups,” says Professor Sharp.
Equity and accessibility are also key considerations, particularly where tools require a level of digital literacy or financial access.
“We make assumptions about digital literacy that aren’t always true,” she says. “That needs to be checked before any tool becomes part of service provision.”
Guardrails that will outlast the technology
While specific technologies will continue to change, Professor Sharp believes the core principles of the APS guidelines will remain relevant.
“The bulk of it will resonate for a long time – professional competence, privacy, consent, oversight. These are core skills that will always be crucial.”
Ultimately, the guidelines position AI as something to be approached with curiosity, caution and professional competence.
“It has the power to assist us in doing our jobs very well,” she says. “But therapy still has that fundamentally human-to-human connection.”