The APS believes that AI has the potential to deliver considerable benefits for humanity, including improved health, wellbeing and human potential. Safeguarding mechanisms must, however, keep pace with AI advancements so that individuals and society remain safe. Psychologists possess extensive expertise that can help guide the development of fast-changing AI technologies and shape the guardrails that govern their use in Australia.
The APS submission drew particular attention to the following considerations surrounding AI regulation:
- Policymakers and regulators must understand the full extent of the social and psychological benefits and harms associated with AI as they design, implement and evaluate AI safeguards. A safe and trusted AI ecosystem that aligns with human values and societal needs will depend on integrating insights from psychological science, and on psychological practitioners and researchers, computer scientists and consumers working closely with regulators and policymakers over time.
- It would be undesirable for regulatory devices to transfer the burden of risk back to the individual. Instead, the APS supports a risk-based approach that focuses on identifiable harms, holds AI developers accountable in cases of misuse, and includes psychologists as an integral part of any impact assessment process.
- A regulatory system must also include public education, alongside industry guidelines, standards and frameworks directly informed by industry peak bodies and professionals, to guide the safe use of available AI systems in a way that minimises the potential for negative psychological and social impacts.
- While the APS embraces validated AI advancements in health and mental health care, we remain wary of uses of AI that overlook essential human needs or paper over shortcomings in the care system.