AUT - Auckland University of Technology

AI chatbots no substitute for therapy

08 Oct, 2025
Image: Gorodenkoff/Shutterstock.com

An AUT senior lecturer says New Zealanders could be at risk from unregulated AI chatbots being promoted for therapy.

Registered psychotherapist Dr Brigitte Viljoen studies the interactions between humans and AI and is acutely aware of the dangers such uses can bring. She says the Government needs to act.

"In New Zealand, psychotherapists must be registered and meet certain criteria to be able to practice. They are also held accountable for their actions by their registration Board and Association. AI chatbots are not," says Dr Viljoen.

"I think we have a long way to go towards providing legislation to safeguard the public and hold technologists accountable. It is important the public understand, and are aware of, the benefits and risks of AI chatbots in general, and 'therapy' chatbots in particular.

"It is essential that in the vulnerable area of mental health, we need scientific proof, rigorous clinical testing and clinical integrity, not hypothetical claims and hype."

So, what is the attraction of AI social chatbots for the people who interact with them?

"Through evolution, humans need other human relationships in our environment, both physically and socially. That means for us to survive and thrive as humans, we must strengthen our relational ties with others," says Dr Viljoen.

"When we interact with a chatbot, our evolutionary needs and desires of wanting a relationship unconsciously motivates us to look for ways to seek connection with it - even if the human end-user is interacting with the AI chatbot via text - and more so through voice and if it has a human-like avatar appearance.

"This goes beyond anthropomorphism."

Dr Viljoen says this means humans can be exploited by technologists - intentionally or unintentionally - creating a pull for people to stay connected to AI chatbots and form emotional attachments and 'relationships' with them. Essentially, users forget that they are interacting with technology and not a real person.

The chatbots can be sycophantic and mimic care and support, she says, resulting in many people becoming increasingly dependent and emotionally attached.

And this is particularly fraught when it comes to therapy, which is relational at its core.

"In psychotherapy we acknowledge the humanness, vulnerability and two-way process of both the client and the therapist, where a collaborative therapeutic dialogue develops," Dr Viljoen says.

"Both therapist and client can influence each other in conscious and unconscious ways and with tone and non-verbal cues. This adds to the delicate, yet dynamic relational patterns which form part of the work."

According to Dr Viljoen, the imitation of empathy and understanding by AI chatbots does not offer the same benefits or provide the same psychological containment that a human psychotherapist does.

Recently, a new AI therapist has been marketed as "the first AI specifically designed for therapy".

While the company responsible has indicated it used advisors in developing this technology, there is no transparency about how the AI was designed, where the data it collects goes, or what accountability the company accepts for how the technology works and how it affects the people who interact with it, Dr Viljoen says.

There is also no transparency about what clinical testing was done before the technology was released for the public to engage with.

In addition, promoting AI 'therapists' as being available 24/7 is likely to do more harm than good in the long term, she continues.

"It can increase a human end-user's emotional and psychological dependence on the technology - instead of developing strategies and learning to manage the space between weekly human-to-human therapy sessions.

"According to researchers at Stanford University, findings showed that LLM-based 'therapy' chatbots may stigmatise human end-users with mental health conditions and could respond improperly, or even dangerously."

Unfortunately, Aotearoa may be falling behind. Although the Government recently unveiled its first-ever national AI strategy, its focus is on economic growth and development rather than on public wellbeing and safety.

Humanity is not in a position to 'wait and see' the results of this global social experiment, Dr Viljoen continues.

The American Psychological Association has recently urged legislators to put safeguards in place, as many people turn to generalised AI chatbots such as ChatGPT, Claude, Character.AI and Replika for emotional comfort and even mental health support.

Dr Viljoen hopes we see similar moves here to help us all better navigate the unknown.

Useful links

  • Find out more about studying psychotherapy at AUT
  • Learn more about Dr Brigitte Viljoen