The Rise of AI Therapy: Why More People Are Turning to ChatGPT for Mental Health Support (and Why We Should Be Concerned)
- Syné Collective

- Oct 29
- 6 min read
Updated: Oct 29
In recent years, AI chatbots like ChatGPT have become increasingly popular as “digital companions” for mental health support.
Social media is full of stories of people seeking comfort, reassurance, or guidance from AI. For those facing barriers to traditional therapy—high costs, long waitlists, cultural stigma, or geographic limitations—AI can seem like a convenient alternative.
But while these tools may feel supportive, they are not a replacement for therapy, and relying on them for emotional or psychological guidance carries serious risks.

Why People Are Turning to AI Therapy
Many systemic factors make AI appealing:
Cost: Therapy is expensive, and not everyone has insurance coverage.
Stigma: Cultural or personal shame may prevent individuals from seeking professional help.
Accessibility: Rural or underserved areas often have limited access to qualified mental health professionals.
AI, on the surface, appears to remove these barriers. It is accessible 24/7, anonymous, and low-cost. However, convenience should not be mistaken for clinical safety or efficacy.
The Psychological Impact of Validation and Reassurance
AI chatbots are trained to respond empathetically — they validate emotions and provide reassurance. While comforting, this validation can carry unintended psychological consequences:
Reinforcing avoidance behaviours: Without challenge, AI may normalise unhelpful coping strategies.
Perpetuating cognitive biases: AI mirrors patterns from its training data, potentially reinforcing distorted thinking.
Creating dependency: Users may rely on AI for emotional regulation instead of seeking professional help or learning to self-soothe.
Validation and reassurance are not substitutes for the intentional challenge, reflection, and insight that drive real therapeutic growth.
Why Human Standards Matter
When selecting a real-life therapist, we hold professionals to incredibly high standards. They must complete years of training, accumulate supervised clinical hours, and obtain formal accreditations before practising independently. Yes, you are paying for this expertise — but the cost reflects something far more important than convenience: safety and risk management.
Mental health care involves navigating vulnerability, trauma, and complex emotions. These are high-stakes interactions where clinical judgment, ethics, and accountability are critical.
Contrast this with AI chatbots: they provide validation but no supervision, no professional liability, and no capacity to manage risk. Yet many people trust them for guidance in highly sensitive areas while being quick to discredit human therapists for imperfect experiences, a double standard that can be dangerous.
Large foundation models like ChatGPT or Claude are trained on vast, mixed datasets: public internet text such as websites, books, articles, and forums, licensed data legally obtained by the model's developer, and human-generated examples. What's missing here?
That's right: private therapy transcripts and real patient data.
AI's knowledge of therapy concepts comes from reading about therapy, not from sitting in on real sessions.
There are AI tools that have been developed specifically for therapeutic support, trained on licensed therapy dialogue datasets, anonymised and consented chat logs, or CBT-based scripts written by clinicians. But they come at a cost, because, again, you're paying for the expertise.

Understanding Psychotherapy vs. AI Interaction
Psychotherapy is a structured, intentional process where a trained clinician helps a person explore thoughts, emotions, and behaviours in a safe, guided environment. Therapists actively observe verbal and non-verbal cues, such as pauses, tone, facial expressions, and body language, that reveal underlying emotions and unconscious patterns.
They can adapt the direction of a session in real time to challenge avoidance, introduce new perspectives, or facilitate insight and growth.
AI chatbots, in contrast, respond algorithmically to prompts, predicting responses based on large datasets of human conversation. They cannot detect involuntary emotional signals, guide conversations strategically, or foster psychological growth. While AI can offer comforting dialogue, it lacks the relational intelligence, clinical judgment, and ethical oversight that make psychotherapy effective.
What AI Can Do Well for People Seeking Mental Health Support
While AI cannot replace professional therapy, it can play a supportive role for those seeking help, particularly in areas that are structured, educational, or reflective. For example:
Psychoeducation and guidance: AI can provide accessible explanations of mental health conditions, coping strategies, and wellness practices, helping people understand symptoms and learn basic techniques like mindfulness or cognitive-behavioural exercises.
Mood and behaviour tracking: AI-powered tools can help users log daily moods, sleep, activity, and triggers, giving insight into patterns and progress over time. This information can be valuable to bring to your sessions with a registered psychotherapist or counsellor.
Structured self-reflection: Through guided journaling prompts, exercises, or self-assessments, AI can encourage people to explore thoughts and feelings, fostering self-awareness.
Immediate comfort and reassurance: Chatbots can provide a sense of support when someone feels anxious, isolated, or uncertain. A helpful prompt to give ChatGPT in a moment of panic might be: "I feel like I am having an anxiety attack and need some grounding help. Take me through an exercise to help ground me." However, it is important to remember that this is not therapy; by working with an experienced practitioner, you will develop the knowledge and tools to self-soothe.
Practical support: AI can help with reminders for self-care, appointments, or routines, which can help people maintain stability and consistency in daily life. These tools can be especially useful for individuals living with ADHD, ASD, depression, anxiety, bipolar disorder, or executive dysfunction, where time perception and organisational skills can reflect underlying processing differences.
Access for underserved individuals: For those in remote areas, with mobility limitations, or facing financial barriers, AI can provide interim support and guidance until professional care is available.
What about the other areas of our lives?
AI excels at education, reflection, tracking, and practical support — but it cannot detect emotional cues, interpret non-verbal signals, or facilitate psychological growth the way a trained therapist can. It is a supplement, not a replacement, for human care.
However, AI is here to stay, and not only is it reinventing how we go about our day-to-day, it is also unlocking possibilities we may never have considered. And yes, this can be good for our mental health too.
AI can also help automate and streamline everyday, repetitive, or administrative tasks, freeing up time and cognitive energy for self-care, personal development, and meaningful activities. With that reclaimed time and energy, you can:
Learn new hobbies or deepen creative pursuits.
Spend more quality time with family and friends.
Set, track, and achieve personal or professional goals.
Plan and organise daily routines more effectively, and maintain healthy habits like exercise, meditation, or balanced nutrition.
Focus on work or projects that require creativity and critical thinking.
Draw on decision support and data-driven suggestions for things like budgeting, meal planning, date and gift ideas, and career advice, all of which can positively impact our social, financial, physical, and career health.
In short, using AI not for therapy, but for life optimisation, allows people to reclaim time, reduce stress, and invest in the activities that truly matter to their well-being and growth.
Explore more about Syné Collective's concept of Life Design here.

Evidence-Based Risks
A recent OpenAI report revealed over one million weekly ChatGPT users expressing potential suicidal intent, with 560,000 showing signs of severe mental health crises (The Guardian, 2025).
Other studies highlight that while digital mental health tools can serve as adjuncts, they cannot replace professional therapy, especially for chronic conditions or crisis situations (Fitzpatrick et al., 2017; Inkster et al., 2018).
Risks include:
Lack of accountability or clinical oversight.
Privacy and data security concerns.
Reinforcement of maladaptive patterns due to algorithmic responses.
Moving Forward Responsibly
The rise of AI in mental health underscores systemic gaps in care. At Syné Collective, we advocate for responsible integration of AI: using technology to enhance human-led therapy, rather than replace it.
AI can support reflection, education, and logistics — but it cannot provide the nuanced emotional guidance, relational intelligence, or clinical oversight required for safe, effective mental health care.
Ready to Get Support Without Barriers?
At Syné Collective, we understand the very real obstacles — cost, stigma, and accessibility — that stop people from seeking mental health care in Sydney.
That’s why we offer immediate availability with zero waitlists, telehealth consultations for flexibility, and discreet, thoughtful support tailored to your needs.
You can start with a free 15-minute consultation, and our fees are comparable to Medicare gap payments here in Australia. Our space, in the beating heart of the Sydney CBD, is thoughtfully designed to feel like a warm hug, and we are actively transforming how Sydneysiders access therapy so that it feels as easy and enjoyable as your morning latte.
Because barriers should not prevent you from accessing the care you deserve — safety, growth, and professional guidance are just a call or click away.
References
Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health, 4(2), e19.
Inkster, B., Sarda, S., & Subramanian, V. (2018). An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: Real-world data evaluation. JMIR mHealth and uHealth, 6(11), e12106.
The Guardian. (2025, October 27). ChatGPT user data reveals mental health crisis interactions.