Artificial intelligence (AI) hasn’t been part of everyday life for very long, but in the past couple of years it has found its way into our homes, workplaces and now our emotional lives. I’ve had many conversations with mental health professionals about the ethics of using AI as a therapeutic tool. I have also kept a close eye on emerging research and real-life examples that show how it can support people in meaningful ways, as well as the risks it carries. Many questions and unknowns occupy this space, especially as AI’s sophistication develops faster than our ability to thoughtfully examine its place in our world.
I first tried AI through ChatGPT, a large language model, shortly after it launched in November 2022. I felt both curious and cautious. Like many people, I grew up with books and movies that portrayed AI in frightening, end-of-the-world scenarios, and those early interactions felt surreal. At the same time, I’ve seen how new technology in health care can make a real difference when resources are stretched thin. For example, AI is being used to offer cognitive tools designed for people living with dementia, and AI-driven apps help youth regulate emotions and build skills for navigating difficult feelings. But when it comes to exploring the deeper parts of our emotional lives or complex mental health concerns, I’m not convinced AI can replace human connection and the ethical guidelines (do no harm) that serve as guardrails.
One story that recently caught my attention was a guest essay in The New York Times written by a clinical psychologist. He recounted his experience using ChatGPT to help process grief and described the interaction as “eerily effective.” The more he shared about his emotions, the more the AI mirrored his tone and helped him organize his thoughts. In essence, he was using it like an interactive journal, a tool to help him express and reflect on his experience. I was struck by how his experiment highlighted a core tenet of human connection: to share and be heard. But was that what was really happening?
After reading about his experience, I decided to try it myself and see how I might feel when AI responded to an emotional outpouring. What would it be like, I wondered, to have this kind of resource in our pockets whenever we needed it? So I wrote out some feelings of sadness and frustration I’d been carrying, in a stream-of-consciousness style. Within seconds, ChatGPT replied with a mirrored response, much like the one described in the essay, and it felt compassionate and understanding. I noticed quickly how comforting it felt to be heard.
But there was more to it than that. On deeper reflection, what I found most helpful was the act of putting my thoughts and feelings into words, much like I would in a journal. The quick feedback from AI felt a bit like having a caring listener in real time. In the moment, it gave me some relief from my emotions, and it was certainly validating. But something important was missing from the exchange: curiosity.
In my work with clients, curiosity goes hand-in-hand with empathy. It’s the kind of curiosity that grows from genuine connection: wanting to truly know someone, noticing subtle shifts in tone or body language, and asking questions that lean toward understanding rather than judgement.
A few days after my AI experiment, I shared the same feelings with a trusted friend. She also offered empathy and validation, but she went further, asking thoughtful questions that encouraged me to explore my emotions more deeply. The experience of being asked, understood, and cared for was invaluable. It helped me process my feelings within the safety of a real relationship, shaped by another person’s thoughts, experiences, and perspectives. Just as important was her presence, the sound of her voice, her expressions, and her body language. In short, the AI’s response felt soothing; my friend’s response felt healing — rich, textured, and alive.
While I do see potential for AI tools to offer moments of reflection, encouragement, or clarity throughout the day, there are clear limits. Research points to important risks too, such as the possibility of errors, the reinforcement of unhelpful thinking, and the absence of the trust-building that is essential for healing. For anyone struggling with more serious mental distress, AI is not set up to support the needs of someone in crisis, and relying on it could put a person at greater risk of harming themselves or others. A trained therapist would recognize those risks and would be guided by ethical codes of conduct to connect that individual with additional supports and services.
Do I think AI has a role in mental health support? Yes — but not on its own. When used alongside counselling or within the safety of supportive relationships with friends and loved ones, AI can be a helpful tool. Still, the nuances of human connection remain at the core of healing, and the ethical care and resources that therapists provide will always be essential.
If you are having thoughts of suicide, call or text 988 to reach the Suicide Crisis Helpline (24/7) or visit 988.ca.
Sarah Tesla is a registered clinical counsellor on the Sunshine Coast who supports the diverse needs of clients in rural and remote communities. This column is informational and is not intended to be a substitute for counselling support or services. If you or someone you know is struggling with their mental health or substance use, please seek professional support.