Commentary: When AI therapy helps
I’ll admit something personal: I’ve talked to an AI therapist. It was remarkably useful, meeting me in tough moments whenever I needed it with words that actually helped me take a step back, breathe, and process. Not every tool I tried was that helpful, though. That mix of good and bad is what makes artificial intelligence so fascinating and so important to get right.
The truth is, not all AI therapy tools are created equal. A search for “AI therapist” brings up dozens of results, ranging from general-purpose chatbots to companion AI platforms. It’s important to know that just as not every human therapist is right for every patient, not every AI is designed with care or quality in mind. Some tools oversell themselves, lack guardrails, or offer advice that is unhelpful at best and harmful at worst. Many people are using general-purpose AI platforms for therapeutic purposes. But some platforms, like Wysa, Youper, and Ash, are thoughtfully built, clear about their limits, and genuinely democratize access to a kind of mental health support millions of Americans badly need.
We don’t need to be reminded that we’re in the middle of a mental health crisis. Rates of depression, anxiety, and loneliness are high. Even those lucky enough to have insurance or means often find that human therapists are booked solid for months. And when an appointment is finally found, distance and transportation can be obstacles of their own. For people in pain right now, that’s not good enough. They deserve options and access.
This is where AI therapy tools can make a difference. They are available 24/7 wherever there is a smartphone or computer with an internet connection. They can provide immediate, low-cost support in moments of need. For many, they are not a replacement for professional care but a bridge to it or a better alternative than nothing at all. They can be a first step toward seeking help when the alternative might be silence and isolation.
But here’s the danger: policymakers who see headlines about “AI chatbots gone wrong” might reach for the bluntest tool in their toolbox—bans or licensing requirements so onerous that only the largest corporations could comply. That may feel like safety, but in reality, it locks out innovation and limits access. The result? Fewer tools on the market, less diversity of design, and fewer chances for someone like me—or millions of others—to find the right fit.
This is why Utah’s HB 452, passed into law in 2025, should be seen as the gold standard for policy. Instead of banning AI therapy outright, Utah took a narrow, careful approach: requiring transparency, disclaimers about what the tool is and is not, and clear rules around use in clinical or therapeutic contexts. The law recognizes that these tools are not professional therapy, yet they can still play a helpful role. In doing so, it preserves access while putting reasonable boundaries in place.
Contrast that with broader regulatory models, like those advanced in states such as Illinois and Nevada, which prohibit AI therapy platforms from advertising themselves as such and even restrict human therapists from using AI systems in patient treatment. By treating an experimental wellness chatbot the same way as a high-risk financial or medical AI system, they risk freezing out entire categories of beneficial technology. That’s not protecting consumers; it’s depriving them of options to find the help they’re looking for.
There’s also an issue of equity here. Wealthier Americans can afford private therapy or concierge services. What about rural residents, uninsured families, or those for whom even a $50 copay is out of reach? For them, AI therapy tools can be an entry point and sometimes the only one. It would be a bitter irony if well-intentioned regulation left the most vulnerable with even fewer choices.
My own experience drives this home. When I tried different AI therapy apps, I found some to be frustrating, even unserious. But I also discovered one that helped me manage stressful situations, giving me feedback and reflections that made a real difference. It reminded me of the times I’ve processed tough situations in a diary, except this time the diary gave feedback on what I was saying. Was it perfect? Probably not. But neither are human therapists, by the way. In a season when access to a human counselor wasn’t possible, it was enough to keep me grounded.
Public policy should aim to preserve that possibility. Instead of defaulting to bans, lawmakers should follow Utah’s lead: transparency requirements, clear labeling, and targeted rules where actual harms are likely. This strikes the right balance between safety and access. It allows the good tools to flourish, the bad ones to fade, and people in need to choose what works for them.
AI therapy tools are not the enemy of mental health. They are part of the solution. They’re imperfect and still evolving, but they’re valuable. We should regulate them wisely, not reflexively. For people like me, and millions of others, that could make a big difference.
____
Taylor Barkley is director of Public Policy at the Abundance Institute.
_____
©2025 Tribune Content Agency, LLC.