Behavioral health has a (justifiable) reputation for being slow to embrace new technology. AI is changing that, and the field is warming up to the idea of using tech tools to alleviate some of the biggest challenges that come with this line of work. But as more clinicians leverage AI to ease their workloads and improve the care they provide, we need to pay attention to the effect this technology is having now—and how it might evolve in the years to come.
“It’s phenomenal to see how quickly the behavioral health field has adopted a mindset around AI,” said Jeremy Attermann, MSW, Senior Director of Strategy and Ventures at the National Council for Mental Wellbeing, during our most recent episode of the No Notes podcast. Attermann joined podcast host Dr. Denny Morrison, Chief Clinical Officer at Eleos, to explore how AI is reshaping behavioral health as we know it.
The pair also touched on findings from a recent National Council survey that looked at how behavioral health professionals are approaching AI today—what excites them, what concerns them, and where there’s still uncertainty. Read on for key takeaways from their conversation.
Want to listen to the full podcast episode? Check it out here.
Current State of AI Adoption in Behavioral Health
Historically, behavioral health has lagged behind other healthcare fields in adopting even the most basic tech systems—including electronic health records (EHRs). In fact, as of 2023, only 49% of psychiatric hospitals had implemented a certified EHR, compared to 96% of general and surgical hospitals. But according to Attermann, AI adoption in behavioral health has been unexpectedly swift, driven in part by the pandemic’s push toward virtual care.
“For a field that, for better or worse, has been deemed as non-innovative and non-technologically savvy, behavioral health has, I think, done a good job as a field and as a system of trying to embrace what is coming in AI,” Attermann shared.
One area where that’s particularly evident is in AI tools like Eleos, which are designed to simplify documentation and reduce the administrative burden on clinicians. These kinds of “augmented intelligence” solutions—the ones that support provider workflows rather than making clinical decisions—are becoming the AI “gateway tech” for many behavioral health organizations.
As Attermann noted, clinicians tend to be much more open to using AI for tasks like note-taking, which are not directly tied to client care. Tools that make clinical recommendations or interact with clients during the actual care process, on the other hand, tend to feel riskier.
“When we think about technologies to support the administrative burden of clinical documentation, that is a safer space than some of the other use cases that have come out or will come out around AI,” he said.
Check out Morrison’s blog post on the difference between augmented intelligence and digital therapeutics for a more in-depth look at these two distinct types of AI.
Impact of AI on Behavioral Health Providers and Clients So Far
While technology can’t fix every problem in behavioral health, it can make a meaningful impact on providers and their clients—and both Attermann and Morrison believe it already is.
Provider Impact
Clearly, one of AI’s biggest strengths is its ability to take on the administrative tasks that overwhelm and weigh down providers—giving them more time and energy to focus on their clients.
“Technology like AI targets providers specifically, making their lives easier,” said Attermann.
This shift allows clinicians to do more of the work that only they can do (e.g., provide great care) without stressing about paperwork. As Attermann and Morrison discussed, behavioral health providers didn’t enter the field because they wanted to spend their days filling out progress notes. They got into this profession because they wanted to help people, and AI is helping them do more of that—not only by freeing up more time for care, but also by creating breathing room during and between sessions, reducing stress, and improving their overall well-being.
Learn how GRAND Mental Health saved 10 weeks of staff time in 6 months with Eleos.
Client Impact
Speaking of clients—they tend to feel the benefits of AI in ways that aren’t always obvious, but are still important. By lightening the administrative load, AI helps therapists show up more engaged and focused during sessions. This, in turn, improves the therapeutic relationship—which is fundamental to effective therapy and optimal client outcomes.
“Even just having quality documentation from session to session makes a big difference as a clinician,” Attermann said, noting that AI tools help ensure the accuracy of client records, making it easier for providers to stay consistent in their therapeutic approach from session to session.
The research backs up the positive impact of AI technology on the care process. In 2023, the Eleos research team co-led a randomized controlled trial (RCT) to investigate the clinical impact of our technology, ultimately finding that clients whose therapists used Eleos attended 67% more sessions and achieved 3–4x better symptom reduction compared to treatment as usual (TAU). Additionally, those therapists submitted their notes 55 hours faster than those who had no AI documentation assistance.
What the Data Says: Survey Insights on AI in Behavioral Health
The National Council’s recent survey, conducted as part of their AI webinar series, paints a pretty clear picture of how behavioral health professionals view artificial intelligence. The results show a field that is curious, cautious, and still finding its footing.
Familiarity with AI
As Attermann explained on the podcast, when asked about their familiarity with AI, survey participants indicated a wide range of understanding:
- 15% said they were not familiar with AI at all.
- 40% described themselves as slightly familiar.
- 34% reported being moderately familiar.
These numbers tell us that while most professionals are still in the early stages of learning about AI, a meaningful share are already engaging with the technology and its potential use cases in behavioral health.
Belief in AI’s Benefits
When it comes to AI’s impact on care, the responses leaned toward optimism:
- 50% agreed or strongly agreed that AI would enhance the quality of care.
- 47% were neutral, signaling some uncertainty about its potential.
- Fewer than 3% disagreed that AI could make a positive difference.
“This tells me that people are paying attention,” Attermann said. “They’re listening, they’re learning, and they’re open to what AI could bring to behavioral health.”
Current AI Adoption
Despite clear interest in AI among behavioral health professionals, actual adoption remains limited:
- 14% of respondents said their organization was already using AI.
- 59% reported that their organization was not currently using AI.
- 27% were unsure about their organization’s status with AI.
Attermann did note one particularly troubling trend among those who have started experimenting with AI: “About half of those using AI listed non-compliant tools like ChatGPT for clinical documentation,” he revealed. He cautioned against this practice, pointing out that such tools are not HIPAA-compliant and could put client data at risk.
Want to learn more about data privacy and security in behavioral health AI? Check out this podcast episode and this blog recap.
Emerging Opportunities with AI in Behavioral Health
Documentation is only the beginning of AI’s potential in behavioral health. As the technology evolves, new opportunities will emerge to improve training, support clinical decision-making, and enhance care delivery.
Clinical Decision Support
One of the most exciting potential use cases for AI in behavioral health is assistance with clinical decision-making. AI can quickly analyze large amounts of data, identify trends, and surface subtle signals that clinicians might otherwise miss.
“There’s so much opportunity to leverage technology to improve care pathways and treatment decisions,” Attermann said.
By flagging recurring patterns or subtle indicators across sessions, for example, AI can give clinicians a clearer picture of a particular client’s needs and progress.
Of course, this kind of support won’t replace provider expertise—but it will enhance it. With AI acting as an extra set of eyes, providers can feel more confident in their decisions and make sure their clients receive the right care at the right time.
Simulation Training
Another intriguing AI use case lies in training and education for behavioral health professionals. AI-powered simulations can create realistic treatment scenarios, giving providers a safe, controlled environment for practicing their skills in care delivery.
“Simulations could revolutionize training for case managers, peer specialists, and clinicians,” Attermann said. He believes AI can better prepare providers for high-stakes situations, such as crisis interventions—after all, traditional role-playing exercises and classroom vignettes only go so far. AI simulations, on the other hand, can mimic a range of scenarios with dynamic, adaptive feedback, offering providers a highly effective way to refine their skills before stepping into real-life situations.
Potential Risks and Challenges with AI
While AI holds incredible promise for behavioral health, there’s always a possibility that things will go wrong. When speaking about that risk, Attermann didn’t mince words: “At the end of the day, when a technology is even informing or supporting a decision that could lead to anything from mild to catastrophic impact on a client, we have to really think about client safety,” he said.
If a tool has the potential to impact care, it must be subject to strict oversight to make sure it doesn’t do more harm than good.
Patient Safety
One of the biggest concerns with AI is making sure it doesn’t inadvertently harm clients.
“What happens if the technology is making a recommendation that ends up being non-productive, or even harmful, to the person’s outcomes?” Attermann asked.
Faulty recommendations, biased decision-making, or errors caused by flawed data models could all have serious consequences if left unchecked.
“The safety of clients must always come first,” Attermann said, emphasizing that while AI tools can assist clinicians, they should never replace provider judgment. Decisions about care must remain in the hands of trained professionals. AI should only serve as a support tool—not an autonomous authority.
Increased Caseloads
AI can make clinicians more efficient. However, Attermann cautioned against using the time saved as justification for piling more work onto already overburdened clinicians.
“If it saves hours in a given week for a provider, that’s great for their mental well-being,” he explained. “It should not be replaced then by, ‘Well, now you can take on additional clients.’”
He warned that increasing caseloads as a response to time saved could negate the very benefits AI is supposed to provide, leading to increased levels of burnout and souring provider sentiment toward the technology.
Ethical Concerns
Attermann also raised concerns about the use of non-compliant tools like ChatGPT in clinical settings, returning to the National Council’s webinar survey finding that about half of respondents using AI reported relying on such tools for clinical documentation. These tools pose significant risks for both providers and clients, including potential breaches of sensitive data.
The lack of oversight and regulation in AI technology adds to these concerns. Behavioral health organizations have to make sure that any AI tools they adopt meet strict standards for privacy, security, transparency, and accountability. After all, client trust and safety are on the line.
To learn more about building internal AI policies that help mitigate some of these risks in your behavioral health organization, check out this No Notes podcast episode.
The Future of AI in Behavioral Health
The massive potential of AI in behavioral health is undeniable. Beyond automating tasks and reducing provider burdens, AI could play a central role in advancing data-driven care, improving client outcomes at scale, and easing the workforce crisis.
Building a More Data-Driven System
One of the most exciting possibilities for AI lies in its ability to support more objective, outcomes-focused care. Behavioral health has historically leaned heavily on subjective assessments, but AI’s capacity to analyze large datasets could help clinicians consistently make decisions informed by evidence rather than intuition alone.
“We need to shift to more objective, outcomes-focused care delivery—and AI is a big part of that,” said Attermann.
This shift could help organizations better understand what’s working, identify areas for improvement, and make sure clients receive the most effective interventions. Over time, this data-driven approach could raise the standard of care across the field.
Rethinking Client-Facing Tools
Attermann also highlighted one of the most critical challenges in healthcare AI: the rise of consumer-facing mental health apps. While these tools are designed to give clients greater access to care, many operate independently of clinicians, leaving gaps in oversight and accountability.
“There’s a lot of good intent in these apps, but without clinical integration or validation for outcomes, they risk creating silos instead of improving care,” Attermann said.
Building Seamless Integrations Between Tools
According to Attermann, the long-term success of these tools will depend on their ability to connect with clinician-facing systems. For instance, data from a mindfulness or cognitive behavioral therapy app should flow back to the therapist so they can monitor client progress and tailor care accordingly.
“When clients need a higher level of care, the transition shouldn’t feel like starting over,” he said. Integrated systems could allow clients to move fluidly between app-based tools and in-person therapy without losing valuable insights or progress.
Purposeful and Responsible AI Implementation Paves the Pathway to Success
Artificial intelligence is reshaping behavioral health in ways that once felt impossible—streamlining work, improving care, and hinting at a future where data-driven insights seamlessly enhance every step of the therapy process.
But excitement alone isn’t enough to get us there. For AI to deliver on its undeniable potential, it has to be implemented with purpose. That means keeping client safety and provider well-being front and center, designing tools that make care more effective, and making sure that technology’s primary role is to strengthen the human connection that will always be at the heart of behavioral health.
Want to see for yourself how a purpose-built AI platform can alleviate provider burnout and enhance client outcomes? Request a demo of Eleos here.