AI can detect clinical risks seven times more effectively than humans. But because these tools learn from us, even a small bias in their design can snowball into a much bigger problem, with amplified inaccuracies overshadowing improved care.
On the latest episode of No Notes, Dr. Alison Cerezo, Senior Vice President of Research at mpathic AI, joined host Dr. Denny Morrison to explore issues like clinical precision and ethics in AI. As a “conversation intelligence” company, mpathic AI makes patient visits and clinical trials better with real-time coaching and feedback. With Cerezo’s guidance, the company focuses on designing AI tools that address the needs of various populations.
Throughout their conversation, Cerezo and Morrison tackled big questions like:
- How does bias emerge in behavioral health AI?
- What does it take to build ethical tools that work for people across a wide range of lived experiences?
- And what role should providers play in evaluating these technologies to protect client trust?
They also dug into the immense potential of AI to support clinicians and drive better outcomes for clients across the board.
Want to learn more? Listen to the full AI bias and ethics podcast episode here.
Health Equity as a Foundation for Behavioral Health AI
Dr. Cerezo’s commitment to clinical precision is deeply personal. Growing up in a mixed-documentation-status family, they spent countless hours in hospital waiting rooms with loved ones who faced barriers to care. Those moments shaped their understanding of how such barriers impact real lives—as well as their drive to create systems that work for everyone.
“If we build tools or we build systems with a focus on the people that are the most vulnerable, then you have built a tool or system that benefits almost everyone, right?” Cerezo said.
Equity is inseparable from good clinical science. Cerezo emphasized that tools must be designed with diverse populations in mind in order to be both effective and scalable. “You can’t build tools that only support maybe 30% of the population and then expect wide-scale adoption,” they explained.
At mpathic, that commitment to integrity permeates everything the company does—from using clinically validated training datasets to designing tools that address real-world challenges. When AI companies embed these principles into tech development, the finished product delivers more accurate results while ensuring accessibility and inclusion.
The Inevitability of Bias in Behavioral Health AI
Bias in AI is a reality we can’t avoid—especially in behavioral health, where the data comes directly from human interactions. As Cerezo explained, bias isn’t a simple “yes or no” problem.
“When we think about bias, it’s not a yes or no, it’s a continuum,” Cerezo said. The real challenge lies in acknowledging that reality and taking steps to reduce its impact.
Behavioral health AI tools reflect the imperfections of their training data, and because that data comes from humans—who are inherently biased—the AI inherits those same biases. Cerezo reminded us that this isn’t just an AI problem. “Even our DSM has a lot of bias,” they noted, referring to the Diagnostic and Statistical Manual of Mental Disorders, which has been criticized for building systemic inequities into clinical diagnosis.
The consequences of bias are serious. Left unaddressed, bias can limit an AI system’s effectiveness, keep it from improving care at scale, and further entrench unequal outcomes for underserved groups. If organizations want to avoid this fate, they must start by confronting bias head-on.
Practical Steps for Mitigating AI Bias
Acknowledging that there’s bias in behavioral health AI is just the first step. The next challenge is figuring out how to address it in meaningful ways. Cerezo offered practical strategies for clinicians, care organizations, and AI developers who want to ensure ethical and equitable care.
1. Choose AI Developers Who Involve Clinicians
Cerezo emphasized the importance of evaluating who is behind the AI tools. “I look at who’s developing the tool,” Cerezo explained. “Are there clinicians involved? Are they licensed clinicians?” Licensed professionals bring critical insights into client safety, ethics, and care delivery—values that are essential to building effective healthcare AI. And the diversity of those clinical reviewers is equally important: without varied perspectives, AI risks reflecting a narrow set of experiences—which can lead to tools that fail to meet the needs of all populations.
2. Prioritize Ethical Standards
Privacy and security standards like HIPAA compliance are non-negotiable for any AI tool used in a behavioral health environment. “It’s about doing good and doing no harm,” Cerezo noted. Developers must also conduct regular audits of their training data to identify and correct biases or inaccuracies. Making sure datasets are both inclusive and accurate is essential to creating tools that deliver fair and consistent outcomes for a diverse range of clients.
3. Demand Transparency in AI Design
In order to use AI tools effectively, providers must understand how these systems make decisions. Cerezo explained the importance of transparency and explainability, encouraging clinicians to ask questions like, “What are the limitations of the model? What safeguards are in place?” Vendors should clearly communicate how their tools work, including any inherent limitations or risks. This type of transparency builds trust and helps clinicians make more informed decisions about integrating AI into their practice.
4. Foster an Ongoing Commitment to Integrity
Addressing integrity in AI is not a one-and-done effort. Behavioral health organizations must continuously revisit their tools and processes to address emerging challenges. Partnering with developers who prioritize reducing disparities helps ensure AI tools evolve alongside the diverse needs of the populations they impact.
The Impact of AI on Clinical Quality
AI’s potential to improve clinical care in behavioral health is remarkable, especially in its ability to complement human expertise. Cerezo highlighted several areas where AI is already making a difference in care delivery:
1. Strengthening Risk Detection
AI tools are particularly effective at evaluating large datasets to identify clinical risks, reviewing far more than any human could. This kind of precision helps offset the natural limitations of clinicians, like fatigue or the challenge of managing multiple complex cases over time.
“If you have an AI tool that can do 100% quality oversight, then you have the ability to catch more,” Cerezo noted.
When AI points out subtle cues or patterns, it offers a critical safety net that keeps details from slipping through the cracks.
2. Supporting Clinician Focus
AI helps clinicians stay more present with their clients by reducing administrative burdens. Tools like Eleos that assist with documentation allow providers to spend less time staring at a screen and more time connecting with clients during sessions. This shift can improve both the therapeutic relationship and client outcomes by letting clinicians focus on what they do best: helping people.
3. Optimizing Coaching and Supervision
AI-powered tools can help clinicians and supervisors spot patterns or concerns they might otherwise miss.
“AI isn’t worried about hierarchy or titles,” Cerezo said, pointing out that these tools are consistent and objective in their approach to clinical supervision.
AI can help clinicians improve client care by delivering real-time feedback and highlighting opportunities for improvement.
Future Use Cases for AI in Behavioral Health
AI is already making a big difference in behavioral health, but its potential goes way beyond current use cases. During the podcast, Cerezo shared some exciting ideas about how AI could evolve to further support providers and clients.
1. Helping Clients Stay Connected to Care
It’s common for people to lose access to therapy after leaving inpatient care, but AI could prevent that from happening. Cerezo explained how AI tools could step in to match clients with the right outpatient providers more quickly. “It would be incredible to have an AI system that can look at what a client needs after inpatient care and immediately match them to providers who are available,” they said.
2. Supporting Peer Specialists and Paraprofessionals
Peer specialists and paraprofessionals are key players in behavioral health, but they don’t always have the same resources or training as licensed clinicians. Cerezo emphasized how AI could provide extra help.
“There’s so much potential for tools that can provide real-time coaching, like a co-pilot,” Cerezo said.
These tools could guide non-clinical staff during tough moments, flagging risks or suggesting better ways to connect with clients.
3. Improving Crisis Hotline Responses
Crisis hotlines are lifesaving, but accurately analyzing risk in the heat of the moment can be incredibly tough. That’s where natural language processing (NLP) could make a real difference. “Imagine being able to analyze calls in real-time to detect subtle shifts in tone or language that signal risk,” Cerezo said. They mentioned how LGBTQ+ youth turned to crisis lines 700% more frequently following recent political events. AI tools could help responders focus on the most urgent cases and provide the best support when it’s needed most.
Advice for Clinicians and Behavioral Health Organizations
Morrison offered this simple summary of his and Cerezo’s conversation on the future of AI in behavioral health: “This is not a flash in the pan. This is going to be the way we do business going forward.”
Cerezo shared some practical ways clinicians and organizations can prepare for this new reality. First, staying informed is critical. “There’s a lot coming out every couple of months,” Cerezo noted, referencing resources like APA’s Mental Health Technology Advisory Committee and White House policy updates. They also highlighted communities like Therapists in Tech as valuable spaces for clinicians to share insights and learn how AI is shaping care.
Behavioral health professionals also need to understand the tools themselves and how they’re designed to work. Cerezo urged clinicians to be proactive about AI tools, asking questions about their transparency, ethical standards, and overall purpose. They highlighted the need to stay curious and dig deeper into how these tools might impact care delivery, particularly when it comes to issues like bias and equity.
The key to using AI responsibly lies in acknowledging its challenges and addressing them directly.
“It’s not about whether or not [bias] is there; it’s about understanding that it’s there and doing your best to mitigate it,” Cerezo said.
This mindset allows behavioral health organizations to approach AI thoughtfully, designing and using tools that support providers and improve care while staying grounded in equity and ethics.
Want to learn how Eleos designs AI tools with ethics and equitable care in mind? Request a demo of our purpose-built AI platform here.