How CHAI is turning good intentions into real-world safety protocols for healthcare AI vendors and users

Nearly every aspect of clinical care has to pass some kind of test to become an accepted, widely deployed practice.

Medications go through clinical trials. Procedures are reviewed—and then reviewed again. Even the billing codes providers submit to get paid for their services were vetted by multiple parties before landing in EHR and RCM platforms. But there’s one emerging area that—while exploding in usage—is still relatively “untested”: tech tools, especially artificial intelligence.

In the rush to fix documentation, reduce burnout, and uplevel care, AI has taken hold in behavioral health almost overnight. But the usual safety nets—like rules and regulations designed to keep clients, and their sensitive data, safe—haven’t kept up. What counts as a trustworthy AI tool? Who decides how to define “safe” and “effective” AI?

On a recent episode of the No Notes podcast, host Dr. Denny Morrison sat down with Dr. Brian Anderson, CEO and Cofounder of the Coalition for Health AI (CHAI), to tackle these questions head-on. Dr. Anderson brings years of experience leading both government and private-sector efforts to set practical (not just theoretical) standards for healthcare technology.

Their conversation covers where we’re falling short, why “responsible AI” can’t be just a buzzword, and what it’ll take for behavioral health to hold this rapidly expanding technology to the same standards as the rest of clinical care. From the early days of CHAI to the tough reality of building consensus among leaders from different areas of healthcare, they dig into what’s happening now—and what’s coming next for organizations that want to lead, not just react, in the age of AI.

Didn’t catch the episode? Keep reading for the highlights.

Want to listen to the full conversation? Check out the podcast episode here.

The Long Road from “Responsible Use” to Usable Standards

Almost everyone in healthcare can agree on big-picture principles—like fairness, transparency, and accountability. But agreeing that those things are important isn’t the same as deciding how they should be measured and upheld in the real world, especially when it comes to AI.

As Dr. Anderson explained, “We have general agreement at a 50,000-foot level about the principles of responsible AI…but at that technically specific level about what that actually means, we don’t have consensus.”

It’s one thing to say AI should be fair; it’s another to codify exactly what “fair” means in theory, policy, and day-to-day clinical workflows.

Dr. Brian Anderson, CEO of the Coalition for Health AI (CHAI), explains the origin of the organization’s mission to bring together the public and private sectors to define what responsible AI actually looks like in healthcare.

That’s why Dr. Anderson and CHAI are focused on building a “common definition about what good, responsible AI looks like.” To be useful, this consensus must move from lofty ideals to detailed, concrete standards that developers and clinicians can incorporate into their daily work.

CHAI’s methods are hands-on and collaborative: they bring stakeholders together in working groups to hash out best practices for real clinical scenarios. “CHAI is all about bringing together organizations and facilitating best-practice conversations,” Dr. Anderson said. “The atomic unit of CHAI is these working groups.”

It’s a slower path than simply issuing a set of guidelines, but it’s necessary—especially because the field is still debating even the most basic technical questions. As Dr. Anderson pointed out, “We don’t have agreement in the generative AI space on [questions like] how do you measure bias? What does bias look like at a technically specific level?”

So, progress is measured in small, deliberate steps that eventually add up to bigger advancements in our collective clarity around—and adherence to—healthcare AI best practices.

“We are very much at the beginning,” Dr. Anderson noted. “The hope is that as we take these little tiny bites out of this big elephant, use case by use case, we’re going to begin to develop a set of testing and evaluation criteria, a set of best practices and standards that the industry can use and customers can use to evaluate model performance.”

Still, considering the urgent appetite for standards among healthcare providers and administrators, the pressure is definitely “on.” At CHAI, that interest has fueled rapid growth. “We started with eight organizations,” Dr. Anderson said. “We now have over 3,000. I think we’re the largest health AI coalition on the planet.”

Standards Are Just the Beginning of Trust

Granted, adopting a set of standards or sporting a “certified AI” label isn’t enough to build trust in behavioral health. Providers and clients want to see real proof that the AI involved in their care works safely in the real world.

As Dr. Anderson put it, “Trust in this space is not going to come from industry and technology companies saying, ‘Hey, we have standards.’ Trust is going to come when those models are deployed and customers…actually see those models working.”

At a minimum, transparency should be non-negotiable. Even if a client isn’t asked for full consent, they deserve to know when AI is part of their care. “As a minimum level of transparency, patients should know as a disclosure if AI is involved in their clinical care,” Dr. Anderson noted. “I would think that’d be helpful.”

Dr. Brian Anderson, CEO of the Coalition for Health AI (CHAI), explains that to build trust in AI among stakeholders in the healthcare community, those stakeholders must see the models actually working and adding value for providers and patients.

But transparency is only one piece of the puzzle. The rules that protect patient data (like HIPAA) don’t always follow when clients take their information elsewhere—which creates a lot of murkiness. “If I download my data from a health system…it’s my data, so I can download it,” Dr. Anderson explained. “But once I have it, if I share that data with a third party…that company is not required to abide by HIPAA.”

While national standards are slow to materialize, states are stepping in to fill the gap, each with their own approach. “I think it’s going to be harder to get consensus at the federal level,” Dr. Anderson said. “Now, at the state level, we’re talking to multiple states—about 20 or so right now—that are contemplating how to do this.”

Genomic data offers a warning: some types of health data may never be fully de-identified, no matter how careful we are. “Genomic data…can genomic data ever really be de-identified?” Dr. Anderson asked. “[But] it’s so important when we think about targeted therapeutic development, or different drugs and how they interact with our bodies, or predispositions to certain kinds of diseases.”

In the end, he argued, it all comes back to education. Clinicians and patients alike need to be equipped to use these tools wisely—and to ask the right questions before firing them up in a treatment scenario. “You want to teach someone how to use it before they use it…that’s a massive undertaking,” Dr. Anderson said.

The Promise and Limits of AI in Behavioral Health

For all the fears and troubling headlines, artificial intelligence does bring new possibilities to behavioral health, especially where access and resources are in short supply.

“One of the things that excites me the most about AI is its ability to meet people where they are and provide the kind of access that we really need if we want to live healthier, more complete lives,” Dr. Anderson said.

In places where waiting lists stretch for months or clinicians are few and far between, AI-powered tools could help close critical gaps. And for some, AI is even perceived as “more empathetic,” at least when it comes to patience and non-judgment. “The experience by the end user, by the patient, [is] perceived as more empathetic,” Dr. Anderson explained, adding that an AI tool is never rushed, never distracted, never watching the clock.

Dr. Brian Anderson, CEO of the Coalition for Health AI (CHAI), shares how AI could radically expand access to mental health care—whether in rural America or major cities.

Still, AI-led care is not the same as a real therapeutic relationship. True empathy and deep understanding aren’t things you can automate or code.

That’s why behavioral health is largely moving toward augmented intelligence: a flavor of AI where technology supports and enhances what clinicians do, but never replaces them. Technology can lighten the load, spot patterns, and offer another perspective or way of thinking. But it can’t build trust or walk with someone through their hardest moments. That work is, and always will be, human.

There’s also the alignment problem—the challenge of trying to get AI to reflect a person’s values or truly “get” what matters most. Dr. Anderson put it plainly: “Solving that alignment problem is going to be a real challenge. We haven’t solved it yet…to build models in a way that is aligned to your values, to our values.”

What’s Next for AI in Behavioral Health

So, where is all of this heading? For Dr. Anderson and the team at CHAI, the work is just beginning. The goal is to keep building clear standards one use case at a time, rather than chasing some mythical, universal rulebook for all of healthcare.

As Dr. Anderson explained, “The only way we’re going to eat this 800-pound elephant or gorilla is one use case at a time.” That means getting specific about what “good AI” looks like for each real-world scenario.

Dr. Brian Anderson, CEO of the Coalition for Health AI (CHAI), explains the organization’s use-case-by-use-case approach to defining standards for responsible AI creation and usage.

The next big move is a nationwide registry, where anyone—including clinicians, organizations, and maybe even clients—can see how different AI models are performing out in the wild.

“Imagine a future very soon where you and I can go to a nationwide registry that’s publicly available and see how models are performing from site to site,” Dr. Anderson said.

Despite the challenges ahead, Dr. Anderson’s vision is hopeful: “My hope is that AI as a tool will allow for that same kind of human flourishing…much healthier, emotionally healthier lives, more connected lives with the people that we love and the people that we care for.”

For now, the task is to set up the right guardrails, keep asking hard questions, and remember that technology—no matter how advanced—should always serve real people, not the other way around.