Not every problem needs a policy. Policies shouldn’t exist just to check a box—they should actually shape how people work and make their jobs easier.
But when it comes to AI in behavioral health, Ashley Newton, CEO of Centerstone’s Research Institute, doesn’t hesitate: this is one area where a robust, carefully thought-out policy is non-negotiable. AI can revolutionize care, but it also brings risks related to data privacy, ethics, and client trust. In other words, bringing AI into behavioral health requires clear guardrails.
On the latest episode of the No Notes podcast, Newton spoke with host Dr. Denny Morrison about how Centerstone constructed an AI policy that was more than just rules on paper. Their discussion dives into the challenges of writing a policy that works in the real world—and why it’s crucial to keep both staff and clients in mind when bringing AI into the therapy room.
Didn’t have time to tune in? No worries—keep reading for the highlights!
Want to listen to the AI policy podcast episode in full? Check it out here.
Why AI Policies Are Essential
For Newton, a good policy does more than outline what is allowed and what isn’t. Ideally, policies help guide behavior in any situation, even ones the organization hasn’t encountered yet.
Policies set expectations, provide clear standards, and help organizations (and their employees) navigate uncharted terrain. And let’s be honest—few terrains are as uncharted as artificial intelligence in behavioral health.
Luckily, Centerstone is no stranger to big questions about innovating responsibly. As one of the largest behavioral health providers in the country, they operate across multiple states, delivering care in a variety of settings like outpatient clinics, schools, and even client homes. At the heart of Centerstone’s vast, multi-state care system is their Research Institute—where Newton and her team focus on advancing behavioral healthcare through data, technology, and quality improvement.
When AI first entered the behavioral health picture, the leadership team at Centerstone knew it would change the way their clinicians worked. But as they began exploring the ways AI could help reduce the administrative burden on clinicians, they found many more questions than answers. What tools should staff use? How would sensitive data be protected? Should clients know AI is being used? Answering these questions was fundamental to how Centerstone planned to maintain both provider and client trust while empowering their workforce with innovative tech.
“When we’re introducing something as new and impactful as AI, having those conversations early on is critical,” Newton said.
How Centerstone Developed Their AI Policy
For the team at Centerstone, building an effective AI policy started with asking the right questions.
“We really encourage folks in our organization to think first about the problem they’re trying to solve, rather than the solution,” Newton said.
At Centerstone, those needs vary widely. Their services span everything from outpatient clinics to schools to home-based care, and what works for one setting doesn’t always work for another. “If you work in a hospital, what you need or what might help you…is going to be different than someone who’s working out in the community or in a residential site,” Newton emphasized.
Need help creating an airtight AI policy for your behavioral health org? Download our free AI policy template here.
To address this, Newton’s team partnered closely with operations staff across the organization to:
- explore the unique challenges each setting faced, and
- identify patterns where a single solution—like AI—could make a meaningful difference.
One recurring challenge stood out: the overwhelming administrative burden placed on clinicians. But as the team dug deeper, more problems emerged:
- Clinicians were spending too much time on documentation, taking away from time focused on clients.
- Administrative tasks created inconsistencies across different care settings.
- Staff lacked clarity on which tools they could trust with sensitive client information.
- Emerging AI tools raised ethical questions around transparency and client consent.
Centerstone’s AI policy was formed with these issues in mind, establishing clear guardrails to prevent problems down the road. For example, it requires all tools to demonstrate HIPAA compliance and lays out a pre-approval process for any tools that interact with sensitive data.
AI Governance and Multi-State Challenges
Creating an AI policy is one thing; implementing it across multiple states and locations (each with different regulations and organizational cultures) is another challenge entirely. Centerstone’s footprint spans a variety of care settings and jurisdictions, so they had to balance flexibility with consistency.
To manage this complexity, they needed to designate individuals who would be responsible for evaluating new tools, identifying potential risks, and making sure that every IT decision aligned with Centerstone’s clinical and ethical values.
“We wrote [into the policy] that we would create and maintain an AI governance committee,” Newton said. The team includes a variety of representatives from key organizational roles and has the authority to guide decision-making on new AI implementations.
With respect to navigating regulations across states, Newton shared that while many rules are similar, subtle differences can significantly impact implementation. “When we were writing [the policy], we looked at things like national regulations, such as the Office for Civil Rights, and also state-level requirements,” she said.
Key Regulatory Considerations
Centerstone has committed to staying ahead of the constantly shifting regulatory landscape by closely monitoring both federal and state guidelines.
“There are some existing regulations, primarily through the Office for Civil Rights, that have more to do with how you capture and store data,” Newton explained. “But outside of that, the actual regulations and guidance that exist—at least that would apply to the kinds of technologies we’re using—there’s not as much.”
But even without a lot of specific regulatory guidance, Newton pointed to frameworks like the Biden administration’s trustworthy AI guidelines and the CHAI framework as helpful tools for shaping best practices within your organization. “Even though they’re not regulations, what are the key themes across them? What can we learn from them that can help to shape our practice as an organization?” she said.
Newton also emphasized the importance of building a policy that can adapt to evolving standards. “We anticipate [the policy] will change over time,” she noted. “We’ve already updated it since we wrote it less than a year ago, and we think that will continue to be true.”
Staff Engagement and Feedback Loops
According to Newton, a successful AI implementation starts with clear communication, from leadership to staff and vice versa. She placed particular emphasis on making sure providers understand that AI is a tool to support their work, not replace it.
“We have been pretty vocal with staff about our policy,” Newton explained, adding that transparency has been key to gaining trust. “The notion of, ‘We’re not replacing humans with AI—we’re humans using AI as a tool,’ is a pillar of our policy.”
Centerstone has also leaned heavily on feedback loops to refine their approach to AI—and technology in general. “We’ve had surveys about changes we’re making to the EHR that we think will make your experience better,” Newton said. “[We want to know], are they actually making it better?”
In addition to running surveys, Centerstone organizes focus groups with staff who are piloting new AI tools. “We actually spend time with [employees] to understand and talk about their experiences,” she noted. Leadership also keeps the lines of communication open at all times for staff to share their concerns and suggestions via whatever channel is most comfortable for them.
By building these opportunities into their process, Centerstone ensures that their implementation of AI tools is guided by the people actually using the technology every day.
“We really wanted to take that feedback throughout the entire implementation of these tools, so that if we needed to make changes, we weren’t waiting six months or a year to understand that—we could go ahead and make that tweak or pivot to our plans right away,” Newton said.
Advice for Organizations Developing AI Policies
Every organization implementing AI needs a policy to guide it, and Newton offered some practical advice for those just starting their AI journey. Here are her key recommendations:
1. Learn from your peers.
“Are there peer organizations that you might talk with who are doing the same work?” Newton suggested. Comparing notes with others in the field can help you navigate the shifting AI landscape. “The way that we think about moving forward with AI may look really different than how another organization thinks about it,” she added.
2. Study existing frameworks.
It’s important to review existing guidelines, even if they’re not legally binding. “I spend a lot of time out there, kind of searching and reading through these documents to see, even though they’re not regulations, what are the key themes across them?” Newton explained.
3. Build governance structures.
A cross-functional governance committee is a must for managing AI adoption. At Centerstone, this group helps evaluate tools, identify risks, and ensure policies align with organizational values. “There’s a process and a structure,” Newton said. “We wanted to approach AI solutions in a responsible way.”
4. Engage your staff.
Involving staff early and often fosters trust and alignment. “The more that you can stay in tune with how your workforce is feeling about the journey toward using AI…the better equipped you’re going to be to respond,” Newton noted.
Want more expert tips to make your AI rollout a success? Download our Complete Guide to Behavioral Health AI Implementation.
5. Clarify AI’s role.
Be clear from the outset that AI is a tool to complement providers, not replace them. Newton stressed that this message should be a cornerstone of any AI policy. “AI is used as a tool here, right?” she said. “It’s not to replace your clinical judgment.”
AI Governance Policies in Behavioral Health Are Always the Right Call
As AI continues to evolve, so do the opportunities—and the responsibilities—for behavioral health organizations. For Centerstone, creating an AI policy was a way to build a foundation for thoughtful innovation.
The process wasn’t perfect, and it isn’t static. Newton and her team approached it with the understanding that good policies, like good care, have to adapt over time. “We knew this couldn’t be a static document,” she said. “The AI landscape is changing fast, and we needed a policy that could grow with it.”
Guardrails for this technology are essential because the clients who ultimately depend on it are among the most vulnerable. AI is a tool, but it can only make a positive impact when organizations stay open to feedback, collaborate across teams, and prioritize solutions that make care easier, better, and more human.
Curious how purpose-built, provider-centered AI can make a difference in your behavioral health org? Request a demo of Eleos now.