Healthcare IT leaders are some of the biggest unsung heroes in the biz. (Okay, as a tech company, we might be a bit biased, but hear us out.)
The healthcare industry—and behavioral health, specifically—is notoriously “left behind” on the tech advancement curve. As the speed of innovation has accelerated, the gap between tech-at-large and healthcare tech has widened, and that puts CIOs and other IT leaders in a tough spot—one where they must constantly walk a fine line between equipping staff with cutting-edge tools that have the potential to revolutionize care, and aligning those tools with the rest of their (clunky and outdated) tech stack. Oh, and then there’s the massive pressure to remain compliant with some of the most rigorous privacy and security standards in existence.
All of that is to say, IT leaders have a lot on their plates—and never has that been more true than in this current era of AI.
We recently welcomed about 20 of these leaders into a special forum where they could discuss their AI challenges, successes, and strategies with their peers. The second installment in our CIO Summit series, this discussion built on last year’s inaugural event, which we recapped here.
Read on for an inside look at how behavioral health CIOs are shepherding the AI journey within their respective organizations—including top takeaways from their exploration of best practices in the evaluation, implementation, and governance of this transformative technology.
Governing AI use is more about internal policy and education than technical safeguards.
The discussion kicked off with a conversation on AI governance and policy. One thing everyone agreed on: Scaling detection of AI use across an entire behavioral health organization isn’t as simple as installing a firewall. Sure, certain monitoring and filtering systems might get you part of the way. But for CIOs looking to flag entry of protected health information (PHI) into an AI system—as just one example—there’s not really a reliable, systematic way to do that. At least not yet.
And even when you are able to pinpoint potentially risky AI use, that detection is reactive rather than proactive. As one attendee put it, “It’s a game of whack-a-mole.”
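To illustrate why this kind of monitoring only gets you part of the way, here’s a minimal sketch of the pattern-based PHI filtering a CIO might run over outbound AI prompts. The patterns and function names are hypothetical examples, not a real DLP product: structured identifiers are catchable, but unstructured clinical details (the bulk of behavioral health PHI) slip right through.

```python
import re

# Hypothetical patterns for a few *structured* PHI identifiers (SSN, MRN,
# date of birth). Names, diagnoses, and free-text clinical details have no
# reliable pattern, which is why this approach is only a partial safeguard.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\bDOB[:\s]*\d{1,2}/\d{1,2}/\d{2,4}\b", re.IGNORECASE),
}

def flag_possible_phi(prompt: str) -> list[str]:
    """Return the names of any PHI patterns found in an outbound AI prompt."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(prompt)]

# The structured identifier gets caught...
print(flag_possible_phi("Write a progress note for MRN: 0042317"))   # ['mrn']
# ...but unstructured PHI sails through undetected.
print(flag_possible_phi("Write a note about John's panic attacks"))  # []
```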
That’s why savvy behavioral health organizations are taking more of an offensive—as opposed to defensive—approach to AI governance. Enter: AI policies and steering committees.
“I don’t know any other way to approach this outside of policy,” that same attendee continued.
Writing a Forward-Looking Behavioral Health AI Policy
So, what does a “good” AI policy look like in behavioral health? And what’s the right time to implement one?
Many attendees admitted to penning their policies after implementing an AI solution, but most agreed that with more and more tools entering the market every single day, your best bet is to get something on paper sooner rather than later. After all, as people get more comfortable using AI in their daily lives, they’ll naturally start to integrate it into their work lives—whether you’ve provided guidelines or not.
While some attendees have built more flexibility into their policies than others, one universally agreed-upon policy component was forbidding the entry of any PHI into external systems (e.g., AI tools the organization does not have a formal, contractual relationship with). For example, one attendee said that while their policy allows staff to use tools like ChatGPT to write emails or review administrative documents, they are explicitly prohibited from entering PHI for clinical note-writing (or any other purpose).
“If you’re using ChatGPT and asking it to write a note with PHI, that data becomes part of ChatGPT,” explained one attendee. “So the worry is that if someone queried ChatGPT about that person, it could spit that information back. That’s the risk of using that kind of tool rather than something like Eleos.”
Other policies allow staff to use only the tools that have been vetted and approved by the organization. One attendee said that while their AI policy puts fairly stringent restrictions in place, it also lays out a process for requesting approval of a particular tool. “In our policy, we state that if you want to venture into the things that are off limits, you can set up a consultation with our IT staff who have been trained on these things,” that attendee noted.
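As a rough illustration, here’s a minimal sketch of how an approved-tools policy like the ones described above might be encoded so a proxy or browser extension could enforce it. The domains, rules, and messages are hypothetical examples, not a recommended configuration:

```python
# Hypothetical policy table: which tools are approved, and whether PHI is
# permitted in each. An unlisted tool triggers the IT consultation process.
APPROVED_TOOLS = {
    "chat.openai.com": {"phi_allowed": False},              # general drafting only
    "approved-clinical-ai.example": {"phi_allowed": True},  # BAA-covered tool
}

def check_request(domain: str, contains_phi: bool) -> str:
    policy = APPROVED_TOOLS.get(domain)
    if policy is None:
        return "BLOCK: tool not approved -- request an IT consultation"
    if contains_phi and not policy["phi_allowed"]:
        return "BLOCK: PHI is prohibited in this tool"
    return "ALLOW"

print(check_request("chat.openai.com", contains_phi=False))      # ALLOW
print(check_request("chat.openai.com", contains_phi=True))       # BLOCK: PHI...
print(check_request("some-new-ai.example", contains_phi=False))  # BLOCK: not approved
```

Of course, reliably determining whether a request contains PHI is the hard part (see the sketch above), which is why attendees leaned so heavily on policy and education.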
Need help creating an airtight AI policy for your behavioral health org? Download our free AI policy template here.
The group was also aligned on the importance of clearly communicating the reasoning behind the policy—especially regarding the consequences of misuse.
“Make sure you’re educating staff about the sensitivity of [protected health] information,” one attendee said.
Forming a Multidisciplinary AI Governance Committee
The AI landscape is changing fast, which means your policy—and your overall approach to AI as an organization—will need to evolve, too. That’s why it’s crucial to form a dedicated AI governance committee that will not only advise on the policy itself, but also commit to staying on top of the tech and regulatory advancements that may necessitate updating it. This committee can also advise on requirements for any future AI technologies the organization may consider.
Most attendees agreed that the committee should include representatives from a variety of departments and professional backgrounds.
“We pulled a multidisciplinary team together to act as our steering committee,” one attendee explained. “That way we can all work together to solve any problems or complaints. It’s a new area, and we don’t know what problems are going to come out of it—so we have different areas of expertise.”
According to attendees whose organizations have formed AI governance committees, some of the internal teams represented include:
- Legal
- Quality/CQI
- Human resources
- IT
- Security
- Health informatics
- Senior leadership
Beyond vetting AI technologies and resolving any kinks in implementation and use, this committee should also help design your organization’s educational strategy. Ensuring staff understand not only the technical aspects of the tools they’re using, but also potential problems to look out for, is crucial to identifying issues before they fester into major concerns.
For example, one attendee mentioned keeping a keen eye out for bias. “There’s been study after study about cultural bias showing up in AI,” they said. “We’re going to have to improve the tech to weed out that bias, but also as consumers, and supporting our staff as consumers, we have to train them on how bias shows up in AI and bring a level of skepticism to them.”
When it comes to staff education efforts, look to this article by tech consultant and psychologist Dennis Morrison, PhD, for an excellent starting point. It delves into the inner workings of AI through a behavioral health lens.
AI literacy is foundational to the culture of your organization in this new era.
But staff education isn’t just important when it comes to actually using an AI tool. For organizations planning to implement or expand AI initiatives, creating a healthy culture around AI is critical to success.
Busting AI Myths
That means making sure everyone in your organization understands not only the risks inherent to AI use—and how to avoid and prevent them—but also the massive upside to using these advanced technologies safely and responsibly.
“I’m finding there is still some stigma-busting happening,” said one attendee. “We’re still convincing people that it’s secure—that it’s not ‘icky.’”
Another attendee compared the onslaught of AI to the proliferation of social media. “It was already hard to sort out what is and isn’t real,” they explained. “So there’s this cultural thing of, ‘How do we shift to living with AI in our daily lives?’”
Calming AI Fears
Interestingly, a lot of the fear many behavioral health leaders anticipated—for example, providers being worried about AI taking over their jobs—has been overshadowed by concern over what it means for the future of healthcare and the world in general.
And as CIO Summit guest speaker Neerav Kingsland, Chief Business Officer of AI safety company Anthropic, stated during his presentation, we’re already seeing roughly a 10x improvement in AI technology every 1–2 years—and 5–10 years from now, there “won’t be a ceiling.” In other words, the AI sky’s the limit.
That can be tough for people to even imagine—let alone accept. But any way you slice it, AI isn’t going away—which is why several attendees emphasized the need to embrace the undeniable benefits, particularly in behavioral health.
“With the workforce shortage right now, any time you can save 50% of the time clinicians are spending on writing notes, and you’re not waiting two days to complete a note, those are things that add value to your organization,” said one attendee.
“What clinician really likes to type?” said another attendee. “AI is significant in what it is offering you. What clinician wouldn’t like to leave at the time the clinic closes?”
Earning Buy-in from the Bottom Up
The key is clearly communicating those benefits to providers without glossing over the risks. Transparency is paramount. “Education is important to reduce the stigma,” one attendee said. “Words are important.” That attendee recommended emphasizing how AI will transform the providers’ experience with their EHR—something they typically see as a burden.
“I feel it’s better from the bottom up,” another attendee said. “We found it helpful to have an open conversation with our team of clinicians, because we know that if they understand the value of AI, they will promote it.”
That attendee went on to say that Eleos Health’s AI platform has made a measurable impact on provider retention and job satisfaction—and that none of their providers have refused to use the tool. “We’ve had people comment that if they don’t have Eleos, they’ll leave,” the attendee noted, adding that an organization leveraging automation in a way that benefits direct care providers will have a competitive advantage in the current hiring environment. “With the workforce shortage, if your peers around you start implementing this, you’ll see people jump ship,” they said.
Preparing for a Tech-forward Workforce
Furthermore, as more young people enter the workforce, they’re going to expect employers to make AI tools available. One attendee noted that the average college undergraduate uses six AI tools at any given time.
“AI is like the Internet—it’s going to change the world,” said one attendee. “We have to embrace it. We have to adapt to it. But we also have to keep people from jumping ahead of our policy and protections.”
In fact, several attendees voiced concern over providers becoming too reliant on, and trusting of, AI technology—by blindly accepting AI-generated note content, for example. Others brought up worries about automated notes becoming too similar, to the point of appearing “cloned”—particularly for organizations that tend to serve a lot of clients in the same population.
Establishing Clear Staff Protocols
“How do we make sure our documentation is patient-centered, accurate, and current when the inputs for the AI are similar across the population and we end up with documentation that seems like it’s carry-forward or copy-paste?” one attendee asked.
“We are concerned about staff taking AI output at face value without any review or modifications,” expressed another attendee.
The answer, other attendees explained, lies in:
- Using a behavioral health-specific tool that generates unique note content for each individual session.
- Training staff to review and revise AI-generated content before finalizing and submitting notes.
- Monitoring the percentage of generated note content accepted by providers (and making sure the tool can provide that information); a rough sketch of this kind of monitoring follows below.
“We don’t want to see users accepting 100% of the AI note content, because then we know they are rubber-stamping these things,” one attendee said.
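To make that kind of oversight concrete, here’s a minimal sketch of both checks: flagging near-total acceptance of AI-generated content and flagging suspiciously similar (“cloned”) notes. The thresholds, field names, and use of simple string similarity are illustrative assumptions; in practice, you’d rely on the acceptance metrics the tool itself reports, as the third point above suggests.

```python
from difflib import SequenceMatcher

def acceptance_rate(generated: str, final: str) -> float:
    """Rough share of AI-generated text retained in the finalized note (0-1)."""
    return SequenceMatcher(None, generated, final).ratio()

def review_flags(notes: list[dict], accept_threshold: float = 0.98,
                 clone_threshold: float = 0.90) -> list[str]:
    """Flag likely rubber-stamping and near-duplicate ("cloned") notes."""
    flags = []
    # Check 1: did the provider accept essentially all of the generated content?
    for note in notes:
        if acceptance_rate(note["generated"], note["final"]) >= accept_threshold:
            flags.append(f"{note['id']}: accepted nearly 100% of AI content")
    # Check 2: are any two finalized notes suspiciously similar to each other?
    # (Pairwise comparison is O(n^2) -- fine for a sketch, not for production.)
    for i in range(len(notes)):
        for j in range(i + 1, len(notes)):
            sim = SequenceMatcher(None, notes[i]["final"], notes[j]["final"]).ratio()
            if sim >= clone_threshold:
                flags.append(f"{notes[i]['id']} and {notes[j]['id']}: {sim:.0%} similar")
    return flags
```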
Through this combination of purpose-built technology and intentional process, multiple attendees said they’ve actually seen higher-quality, more person-centric notes.
At Eleos, we take a human-centered approach to AI—leaning into augmented intelligence rather than traditional artificial intelligence. That means a human being always has the final say on documentation content, allowing each clinician’s unique point of view to shine through every note.
When choosing an AI vendor, who they are is just as important as what they offer.
Policy, governance, and staff education are crucial, but ultimately, the most exciting part of your AI journey is actually selecting and implementing a solution. And while features, functionality, and security considerations are all important, several attendees said that the vendor’s alignment with your organization’s culture and values is perhaps the most critical factor in your success with AI.
“There’s something to be said for a vendor organization that solely focuses its resources on AI, versus an organization that offers AI as a feature,” said one attendee. “Tech can be copied, but you can’t copy the people and the culture.”
Aligning on Data Privacy and Security
Part of that culture, added another attendee, is full alignment with your organization’s data security standards. “It’s so important that partners are aligned on our data policy,” that attendee said. “Mental health data exposure is incredibly risky.”
That same attendee went on to explain that it’s not just about HITRUST, SOC 2, and other industry standards and certifications. It’s also about overall philosophy, because it’s going to take time for standards and regulations to totally catch up to AI technology.
“My fear is that legislators don’t fully understand AI,” that attendee noted. “We have the most sensitive data in health care, and until the legislation gets there, it’s really for our partners to adjust.”
Visit Eleos Health’s comprehensive Trust Center to learn more about our data privacy and security measures.
On that note, another attendee pointed out the benefits of partnering with a vendor that specializes in behavioral health. “We’re looking for a partner who understands the industry,” the attendee said. “Behavioral health is different, and Eleos happened to show us that they get the behavioral health industry—that they understand our users. That’s the soft side of looking for a tech partner.”
And you can’t discount the importance of a clinician focus. As one attendee said, “For us, clinician satisfaction is the most important thing. I want users to think that this is making their jobs easier.”
Vetting AI Features and Functionality
Once you’re confident that a vendor is on the same page as your organization, culture-wise, it’s time to get down to brass tacks (a.k.a. evaluate the platform’s features and functionality). One attendee offered a six-point framework for vetting an AI solution’s capabilities, which we’ve turned into a simple scorecard sketch after the list:
- Integration: Will the AI tool easily integrate with our EHR and our existing tech stack, or is it a standalone solution?
- Models: What are the AI models based on? Are they large language models (LLMs) or machine learning (ML) models? Is it generative AI? Deep learning?
- Security: Where does the data go, and what’s being done with it?
- Stability of the Vendor Organization: How many customers (not individual users) does the company serve? How big are those customer organizations? Do they have funding/financial backing? What’s the size of the team working on their AI products?
- Features: What features does the tool really have? What use cases can they really serve?
- Cost: What is the pricing model, and how does it weigh against all the factors above?
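To make the framework actionable, here’s a minimal sketch of how a governance committee might turn those six points into a weighted scorecard. The weights and ratings are illustrative assumptions (this hypothetical committee happens to weight security highest), not recommendations from the summit:

```python
# Hypothetical weights for the six vetting criteria; they should sum to 1.
CRITERIA_WEIGHTS = {
    "integration": 0.20,
    "models": 0.15,
    "security": 0.25,
    "vendor_stability": 0.15,
    "features": 0.15,
    "cost": 0.10,
}

def score_vendor(ratings: dict[str, int]) -> float:
    """Combine 1-5 ratings on each criterion into a weighted total (1-5)."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

# Example: ratings a committee might assign after demos and a security review.
vendor_a = {"integration": 4, "models": 5, "security": 5,
            "vendor_stability": 4, "features": 5, "cost": 3}
print(f"Vendor A: {score_vendor(vendor_a):.2f} / 5")  # Vendor A: 4.45 / 5
```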
One specific vendor comparison question that came up was whether the “all-in-one” allure of an EHR-offered AI tool truly matters. The consensus was that it depends—especially with respect to the data sources you hope to incorporate into your AI capabilities and processes going forward. If you want to leverage multiple sources outside of your EHR, then an independent, integrated tool will likely be the best choice.
Beyond that, even keeping all of your documentation tools within the same “ecosystem” comes with its drawbacks. “The problem with an ecosystem is that they are blind to their own blind spots,” one attendee explained. “But with best-of-breed solutions that integrate, you have a better chance of building a true high-quality platform.”
Today’s behavioral health CIOs have a lot to contend with—and their efforts to put the latest technology in the hands of providers who need it most are nothing short of heroic. But here at Eleos, we believe even the heroes shouldn’t have to walk alone—which is why, in addition to offering the best AI software in behavioral health, we provide the highest quality support throughout the implementation and adoption process.
Ready to see why Eleos is the most widely deployed AI platform in behavioral health? Request a personalized demo with an AI expert.