So, 20 behavioral health CIOs walk into a Zoom call on the future of healthcare AI…

It might sound like the beginning of a bad joke, but it was actually the start of an incredibly refreshing, thought-provoking conversation our team had the pleasure of leading on July 25, 2023.

During Eleos Health’s first-ever CIO Summit, we welcomed information technology leaders from top behavioral health organizations across the country—and boy, were they a chatty bunch (in the best way possible, of course). From ethics and security to compliance and clinical outcomes, we covered a lot of ground in just a few hours.

The overarching consensus? AI is here to stay—and we’ve barely scratched the surface of its capabilities in the behavioral health realm. But before healthcare organizations can realize the full potential of AI innovation, they have to hop over a few hurdles.

With that in mind, here are some of the biggest AI obstacles and opportunities our group of innovation-minded attendees zeroed in on.

AI Obstacles in Behavioral Health

With any new technology, there are bound to be a few kinks to work out. These are the biggest barriers to optimal use of AI in behavioral health that surfaced during the event. 

There are already too many disparate systems that don’t integrate.

Tech bloat is a very real thing, and for CIOs in organizations that are trying to condense—not expand—their technology stack, the idea of adopting yet another platform can be a tough one to stomach. And then there’s the challenge of getting buy-in from the rest of the leadership team and the board—a crucial step when introducing a solution whose success hinges on a concerted, collaborative implementation effort.

This is where integration and workflow are key. Before introducing any AI technology in your behavioral health organization, it’s critical to ensure that:

  1. It connects (or in the case of Eleos, embeds) seamlessly with existing systems, especially your EHR.
  2. It aligns with—and causes zero disruption to—your users’ existing workflows.

In the words of Alon Joffe, the CEO of Eleos Health, “Workflow is always what will win over the clinician. If you ask them to log into a different system, you’ve lost them. If there are glitches, you’ve lost them. If the button was green and now it’s blue, you’ve lost them.”

For more detailed advice on selecting and implementing the right AI technology in your organization, be sure to check out our Quickstart Guide to Behavioral Health Technology.

Overcoming preconceived notions and fears is difficult.

According to one attendee, somewhere between 9 and 12 AI companies are launching every day in the current environment—which means, in their words, “there are going to be some winners and losers.”

Unfortunately, some of the losers will tarnish the reputation of AI overall. And that’s exactly what makes the proliferation of AI solutions a double-edged sword: on the one hand, most people are at least familiar with the concept of AI and its potential value to healthcare providers. But on the other hand, they are also aware of its pitfalls—some of which stem from inaccurate or overblown information.

The point is, it’s important to enter into discussions about AI with the assumption that folks have already formed an opinion about it. Err on the side of over-educating them on the technical aspects of the specific technology you’re thinking of implementing, and have multiple open discussions about their fears and concerns.

The ethical implications of AI are still being worked out.

Whenever a tool has the potential to impact patient care, ethics become a concern. And in the case of AI, the boundaries defining ethical usage are still a little fuzzy.

To help shed light on this evolving discussion, Alon Rabinovich, Eleos Health CTO, laid out four main ethical considerations for behavioral health AI to CIO Summit attendees:

  1. Ensuring data privacy and security: Any system that touches protected health information (PHI) must be held to a rigorous standard when it comes to keeping that sensitive data safe.
  2. Monitoring and addressing issues with bias and fairness: Because AI models are developed by humans—and humans are prone to bias—it’s important to deliberately configure those models in a way that accounts for and mitigates bias.
  3. Preserving human oversight and accountability: Human providers must retain ultimate responsibility for patient care and outcomes. Additionally, human experts must be deeply involved in the development of clinically focused AI models.
  4. Making AI models transparent and explainable: When patient care is at stake, it’s important that the human providers using the technology fully understand what it is and how it works. 

As Joffe mentioned during the ethics portion of the event, when it comes to AI, ethical review is just as important as security review. The points above provide a foundation for ethical due diligence when vetting AI solutions.

People are resistant to change, even when the benefits are clear.

This stat might be familiar to behavioral health professionals: it takes anywhere from 18 to 254 days for someone to form a new habit—and 66 days, on average, for a new behavior to become automatic.

In other words, change doesn’t happen overnight. Just because you introduce a new technology to providers—even one that objectively helps them—doesn’t mean they’ll pick it up and use it immediately and consistently.

To keep behavioral inertia from throwing a wrench in your AI implementation plans, you have to prioritize education—and lots of it. That includes not only training staff on the tool or application (i.e., how to use the AI solution), but also proactively addressing gaps in understanding about the technology itself (i.e., how the AI works).

If you’re noticing a theme in the advice offered by our CIO attendees, you’re spot on. Communication and coaching were popular discussion topics, with many folks emphasizing the importance of establishing talking points early on and using them to foster open dialogue throughout the implementation and adoption process.      

AI Opportunities in Behavioral Health

Okay, now for the good stuff. The potential value of AI in behavioral health practice is immense—and that makes working to overcome the obstacles totally worthwhile. Here’s a taste of the benefits our CIO Summit attendees called out during the event.

EHRs become less burdensome and more useful.

As one attendee so eloquently put it, “EHRs suck.” (They said it—not us.)

The fact is, EHRs have been molded by payers and regulators—both of which constantly churn out new and updated rules that providers must comply with in order to get paid. So, over the years, the volume and variety of healthcare documentation requirements have steadily increased.

According to one CIO Summit attendee, over the last 20 years, the amount of documentation a clinician must complete for the same service has increased by about 50%—though the service itself hasn’t become any more or less effective. Joffe also highlighted this research showing that mental health clinicians spend more hours in their EHR per patient encounter than practitioners in any other healthcare specialty.

So, it’s no surprise that documentation is often cited as a top contributor to provider burnout in behavioral health. And that’s exactly why documentation automation technologies—specifically ones that embed seamlessly into existing EHR workflows—represent such a huge opportunity in the behavioral health space. By alleviating one of the most burdensome parts of a provider’s job, behavioral health organizations can increase staff satisfaction, reduce turnover, and free clinicians to focus more time and effort on delivering great care—all of which leads to better treatment outcomes.

Meeting hyper-specific documentation requirements becomes easier.

In addition to payer- and state-specific documentation requirements, many behavioral health organizations also must satisfy funding-specific reporting requirements. Different programs or grant agencies want different data points—and manually recording, extracting, and delivering that information can be time-consuming and overwhelming.

One CIO Summit attendee said their organization currently has 85 separate data portals—far from ideal. With more organizations entering into models like the Certified Community Behavioral Health Center (CCBHC) program, this issue will only get worse—and AI presents a huge opportunity to help automate and streamline data collection, analysis, and submission.

While technology is still evolving in this area, the organizations that embrace AI now will position themselves to lead the way—and will be poised to capitalize on future funding opportunities with less administrative lift.  

Compliance review becomes unlimited.

For most behavioral health organizations, compliance review efforts are still largely manual. One attendee commented that reviewing 100% of notes is literally impossible in the current environment—and that while 10% is a more realistic benchmark, it doesn’t necessarily make them feel good about the quality of their documentation.

But, as that attendee also pointed out, “Computers and AI—they don’t get tired.”

With AI, comprehensive note review becomes possible. This has major implications for not only billing and regulatory compliance, but also evidence-based practice fidelity. AI is capable of reviewing notes at scale and flagging any potential issues with the provider’s documentation (e.g., cloned or copy-and-paste notes) or the treatment itself. That ultimately means higher-quality care and better first-pass claim acceptance and reimbursement—not to mention less audit risk for your organization. 

The story your data is telling becomes clearer.

It’s one thing to have access to a large quantity of data. It’s quite another to actually draw meaning from it.

While many systems collect and store data, the task of analyzing it often falls to the user—and as reporting needs pile up, that manual effort quickly becomes unmanageable. AI systems excel at processing large quantities of data quickly—and presenting conclusions in a way that’s not only clear and actionable, but also connected. In other words, instead of spitting out 20 different reports that you have to puzzle-piece together to see the full picture, AI can show you that picture upfront—which means you can focus on the strategy side.

This holistic view also helps highlight priorities at the individual and organizational level, so you can stop trying to solve 100 problems at once and truly focus on the initiatives that will make the most impact. It also helps mitigate situations where leaders and external entities want information, but don’t necessarily know what information they want—or how to go about getting it. This is crucial, because as one attendee pointed out, “Our biggest issue with getting data to decision makers is they don’t know what data they need—or how to organize it.”

The best part is that this kind of intelligence isn’t just impactful on the business side of your organization. Through session intelligence and provider-specific reporting, clinicians also have a window into their own clinical performance—enabling them to not only address their areas for improvement, but also gain confidence in the things they’re doing well. The result? A more fulfilling work environment and a constant feedback loop that promotes meaningful professional development, better experiences for both providers and clients, and higher-quality care.

When you get a group of innovation-focused behavioral health leaders in the same Zoom room, you can’t help but walk away full of ideas and inspiration. We’re already looking forward to the next meeting of the minds.

Interested in learning how purpose-built behavioral health AI is already making a major impact in the behavioral health space? Check out our next webinar event: A Frontline Story on Adopting Augmented Intelligence for Behavioral Health: Gaudenzia + Eleos Scribe.