Ever tried shoveling the driveway in the middle of a snowstorm?

No matter how fast you work, it’s almost impossible to finish before a new layer of fluff begins to settle on top of your progress—and you have to start all over again.

Keeping up with all the different AI privacy and security rules, regulations, and guidelines presents a similar challenge. Just when you think you have a handle on current requirements and best practices, another new law or framework falls out of the sky—and you have to drag your proverbial shovel back over the same patch of concrete.

But as the expert panelists in our most recent webinar event—“Fortifying the Future: Data Privacy & Security in Behavioral Health AI”—explained, the shifting regulatory landscape shouldn’t stop you from taking advantage of a technology that is so clearly beneficial. The upside is simply too great to ignore.

With a cautious-yet-flexible approach, your organization can absolutely leverage purpose-built AI safely and securely—all while staying on the right side of constantly changing standards for data privacy and security. Here are 5 ways to do your due diligence—and make sure you get the most out of this truly amazing technology.

1. Allocate internal resources to staying on top of emerging regulations and guidelines—especially at the state level.

As Raz Karmi, Eleos CISO, noted during the webinar, the AI regulatory landscape is divided into several distinct domains, spanning everything from federal and state laws to industry frameworks and guidelines.

It’s a lot to keep tabs on, especially at the state level—as these developments may not generate as much industry “buzz” as their federal counterparts. And the number of state-specific regulations might come as a surprise:

  • 45 states introduced AI bills in 2024.
  • 31 states adopted new AI regulations in 2024.

Given that this area of healthcare IT is so new—and the rules and best practices guiding its use are still very much in flux—it’s critical to explicitly designate internal “owners” who are responsible for monitoring the changing regulatory environment, even if you don’t have the resources to support a dedicated team or full-time employee.

“It’s important to recognize that AI is not going to work unattended,” said Rony Gadiwalla, CIO of GRAND Mental Health. “It seems like regulations change on a daily basis. You really have to be proactive.”

It’s crucial to allow these internal experts and stakeholders adequate time and space to give this responsibility the attention it deserves—and to create a structure to foster internal accountability (e.g., a steady cadence of meetings and/or asynchronous updates).

To learn more about existing and developing regulations and frameworks, be sure to check out the full webinar recording here and our security-focused Q&A with Raz here.

2. Create a flexible AI policy—and a committee to monitor and adapt it over time.

AI privacy and security is not a “set it and forget it” kind of endeavor. Continuous improvement is the name of the game.

But first, you have to establish a starting point, and that’s where an official AI policy comes into play. If you haven’t created one for your org yet, there’s never been a better time to get started.

As Gadiwalla emphasized during the webinar, it’s important to gather input from a variety of internal stakeholders as you develop your policy. That way, you can be sure that you look at AI risk from all angles—and educate all appropriate parties and teams.

At GRAND, the compliance team takes the lead—but they have assembled a cross-functional group to help guide their policy creation, enforcement, and management process.

Once you have a policy in hand, that same committee should meet on a regular basis to review and update it based on the changing regulatory environment as well as any gaps observed post-implementation.

“Once the policy is written, it doesn’t mean it’s filed away somewhere,” Gadiwalla said. This also means building a solid working relationship with your AI vendor, he stressed, because there may come a point when they’ll need to adapt their system to comply with your policy. “It’s important for us to be proactive and adaptable,” he continued. “Those are the two most important things for us to do.”

Need help creating an airtight AI policy for your behavioral health org? Download our free AI policy template here.

3. Select AI vendors thoughtfully and cautiously.

As Gadiwalla pointed out, privacy and security shouldn’t be managed in a vacuum—especially in an area like AI, which is relatively new and undefined compared to more established healthcare tech categories (EHRs, for example).

You can’t protect your organization from all possible risks on your own—nor can you rely entirely on your AI vendor to cover every aspect of security from an internal operations standpoint. Instead, privacy and security is a shared responsibility—one that requires constant communication and collaboration.

“The vendor is supposed to build a product that is secure by design,” Gadiwalla said. “But organizations are also responsible for doing their due diligence. And from an operational perspective, the organization is responsible for ensuring secure day-to-day operation of the product.” That means consistently conducting gap analyses and security testing—and reaching out to your vendor-partner when issues are identified. “The partnership between the two sides is really important,” he continued. “Otherwise, it’s like buying a safe car and then driving it recklessly.”

When it comes to certifications and other demonstrations of data security and compliance, Gadiwalla said his team always makes sure any technology partner meets basic standards like HITRUST, SOC 2, and HIPAA. “That’s an important baseline for us because we know a third party has come in and looked at their practices,” he said. “So look for the signs that vendors are putting in the effort.”

Karmi said it’s also important for vendors to demonstrate an ongoing commitment to upholding privacy and security best practices—especially in the realm of AI, where those best practices are constantly evolving. “As an AI vendor, we are responsible for implementing the appropriate security measures and data protection measures, and ensuring data privacy,” he said. “We understand it’s not a one-time effort, but that it requires continued management.” He also highlighted transparency—in the event of a security incident or data breach, for example—as an important trait to look for in an AI vendor.

Gadiwalla added that the way a vendor handles data is also critical, urging leaders to vet potential AI partners for things like encryption, anonymization, minimization (i.e., only providing the minimum data access in any given scenario), sharing (i.e., with third parties), and storage (i.e., whether the data is retained, and if so, for how long).

“The vast amount of data that AI has access to and how it processes it, the speed at which it processes it, and the way it could affect care, even at an augmented level, means there’s always a concern,” Gadiwalla said. “This is very private data, and you want the AI to use it in a responsible way.”

This is especially crucial during the adoption phase, because many providers will resist using a technology if they aren’t sure how client data is being used and protected. “How is the data used, how is AI coming up with results, that’s what builds trust,” Gadiwalla said. “The trust between a clinician and a client is critical, and we can’t do anything that compromises that trust.” 

4. Focus on managing the risks so you can reap the rewards.

It’s important to acknowledge that the security risk surrounding AI is very real—and in a space like healthcare, that risk is even more pronounced compared to other sectors. The data at stake is extremely sensitive, and the repercussions of a leak or breach are correspondingly serious.

“Cybersecurity, or any kind of security, is a matter of risk management,” Gadiwalla said. “So, when you’re looking at your pros and cons, you’re evaluating whether the benefits are worth the risk you would be taking.”

But that risk shouldn’t completely prevent you from embracing and using this technology for good—because the benefits are simply too great to ignore. Instead, Gadiwalla said, take stock of all the tools in your risk toolkit—including mitigation, management, and avoidance.

Furthermore, it’s important to communicate any risk management actions you’ve taken to the people who are actually using the product: your providers. This will help increase their trust and confidence in the tool, which in turn will help them effectively position it to clients.

“I will tell you right now that if a clinician feels uncomfortable, they will tell you almost immediately,” Gadiwalla said. “And good luck operationalizing it if the clinician doesn’t like it, or feels it’s unsafe.”

In addition to creating transparency around the tool itself, Gadiwalla recommends building a culture of transparency around provider feedback—because that’s the best way to help shape future AI development and enhancement efforts.

“There is a gap right now because there is so much unknown in terms of ethical use and AI bias—things we will learn over time, but shouldn’t stop us from leveraging AI today,” he said. “We will only get better if we use and learn from it.”

Karmi echoed Gadiwalla’s sentiments, emphasizing that we are very much still in the learning phase, which means we might not even be aware of all the risks we need to manage—which is a risk in and of itself.

There’s no singular, simple solution to this challenge, but both speakers pointed back to the culture of the AI vendor as the most important factor in risk management. Dig into their stance on, and approach to, things like:

  • What data comes into (or goes out of) the AI system, and how. “It’s important to make sure that we’re not adding bias or somehow tipping the scale on some of the decision points,” Gadiwalla explained.
  • How the AI models are trained. “Not all AI is the same,” Gadiwalla noted, pointing to the behavioral health-specific nature of Eleos training data as an example.
  • Whether there is a human expert in the loop. “A lot of people might think AI is the smartest in the class—that it doesn’t need any human oversight, which is totally wrong,” Karmi said. “Human monitoring and ethical considerations are crucial. That’s why at Eleos, we have multiple human layers to make sure the data produced is relevant and accurate.” This includes our in-house clinical analysts, who work constantly to make sure our AI models are primed for accuracy in the context of behavioral health.

5. Collaborate with other leaders, organizations, and vendors.

This is truly a “we’re all in it together” moment—and an opportunity to reach out to your fellow behavioral health leaders across the country for the good of the entire community. A rising tide lifts all boats, after all, and as Nisheeta Setlur, VP of Customer Success at Eleos, mentioned during the webinar, there’s never been a better time to lean on your network for advice.

“Everybody’s learning from each other,” she said. “Even when there is no defined standard, there are good learnings we can share with each other. Don’t do your own homework—copy from your neighbors…copy from your peers.”

For example, Gadiwalla said the team at GRAND connected with several other organizations as they developed their internal AI policy and management framework. “There were quite a few agencies we reached out to when we were working on our policy,” he said. “You learn from people who have done it before you.”

When you’re in the thick of a snowstorm, it’s hard to know what everything is going to look like once the skies clear. But, the more proactive you are now, the less shoveling you’ll do later—and the sooner you’ll be able to enjoy the benefits of your hard work.

While the AI privacy and security landscape still looks a bit fuzzy, innovation-focused organizations across the country are already leveraging this incredible technology in a secure and effective way—saving hours of time, boosting staff satisfaction and productivity, and scaling their impact on the communities they serve.

Ready to see how Eleos can help your organization move safely and confidently into the era of AI? Request a demo of our purpose-built platform here.