In pretty much any conversation about behavioral health AI, two big questions are bound to come up:
Is it safe?
And:
Can we trust it?
In behavioral health especially, these aren’t just technical boxes to check—privacy and security are directly tied to client trust. A breach could damage the relationship between clinicians and clients, putting their health and wellbeing at risk.
“Unauthorized access to data is the baseline [concern],” said Rony Gadiwalla, CIO at GRAND Mental Health, during a recent episode of the No Notes podcast. “We can’t even start talking about other things if we don’t address that first.”
Gadiwalla and Eleos Health CISO Raz Karmi sat down with host Dr. Denny Morrison to talk about privacy, security, and what behavioral health leaders should expect from any AI vendors they trust with sensitive data. From client consent policies to data retention protocols to non-negotiables like HIPAA compliance, this trio of behavioral health experts broke down key privacy and security risks and shared their tips for staying ahead of the curve.
Want to listen to the AI security podcast episode in full? Check it out here.
The Extraordinary Nature of Privacy and Security in Behavioral Health
All healthcare data is subject to privacy regulations and protections, but therapy data is on a whole other level. When a client participates in therapy, they aren’t just sharing information about their weight or their blood sugar. They are disclosing their most personal fears, regrets, and parts of themselves they might not even reveal to their closest friends.
“I don’t know of any other aspect of healthcare that worries more about privacy and security than behavioral healthcare,” Morrison emphasized.
Privacy has always been core to the trust most client-therapist relationships are built on, but the rise of AI in behavioral health has created new layers of complexity.
How is data collected? How long is it stored? Is it shared with third parties? And perhaps the most unsettling question: who really controls it? In some cases, the answers aren’t yet clear.
As Karmi put it, “AI is relatively new to all of us, and it’s developing at a very fast pace…we’re still not aware of all the risks.”
During their conversation, the experts identified several key issues that behavioral health leaders should keep in mind:
- Data Has to Be Managed: AI tools often collect data continuously, sometimes in real time. Understanding how that data is captured, where it goes, and who has access to it is essential. “How is [the AI tool] collecting this data? How long is it being retained? Is it being used in any other way? Or is it even shared with third parties? That lack of transparency can lead to some mistrust,” Gadiwalla warned.
- Data Use Can Change with Technology Updates: A one-time consent signature doesn’t settle the data-use question for good. As Gadiwalla explained, AI systems “learn” as they operate. “Whatever you tell it, you cannot untell it,” he said. This means clinicians and organizations have to make sure the AI uses the information it gathers only in line with the relevant usage agreement. Otherwise, AI could apply learned data in unintended ways, outside the bounds of what has been authorized.
- Regulations Are Still Evolving: Unlike HIPAA, which provides clear and established guidelines, AI regulations are still in flux. “On a weekly basis, there’s a new framework or regulation,” Karmi shared. Regulations also differ by state and country, making it even harder for providers to keep up. This lack of universal industry standards means behavioral health organizations must be proactive in setting their own privacy and security policies.
Ultimately, data privacy in behavioral health is just as much about protecting trust as it is about protecting information. If clients feel exposed, they’re less likely to be honest. As Gadiwalla put it, “You want that comfort level. You want that transparency.”
Technical Challenges with AI Privacy and Security
Philosophical questions around privacy and trust get a lot of attention, but the technical challenges of AI security in behavioral health are just as real. Gadiwalla and Karmi identified several technical hurdles that healthcare leaders need to understand and address.
1. Tech Complexity
AI systems often function as “black boxes,” meaning their decision-making processes are hidden and difficult to interpret. That opacity makes it harder for providers to gain and maintain client trust. That’s why it’s important to look for vendors who are forthcoming about how their technology works, and who will help educate providers on the technical aspects of the AI so they can confidently explain it to clients.
2. Access Control
Gadiwalla rightly described unauthorized access as “the baseline concern.” If that’s not addressed, nothing else matters. Providers have to make sure:
- data is encrypted (both in transit and at rest),
- only authorized individuals have access, and
- those individuals can only access the data necessary to do their jobs (a simple sketch of this kind of check follows below).
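To make the least-privilege idea concrete, here’s a minimal sketch in Python. The role names and record fields are hypothetical, not any particular vendor’s implementation; the point is simply that each role only ever receives the fields it needs.

```python
# Minimal sketch of a least-privilege access check.
# Role names and fields are hypothetical, for illustration only.

ROLE_PERMISSIONS = {
    "treating_clinician": {"session_notes", "risk_flags", "demographics"},
    "billing_staff": {"demographics", "service_codes"},
    "quality_reviewer": {"risk_flags"},
}

def fetch_client_record(record: dict, role: str) -> dict:
    """Return only the fields this role is authorized to see."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    return {field: value for field, value in record.items() if field in allowed}

record = {
    "demographics": {"name": "J. Doe", "dob": "1990-01-01"},
    "session_notes": "Client reported improved sleep this week.",
    "risk_flags": [],
    "service_codes": ["90837"],
}

print(fetch_client_record(record, "billing_staff"))
# -> only demographics and service_codes; session notes never reach billing
```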
Another layer of complexity comes with learned data. Going back to the point about usage above, when AI systems learn from client interactions, that learned information may persist in ways that aren’t always clear or fully controlled.
3. Size of Datasets
Unlike traditional tech platforms, AI systems collect massive amounts of data by default, in part so they can use it to improve their models. But behavioral health organizations should make sure their AI vendors are only collecting the data necessary for the task at hand. Gadiwalla emphasized the importance of limiting data collection to reduce potential exposure. Retention practices matter too: once the data’s purpose is fulfilled, it should be deleted or anonymized to reduce risk in the event of a breach.
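As a rough illustration of what data minimization can look like in practice, here’s a short Python sketch. The field names and the “note drafting” use case are hypothetical; the idea is that anything the tool doesn’t need is stripped out before the data ever leaves the organization.

```python
# Sketch of data minimization: send only what the AI tool actually needs.
# Field names and the use case are hypothetical, for illustration only.

FIELDS_NEEDED_FOR_NOTE_DRAFTING = {"session_transcript", "session_date", "service_type"}

def minimize_payload(full_record: dict) -> dict:
    """Drop every field the AI tool doesn't strictly need."""
    return {k: v for k, v in full_record.items() if k in FIELDS_NEEDED_FOR_NOTE_DRAFTING}

full_record = {
    "client_name": "J. Doe",
    "ssn": "xxx-xx-xxxx",
    "session_transcript": "Clinician: How has your week been? ...",
    "session_date": "2024-05-01",
    "service_type": "individual_therapy",
}

payload = minimize_payload(full_record)
print(payload)
# No name, no SSN -- there's simply less to expose if something goes wrong
```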
Common Myths About AI Security
Complexity Doesn’t Equal Security
People tend to think of AI as precise and foolproof, but that belief can lull behavioral health leaders into a false sense of security. The truth is, AI’s complexity opens up new ways to trick it. For example, there’s a tactic called “adversarial inputs,” where data is intentionally manipulated to confuse the system into making wrong decisions. This type of attack has happened in the real world. To avoid it, organizations should work only with AI vendors who have solid defenses in place against these kinds of attacks.
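To make the adversarial-input idea less abstract, here’s a toy Python sketch. The linear “classifier,” its weights, and the numbers are all made up for illustration; the takeaway is that a tiny, deliberately chosen nudge to the input can flip a model’s decision even though the input still looks almost identical.

```python
import numpy as np

# Toy adversarial-input example against a made-up linear classifier.
# The model, features, and numbers are illustrative only.
weights = np.array([0.9, -0.4, 0.2])   # fixed "model"
x = np.array([0.5, 1.0, 0.3])          # legitimate input

def classify(features):
    return "flagged" if weights @ features > 0 else "not flagged"

print(classify(x))                     # -> "flagged"

# An attacker who knows (or estimates) the weights nudges each feature
# slightly in the direction that lowers the score.
epsilon = 0.2
x_adv = x - epsilon * np.sign(weights)

print(np.abs(x_adv - x).max())         # tiny change: 0.2 per feature
print(classify(x_adv))                 # -> "not flagged" (decision flipped)
```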
Artificial Intelligence Doesn’t Equal Independent Intelligence
Another common assumption is that AI can run on autopilot—no humans required. But anyone who’s ever worked with this technology knows it’s never that simple. AI systems are only as secure as their underlying design, and no system is flawless. Without human oversight, mistakes, security gaps, and unpredictable outcomes can sneak in. Providers who stay engaged in their use of AI tools—and who actually understand how these tools work—are in a much better position to catch potential issues before they cause major problems.
Non-Negotiables for AI Privacy and Security
When it comes to privacy and security in behavioral health AI, Gadiwalla and Karmi made it clear that some things aren’t up for debate. These are the non-negotiables—the bare minimums that every behavioral health organization should expect from their AI vendors.
1. HIPAA Compliance
This one’s a no-brainer. “We need [AI] to be HIPAA-compliant. That’s something that we can’t compromise on,” Karmi said. Any AI tool touching client information has to meet that standard—no exceptions.
2. Data Encryption
Encryption is the digital equivalent of putting your data in a lockbox while it’s being sent (i.e., in transit) and while it’s in storage (i.e., at rest). Without it, you’re basically sending private information on a postcard. Vendors should be able to explain exactly how they’re doing this at every stage of the data journey.
As Karmi warned, “The smallest breach could kill a business today.”
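As a simple illustration of encryption at rest, here’s a minimal Python sketch using the widely available cryptography library. It’s a toy example, not any vendor’s actual setup: in production, keys live in a key management service rather than next to the code, and encryption in transit is typically handled by TLS on the connection itself.

```python
from cryptography.fernet import Fernet

# Sketch of encryption at rest. In production the key would live in a
# key management service, never hard-coded or stored beside the data.
key = Fernet.generate_key()
fernet = Fernet(key)

note = "Client reported reduced anxiety this week.".encode("utf-8")

ciphertext = fernet.encrypt(note)        # what actually gets written to disk
print(ciphertext[:20], b"...")           # unreadable without the key

plaintext = fernet.decrypt(ciphertext)   # only holders of the key can do this
assert plaintext == note
```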
3. Data Minimization
If the AI doesn’t need it, it shouldn’t take it—simple as that. Collecting data “just in case” it might be useful later is a recipe for unnecessary risk. The best vendors limit collection to only what’s absolutely essential—and delete any data that’s no longer needed as quickly as possible.
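Here’s a minimal sketch of what the “delete it when it’s no longer needed” piece can look like as an automated purge. The 30-day retention window and the record structure are hypothetical; the real window should come from your own policy and legal requirements.

```python
from datetime import datetime, timedelta, timezone

# Sketch of a retention purge: keep only records within the retention window.
# The 30-day window and record structure are hypothetical.
RETENTION = timedelta(days=30)

def purge_expired(records: list[dict]) -> list[dict]:
    """Return only records still within the retention window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [r for r in records if r["created_at"] >= cutoff]

records = [
    {"id": 1, "created_at": datetime.now(timezone.utc) - timedelta(days=45)},
    {"id": 2, "created_at": datetime.now(timezone.utc) - timedelta(days=3)},
]

print([r["id"] for r in purge_expired(records)])   # -> [2]
```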
4. Data Anonymization
Even if data is somehow exposed, it should be disconnected from any particular client’s identity. Anonymization makes the data essentially useless for identifying clients, no matter who gets their hands on it. Without this, a breach could put personal client details out in the open.
Gadiwalla emphasized that if the data is anonymized to the point that it cannot be traced back to the client, then “the client’s privacy has not been compromised.”
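As a rough sketch of what de-identification can look like in code, here’s a Python example that drops direct identifiers and replaces the client ID with a salted one-way pseudonym. The fields and approach are illustrative only; real de-identification follows a formal standard (for example, the HIPAA Safe Harbor method), not a one-off script.

```python
import hashlib

# Sketch of basic de-identification: drop direct identifiers and replace
# the client ID with a salted one-way hash. Fields are illustrative only.
DIRECT_IDENTIFIERS = {"name", "dob", "address", "phone"}
SALT = b"rotate-and-store-this-secret-separately"

def anonymize(record: dict) -> dict:
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    pseudonym = hashlib.sha256(SALT + str(record["client_id"]).encode()).hexdigest()[:16]
    cleaned["client_id"] = pseudonym
    return cleaned

record = {
    "client_id": 10482,
    "name": "J. Doe",
    "dob": "1990-01-01",
    "phone": "555-0100",
    "session_summary": "Discussed coping strategies for work stress.",
}

print(anonymize(record))
# Direct identifiers are gone; the pseudonym can't be reversed without the salt.
```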
5. Data Transparency
No one wants to guess how their data is being handled. Vendors should be upfront about their security policies, how they respond to breaches, and what measures they have in place to protect client information. If they’re dodgy about this, then that’s a big red flag. “Transparency is a must,” said Karmi. He also explained that clear, open communication between vendor and provider is essential to maintaining trust and ensuring privacy and security measures are upheld.
The Future of AI Regulation (Inside and Outside of Behavioral Health)
Laws always seem to take too long to catch up to tech innovation, which is why things feel very “Wild, Wild West” in the AI space right now. Gadiwalla and Karmi made it clear, though, that the days of waiting for AI rules to slowly unfold are almost over—especially as more bodies like NIST (National Institute of Standards and Technology) and the FTC (Federal Trade Commission) step in to set standards.
One way we could see regulations bubble up is in the form of federal executive orders. Executive orders act as a temporary measure to get something on the books while lawmakers work out the more permanent rules.
Another development to watch is the new ISO 42001 certification. This international standard covers AI management systems, with a special focus on privacy, security, and accountability. (Side note: Eleos is already on track to obtain the ISO 42001 certification!)
Gadiwalla noted that during this period of flux, it’s especially important to select vendor-partners who can keep up with the pace of change and proactively adapt their tools to comply with emerging standards.
Advice for Today’s Adopters of Behavioral Health AI Tools
Today’s adopters of behavioral health AI still face a lot of unknowns, but they also have a unique opportunity to shape their own path toward safe and effective AI use. To that end, Gadiwalla and Karmi shared a few must-dos:
1. Do your homework on vendor security practices.
Karmi’s advice is simple: “Don’t be afraid of AI,” but do your homework. No system is perfectly secure, but understanding how a vendor handles encryption, access control, and data retention can give providers a better idea of who to trust. “If the platform is not built with security in mind, it could leak PHI,” Gadiwalla warned.
2. Create an internal AI policy.
Gadiwalla stressed the importance of developing a solid internal AI policy. This policy should outline your organization’s privacy standards, security processes and protocols, and vendor expectations. It’s not a “set it and forget it” document—but rather, a “living document” that should be reviewed and updated as AI evolves.
Need help creating an airtight AI policy for your behavioral health org? Download our free template.
3. Build a cross-functional AI committee.
AI policies shouldn’t be written in a vacuum. Gadiwalla and Karmi recommended forming cross-functional committees that include clinicians, compliance officers, and IT leaders. These groups can spot issues and opportunities from different angles, ensuring the policy works for everyone.
4. Vet vendors for value alignment.
Not every AI vendor is a good match for every organization. Gadiwalla and Karmi emphasized that providers should look for partners whose values align with their own. This includes transparency, privacy, and a commitment to reducing AI bias and ensuring ethical AI use.
At its core, behavioral health is about human connection. As AI plays a bigger role in care, the responsibility to protect that connection becomes more urgent. It’s up to providers to make sure AI strengthens—rather than threatens—the care process. That means asking the right questions, taking the necessary operational precautions, and perhaps most importantly, selecting and implementing AI tools with care.
Ready to see how Eleos can help your organization move safely and confidently into the era of AI? Request a demo of our purpose-built platform here.