I landed in Denver last week with a pretty clear picture of where the behavioral health field stood. H.R.1 signed into law. The federal fraud crackdown in full swing. Medicaid under more scrutiny than it’s been in a generation. The organizations I talk to every week are navigating real fear—about funding, about audits, about what the next twelve months actually look like.

NatCon26 had a different energy than last year’s conference. In Philadelphia, the big question was whether AI could really do what it was promising. In Denver, that question was settled. The conversations have moved on—to survival, to strategy, and to what kind of organizations are actually going to come out the other side of this.

Chaos can either paralyze you or push you toward excellence. What I saw on the floor convinced me that the best organizations in this field are choosing the latter.

Here’s what I heard, and how I think it’ll shape the industry moving forward.

The Ground Is Already Shifting

Before I get into what happened at the conference, I want to be honest about the weight everyone was carrying walking in.

H.R.1—the One Big Beautiful Bill Act—was signed on July 4, 2025. Most of the field already knows the headline numbers: $911 billion in Medicaid cuts over a decade, an estimated 10 to 12 million people losing coverage by 2034. 

But the first wave isn’t theoretical anymore. Enhanced Federal Medical Assistance Percentage (FMAP) increases have already begun expiring. Six-month redetermination cycles kick in at the end of 2026. Work requirements follow in 2027. And organizations are spending a staggering amount of time managing eligibility paperwork—a third of them are logging more than 100 staff hours per month on it today, before the new requirements even hit.

Then, in March 2026, the White House issued an Executive Order creating a Task Force to Eliminate Fraud, chaired by Vice President Vance, with a mandate to coordinate government-wide action on fraud, waste, and abuse in federal benefit programs. CMS followed with its CRUSH initiative—Comprehensive Regulations to Uncover Suspicious Healthcare—expanding audit activity and inviting public input on new enforcement mechanisms. Behavioral health, SUD, peer support, and community-based services have been explicitly called out as high-risk service areas.

The burden of proof, in many cases, now falls on providers.

For the behavioral health community, this lands hard. 

As our Chief Clinical Officer, Dr. Denny Morrison, has said plainly: “The documentation requirements in our industry are just onerous—more so than almost any other aspect of healthcare that I’m aware of.” That was true before the crackdown. It’s a much bigger problem now.

But here’s what Dr. Morrison also said, and it stuck with me heading into Denver: “While these changes may be painful in the short term, they can position organizations for long-term sustainability and enhanced community impact.”

The word “can” is doing a lot of work in that sentence. Because it’s not automatic. It depends on what leaders choose to do.

That was the undercurrent at NatCon26.

For more information on recent H.R.1 (OBBBA) and Medicaid funding updates, check out this blog.

Observation 1: Revenue Protection Is the New North Star—But It’s Only Half the Equation

I kept hearing the same two words on the floor last week: Revenue protection.

That’s what the field is focused on right now. Not productivity. Not clinician time savings. Revenue protection. How do we make sure we get paid for the care we’re already delivering?

I get it. When audits are expanding, Medicaid dollars are contracting, and your compliance team is drowning, you focus on what keeps the lights on.

But here’s the thing: Revenue protection and operational efficiency aren’t two different goals. Treat them that way, and you’ll miss both.

If a clinician is documenting in real time (instead of days later), and the note is payer-ready before it ever reaches billing, you’ve protected revenue. If eligibility issues are flagged during the session—not after the claim is denied—you’ve protected revenue. If compliance gaps are caught before a note is ever signed, rather than discovered during an audit six months later, you’ve protected revenue.

The organizations that understand this aren’t choosing between speed and compliance. They’re building systems where the two reinforce each other. That’s the difference between reactive and proactive, and it’s a gap that’s only going to widen.

Observation 2: Buyers Are Done with Point Solutions

A few years ago, the conversation was, “Which AI tool should we try?” In Denver, it was “How do we consolidate what we already have—and stop buying more things that don’t talk to each other?”

The appetite for point solutions is fading fast. Budget pressure is a big reason. But it's also change-management fatigue and plain tech fatigue. Organizations have accumulated tools for documentation, tools for billing, tools for compliance review—and they're all running on separate rails. They don't work well together, and clunky workflows keep users and executives chasing their tails.

What buyers are looking for now is a platform. One vendor. One workflow. One place where clinical, compliance, and revenue cycle work together instead of against each other. Because when you think about it, it’s all connected—conversation to payment.

This is a meaningful shift. The organizations asking that question are the ones making better decisions. They’re thinking about long-term sustainability, not just short-term fixes.

Observation 3: The Market Knows the Difference Between “AI Tourists” and “AI Natives”

This was one of the most encouraging things I observed at NatCon26: Buyers are getting sharper.

I heard the phrase “AI tourists” used more than once, in more than one meeting—a way of describing vendors who have added AI features to existing archaic platforms without fundamentally building for it. They’re visiting AI. They’ve taken the trip. But they don’t live there.

The contrast is an AI-native company—one that was built from the ground up for this problem, in this care setting, using clinical data from this field.

The distinction matters. A general-purpose AI wrapped in a behavioral health logo has been trained on the internet—which means it can write a progress note and a cookie recipe with equal confidence. That’s not just a quirk. It’s a signal that the model has no clinical guardrails, and that means it can be prompted to generate content that has no place in a therapy session. 

Purpose-built clinical AI, like Eleos, prevents that at the model level, because it has been trained on real session data, reviewed by licensed clinicians, and designed to understand the nuances of therapeutic relationships and behavioral health documentation—not approximated from a general model.

Buyers in Denver were asking sharper questions than I’ve seen before: 

  • Where does my PHI go after the session? 
  • Who trained this model, and on what data? 
  • If I ask this system a clinical question, how does it know it’s right? 
  • When insights are generated, who verifies those insights are right?

Those are exactly the right questions. I encourage behavioral health leaders to keep asking them. And be prepared to vet the answers.

Observation 4: AI Security Is Under-Discussed. That Worries Me.

The security conversations at NatCon26 were more sophisticated than a year ago. More organizations are asking about HIPAA compliance and SOC 2 certification before they sign. 

That’s progress. But it’s not enough.

HIPAA and SOC 2 are the floor. They’re table stakes. What the field is not yet talking about consistently are the risks specific to AI systems—and those risks are categorically different from the ones traditional healthcare IT security was designed for.

The OWASP Top 10 LLM risks include things like prompt injection, training data poisoning, insecure output handling, and model theft. These are documented, real-world vulnerabilities that apply directly to any AI system handling sensitive clinical data. They are not theoretical edge cases. And most behavioral health organizations—and frankly, most vendors—aren't asking about them at all.

Here’s what makes me lose sleep: 57% of providers are already using unauthorized AI tools in their clinical workflows. That means the majority of the AI touching behavioral health data right now was not selected, vetted, or secured by anyone at the organizational level. It just crept in.

Behavioral health data is among the most sensitive data that exists. A client’s therapy session contains things they may not have told anyone else in their lives. The idea that this data is flowing through unsecured AI systems—because the person doing the work just needed a way to get work done—is a real problem.

At Eleos, we’ve made the investments: A full-time CISO, behavioral-health-only training data, no storage of raw session recordings, the SAIL framework for AI risk management, and certifications that go well beyond the minimum. Not because it’s a selling point. Because we genuinely believe trust is non-negotiable in this field.

But this conversation needs to be louder across the whole industry—not just among vendors competing on security features.

If you want to go deeper on what rigorous AI security actually looks like in behavioral health—and what questions to ask any vendor you’re evaluating—our Ultimate Guide to Behavioral Health AI Security & Privacy is a good place to start.

Observation 5: Necessity Is the Mother of Invention—and a Reckoning Is Coming

Here’s the thing I kept coming back to as I walked the exhibit floor.

Necessity is the mother of invention. And the field has never had more necessity than it does right now.

H.R.1. CRUSH. Fraud, waste, and abuse. A workforce already stretched to its limits. Demand that keeps rising while reimbursement tightens. Organizations that are being asked to do more with less—by regulators, by payers, by the needs of the communities they serve.

In this environment, you don’t get to stand still. The organizations that are going to come out the other side of this are the ones that treat this moment as a forcing function—that build the operational infrastructure now, that adopt technology not as an experiment but as a core strategy, that create the systems that let them serve more people without burning out the people who care for them.

As my friend and Eleos CCO Denny Morrison put it, “You can’t save enough on paper clips and copiers to make a dent.” Cost-cutting alone isn’t a strategy. Operational excellence is.

The gap between organizations that act and organizations that wait is going to become very visible, very quickly. The leaders in Denver who are moving—who are asking the right questions about platforms, about security, about real AI versus borrowed AI—I feel genuinely good about their odds.

The ones waiting for clarity before they invest? I’m more worried about them.

I’ll close with what I always come back to when the environment feels particularly heavy: Chaos can be a catalyst for excellence. The question is whether you choose to let it be.

The clinicians doing this work—the ones who went into this field because they wanted to help people—they deserve organizations that are fighting to be excellent, not just surviving. And they deserve technology that was built to serve them.

That’s what we’re here for. 

—Joffe