Can you believe it’s already April? It feels like the holiday season was just yesterday—and yet also like it was a year ago, given everything that has already happened in 2026.
That feeling is likely familiar to behavioral health and substance use disorder (SUD) organizations. With H.R.1 policies going into effect and ongoing uncertainty around the SUPPORT Act, leaders are navigating a complex and evolving landscape—quite the regulatory knot to untangle.
With all that change comes added pressure on your staff and your margins—making it more important than ever to create operational efficiencies. And of course, 2026 is absolutely the year to invest in AI-powered tools—but most organizations don't know where to start.
Every solution promises impact, but the wrong investment can quickly turn into an operational nightmare—seriously raising the stakes on the decision-making process.
To help you cut through the noise and make an informed decision, we created an Advanced AI Buyer’s Guide for behavioral health leaders. Download your copy here.
And to bring the guide to life, we hosted a webinar with leaders across clinical, operational, and technical roles in community-based care:
- Kate Benedetto, Customer Insights Manager at Eleos
- Nikki Stanaitis, Chief Clinical Officer at New Vista
- Sarah Nagle, Director of IT at Metrocare Services
- Felicia Jeffery, CEO of Gulf Coast Center
They shared how they evaluated AI in their organizations, what they learned along the way, and how they separated software vendors from true partners in a crowded landscape.
Want to hear the whole conversation? Watch the webinar recording here.
The Community-Based Care Guide to Buying AI: Key Questions You Should Ask
We’ve pulled together the most important questions these leaders asked throughout their AI buying process and the insights that helped them make confident decisions. If you’re looking for the quick version of the guide and webinar, here’s where to start.
1. Does the AI Do What They Say It Does?
The first rule of shopping in a crowded AI marketplace is not to take anything at face value. It may sound obvious, but those sales pitches and demos can be hard to resist!
That’s why the most effective buyers start with evidence.
To start off on the right foot, ask every potential AI vendor for cold, hard proof in areas such as:
- Return on investment
- Note efficiency
- Time savings
- Compliance coverage
- Clinician wellbeing
- Workflow efficiency
- Clinical outcomes
For Nikki Stanaitis and the team at New Vista, this step proved critical early in their AI journey.
They began by exploring the AI capabilities within their existing EHR. On the surface, it seemed like a natural place to start—familiar, convenient, and already embedded in their workflows. But once they looked closer, it became clear that the tool wouldn’t meaningfully reduce clinician burden in the way they needed.
Without that evidence step, they might never have expanded their search.
“Social workers don’t really learn about AI in school and how to shop for it,” said Stanaitis. “It was a daunting shift from our EHR vendor, which is comfortable… to the wild west of all these AI vendors in the field.”
Rather than rushing into a decision, the team took a deliberate approach. They started by exploring—and by asking better questions, like:
- How much time will it save?
- What kind of efficiencies will it create?
- What does note completion time look like?
- What do clinicians say about it?
- What does the return on investment look like dollar for dollar?
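To make that last question concrete, here's a rough back-of-the-napkin version of the dollar-for-dollar math. Every number below is a hypothetical placeholder; swap in the vendor's actual claims and your own staffing costs:

```python
# Back-of-the-napkin ROI estimate for an AI documentation tool.
# Every number here is a hypothetical placeholder.

clinicians = 50                  # staff who would use the tool
minutes_saved_per_note = 10      # the vendor's claimed per-note time savings
notes_per_clinician_per_week = 25
loaded_hourly_cost = 45.00       # fully loaded cost per clinician hour (USD)
annual_license_cost = 60_000.00  # quoted annual price for the platform

hours_saved_per_year = (
    clinicians
    * notes_per_clinician_per_week
    * minutes_saved_per_note
    / 60   # minutes -> hours
    * 48   # working weeks per year
)
dollars_saved = hours_saved_per_year * loaded_hourly_cost
roi = (dollars_saved - annual_license_cost) / annual_license_cost

print(f"Hours saved per year: {hours_saved_per_year:,.0f}")
print(f"Estimated value of time saved: ${dollars_saved:,.0f}")
print(f"Dollar-for-dollar return: {roi:.1f}x")
```

If a vendor can't hand you defensible versions of these inputs, that's an answer in itself.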
They wouldn't even consider vendors who couldn't show proof of how the tool would operate in real-world workflows. They were clear about wanting an AI solution that allowed notes to be completed in real time, in session, and that accurately captured what was happening in real clinical language.
“We wanted to make sure that what we were ultimately creating was an environment where clinicians wanted to stay, clinicians felt more present, clients felt heard, and we were able to do the hard work,” said Stanaitis.
Because New Vista wasn’t just looking for productivity gains—they were looking for a tool that improved staff and client retention.
Identifying your goal in implementing AI and looking for evidence of those outcomes in every vendor you evaluate is a great starting point for your search.
2. Does the AI “Speak” Community-Based Care?
After you’ve started to dig in, it’s also important to make sure you’re only looking at serious clinical contenders. Many generic models aren’t going to cut it when it comes to HIPAA compliance and behavioral health-specific workflows.
Because, as Stanaitis put it, “Not all AI tools are created equal.”
Beyond the generic models, there’s a big difference between a tool built for the medical space and one built for behavioral health.
Behavioral health and SUD clients share their most vulnerable information, and sessions run much longer than the average healthcare visit. That means there's added context and nuance the AI needs to pick up on. So, it's crucial that the tool is built to speak the language—and that you can confidently stand behind it when your clinicians talk to clients about bringing AI into their sessions.
As you evaluate AI vendors, look for signals of true clinical intelligence, such as:
- Representative clinical corpus,
- Clinician-engineer partnership,
- Golden-thread labeling,
- Grounding against hallucinations,
- Privacy-first pipelines, and
- Quality and rigor.
That said, technology is never perfect, which is why the strength of the partnership matters just as much as the product.
For New Vista, that meant choosing a vendor who didn’t just deliver a tool but actively improved it alongside their team. Clinicians were encouraged to give real-time feedback, flag issues, and challenge outputs. And more importantly, that feedback didn’t disappear into a void.
When concerns about hallucinations came up, they didn’t ignore them or work around them. They partnered closely with Eleos to investigate, ultimately tracing the issue back to faulty audio equipment (not the AI).
That kind of collaboration made all the difference, and it wouldn’t have been possible if they hadn’t chosen a partner who deeply understood their field.
Hear more about New Vista's AI journey and how they scaled care without sinking morale.
3. Can the Solution Keep Up With Changes?
One thing about behavioral health and SUD… they’re always changing. Whether it’s policies, funding, or regulatory requirements, things are constantly evolving—and the needs of your clients and clinicians continue to grow alongside them.
So, when you make a big investment in technology like AI, you want to be sure that it’s not going to stand still. The right partner won’t just meet your current needs. They’ll demonstrate a clear ability to evolve with your organization over time.
To suss that out, ask potential partners if they have:
- A secure RAG system,
- Feedback loops and continuous updates,
- Enterprise security and backups,
- Clear connection points (public APIs), and
- An auto-scaling platform.
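A quick translation of the first item, since "RAG" gets thrown around a lot: a retrieval-augmented generation system grounds the AI's output in your organization's own approved content instead of letting the model answer from memory. Here's a deliberately simplified sketch of the retrieval step; the function names and policy snippets are hypothetical, and real products use embeddings and secured vector stores rather than keyword matching:

```python
# A toy sketch of the retrieval step in a RAG (retrieval-augmented
# generation) system. Real products use embeddings and secured vector
# stores; this keyword-overlap version only illustrates the pattern:
# ground the model's answer in your own approved documents.

def score(query: str, doc: str) -> int:
    """Count how many words from the query appear in the document."""
    doc_words = set(doc.lower().split())
    return sum(1 for word in query.lower().split() if word in doc_words)

def retrieve(query: str, knowledge_base: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(knowledge_base, key=lambda doc: score(query, doc), reverse=True)[:k]

# Hypothetical, non-PHI policy snippets standing in for a real knowledge base.
knowledge_base = [
    "Progress notes must be completed within 72 hours of the session.",
    "Group therapy notes require one entry per attending client.",
    "Treatment plans are reviewed and re-signed every 90 days.",
]

# In a real system, the retrieved context is handed to the language model
# so its answer is grounded in your documents, not just its training data.
print(retrieve("When are progress notes due?", knowledge_base))
```

The point isn't the implementation—it's knowing enough to ask a vendor where their system retrieves from and how that store is secured.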
And future-proofing isn’t just about technology.
As Felicia Jeffery, CEO of Gulf Coast Center, explained: “When I think about future-proofing… I start with trust, sustainability, and impact.”
For her team, this meant looking beyond features and asking whether a solution could support their organization long term—not just operationally, but clinically and culturally.
That mindset ultimately shaped how they evaluated every vendor. Not by what the product could do today—but by how it would hold up over time.
Put simply, it comes down to a few critical questions about the AI platform you’re evaluating:
- Will it protect our data?
- Will it grow our people?
- Will it serve our needs as we change?
“We want to build something today that doesn’t impact how we provide care tomorrow,” Jeffery concluded.
Because future-proofing isn't about predicting what's next; it's about choosing a partner you trust to evolve with you.
4. Is My Data Safe?
This seems like a no-brainer in healthcare, but when it comes to AI, privacy and security become much more complex.
That’s especially true in behavioral health and SUD, where documentation often includes deeply personal, highly sensitive information. Any AI that touches these sessions has to earn trust—with safeguards that are clear, verifiable, and built into every layer of the product.
Before moving forward, check for these green flags:
- Specific certifications and audits (ISO 42001, SOC 2, and HIPAA compliance for starters),
- Business Associate Agreement (BAA),
- Data minimization,
- Encryption,
- Access controls,
- Policies on PHI use in model training,
- Subprocessors and third-party risk,
- Retention and deletion,
- Incident response, and
- Trust and transparency.
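To make one of those items concrete: "data minimization" means the vendor collects, sends, and stores only what's necessary, stripping direct identifiers wherever possible. Here's a deliberately toy sketch of the idea (the patterns and sample note are made up, and production-grade de-identification goes far beyond a few text patterns):

```python
# A toy illustration of data minimization: stripping obvious direct
# identifiers before text ever leaves your environment. Real
# de-identification is far more rigorous (HIPAA's Safe Harbor method
# lists 18 identifier types); this only shows the kind of behavior
# to ask vendors to demonstrate.

import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # Social Security numbers
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # phone numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]

def minimize(text: str) -> str:
    """Replace obvious direct identifiers with placeholder tags."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

# Hypothetical example -- not real client data.
note = "Client can be reached at 555-867-5309 or jane.doe@example.com."
print(minimize(note))  # Client can be reached at [PHONE] or [EMAIL].
```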
But as Sarah Nagle, Director of IT at Metrocare Services, pointed out, checking the boxes is only the beginning.
Her team started with the fundamentals—ensuring their existing systems, devices, and infrastructure could support the technology, and validating that the vendor met strict security, legal, and compliance standards.
“Us nonprofit folks don’t always have the money for the most upgraded, top-of-the-line laptops, or the best software,” Nagle explained. “We had a whole team of people—our compliance, legal, and security teams—looking at all the specifications, making sure that it met the requirements.”
Once that foundation was in place, a new challenge emerged. In addition to confirming that the platform was secure, they also needed to ensure they could clearly communicate that it was. That meant translating complex technical safeguards into language that feels clear and reassuring to both clinicians and clients.
If clinicians can't clearly explain how AI is being used, or don't feel confident that the data is protected, adoption stalls, clients hesitate to consent, and your organization never sees the true value of AI. So, it's an important piece to think through ahead of implementation.
For a deeper dive into AI security and privacy, download the guide.
5. Will It Do No Harm?
AI should improve care quality and protect clients, not create new risks. That’s what we mean when we say “do no harm.”
With AI still evolving and regulations only just beginning to take shape, organizations need to take an active role in evaluating quality, safety, bias, and ethics. That means thinking beyond features and asking how a solution is governed and validated over time.
Trusted vendors should be able to demonstrate the guardrails they have in place across three layers:
- Layer 1: Bias & Safety Process: The vendor runs a loop of planning, testing, and monitoring before launch and on an ongoing schedule to check for bias and other errors.
- Layer 2: Human Oversight & Governance: The vendor keeps clinicians in control and makes accountability clear, while connecting to recognized frameworks such as CHAI and CARF.
- Layer 3: Evidence & Validation: The vendor demonstrates that the tool works and stays safe over time. Think third-party studies, external evaluations, and real-world monitoring.
At Metrocare, Sarah Nagle and her team didn’t just take these principles at face value. They brought in staff members to evaluate the quality of AI-generated suggestions, ensuring outputs met clinical standards before being trusted in documentation.
Clinicians weren’t expected to copy and paste. They were expected to review, edit, and make final decisions about what entered the chart.
As Nagle put it, “Making sure we have that human in the loop, and that they aren’t using [AI] as a shortcut.”
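In system terms, that expectation can be enforced rather than just encouraged. Here's a minimal sketch of what a review gate might look like; the names are hypothetical, not Metrocare's or any vendor's actual implementation:

```python
# A minimal sketch of a "human in the loop" gate: an AI-drafted note can't
# enter the chart until a clinician reviews, edits, and signs it.
# All names are hypothetical -- this is the pattern, not any vendor's API.

from dataclasses import dataclass

@dataclass
class DraftNote:
    ai_draft: str
    final_text: str = ""
    signed_by: str | None = None

    def review(self, clinician: str, edited_text: str) -> None:
        """The clinician edits and signs; only then is the note final."""
        self.final_text = edited_text
        self.signed_by = clinician

    def submit_to_chart(self) -> str:
        if self.signed_by is None:
            raise PermissionError("Unsigned AI draft cannot enter the chart.")
        return self.final_text

note = DraftNote(ai_draft="Client reports improved sleep and mood.")
note.review(clinician="N. Smith, LCSW",
            edited_text="Client reports improved sleep; mood remains anxious.")
print(note.submit_to_chart())
```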
To reinforce that, the team implemented ongoing oversight efforts, such as:
- Monitoring documentation patterns through dashboards,
- Reviewing notes directly in the EHR, and
- Comparing AI-assisted notes with traditional documentation.
By consistently monitoring and validating the quality of the outputs, the Metrocare team saw lower rates of late documentation—around 9% among clinicians using Eleos, compared to 20% among those who weren't.
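If you want to track that same metric yourself, the math is simple. Here's a minimal sketch (the 72-hour deadline and both samples are made-up placeholders, not Metrocare's data):

```python
# A minimal sketch of a late-documentation metric. The 72-hour deadline
# and both samples are made-up placeholders.

DEADLINE_HOURS = 72

def late_rate(hours_to_complete: list[float]) -> float:
    """Fraction of notes finished after the documentation deadline."""
    late = sum(1 for hours in hours_to_complete if hours > DEADLINE_HOURS)
    return late / len(hours_to_complete)

# Hours from session to completed note, per note (fake data).
ai_assisted = [1, 2, 3, 2, 1, 2, 90, 1, 2, 3]
traditional = [24, 80, 30, 60, 48, 24, 100, 36, 40, 20]

print(f"AI-assisted late rate: {late_rate(ai_assisted):.0%}")  # 10%
print(f"Traditional late rate: {late_rate(traditional):.0%}")  # 20%
```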
Those improvements came from pairing AI with clear oversight and continuous validation. Any AI tool you choose should do the same. Because “do no harm” means ensuring the technology actively supports safe, high-quality care—every time it’s used.
6. Will the Platform Work Across Settings and Use Cases?
As we all know, community-based care doesn't typically happen in a controlled environment. It happens in schools, homes, treatment centers, and various other community settings—across psychiatry, case management, group care, and even home health.
Psst. The home health care buyer’s guide is coming soon, so stay tuned!
So, when you’re watching a demo on a programmer’s laptop in a quiet office, you’re not getting the full picture. You have to be able to verify that the AI platform you choose will actually work for your team in the environments they operate in every day.
To get a more realistic view, ask vendors to show you real-world scenarios, such as:
- A real dashboard tour,
- A start-to-finish visit,
- A psychiatry-specific template,
- A live group note,
- A mobile demo outdoors,
- A peer note walkthrough,
- A quick view of role-based settings, and
- A demo of how supervisors review and approve notes.
Because if a solution can’t demonstrate value across your actual use cases, it won’t hold up after implementation.
Felicia Jeffery and her team took this a step further. They didn’t just evaluate the tool in one setting—they pressure-tested it across their organization.
From prescribers to substance use programs to group care, they looked at how the platform performed in different workflows and ecosystems. And just as importantly, they used data to validate what was working—and what wasn’t.
That visibility helped drive alignment, improve adoption, and ensure the tool was delivering value across teams.
7. Who’s Got My Back Post-Launch?
If nothing else, make sure this one is at the top of your list.
Good partners don't just sell you software; they have your back after rolling out the platform—and for however long your partnership lasts. Because in reality, implementation is just the beginning. The real work starts once your teams are using the platform day to day.
Before you settle on a partner, check that they’ll be around after go-live. Here are some tell-tale signs that you can count on them:
- A fast start (i.e., you're able to use the tool right away),
- A named point person,
- Training you can actually use,
- Help with change management,
- A usage and outcomes dashboard,
- Written support promises, and
- Regular check-ins to see how things are going.
As Jeffery explained, “When you’re looking for a partner, you want to make sure that they will help you through the change process, because the biggest threat to any of this is change management.”
For Sarah Nagle and her team at Metrocare, that support made all the difference. After coming off a lengthy EHR implementation, they expected another difficult rollout. Instead, the experience was notably different—fast, smooth, and highly collaborative.
“They came on-site and did in-person training, and folks left that training pumped up,” said Nagle. “They wanted to use it, they wanted to be part of the pilot, and I had honestly never seen that level of excitement from our clinical staff before.”
And the support and excitement shouldn't just stop after training. In AI—where the technology is evolving quickly and the stakes are high—you're not just choosing a product. You're choosing who you want to navigate that change with.
So when you’re evaluating vendors, don’t just ask what happens at go-live. Ask what happens after. Because the right partner doesn’t disappear once the contract is signed.
Once you have the right partner in place, check out this guide to nail your AI implementation.
Buying AI in Community-Based Care Isn’t Just an AI Decision
At the end of the day, this isn’t just about buying AI.
With tightening margins and ongoing staff shortages, organizations are looking for ways to reduce administrative burden and give clinicians the space to focus on care.
AI can be that support, but only if you choose the right solution—and the right partner.
As Jeffery put it, “The future of AI is already here—and you cannot lead in that future with yesterday’s mindset.”
While creating checklists and comparing features matters, what matters more is who you choose to work with:
- A partner who understands your world.
- A partner who listens and adapts.
- A partner who’s invested in helping you solve real problems—not just selling a product.
- A partner who’s in it for the long haul.
Get that right, and AI becomes more than just another piece of software in your toolbox. It becomes the tool you use to strengthen your teams and deliver better care.
If you’re interested in seeing how Eleos answers all these questions and delivers on true partnership, request a demo today.