In behavioral health, there has always been tension between care quality and accessibility. Clinicians want to provide personalized, high-quality therapy, but how can they do that when so many people need help and so many communities face massive workforce shortages?

In a recent episode of No Notes, Dr. Denny Morrison sat down with Dr. Donna Sheperis, Professor and Chair of the Department of Counseling at Palo Alto University, to explore how AI is helping clinicians balance the quality and quantity of their sessions in ways that once felt impossible.

Sheperis and Morrison discussed how AI is improving training for new clinicians, strengthening therapeutic relationships, and tackling real-world challenges like lengthy waitlists and clinician burnout. From tools that give students immediate feedback during supervision to those that equip parents with actionable support while they wait for their children to get into therapy, AI technology is ushering in a whole new era of behavioral healthcare. Read on for the highlights from this fascinating conversation. 

Want to listen to the AI and care quality podcast in full? Tune in right here.

Getting Acquainted with Palo Alto University and the eClinic

Founded as the Pacific Graduate School of Psychology, Palo Alto University has always focused on graduating “helping” professionals—including counselors, psychologists, and social workers—in the Bay Area and beyond. But as Sheperis shared on the podcast, the university is taking that mission to new heights with its eClinic.

The PAU eClinic is an entirely virtual clinic that was created to train psychology and counseling students in cutting-edge practices like telemental health and digital therapeutics. Originally, this idea faced resistance from traditionalists who doubted that good therapy could happen outside of a face-to-face treatment environment.

“People would say, ‘You can’t do good work if you’re not in the same room with your client,’” Sheperis explained. But COVID-19 changed everything, making telemental health a necessity for clinicians worldwide. The eClinic suddenly became a model for the future of therapy training and care delivery.

Today, the eClinic helps students get comfortable with all kinds of modern therapy tools, from Zoom-based counseling to digital therapeutics that offer clients support outside of the traditional 50-minute session.

“We created a zero-footprint clinic where we could teach and train our psychology and counseling students…to bring in what we call digital therapeutics, to help our clients between sessions, or to help our clinicians in different ways than we had been in brick-and-mortar clinics,” Sheperis said.

Leveraging AI Tools for Supervision and Training

In such a forward-looking educational environment, it should come as no surprise that AI is already transforming student training and supervision in the eClinic, helping PAU students develop the critical skills they need to succeed in the field.

Thanks to AI-powered supervision, for example, students can get immediate post-session feedback—including detailed treatment metrics, transcript analysis, and key moments from therapy sessions. Because students sometimes struggle to recall details long after a session, this immediate feedback is invaluable for reinforcing session content and for correcting any issues with their approach. Instead of relying on fragmented memories or spending hours re-watching video recordings, students have instant access to clear, actionable insights that help them reflect and grow.

“Supervision is really about, ultimately, the clinician developing an internal supervisor,” Sheperis explained. “And this is one way to really grow that internal critical analytical supervisor.”

AI tools also give supervisors a more comprehensive view of a student’s progress over time. When they can see detailed data and view patterns across sessions, they can provide student therapists with more targeted, helpful guidance.

But the eClinic does more than give students practice with tools that have already been implemented. Students are also encouraged to actively explore and introduce new technologies that could benefit their future clients. For instance, students have brought in gamification tools to help clients with attention or task management issues. By testing and rating these tools within the clinic, students actively contribute to the program’s growing knowledge base.

“I don’t think there’s anyone that could ever stay on top of how quickly this is all moving,” Sheperis noted. “It’s so fast.” And that makes this collaborative approach to vetting new solutions especially important.

Improving Care Access and Quality Through AI

At the most basic level, AI tools improve how clinicians work by making time-consuming tasks like documentation easier and faster. By handling these responsibilities, AI frees clinicians to focus fully on their clients during sessions.

“If AI is simply transcribing my sessions, I can be more present in session,” Sheperis said, adding that when AI reduces the mental load of remembering every detail in real-time, it’s much easier for clinicians to engage deeply with the clients in front of them.

But beyond documentation, AI is helping to address long-standing challenges in care delivery—including waitlists. For example, at a clinic with hundreds of children waiting for therapy services, Sheperis helped spearhead the implementation of AI-driven parenting programs.

“We’re taking parents into our clinic while their child is still on the waitlist and teaching them parenting skills,” Sheperis shared. “The AI provides education, real-time guidance during tantrums, and psychoeducation outside of therapy. By the time their child is seen—or even earlier—they may no longer need to be on the waitlist.”

Thanks to AI, families are getting immediate support, and clinicians are freeing up appointments so people with the most critical needs can access care faster. Sheperis believes this is just the tip of the iceberg—that AI has the potential to address much larger public health gaps. “It has so many advantages to address real-world problems and help a lot of people,” she said.

Addressing Resistance to AI in Behavioral Health

Even as AI reshapes healthcare and other industries, many mental health professionals remain hesitant to adopt it. According to Sheperis, much of this resistance comes down to unfamiliarity. But as she pointed out, AI isn’t a passing trend—it’s here to stay.

“I don’t think AI is a genie we can put back in the bottle,” Sheperis said. “And so it’s not about when or if; it’s about how.”

Unfortunately, universities and accrediting bodies have been slow to incorporate AI into their policies and processes. Sheperis noted that while clinicians will inevitably encounter AI-supported tools in their careers, many graduate programs still aren’t preparing students to use them effectively. And accrediting agencies, she argued, are failing to keep pace with these changes.

“The word ‘technology’ is used exactly one time in the accrediting agency that accredits my program—one time,” she shared. “And that’s in relation to an electronic health record. So that is pretty substandard.”

This gap is a disservice to students, who deserve to enter the workforce with a strong understanding of AI’s ethical and practical implications, she argued. Instead of leaving new clinicians to figure it out on their own, Sheperis advocates for integrating AI education into current curricula.

“If our agencies are going to be expecting our students to be able to use, at minimum, an AI-supported progress note…and we’re not teaching them how to even begin to understand how to ethically incorporate AI into a progress note, then we’re failing our students,” Sheperis said.

Digging into Ethics and Responsible AI Use

Ethics should be top of mind for behavioral health leaders and clinicians alike as AI becomes more entrenched in the field. Providers aren’t just responsible for using these tools correctly—they also need to understand the intricacies of AI privacy, transparency, and data security to ensure they’re protecting their clients.

“It’s just a language like any other to understand the privacy, the transparency, the benefits of using it—because ultimately those are the things you, as a clinician, have to be able to explain in an understandable way to any client that you’re working with,” Sheperis said.

Get up to speed on critical AI privacy and security concepts for behavioral health professionals. Download our AI privacy and security guide here.

To guide ethical AI integration, Sheperis highlighted a “4 C’s” framework for selecting healthcare AI tools:

  • Consequences – Weigh the potential risks and benefits of AI-assisted care.
  • Control – Maintain human oversight to ensure AI is supporting, not replacing, clinical decision-making.
  • Critique – Evaluate how AI tools were developed, including their research foundations and any biases in their models.
  • Care – Thoughtfully implement AI in ways that enhance, rather than disrupt, the therapeutic process.

She also added a fifth “C” to this model: Cultural Responsiveness. Many AI tools are designed without input from the people who will actually use them—whether they are clinicians or clients.

“Too often, tools are developed by us ivory tower folks—or researchers and scientists—and not by the people who are actually going to be using the technology,” she said. “And that means that folks get left out, and it’s not really helpful to the population that we’re trying to reach.”

For AI to work well within behavioral health, it has to be unbiased, transparent, and ethical—and Sheperis encouraged clinicians to take an active role in shaping how it’s used.

Imagining the Future of AI in Behavioral Health

Automated documentation is a game-changer in behavioral health, but the real potential of AI lies in improving care delivery and outcomes. Sheperis is confident that clinicians who embrace AI will be better equipped to provide effective care—as long as they use it thoughtfully and cautiously.

“Clinicians who use AI at some level will likely be greater than clinicians who don’t use it at all,” Sheperis said.

One of the clearest ways AI improves care is by acting as a quality check for clinical decisions. It won’t replace a clinician’s expertise, but AI can help verify treatment plans and assessments to ensure they are in line with the best available evidence.

“Why not have a cross-check for a treatment plan? Why not have a cross-check for a diagnosis or an assessment? It seems like a really good idea,” Sheperis said.

Due Diligence Lays the Groundwork for AI Success

Throughout her conversation with Morrison, Sheperis made one thing clear: AI is already part of behavioral health. Now, it’s up to clinicians, educators, and accrediting bodies to understand, teach, and apply AI tools in ways that align with ethical standards as well as provider and client needs.

Like many early adopters, Sheperis believes AI can enhance care quality, improve training, and make services more accessible—but only if it is developed and implemented with intention.

Want to learn how Eleos is already helping behavioral health organizations level up care quality with AI? Request a demo of our purpose-built platform here.