LAS VEGAS — AI may have a role to play in delivering mental healthcare, despite the deeply human and empathetic nature of the specialty, but experts differ on what counts as safe.
Instead of replacing humans or dispensing advice, AI can engage users through motivational interviewing techniques and reflective empathy, and connect them to the support resources they need in the moment, Headspace chief product and design officer Leslie Witt said during an Endpoints News event at the HLTH conference in Las Vegas on Sunday.
The mental health and meditation app launched an empathetic AI companion in October that helps users reflect on their emotions and recommends meditations from Headspace’s library.
Chatbots aren’t going to replace humans, American Psychological Association senior director of healthcare innovation Vaile Wright said, but providers can use AI to make their workflows more efficient, for example by collecting data and summarizing it quickly.
Online psychiatry startup Talkiatry’s CEO and co-founder Robert Krayn said any patient-facing AI that isn’t FDA-approved would not be successful.
“I wouldn’t buy it and I wouldn’t build it,” he said.
“There’s a million AI therapy companies probably here at HLTH, and I think some of them are doing some really interesting things,” Krayn said. “I’d love to talk to the ones that are going through the FDA approval route, because I think those will be the ones that will be successful at the end of the day.”