Chatbots raise questions about transparency in mental health care

The mental health field is increasingly looking to chatbots to relieve mounting pressure on a limited pool of licensed therapists. But developers are entering uncharted ethical territory as they face questions about how closely AI should be involved in such deeply sensitive support.

Researchers and developers are in the very early stages of figuring out how to safely blend artificial intelligence-powered tools like ChatGPT, or even homegrown systems, with natural human empathy – especially on peer counseling sites where visitors can ask other internet users for sympathetic messages. These studies seek to answer deceptively simple questions about AI’s ability to generate empathy: How do peer counselors feel about receiving assistance from AI? How do they feel when visitors find out? And does knowing change how effective the support proves to be?

They are also grappling, for the first time, with a thorny set of ethical questions, including how and when to inform users that they are participating in what is essentially an experiment testing an AI’s ability to generate supportive feedback. And because some of these systems are built to let peers send helpful texts to one another using message templates, rather than to provide professional medical care, they may fall into a gray area where the oversight required for clinical trials does not apply.


“The field is sometimes evolving faster than the ethical discussion,” said Ipsit Vahia, M.D., head of McLean Hospital’s Digital Psychiatry Translation and Technology and Aging Lab. Vahia said the coming years are likely to see more experiments in the field.

That use may carry risks: Experts said they worry about inadvertently encouraging self-harm, or missing signals that a help-seeker might need more intensive care.


But they are also concerned about rising rates of mental health issues and the lack of easily accessible support for many people struggling with conditions such as anxiety or depression. That is why striking the right balance between safe, effective automation and human intervention is so essential.

“In a world where there aren’t nearly enough mental health professionals, plus lack of insurance, stigma, and lack of access, anything that could help really plays an important role,” said Tim Althoff, assistant professor of computer science at the University of Washington. “It has to be evaluated with all [the risks] in mind, which creates a particularly high bar, but the potential is there, and that potential is what drives us.”

Althoff co-authored a study published Monday in Nature Machine Intelligence that examined how peer supporters on a site called TalkLife felt about responses to visitors co-written with a homegrown chat tool called Hailey. In a controlled trial, the researchers found that about 70% of supporters felt Hailey enhanced their ability to empathize – a sign that AI guidance, when used carefully, could strengthen a human supporter’s ability to connect with people seeking help. Supporters were informed that they might be given AI-guided suggestions.

Instead of telling the help-seeker, “Don’t worry,” Hailey might suggest the supporter respond with something like, “This must be a real struggle,” or ask about a possible solution, for example.

The positive results in the study, Althoff stressed, are the product of years of incremental academic research dissecting questions like “what is empathy in clinical psychology or a peer support setting” and “how do you measure it.” His team did not present the co-written responses to TalkLife visitors at all; their goal was to understand how supporters might benefit from AI guidance before any AI-guided replies reached visitors, he said. Previous research from his team suggested that peer supporters reported struggling to write helpful and empathetic messages on online sites.

In general, developers exploring AI interventions for mental health – even in peer support – “would be well served by being conservative around ethics rather than being bold,” Vahia said.

Other efforts have already sparked outrage: Tech entrepreneur Rob Morris drew condemnation on Twitter after describing an experiment involving Koko, a peer-support system he developed that lets visitors on platforms including WhatsApp and Discord anonymously solicit or offer sympathetic support. Koko offered AI-drafted responses, based on the incoming message, to a few thousand peer supporters, who were free to use, reject, or rewrite them.

Visitors to the site were not explicitly told that their peer supporter may have been guided by AI; instead, when they received a response, they were informed that the message may have been written with the help of a bot. AI scholars decried that approach in response to Morris’s posts. Some said he should have sought approval from an institutional review board – a process academic researchers typically follow when experimenting on human subjects.

Morris told STAT that he did not believe such approval was needed for this use because it did not involve personal health information. He added that the team was only testing a product feature, and that the original Koko system stemmed from earlier academic research that had gone through IRB approval.

Morris stopped the experiment when he and his staff concluded internally that they didn’t want to spoil the natural empathy that comes from pure human-to-human contact, he told STAT. “The actual writing may be perfect, but if a machine writes it, it’s not thinking about you … it’s not learning from its own experiences,” he said. “We are very particular about user experience and we look at data from the platform, but we also have to rely on our own intuition.”

Despite the fierce online backlash, Morris said he was encouraged by the discussion. “Whether this kind of work can and should be done through IRB processes outside of academia is a really important question, and I’m really excited that people are so excited about it.”
