January 23, 2023
A UW-led team developed an AI system that suggested changes to participants’ responses to make them more empathetic. Shown here is an example of an original response from a person (left) and a response that is a collaboration between the person and the AI (right). The text labeled in green and red shows the suggestions made by the AI. The person can then choose to edit the suggestions or reload the response. (Sharma et al./Nature Machine Intelligence)
Empathy is key to having a helpful conversation about mental health. But this skill can be difficult to learn, especially when someone is sharing something difficult.
A team led by researchers at the University of Washington studied how artificial intelligence could help people on TalkLife, a platform where people give each other mental health support. The researchers developed an AI system that suggests changes to participants’ responses to make them more empathetic. The system helped people communicate empathy more effectively than traditional training did. In fact, the best responses came from collaborations between the AI and people.
The researchers published these findings on January 23 in Nature Machine Intelligence.
UW News contacted senior author Tim Althoff, a UW assistant professor in the Paul G. Allen School of Computer Science and Engineering, for details about the study and the concept of AI and empathy.

Tim Althoff (Dennis Wise/University of Washington)
Why did you choose the TalkLife platform to study?
Tim Althoff: Previous research suggests that peer-support platforms can have a significant positive impact on mental health care because they help address the enormous challenge of access. Because of insurance issues, stigma or isolation, many people who cannot reach professional care do have access to free online peer-support platforms. TalkLife is the largest peer-support platform globally and has a huge number of motivated peer supporters.
In addition, TalkLife’s leadership recognized the importance and potential impact of our research on how computing can empower peer support. They supported our research through collaboration, feedback, recruitment of participants and data-sharing.
What inspired you to help people communicate with more empathy?
TA: It is well established that empathy is key to helping people feel supported and to building trusting relationships. But empathy is also complex and nuanced. It can be challenging for people to find the right words in the moment.
While counselors and therapists are trained in this skill, our prior research has established that peer supporters currently miss many opportunities to respond more empathetically to one another. We also found that peer supporters do not learn to express empathy more effectively over time, which suggests they may benefit from empathy training and feedback.
On the surface, it seems counterintuitive to turn to AI for something like empathy. Can you talk about why this is a good problem for AI to solve?
TA: What AI feedback can do is be very specific and “contextual,” giving suggestions on how to concretely respond to the message right in front of a person. It can offer ideas in a “personalized” way, rather than through general training examples or rules that may not apply to every situation a person faces. And it only pops up when someone needs it – if a person’s response is already great, the system can simply give a light touch of positive feedback.
People may wonder “why use AI” for this aspect of human connection. In fact, we designed the system from the ground up not to take away from this meaningful person-to-person interaction. For example, we show feedback only when needed, and we trained the model to make the smallest possible changes to a response that communicate empathy more effectively.
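To make the “only when needed” idea concrete, here is a minimal sketch, assuming a drafted response can be scored for empathy before any suggestion is shown. It is purely illustrative and not the authors’ implementation; the names `predict_empathy` and `suggest_rewrite` are hypothetical placeholders for a learned empathy scorer and a rewriting model.

```python
# Hypothetical sketch: surface suggestions only when a drafted
# peer-support response scores low on empathy; otherwise give
# light-touch positive feedback.

EMPATHY_THRESHOLD = 0.6  # assumed cutoff, not from the paper


def maybe_give_feedback(draft: str, predict_empathy, suggest_rewrite):
    """Return either light praise or a concrete rewrite suggestion.

    `predict_empathy` and `suggest_rewrite` stand in for a learned
    empathy scorer and a rewriting model, respectively.
    """
    score = predict_empathy(draft)  # assumed to return 0.0 (low) .. 1.0 (high)
    if score >= EMPATHY_THRESHOLD:
        return {"type": "praise", "message": "This already sounds empathetic."}
    # Otherwise propose a minimally edited, more empathetic version.
    return {"type": "suggestion", "rewrite": suggest_rewrite(draft)}
```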
How do you train an AI to “know” empathy?
TA: We worked with two clinical psychologists, Adam Miner at Stanford University and David Atkins at the UW School of Medicine, to understand the research behind empathy and to adapt existing empathy scales to the asynchronous, text-based setting of online support on TalkLife. We then annotated 10,000 TalkLife responses for different aspects of empathy to develop an AI model that could measure the level of empathy expressed in text.
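As a rough illustration of what such an empathy-measurement model could look like in code, the sketch below fine-tunes a generic text classifier on annotated responses. It assumes a Hugging Face transformers/datasets setup and a hypothetical file empathy_annotations.csv with columns response_text and empathy_level; it is not the authors’ released code, and the base model and label scheme are assumptions.

```python
# Illustrative sketch of training an empathy classifier on annotated
# responses (assumptions: Hugging Face transformers + datasets, and a
# hypothetical CSV with "response_text" and an integer "empathy_level").
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "roberta-base"  # assumed base encoder, not necessarily the paper's

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

# Annotated peer-support responses: text plus a 0/1/2 empathy rating.
dataset = load_dataset("csv", data_files="empathy_annotations.csv")["train"]

def tokenize(batch):
    return tokenizer(batch["response_text"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)
dataset = dataset.rename_column("empathy_level", "labels")

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="empathy-classifier",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=dataset,
)
trainer.train()
```

A classifier trained along these lines could then score any drafted response, which is what a feedback system needs in order to decide when and how to intervene.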
To teach the AI to provide actionable feedback and concrete suggestions, we developed a reinforcement learning-based system. These systems require a lot of data to train, and while highly empathetic responses are relatively rare on platforms like TalkLife, we could still draw on thousands of good examples. Our system learns from these to generate more helpful, empathic responses.
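One way to picture the trade-off such a rewriting system has to learn is a reward that rises with the empathy gained by an edit and falls with how much of the original text is changed. The sketch below only illustrates that idea and is not the paper’s actual reward; `predict_empathy` is again a hypothetical scorer.

```python
# Illustrative reward shaping for empathy-improving rewrites: reward the
# empathy gained by the edit, penalize how much of the text was changed.
from difflib import SequenceMatcher


def edit_fraction(original: str, rewrite: str) -> float:
    """Rough fraction of the text that changed, via a similarity ratio."""
    return 1.0 - SequenceMatcher(None, original, rewrite).ratio()


def reward(original: str, rewrite: str, predict_empathy,
           penalty_weight: float = 0.5) -> float:
    """Score a candidate rewrite: higher when it is more empathetic than
    the original, lower when it changes more of the text than necessary.

    `predict_empathy` is a placeholder for a learned empathy scorer.
    """
    empathy_gain = predict_empathy(rewrite) - predict_empathy(original)
    return empathy_gain - penalty_weight * edit_fraction(original, rewrite)
```

Under a reward like this, a rewrite that adds one supportive sentence while leaving the rest of the message intact would score well, which matches the “smallest possible changes” design described above.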
In your assessment of this system, did you see people becoming more dependent on AI for empathy, or did people learn to be more empathetic over time?
TA: Our randomized trial showed that peer supporters with access to empathic feedback were between 20% and 40% more likely to write highly empathetic responses than supporters in the control group, who did not have access to such feedback.
Among our participants, 69% of peer supporters reported that they felt more confident in writing supportive responses after this study, indicating an increase in self-efficacy.
We further studied how participants used the feedback and found that peer supporters did not become overly reliant on the AI. For example, they would use the feedback indirectly, as broad inspiration, rather than “blindly” following its recommendations. They also flagged feedback in the cases where it was not helpful or even inappropriate. I was excited that the collaboration between human peer supporters and the AI system produced better results than either one alone.
I would also like to highlight the significant efforts made to consider and address ethical and safety risks. These include having the AI work with the peer supporter rather than with the person currently in distress, conducting the study in a TalkLife-like environment that was intentionally not integrated into the TalkLife platform, giving all participants access to a crisis hotline, and allowing peer supporters to flag feedback for review.
What do these results mean in terms of the future of human-AI collaboration?
TA: One area of human-AI collaboration that I’m particularly excited about is AI-assisted communication. There are plenty of challenging communication tasks with important consequences – from helping someone feel better to challenging misinformation on social media – where we expect people to do well without any sort of training or support. In most cases, all we are given is an empty chat box.
We can do better, and I believe Natural Language Processing technology can play a huge role in helping people achieve their conversational goals. Notably, our study shows that human-AI collaboration can be effective even for complex and open-ended tasks such as empathic interactions.
Additional co-authors on this paper are Ashish Sharma and Inna Lin, both UW doctoral students in the Allen School; David Atkins, an associate professor in the Department of Psychiatry and Behavioral Sciences at the UW School of Medicine and founder and CEO of Lyssn.io, Inc.; and Adam Miner at Stanford University. This research was funded by the National Science Foundation, the National Institutes of Health, the Bill and Melinda Gates Foundation, the Office of Naval Research, a Microsoft AI for Accessibility grant, a Garvey Institute Innovation grant, the National Center for Advancing Translational Science Clinical and Translational Science Award, and the Stanford Human-Centered AI Institute.
For more information, contact Althoff at [email protected]
Grant numbers: NSF grant IIS-1901386, NSF grant CNS-2025022, NIH grant R01MH125179, INV-004841, N00014-21-1-2154, KL2TR001083, UL1TR001085 and K02 AA023814
Tags: artificial intelligence • College of Engineering • Paul G. Allen School of Computer Science and Engineering • Tim Althoff