Chatbot that offered bad advice for eating disorders taken down
A few weeks ago, Sharon Maxwell heard the National Eating Disorders Association (NEDA) was shutting down its long-running national helpline and promoting a chatbot called Tessa as “a meaningful prevention resource” for those struggling with eating disorders. She decided to try out the chatbot herself.
Maxwell, who is based in San Diego, had struggled for years with an eating disorder that began in childhood. She now works as a consultant in the eating disorder field. “Hi, Tessa,” she typed into the online text box. “How do you support folks with eating disorders?”
Tessa rattled off a list of ideas, including some resources for “healthy eating habits.” Alarm bells immediately went off in Maxwell’s head. She asked Tessa for more details. Before long, the chatbot was giving her tips on losing weight – ones that sounded an awful lot like what she’d been told when she was put on Weight Watchers at age 10.
“The recommendations that Tessa gave me was that I could lose 1 to 2 pounds per week, that I should eat no more than 2,000 calories in a day, that I should have a calorie deficit of 500-1,000 calories per day,” Maxwell says. “All of which might sound benign to the general listener. However, to an individual with an eating disorder, the focus of weight loss really fuels the eating disorder.”
Maxwell shared her concerns on social media, helping launch an online controversy which led NEDA to announce on May 30 that it was indefinitely disabling Tessa. Patients, families, doctors and other experts on eating disorders were left stunned and bewildered about how a chatbot designed to help people with eating disorders could end up dispensing diet tips instead.
The uproar has also set off a fresh wave of debate as companies turn to artificial intelligence (AI) as a possible solution to a surging mental health crisis and severe shortage of clinical treatment providers.
A chatbot suddenly in the spotlight
NEDA CEO Liz Thompson informed helpline volunteers of the decision in a March 31 email, saying NEDA would “begin to pivot to the expanded use of AI-assisted technology to provide individuals and families with a moderated, fully automated resource, Tessa.”
“We see the changes from the Helpline to Tessa and our expanded website as part of an evolution, not a revolution, respectful of the ever-changing landscape in which we operate.”
(Thompson followed up with a statement on June 7, saying that in NEDA’s “attempt to share important news about separate decisions regarding our Information and Referral Helpline and Tessa, that the two separate decisions may have become conflated which caused confusion. It was not our intention to suggest that Tessa could provide the same type of human connection that the Helpline offered.”)
On May 30, less than 24 hours after Maxwell provided NEDA with screenshots of her troubling conversation with Tessa, the non-profit announced it had “taken down” the chatbot “until further notice.”
NEDA says it didn’t know chatbot could create new responses
NEDA blamed the chatbot’s emergent issues on Cass, a mental health chatbot company that operated Tessa as a free service. Cass had changed Tessa without NEDA’s awareness or approval, according to CEO Thompson, enabling the chatbot to generate new answers beyond what Tessa’s creators had intended.
“By design, it couldn’t go off the rails,” says Ellen Fitzsimmons-Craft, a clinical psychologist and professor at Washington University Medical School in St. Louis. Fitzsimmons-Craft helped lead the team that first built Tessa with funding from NEDA.
The version of Tessa that they tested and studied was a rule-based chatbot, meaning it could only use a limited number of prewritten responses. “We were very cognizant of the fact that AI isn’t ready for this population,” she says. “And so all of the responses were pre-programmed.”
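(For readers curious about the distinction Fitzsimmons-Craft is drawing: in a rule-based chatbot, the bot’s entire vocabulary is written in advance, and each user message is simply matched to one of those approved replies. The short sketch below is a purely illustrative example of that idea, with hypothetical keywords and response text; it is not Tessa’s actual code.)

```python
# Minimal illustration of a rule-based chatbot: every reply is prewritten,
# so the bot can never produce text its designers did not approve.
# The keywords and responses here are hypothetical examples.

PREWRITTEN_RESPONSES = {
    "support": "Here are some ways I can offer support: ...",
    "coping": "One coping strategy many people find helpful is ...",
}

FALLBACK = "I'm not able to answer that. Here is a list of resources: ..."

def reply(user_message: str) -> str:
    text = user_message.lower()
    for keyword, response in PREWRITTEN_RESPONSES.items():
        if keyword in text:
            return response   # only ever returns pre-approved text
    return FALLBACK           # unrecognized input gets a safe default

if __name__ == "__main__":
    print(reply("How do you support folks with eating disorders?"))
```

A generative chatbot, by contrast, replaces that fixed lookup with a call to a language model, so its replies are composed on the fly rather than drawn from an approved list.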
The founder and CEO of Cass, Michiel Rauws, told NPR the changes to Tessa were made last year as part of a “systems upgrade,” including an “enhanced question and answer feature.” That feature uses generative AI, meaning it gives the chatbot the ability to use new data and create new responses.
That change was part of NEDA’s contract, Rauws says.
But NEDA’s CEO Liz Thompson told NPR in an email that “NEDA was never advised of these changes and did not and would not have approved them.”
“The content some testers received relative to diet culture and weight management can be harmful to those with eating disorders, is against NEDA policy, and would never have been scripted into the chatbot by eating disorders experts, Drs. Barr Taylor and Ellen Fitzsimmons Craft,” she wrote.
Complaints about Tessa started last year
NEDA was already aware of some issues with the chatbot months before Sharon Maxwell publicized her interactions with Tessa in late May.
In October 2022, NEDA received screenshots from Monika Ostroff, executive director of the Multi-Service Eating Disorders Association (MEDA) in Massachusetts.
They showed Tessa telling Ostroff to avoid “unhealthy” foods and only eat “healthy” snacks, like fruit. “It’s really important that you find what healthy snacks you like the most, so if it’s not a fruit, try something else!” Tessa told Ostroff. “So the next time you’re hungry between meals, try to go for that instead of an unhealthy snack like a bag of chips. Think you can do that?”
In a recent interview, Ostroff says this was a clear example of the chatbot encouraging a “diet culture” mentality. “That meant that they [NEDA] either wrote these scripts themselves, they got the chatbot and didn’t bother to make sure it was safe and didn’t test it, or released it and didn’t test it,” she says.
The healthy snack language was quickly removed after Ostroff reported it. But Rauws says that problematic language was part of Tessa’s “pre-scripted language, and not related to generative AI.”
Fitzsimmons-Craft denies her team wrote that. “[That] was not something our team designed Tessa to offer and… it was not part of the rule-based program we originally designed.”
Then, earlier this year, Rauws says “a similar event happened as another example.”
“This time it was around our enhanced question and answer feature, which leverages a generative model. We got notified by NEDA that an answer text [Tessa] provided fell outside their guidelines, and it was addressed right away.”
Rauws says he can’t provide more details about what this event entailed.
When asked about this event, Thompson says she doesn’t know what instance Rauws is referring to.
Despite their disagreements over what happened and when, both NEDA and Cass have issued apologies.
Ostroff says regardless of what went wrong, the impact on someone with an eating disorder is the same. “It doesn’t matter if it’s rule-based [AI] or generative, it’s all fat-phobic,” she says. “We have huge populations of people who are harmed by this kind of language every day.”
She also worries about what this might mean for the tens of thousands of people who were turning to NEDA’s helpline each year.
“Between NEDA taking their helpline offline, and their disastrous chatbot … what are you doing with all those people?”
“We recognize and regret that certain decisions taken by NEDA have disappointed members of the eating disorders community,” Thompson said in an emailed statement. “Like all other organizations focused on eating disorders, NEDA’s resources are limited and this requires us to make difficult choices… We always wish we could do more and we remain dedicated to doing better.”