The Challenges in Designing a Prevention Chatbot for Eating Disorders: Observational Study

Psychology · Chatbots · Conversational Assistants · Personalization

Background: Chatbots have the potential to provide cost-effective mental health prevention programs at scale and increase interactivity, ease of use, and accessibility of intervention programs.

Objective: The development of chatbot prevention for eating disorders (EDs) is still in its infancy. Our aim is to present examples of and solutions to challenges in designing and refining a rule-based prevention chatbot program for EDs, targeted at adult women at risk for developing an ED.

Methods: Participants were 2409 individuals who at least began using an ED prevention chatbot in response to social media advertising. Over 6 months, the research team reviewed up to 52,129 comments from these users to identify inappropriate chatbot responses that negatively impacted users' experience, as well as technical glitches. Problems identified by reviewers were then presented to the entire research team, which generated possible solutions and implemented new responses.

Results: The most common problem with the chatbot was a general limitation in understanding and responding appropriately to unanticipated user responses. We developed several workarounds to limit these problems while retaining some interactivity.

Conclusions: Rule-based chatbots have the potential to reach large populations at low cost but are limited in their ability to understand and respond appropriately to unanticipated user responses. They are most effective at providing information and conducting simple conversations. Workarounds can reduce conversation errors.

Authoring appropriate responses to nearly all user comments is one of the biggest challenges in creating a chatbot. For instance, our initial goal was to encourage users to continue with the program through positive responses such as "Great!" and "Wonderful!" While these positive responses were appropriate for many interactions, they did not work for others. For example, when the chatbot asked, "Do you want to commit to NO FAT TALK, say for the next month?" the user replied, "Haha." The prescripted response was "Wonderful! You might want to let your friends know that you are committed to NO FAT TALK for the next month."

We also found that positive responses at times unexpectedly reinforced harmful statements. For example, the chatbot prompted, "Please share with me a few things that make you feel good about yourself. For example, your humor, grace, personality, family, friends, achievements, and more!" The user replied, "I hate my appearance, my personality sucks, my family does not like me, and I don't have any friends or achievements." The chatbot responded, "Keep on recognizing your great qualities! Now, let's look deeper into body image beliefs." See Table 1 for additional examples.
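One workaround of the kind described above can be sketched as a simple guard in a rule-based response pipeline: before emitting a canned positive reply, scan the user's message for negative cues and fall back to a neutral, supportive reply instead. This is a hypothetical illustration, not the study's actual implementation; the cue list, function name, and reply strings are assumptions for the sketch.

```python
# Hypothetical sketch of a rule-based response guard (not the study's code).
# A naive substring scan for negative cues decides whether the canned positive
# reply is safe to send, or whether a neutral fallback should be used instead.

NEGATIVE_CUES = {"hate", "sucks", "does not like me", "don't have any"}

POSITIVE_REPLY = "Wonderful! Keep on recognizing your great qualities!"
NEUTRAL_REPLY = (
    "Thank you for sharing. It sounds like this is hard to talk about. "
    "Let's take it one step at a time."
)

def choose_reply(user_message: str) -> str:
    """Return the canned positive reply unless negative cues are detected."""
    text = user_message.lower()
    if any(cue in text for cue in NEGATIVE_CUES):
        # Avoid reinforcing harmful self-statements with a positive reply.
        return NEUTRAL_REPLY
    return POSITIVE_REPLY
```

A real system would need word-boundary matching (plain substring checks match "hate" inside "whatever") and a far richer cue lexicon, but even this minimal guard would catch the "I hate my appearance..." example above rather than replying "Keep on recognizing your great qualities!"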