Humanizing the Inhuman: How Giving Your Health Chatbot a Persona Could Change User Perceptions and Behaviors

Image: University of Wisconsin Milwaukee Lubar School of Business

One of the main critiques of chatbots is that they are incapable of conveying emotion, but giving your chatbot a persona may help build an emotional connection and deeper engagement with your users.

In our recent publication with Meta, The Chatbots for Change Playbook: A practical guide and modular toolkit on how to create, grow, and sustain your chatbot for social good, we outline how patient-facing chatbots for maternal, newborn, and child health can act as automated conversational agents that promote health by:

  • Supporting self-care, monitoring and medication adherence

  • Providing health education and counseling

  • Prompting behavior change

  • Enhancing linkages to health care through scheduling, reminders and risk stratification

  • Opening opportunities for feedback on services received

In each of these use cases, chatbots must be developed and implemented in the right way, with the right partners, and in an enabling environment to bring about their intended impact, such as increasing health service uptake or improving the cost-effectiveness of health services.

One of the key areas that we prioritize in the Playbook is how to create high-quality, engaging content. It’s not enough for chatbot health content to be credible; it must also be understandable, compelling, actionable, and grounded in local culture and realities.

  • The quality of the content shapes users’ perceptions of the chatbot. If it’s not useful and enjoyable, users likely won’t engage in a significant way.

  • Research suggests that chatbots that engage in relationship building with users are perceived as more credible, sympathetic, and empathetic, and that their messages are perceived as more sincere.

  • Research has also found that people interact with computers as they do with other people, without even being aware that they are doing so. They form perceptions of computers and humans in the same way, even though they know computers are machines.

This is why creating a persona is smart. A persona may consist of a name, an avatar, a linguistic style, a tone (which may change depending on the subject matter or interaction type), and a graphical appearance, including colors and emoticons/emojis.
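
To make this concrete, the elements above can be captured in a single configuration object that the rest of the chatbot reads from. The Python sketch below is purely illustrative: the Persona class, its field names (tone_by_context, emoji_enabled, and so on), and the "Dada" example are our own hypothetical inventions, not part of any chatbot framework or of the chatbots discussed here.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """Hypothetical container for the persona elements described above."""
    name: str
    avatar_url: str                  # graphical appearance
    linguistic_style: str            # e.g., "friendly, plain-language phrasing"
    default_tone: str = "warm"
    # Tone can vary by subject matter or interaction type.
    tone_by_context: dict[str, str] = field(default_factory=dict)
    emoji_enabled: bool = True
    brand_colors: list[str] = field(default_factory=list)

    def tone_for(self, context: str) -> str:
        """Pick the tone for a conversation context, falling back to the default."""
        return self.tone_by_context.get(context, self.default_tone)

# Illustrative example in the spirit of a "big sister" persona.
big_sister = Persona(
    name="Dada",
    avatar_url="https://example.org/avatars/dada.png",
    linguistic_style="friendly, plain-language phrasing",
    tone_by_context={"medical_question": "formal", "check_in": "warm"},
    brand_colors=["#6A1B9A", "#FFC107"],
)

print(big_sister.tone_for("medical_question"))  # -> "formal"
```

Keeping the persona in one place like this also makes it easier to swap in alternatives when testing different personas with end users, a point the examples below return to.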

Here are some examples and findings from relevant research:

1. AskNivi: The health chatbot AskNivi uses a trusted-aunt persona in India and a big-sister persona in Kenya.

2. Yukti: The Yukti chatbot, which supports breastfeeding women in India, used an avatar of a woman in her late 30s, which users perceived differently. “Some correlated its persona as a lady doctor and some as a friendly sister like ASHA [a type of accredited community health worker]. The way users framed their questions and reacted reflected their perceptions.”

3. Woebot: Some research suggests that interacting with human-like AI can produce a sense of unease and “creepiness.” To counter this, the mental health chatbot Woebot was designed to transparently present itself as an archetypal robot. The developers speculated that transparency is a key driver of bond development and as such, “Woebot explicitly references its limitations within conversations and provides positive reinforcement and empathic statements alongside declarations of being an artificial agent.”

4. Dr. Joy: The Korean chatbot Dr. Joy was designed to lead users to perceive enjoyment when seeking health information and medical help for their prenatal and postnatal care. To look more professional, Dr. Joy was given a “humanlike” female medical doctor persona and a formal, firm tone, particularly when answering questions, but a warm tone (an informal, pleasant manner and emoji use) when interacting with users in other scenarios (see the sketch after this list).

5. In a study on how to boost the effectiveness of chatbots for increasing purchase intention, researchers tested two different chatbot personas: one “warm” and one “competent.” If the goal was to strengthen the customer-chatbot relationship, the researchers recommended using a “warm” persona; if it was to strengthen user perception of message quality, they recommended a “competent” persona.
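
Dr. Joy’s scenario-dependent tone, formal when answering medical questions and warm elsewhere, amounts to a simple routing rule applied when a reply is rendered. The Python sketch below illustrates the idea only; it is not Dr. Joy’s actual implementation, and classify_intent, the keyword list, and the prefix templates are hypothetical placeholders (a production system would use a real intent classifier rather than keyword matching).

```python
# Speculative sketch of scenario-based tone switching, loosely modeled on the
# Dr. Joy example above. None of this reflects Dr. Joy's real code; the intent
# labels and templates are illustrative placeholders.

FORMAL_PREFIX = "Thank you for your question."
WARM_PREFIX = "Hi there! 😊"

def classify_intent(message: str) -> str:
    """Placeholder intent classifier: real systems would use NLU, not keywords."""
    medical_keywords = ("symptom", "medication", "pain", "dose", "bleeding")
    if any(word in message.lower() for word in medical_keywords):
        return "medical_question"
    return "small_talk"

def render_reply(message: str, answer: str) -> str:
    """Wrap the answer in a formal or warm register depending on the scenario."""
    if classify_intent(message) == "medical_question":
        # Formal, firm tone for health questions: no emoji, full sentences.
        return f"{FORMAL_PREFIX} {answer}"
    # Warm, informal tone elsewhere: pleasant manner and emoji use.
    return f"{WARM_PREFIX} {answer}"

print(render_reply("What dose of iron should I take?",
                   "Please follow the dosage on your prescription."))
print(render_reply("Good morning!",
                   "Good morning! How are you and the baby today?"))
```

The same pattern generalizes to the warm-versus-competent finding in example 5: the persona, or just its tone, becomes a parameter you can set per audience and goal rather than a fixed property of the bot.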

As these examples highlight, the persona type a chatbot uses and its success in engaging users vary widely depending on the use case and audience. It is essential to use human-centered design and to conduct formative research with end users when deciding whether to use a persona and, if so, which type.

Finally, while giving a chatbot a persona may help build a connection that drives trust and engagement between the bot and your end users, it’s not a substitute for a real person. Users should be able to see clearly how to get in touch with a human who can answer questions about the information or services the chatbot provides, and who can help with emergencies or additional queries.

 

For more information, tools, exercises, and resources addressing the barriers and enablers to successfully developing, implementing, scaling, and sustaining chatbots for health and broader social good, please visit our Chatbots for Change Playbook or get in touch with us (tara@katicollective.com) to learn more.
