CASE STUDY: The Ethics of Legacy Chatbots
The death of a loved one is a traumatic experience in a person’s life. But does it really signal the end of one’s interactions with the deceased? Some companies are looking to technology to give us a new hope of talking to the dead. Eternime wants to create a platform that allows friends and relatives of deceased loved ones to hold conversations with an artificial intelligence-powered “chatbot” and avatar that resembles the deceased person. Eugenia Kuyda, a co-founder of the AI startup Luka, created a digital “version” of her deceased friend Roman, as well as an app-based chatbot version of the celebrity Prince (MacDonald, 2016). These legacy chatbot programs function by using advanced computing algorithms and access to a range of digital records of the person they are simulating, including Twitter posts, Facebook posts, text messages, Fitbit data, and more. While technology has not yet advanced enough to create chatbots that resemble specific people with complete accuracy, companies like Eternime and Luka are working hard to perfect it. Eternime allows interested customers to sign up now and provide information to the website so that the company will have everything it needs to create a carbon copy of a user in the future. When the technology catches up with the idea, people will be able to communicate with their deceased friends and relatives. Will this concept ease the loss of a loved one, or will it damage those grieving?
One of the challenges of losing a loved one is the sense of absence that follows. Mourners are sometimes left with little to remind them of the deceased beyond a few photographs, possessions, or memories. While these things can be valuable to people who have lost someone close, they may not feel like enough. This is where Eternime hopes to assist. If the chatbots emulate deceased persons as well as some expect, these conversations might feel as authentic as chat conversations with the real person. Some even argue that an artificial intelligence-powered chatbot may provide a more truthful representation of a person’s life than the person him- or herself. Simon Parkin believes “Eterni.me and other similar services counter the fallibility of human memory; they offer a way to fix the details of a life as time passes” (Parkin, 2015). Being able to learn the experiences of a person now deceased could be invaluable to a grieving person. Eternime’s co-creator Marius Ursache believes that Eternime “could be a virtual library of humanity” (Parkin, 2015).
Leaving a legacy is important to many people because it ensures their life has importance even after they die. Most people achieve this by having children to share their stories or by creating something that will be used or remembered after they die. Eternime, however, provides a new avenue for storing a legacy. Ursache asserts that the purpose of Eternime is “about creating an interactive legacy, a way to avoid being totally forgotten in the future” (Parkin, 2015). Critics counter that this may not be a legacy you want to leave. Laura Parker argues that “an avatar with an approximation of your voice and bone structure, who can tell your great-grandchildren how many Twitter followers you had, doesn’t feel like the same thing [as the traditional ways of leaving a legacy]” (Parker, 2014).
Others worry that platforms like Eternime will not assist those dealing with grief, but rather prevent them from accepting the loss and moving on. Joan Berzoff, a specialist in end-of-life issues, believes that “a post-death avatar goes against all we know about bereavement” (Parker, 2014). Living life with an algorithm that is merely attempting to mimic a deceased person could distance a user from reality, a reality that includes the fact that when a person dies, their biological presence and form are gone forever. Applications like Eternime could prevent users from accepting the reality of their situation and cause the users additional grief.
Another issue with chatbots of deceased people is that such a program would have to access and synthesize large amounts of private and personal information. Algorithms that mimic the deceased may be very advanced and realistic, but they will never be perfect. For example, people often keep certain information secret or share it only with certain people. Will the AI chatbot know how to discreetly continue such practices of strategic information presentation, or maintain lies or secrets that the deceased lived with? The chatbot may pick up information or behavior from a text to a close friend and share it with a child. While this could lead to an honest-talking and transparent version of the deceased individual, it would not represent the deceased in the way they might wish to be represented, judging from the information-sharing practices in their past interactions. Would we find that we judge a deceased individual differently based upon how a less-nuanced, but perhaps more truthful, chatbot simulates them after they are dead?
The technology to achieve the goal of legacy chatbots that are indistinguishable from real humans is not yet a reality, but companies are working to make it one. The idea of deceased people being immortalized on the internet or in an AI-driven program gives us a digital way to talk to the dead. Will this technology lessen the burden of losing a loved one or make it harder to accept their loss? Will it create legacies of people to be passed down to future generations, or tarnish our perception of people of the past?
- Do you think it’s realistic, given enough personal data, to create an interactive simulation of that person in chatbot form?
- Would legacy chatbots be a helpful thing for those mourning the loss of a loved one? Might they help new people learn about the deceased person?
- Does the deceased person have a right to any privacy of their information? Should someone make a chatbot of a deceased person without their permission?
- Could a deceased individual be harmed by truthful things that their legacy chatbot says on their behalf?
- What ethical interests are in conflict with legacy chatbots? How might you balance these issues, should people and companies want to build such bots?
Hamilton, Isobel. “These 2 tech founders lost their friends in tragic accidents. Now they’ve built AI chatbots to give people life after death,” Business Insider, November 17, 2018. Available at: https://www.businessinsider.com/eternime-and-replika-giving-life-to-the-dead-with-new-technology-2018-11
MacDonald, Cheyenne. “Would you resurrect your dead friend as an AI? Try out ‘memorial’ chatbot app – and you can even talk to a virtual version of Prince,” Daily Mail, October 6, 2016. Available at: https://www.dailymail.co.uk/sciencetech/article-3826208/Would-resurrect-dead-friend-AI-Try-memorial-chatbot-app-talk-virtual-version-Prince.html
Parker, Laura. “How to become virtually immortal,” The New Yorker, April 4, 2014. Available at: https://www.newyorker.com/tech/annals-of-technology/how-to-become-virtually-immortal
Parkin, Simon. “Back-up brains: The era of digital immortality,” BBC, January 23, 2015. Available at: http://www.bbc.com/future/story/20150122-the-secret-to-immortality
Colin Frick & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
December 4, 2018
Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They are produced for educational use only. They can be used in unmodified PDF form without permission for classroom or educational uses. Please email us and let us know if you found them useful! For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.