Media Ethics Initiative


First, Do No Harm?

CASE STUDY: Doctor-Patient Confidentiality and Protecting Third Parties

Case Study PDF | Additional Case Studies


Clients seeking help from a mental health professional expect to be able to speak openly and honestly about their thoughts, feelings, and emotions. Mental health professionals, in turn, strive to create an environment that is safe and inviting for all conversations. When patients disclose physical symptoms to their medical doctors, they typically do so without fear of those details being revealed to others, because of the legal and ethical ideal of doctor-patient confidentiality. Should this feeling of security be any different when divulging personal details to a mental health professional?

The case of Volk v. DeMeerleer, which arose from events in 2010, made the application of these confidentiality guidelines more complex: a treating psychiatrist was held liable for a patient’s homicidal actions. One of the fundamental questions of the case concerned the obligations of mental health professionals to protect third parties from the violent acts of their patients. Jan DeMeerleer was a patient of the psychiatrist Dr. Howard Ashby on and off over the course of nine years. DeMeerleer voiced thoughts of harming both himself and others intermittently throughout his sessions during this period. At what would be his last visit with Dr. Ashby, DeMeerleer did not mention any thoughts of violence. But a few months after this final appointment, DeMeerleer murdered his ex-fiancée, Rebecca Schiering, and one of her children, and then committed suicide. Following their deaths, Schiering’s mother sued Dr. Ashby for not “mitigating DeMeerleer’s dangerousness or warning the victims.”

The case against Dr. Ashby was motivated by the idea that he should have tried to reduce the chances of harm to third parties, or should have warned any targeted third parties of the danger they might be in. As the American Medical Association put it, physicians face both an “ethical obligation to protect confidentiality with their patients and their legal responsibilities as a physician.” Dr. Ashby’s notes from the final session with DeMeerleer mention the positive changes he noticed in the patient. He observed the occasional suicidal thoughts that DeMeerleer experienced and noted that “at this point he’s not a real clinical problem, but that he’ll keep an eye out.” According to Baird Holm, “Volk is a somber reminder that mental health professionals should always be vigilant and, perhaps, err on the side of disclosing when a patient exhibits violent thoughts and behavior.” Should Dr. Ashby have blood on his hands if he did not truly see any potential danger during his last visit with DeMeerleer?

Those defending Dr. Ashby’s reluctance to share information about his patient place more importance on the norm of patient-doctor confidentiality. For doctors like Ashby, it is not uncommon to hear patients express violent thoughts. Telling the difference between vague violent musings and actionable threats is hard in practice. According to Troy Parks of AMA Wire, the ruling against Dr. Ashby “requires a psychiatrist to do the impossible: predict imminent dangerousness in patients who have neither communicated recent threats, indicated intent to do harm, nor indicated a target for a potential threat.” Many believe that weighing this potential liability when taking on new clients could limit mental health professionals’ ability to treat patients. It may also make psychiatrists more reluctant to take on difficult clients with a history of violence.

If doctors begin to report more threats observed in client communication, many worry about the effect on other patients. Troy Parks poses the question of what happens “if patients no longer feel safe sharing personal—yet crucial—information with their physicians?” The American Psychological Association emphasizes that patients need to feel comfortable discussing private and revealing information, and that therapy should be a safe place to talk about anything without fear of the information leaving the room. Increased reporting might therefore undermine mental health professionals’ effectiveness. According to Jennifer Piel and Rejoice Opara, “The [AMA] Code strikes a balance in respecting confidentiality while providing an exception to allow disclosures of patient confidences under reasonable and narrow circumstances to protect identifiable third persons.” Yet the Ashby case throws this balance into question. How can doctors balance the need to create confidence between physician and patient with the need to look out for the safety of others?

Discussion Questions: 

  1. What are the ethical values in conflict in the Ashby case? Why are these general values important?
  2. On what side would you err: toward preserving confidentiality in questionable cases, or toward warning others (and consequently violating patient confidence)?
  3. Are there creative ways to uphold both the value of confidentiality and the safety of third parties?

Further Information:

American Psychological Association, “Protecting your privacy: Understanding confidentiality.” Available at: http://www.apa.org/helpcenter/confidentiality.aspx

Gene Johnson, “Killer’s psychiatrist can be sued by victim’s family, Washington Supreme Court says.” The Seattle Times, December 24, 2016. Available at: https://www.seattletimes.com/seattle-news/court-says-killers-psychiatrist-can-be-sued-by-victims-family/

Troy Parks, “Court case threatens physician-patient confidentiality.” AMA Wire, October 14, 2015. Available at: https://wire.ama-assn.org/practice-management/court-case-threatens-physician-patient-confidentiality 

Troy Parks, “Patient-psychiatrist confidentiality hampered in liability ruling.” AMA Wire, January 4, 2017. Available at: https://wire.ama-assn.org/practice-management/patient-psychiatrist-confidentiality-hampered-liability-ruling

Jennifer L. Piel, JD, MD, and Rejoice Opara, MD, “Does Volk v DeMeerleer Conflict with the AMA Code of Medical Ethics on Breaching Patient Confidentiality to Protect Third Parties?” AMA Journal of Ethics, January 20, 2018. Available at: http://virtualmentor.ama-assn.org/2018/01/peer2-1801.html

Author:

Haley Turner
Media Ethics Initiative
University of Texas at Austin
April 27, 2018

www.mediaethicsinitiative.org


Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

Finsta Fad

CASE STUDY: The Real Ethics of Fake Social Media Profiles

Case Study PDF | Additional Case Studies


Instagram is the largest social media platform among teenagers. Not only do teens post images for their numerous followers on their primary Instagram account; many now create a more selective secondary account they refer to as their “Finsta,” short for “fake Instagram.” The real account is used to post pictures of their “best self” having various fun and exciting experiences, while the Finsta is often used to post pictures of their unfiltered—and perhaps more authentic—self. What ethical decisions come with the ability to have an Instagram account for a filtered self and a fake profile for a “real” self?

Template: Jim Covais

Many teens use their Finsta account to post pictures that would not be posted on their “real” Instagram. Finsta posts might include pictures of teens’ goofy, emotional, or rebellious activities. Teens typically create Finstas under a name unrelated to their own so that the profile and its posts cannot easily be traced back to them; these accounts are often set to “private,” limiting them to approved followers. Many adults might see this as a red flag that their teen is engaging in scandalous and inappropriate behavior that they would not want followers of their primary account to see. This is not always the case.

Instead of hiding harmful behavior, some Finstas seem to exist to escape the pressure to be perfect in the digital panopticon of the online world. Many Finsta owners see their second account as a way to post and like what they want without the judgment and criticism of friends, family, and co-workers. Teens see their Finstas as a release from crushing social media pressures to be popular and as a way to truly express themselves. The privacy of a Finsta creates a sense of freedom to show a silly and vulnerable self to the small group of friends approved to follow the account.

Finstas hold the potential for harm, however. The idea that only a small circle of followers has access to these posts may encourage teens to post inappropriate, mean, or scandalous pictures and comments. This sense of privacy, however, can prove illusory. There is nothing to stop a follower—perhaps a friend or ex-friend of the account holder—from taking a screenshot of a post and showing others this supposedly private content. In this way, comments and pictures can be shared far beyond the limited group of approved followers of one’s Finsta account. Such shared content could also be used to bully the account holder. In the search for a safe way to present different images of themselves to different audiences, Finsta owners may find out the hard way that what they post is still traceable and potentially very public. In many ways, Finstas represent the Faustian bargain of our ability to create and remake our online selves: they can be a helpful outlet for teens in their processes of self-discovery, but they can quickly create real repercussions.

Discussion Questions: 

  1. What are the ethical issues with having both a “real” Instagram account and a Finsta?
  2. How might the use of Finstas create good consequences for teens? How might they go wrong?
  3. Do Finstas involve deception? If so, how do you balance this ethical concern against any good consequences they may enable?
  4. Are there ethical limits to how teens (or others) should use their Finstas? Or are the ethical limits mainly for their “public” and “real” account?

Further Information:

Joanne Orlando, “How Teens use Fake Instagram Accounts to Relieve the Pressure of Perfection.” Available at: http://theconversation.com/how-teens-use-fake-instagram-accounts-to-relieve-the-pressure-of-perfection-92105

Karena Tse, “Pro/Con: The Finsta Phenomenon.” April 10, 2017. Available at: https://www.chsglobe.com/29354/commentary/pro-con-the-Finsta-phenomenon/

Savannah Dube, “Finstas and Self-Idealizing in the Digital World.” October 3, 2016. Available at: https://www.theodysseyonline.com/Finstas-self-idealizing-digital-world

Author:

Morgan Malouf
Media Ethics Initiative
University of Texas at Austin
April 24, 2018


Collateral Damage to the Truth

CASE STUDY: Reporting Casualties in Drone Strikes

Case Study PDF | Additional Case Studies


In a report released in July 2016, the Obama administration revealed the number of enemy combatants and civilians killed via drone strikes in counter-terrorism efforts since 2009. The report came after President Obama signed an executive order intended to make the once-secret drone program more transparent and more protective of civilians through the routine disclosure of civilian deaths. According to CBS News, the release claimed that 2,300 enemy combatants were killed and that anywhere from 64 to 116 civilian deaths occurred as collateral damage. Controversy quickly swirled around the U.S. government’s attempt at transparency in drone use. There was disagreement about the accuracy of the numbers reported. Critics also questioned whether the administration’s decision to disclose this information so soon after significantly expanding the counter-terrorism drone program was an attempt to mislead the public into thinking that it was being fully informed about drone warfare and its costs. Naureen Shah, an affiliate of the human rights organization Amnesty International, said, “We’re going to be asking really hard questions about these numbers. They’re incredibly low for the number of people killed who are civilians.”

Watchdog groups suggest the U.S. government’s estimates of civilian deaths are far lower than the real toll. The Long War Journal, a blog run by the non-profit media organization Public Multimedia Incorporated, reported the civilian death toll at 207 from operations in Pakistan and Yemen alone. This was the lowest count provided by any watchdog group. The Bureau of Investigative Journalism placed the toll closer to “a maximum of 801 civilian deaths,” with a possible range of “anywhere from 492 to about 1,100 civilians killed by drone strikes since 2002.” Additionally, the government report excludes casualties from Iraq, Afghanistan, and Syria, places where the U.S. has conducted thousands of drone operations. The Bureau of Investigative Journalism generated its numbers from local and international journalists, field investigations, NGO investigators, court documents, and leaked government files.

Speaking to the accuracy of the government report, some argue that government officials have access to information the public does not. The U.S. government asserts that it utilizes refined methodologies in calculating post-strike numbers and has access to information that is generally unavailable to non-governmental organizations. This information bears both on the motivations for carrying out a drone strike and on the validity of the casualty numbers reported by various sources. Many terrorist groups spread incorrect information about the U.S. as propaganda, which can skew watchdog groups’ statistics. The report from the Obama administration voices this worry, stating that “The U.S. Government may have reliable information that certain individuals are combatants, but are being counted as non-combatants by nongovernmental organizations.”

All of this poses a problem for news reporters who rely on the government to supply information necessary for their stories. A lack of clarity in how the government defines terms such as “civilian” or “enemy combatant” in its reports creates discrepancies in the information journalists then relay to the public. On this view, the public deserves the unspun data on the costs of certain policies, no matter how bracing it may be. Josh Earnest, a spokesperson for the White House, countered: “There are obviously limitations to transparency when it comes to matters as sensitive as this.” On such a position, government officials have access to the most accurate and thorough information and are best equipped to make sense of it and to use it wisely in protecting national interests. For instance, the government may be best positioned to evaluate how many innocent civilians are worth putting at risk for the successful targeting of a combatant. Some ambiguity, or possibly opacity, in reporting casualties due to drone warfare would then seem to be the best way to protect national security. How are we to balance the needs of the press, the citizenry, and those conducting drone operations for national security in our journalistic information gathering and story writing?

Discussion Questions: 

  1. What are the values in conflict in the struggle over whether drone casualty figures are released?
  2. What are the concerns of the government, and what are the concerns of the press in reporting on drone casualties?
  3. Would there still be ethical worries if the government was obscuring or inflating only the combatant death estimates?
  4. Can you identify a creative way that the government can uphold some measure of transparency in its drone operations and still effectively pursue its military operations?

Further Information:

“Obama Administration Discloses Number of Civilian Deaths Caused by Drones.” CBS News, July 1, 2016. Available at: www.cbsnews.com/news/obama-administration-discloses-civilian-deaths-drones/

“White House Tally of Civilian Drone Deaths Raises ‘Hard Questions.’” CBS News, July 5, 2016. Available at: www.cbsnews.com/news/white-house-report-obama-administration-drones-civilian-casualties-raises-questions/

Director of National Intelligence. “Summary of Information regarding U.S. Counterterrorism Strikes outside Areas of Active Hostilities.” July 2016. Available at: https://www.dni.gov/files/documents/Newsroom/Press%20Releases/DNI+Release+on+CT+Strikes+Outside+Areas+of+Active+Hostilities.PDF

Jack Serle, “Obama Drone Casualty Numbers a Fraction of Those Recorded by the Bureau.” The Bureau of Investigative Journalism, July 1, 2016. Available at: www.thebureauinvestigates.com/stories/2016-07-01/obama-drone-casualty-numbers-a-fraction-of-those-recorded-by-the-bureau

Scott Shane, “Drone Strike Statistics Answer Few Questions, and Raise Many.” The New York Times, July 3, 2016. Available at: www.nytimes.com/2016/07/04/world/middleeast/drone-strike-statistics-answer-few-questions-and-raise-many.html

Authors:

Sabrina Stoffels & Scott R. Stroud, Ph.D.
Media Ethics Initiative
University of Texas at Austin
April 27, 2018


Swipe Right to Expose

CASE STUDY: Journalism, Privacy, and Digital Information

Case Study PDF | Additional Case Studies


In a world where LGBTQ people still often lack full protection and equal rights, it can be a challenge to be public about one’s sexuality. Some have taken to dating apps such as Grindr, Tinder, and Bumble, which allow for a more secure way for people to chat and potentially meet up outside of cyberspace. On such apps, one’s dating profile can often be seen by anyone else using the app, illustrating how these services blur the line between private and public information.

Nico Hines, a straight, married reporter for the news site The Daily Beast, decided to report on the use of dating apps in the Olympic Village during the 2016 Rio Olympics. Relying upon the public and revealing nature of profiles—at least to potential dates—Hines made profiles on different dating apps and used them to interact with a number of athletes. Most of his interactions were through Grindr, a dating app for gay men. The app works through geotagging so that people can match with others who are geographically near them. Profiles include information such as height, ethnicity, and age, which can often be used to identify a person even if a full name isn’t given. He eventually wrote up his experiences in the article “The Other Olympic Sport in Rio: Swiping.”

To preserve the anonymity of the individuals with whom he interacted, Hines did not use specific athletes’ names in his story. He did, however, reveal details about those seeking dates, including their physical features, the sport they competed in, and their home country. Readers and critics found it relatively easy to identify which athletes he was talking about using the information he provided. Since many of these athletes had not openly identified as LGBTQ, critics argued that he was “potentially outing” many of them by describing them in the course of his story. Amplifying this concern was the fact that in some of the home countries of the men who were potentially outed, it is dangerous or illegal to be openly gay.

In his defense, some pointed out that Hines didn’t intend to out or harm specific vulnerable individuals in the course of his story about the social lives of Olympic athletes. His published account didn’t include the names of any male athletes he interacted with on Grindr, and he named only some of the straight women whom he found on Tinder. The Daily Beast’s editor-in-chief, John Avlon, stated that Hines didn’t mean to focus mainly on Grindr, but since he “had many more responses on Grindr than apps that cater mostly to straight people,” Hines decided to write about that app. When Hines interacted with the athletes on the various dating apps, he didn’t lie about who he was and, as Avlon noted, Hines “immediately admitted that he was a journalist whenever he was asked who he was.”

The controversy eventually consumed Hines’ published story. After the wave of criticism crested, The Daily Beast first removed names and descriptions of the athletes from the article. But by the end of the day, the news site had removed the article entirely, with Avlon replacing it with an editor’s note that concluded: “Our initial reaction was that the entire removal of the piece was not necessary. We were wrong. We’re sorry.” Regardless of the decisions reached by this news site, difficult questions remain about what kinds of stories—and methods—are ethically allowed in the brave new world of digital journalism.

Discussion Questions: 

  1. What are the ethical values or interests at stake in the debate over the story authored by Hines?
  2. Hines tried to preserve the anonymity of those he was writing about. How could he have done more for the subjects of his story, while still doing justice to the story he wanted to report on?
  3. There are strong reasons why journalists should ethically and legally be allowed to use publicly-available information in their stories. Is the information shared through dating apps public information?
  4. How does Hines’ use of dating profile information differ, if at all, from long-standing practices of investigative or undercover journalism?

Further Information:

Frank Pallotta & Rob McLean, “Daily Beast removes Olympics Grindr article after backlash.” CNN, August 12, 2016. Available at: http://money.cnn.com/2016/08/12/media/daily-beast-olympics-article-removal/index.html

John Avlon, “A Note from the Editors.” The Daily Beast, August 11, 2016. Available at: https://www.thedailybeast.com/a-note-from-the-editors

Curtis M. Wong, “Straight Writer Blasted For ‘Outing’ Olympians In Daily Beast Piece.” Huffington Post, August 11, 2016. Available at: https://www.huffingtonpost.com/entry/daily-beast-grindr-article_us_57aca088e4b0db3be07d6581

Authors:

Bailey Sebastian & Scott R. Stroud, Ph.D.
Media Ethics Initiative
University of Texas at Austin
April 24, 2018


Biometrics, Data, and Privacy

CASE STUDY: Facing the Challenges of Next Generation Identification: Biometrics, Data, and Privacy

Case Study PDF | Additional Case Studies


In 2011, the FBI signed a contract with the military developer Lockheed Martin to launch a pilot program called “Next Generation Identification” (NGI). This program was designed to utilize biometric data to assist the investigations of local and federal law enforcement agencies as part of anti-terrorism, anti-fraud, and national security programs. By late 2014, the program’s facial recognition sector was fully operational and was used regularly by the FBI and other law enforcement agencies to aid in the identification of persons of interest in open investigations. The program compares visuals from surveillance cameras or other imaging devices with the world’s largest database of photographs to find similarities in facial details and provide identification. When the program was first introduced, the FBI compiled the database from the biometric data of known and convicted criminals. However, the database has since been expanded through a program run by the FBI’s Criminal Justice Information Services (CJIS) called FACE (facial analysis, comparison, and evaluation), and now includes driver license photos of individuals who have been issued a photo ID by participating states.

Objections to the inclusion of driver license photos of law-abiding citizens have been raised by many organizations, including the Government Accountability Office, the Electronic Frontier Foundation (EFF), and the Electronic Privacy Information Center. The controversy primarily stems from the perceived lack of disclosure by the FBI over the specifics of the NGI and FACE programs, and from citizens’ inability to consent to, or opt out of, the use of their images. Jennifer Lynch, Senior Staff Attorney for the EFF, raised concerns about the implications of such technology, as well as its legal validity. She notes that “data that’s being collected for one purpose is being used for a very different purpose.” Lynch argues that due to facial recognition technologies, “Americans cannot easily take precautions against the covert, remote, and mass capture of their images,” especially if they are never made aware that such capture and retention is taking place. These organizations argue that the practice violates federal law (the Privacy Act of 1974), under which images of faces are protected personal information, and that it alters “the traditional presumption of innocence in criminal cases by placing more of a burden on the defendant to show he is not who the system identifies him to be.”

Those who are not worried about the NGI program or the inclusion of law-abiding citizens’ photographs in the database say that collecting biometric data, including images of a person’s face, is no different from collecting fingerprints, and that this information is crucial for national security. Such information has gained renewed importance in light of recent terror attacks, both domestically and abroad. Stephen Morris, assistant director of the CJIS, states that “new high-tech tools like facial recognition are in the best interests of national security” and argues that the technology aids law enforcement officials in identifying and capturing terrorists and other criminals. The FBI also “maintains that it searches only against criminal databases” and that requests can be made to include outside databases, such as various state and federal databases (including state driver license photo databases and the Department of Defense passport photo database), if and when the FBI deems it necessary for a specific criminal investigation. This highlights the fact that facial recognition technology cannot be considered independently of the databases it searches for information about imaged persons of interest. Against those who call for more human oversight of the database requests integral to facial recognition technology, Morris argues that those who think that “collecting biometrics is an invasion of folks’ privacy” should instead be concerned with how best to “identify…the right person.” How will our society face the conflicting interests at stake in the collection and use of biometric data in maintaining public safety and national security?

Discussion Questions: 

  1. What are the ethical values or interests at stake in the debate over using photo databases in the NGI program?
  2. Do you believe the government can use databases not intended for biometric identification purposes? If so, what limits would you place on these uses?
  3. As facial recognition technology gets more advanced, what sort of ethical limitations should we place on its use by government or private entities?
  4. What would the ethical challenges be to using extremely advanced facial recognition technology in situations not concerning national security—such as online image searches?

Further Information:

Rebecca Boyle, “Anti-Fraud Facial Recognition System revokes the Wrong Person’s License.” Popular Science, July 18, 2011. Available at: www.popsci.com/gadgets/article/2011-07/anti-fraud-facial-recognition-system-generates-false-positives-revoking-wrong-persons-license

Eric Markowitz, “The FBI Now has the Largest Biometric Database in the World. Will it lead to more Surveillance?” International Business Times, April 23, 2016. Available at: www.ibtimes.com/fbi-now-has-largest-biometric-database-world-will-it-lead-more-surveillance-2345062

Sam Thielman, “FBI using Vast Public Photo Data and Iffy Facial Recognition Tech to find Criminals.” The Guardian, June 15, 2016. Available at: www.theguardian.com/us-news/2016/jun/15/fbi-facial-recognition-software-photo-database-privacy

Authors:

Jason Head & Scott R. Stroud, Ph.D.
Media Ethics Initiative
University of Texas at Austin
April 17, 2018


Dying to Be Online

CASE STUDY: The Ethics of Digital Death on Social Media

Case Study PDF | Additional Case Studies


Social media has become a feature of the daily lives of many people. How will it feature in the time after those daily lives end? Most people don’t stop to think about what happens to their online selves after they pass away. Who has the rights to the digital image or record of a person after they are dead? Social media platforms have been adapting to a growing presence of the online dead. When Facebook is alerted to a user’s death, it can freeze or “memorialize” the account. This serves two functions: first, it prevents anyone from hacking or gaining access to the account; second, it turns the profile into a memorial space that allows friends and family to publicly post their memories of the deceased.

Screencaptures from Facebook.com

Such memorialization prevents embarrassing “normal” online interactions with the account of the deceased, such as friend requests or invitations to shared events. But are these online memorials without controversy? David Myles and Florence Millerand point out that norms have yet to be established for online mourning, which leads to controversy “regarding what constitute acceptable ways of using SNS [social network sites] when performing mourning activities online.” For example, many people may view posting on a deceased person’s Facebook page as immoral or as disrespectful to close family and friends trying to grieve privately. At the same time, this digital realm of mourning can provide closure for a geographically dispersed group of acquaintances and allow people to continue talking to their loved ones while honoring their legacy. As the Huffington Post notes, death surrounds us online, as “increasingly the announcements and subsequent mourning occur on social media.”

Facebook’s memorialization policy can be seen as a way to protect the privacy and dignity of the deceased. But the initiation of the memorialization process does not have to come from special agents such as one’s lawyers, parents, or relational partners. This feature of Facebook’s memorialization process came into focus in the aftermath of the tragic death of a 15-year-old German girl in 2015. The girl died when a train struck her at a Berlin railway station and a loyal friend quickly reported her death to Facebook, initiating the memorialization process. Only at a later point did the deceased girl’s parents wonder if her Facebook messages could answer questions about her death being a suicide spurred on by online bullying. The girl’s mother reported having her daughter’s login information, but she could not access the account because of its “memorialized” status. Facebook refused to budge from its policy governing locked memorial accounts, since the conversations people have on Facebook messenger are considered private and often involve third parties. Facebook stated that they were “making every effort to find a solution which helps the family at the same time as protecting the privacy of third parties who are also affected by this.” Revealing the daughter’s messages may prove the existence of online bullying, but it would surely reveal conversations between the girl and various parties—bullies and non-bullies—that all or some of the participants thought were private. 
The German court system eventually agreed that this concern for privacy was paramount, ruling that the parents "had no claim to access her details or chat history," a decision "based on weighing up inheritance laws drawn up almost 120 years ago." The parents' need to know (and to respond to any bullying) was judged less important than upholding a predictable, general principle of privacy governing the exchange of messages between third parties, a principle that affects the vast majority of social media users.

Even though this tragic case is unusual, it highlights the challenges that Facebook and its users must navigate as more and more of Facebook's users pass away. How do we deal with the fact that social media is, in some ways, quickly becoming a virtual graveyard? How do we deal with the digital remains of friends, loved ones, and strangers in a way that respects both the living and the dead?


Discussion Questions: 

  1. What is the best way to treat the digital remains of those who have died? How is this optimal for the deceased—and those still living on social media?
  2. Do you agree with Facebook’s policy on not allowing access to messages of the deceased? If you disagree, how would you protect the privacy interests of third parties who corresponded with the deceased individual?
  3. If the parents of the deceased girl wanted access to her messages for a less pressing matter—perhaps to gather more material to remember her by—would this alter your stance on the strong privacy position taken by Facebook concerning memorialized accounts?
  4. What do you want done to your social media accounts, if any, after you die? What ethical values do your choices emphasize?

Further Information:

Kate Connolly, "Parents Lose Appeal over Access to Dead Girl's Facebook Account." The Guardian, May 31, 2017. Available at: https://www.theguardian.com/technology/2017/may/31/parents-lose-appeal-access-dead-girl-facebook-account-berlin

David Myles & Florence Millerand. “Mourning in a ‘Sociotechnically’ Acceptable Manner: A Facebook Case Study.” In: A. Hajek, C. Lohmeier, & C. Pentzold (eds) Memory in a Mediated World. 2016. Palgrave Macmillan, London. https://link.springer.com/chapter/10.1057/9781137470126_14

Jaweed Kaleem, "Death on Facebook Now Common as 'Dead Profiles' Create Vast Virtual Cemetery." The Huffington Post, December 7, 2012. Available at: https://www.huffingtonpost.com/2012/12/07/death-facebook-dead-profiles_n_2245397.html

Authors:

Anna Rose Isbell & Scott R. Stroud, Ph.D.
Media Ethics Initiative
University of Texas at Austin
April 12, 2018

www.mediaethicsinitiative.org



Anonymity, Doxing, and the Ethics of Reddit

CASE STUDY: Too Much of a Good Thing? Anonymity, Doxing, and the Ethics of Reddit

Case Study PDF | Additional Case Studies


The popular social media site Reddit is no stranger to controversy. Through the creation of "handles" or account names, it creates an anonymous or pseudonymous forum that is meant to "encourage a fair and tolerant place for ideas, people, links, and discussion." Sometimes, this is what happens. But the anonymous nature of the forum, while encouraging free expression and speech, has also caused numerous problems. The current CEO, Steve Huffman, has stated that "legal content should not be removed… even if we find it odious or if we personally condemn it." This policy allows for the flourishing of subreddits, or specific communities of commenters and topics, that many other constituencies using Reddit would otherwise see as distasteful and worthy of deletion.

However, this is not to say that Reddit has a completely open-door policy. Aja Romano notes that Reddit has permanently banned at least three "alt-right" subreddits since 2017 for violating the Reddit User Agreement. More specifically, the subreddits in question and their members were accused of "doxing," or releasing an individual's identifying information without permission, an action forbidden by Reddit's "Content Policy." For example, members of r/altright attempted to dox and place a "bounty" on the man who punched the white nationalist Richard Spencer on Donald Trump's inauguration day. A significant factor in the decision to ban these alt-right subreddits was the political worry that swaths of the online public held concerning the alt-right movement and its members' activities on the site in general. Reddit's banning of these alt-right subreddits has led to an outcry for the closure of other controversial subreddits. One targeted subreddit, the notable r/The_Donald, stands as the site's most visible conservative platform, with over half a million subscribers. Huffman has stated that he fears sliding down a slippery slope of censorship—in other words, he is concerned that censoring any content, alt-right or not, in the site's earlier years could lead to further repression of ideas and speech. This regard for preserving free speech and avoiding unnecessary censorship persists in the ways that Reddit handles moderation now.

Going down this path means that Reddit is tasked with deciding which speech is permissible and which communities must be banned or encouraged to flourish. Many claim that censoring an anonymous online community is a slippery slope that can lead to the suppression of groups that may not be popular in the public sphere. Others might argue that the fun and usefulness of the online world lie in its creative anarchy, which often challenges accepted norms and standard views. Who decides which controversial views fall into the category of hateful or reprehensible? In contrast, those who advocate the removal of such subreddits claim that Reddit has a duty to keep its community feeling safe and protected as a whole. They also note that Reddit can set its terms of service however it wants, and that doxing targeted individuals is currently forbidden. Should Reddit tolerate speech that many of its members find intolerant?


Discussion Questions:

  1. What are the values at stake in Reddit’s deliberation over shutting down certain subreddits?
  2. Is Reddit responsible for any harassment or harms that may come from information posted in its forums?
  3. What are some of the ethical challenges of defining “distasteful,” “hateful,” or even “racist” content on forums such as Reddit?
  4. Should an entity like Reddit enable all types of speech? In other words, does Reddit have a legal or ethical duty to help all views be heard? Or can it pick and choose what views are privileged on its platform?
  5. Some argue that the best way to resist and reform reprehensible views is to allow them out into the open so they can be refuted. Do you think this tactic would work in public forums such as Reddit?

Further Information:

Reddit Content Policy. Available at: https://www.reddit.com/help/contentpolicy

Reddit User Agreement. September 27, 2017. Available at: https://www.reddit.com/help/useragreement/

“Reddit will not ban ‘distasteful’ content, chief executive says.” October 17, 2012. Available at: http://www.bbc.com/news/technology-19975375

Aja Romano, "Reddit shuts down 3 major alt-right forums due to harassment." Vox, February 3, 2017. Available at: https://www.vox.com/culture/2017/2/3/14486856/reddit-bans-alt-right-doxing-harassment

Author:

James Hayden
Media Ethics Initiative
University of Texas at Austin
April 4, 2018

www.mediaethicsinitiative.org


