Media Ethics Initiative


Finsta Fad

CASE STUDY: The Real Ethics of Fake Social Media Profiles

Case Study PDF | Additional Case Studies

Instagram is one of the most popular social media platforms among teenagers. Not only are teens posting images on their primary Instagram account for their numerous followers to see, but many are also creating a more selective secondary account they refer to as their “Finsta,” short for “fake Instagram.” Their real account is used to post pictures of their “best self” having various fun and exciting experiences, while their Finsta is often used to post pictures of their unfiltered—and perhaps more authentic—self. What ethical decisions come with the ability to have an Instagram account for a filtered self and a fake profile for a “real” self?



Many teens use their Finsta account to post pictures that would not appear on their “real” Instagram. Finsta posts might include pictures of teens’ goofy, emotional, or rebellious activities. Teens typically create Finstas under a name unrelated to their own so that the profile and its posts cannot easily be traced back to them; these accounts are often set to “private,” limiting them to approved followers. Many adults might see this as a red flag that their teen is engaging in scandalous or inappropriate behavior that they would not want followers of their primary account to see. This is not always the case.

Instead of hiding harmful behavior, some Finstas seem to exist to escape the pressure to be perfect in the digital panopticon of the online world. Many Finsta owners see their second account as a way to post and like what they want without the judgment and criticism of friends, family, and co-workers. Teens see their Finstas as a release from crushing social media pressures to be popular and as a way to truly express themselves. The privacy of a Finsta creates a sense of freedom to show a silly, vulnerable self to the small group of friends approved to follow the account.

Finstas hold the potential for harm, however. The idea that only a small circle of followers has access to these posts may encourage teens to post inappropriate, mean, or scandalous pictures and comments. This sense of privacy, however, can prove illusory. There is nothing to stop a follower—perhaps a friend or ex-friend of the account holder in real life—from taking a screenshot of a post and showing others this supposedly private content. In this way, comments and pictures can be shared far beyond the limited group of approved followers of one’s Finsta account. Beyond spreading the account holder’s negative comments, such shared content could be used to bully the account holder themselves. In the search for a safe way to separate different images of who they are for different audiences, Finsta owners may find out the hard way that what they post is still traceable and potentially very public. In many ways, Finstas represent the Faustian bargain of the ability to create and remake our online selves—they can be a helpful outlet for teens and their processes of self-discovery, but they can quickly create real repercussions.

Discussion Questions: 

  1. What are the ethical issues with having both a “real” Instagram account and a Finsta?
  2. How might the use of Finstas create good consequences for teens? How might they go wrong?
  3. Do Finstas involve deception? If so, how do you balance this ethical concern against any good consequences they may enable?
  4. Are there ethical limits to how teens (or others) should use their Finstas? Or are the ethical limits mainly for their “public” and “real” account?

Further Information:

Joanne Orlando, “How Teens Use Fake Instagram Accounts to Relieve the Pressure of Perfection.” Available at:

Karena Tse, “Pro/Con: The Finsta Phenomenon.” April 10, 2017. Available at:

Savannah Dube, “Finstas and Self-Idealizing in the Digital World.” October 3, 2016. Available at:


Morgan Malouf
Media Ethics Initiative
University of Texas at Austin
April 24, 2018

Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

Collateral Damage to the Truth

CASE STUDY: Reporting Casualties in Drone Strikes

Case Study PDF | Additional Case Studies

In a report released in July 2016, the Obama administration revealed the number of enemy combatants and civilians killed via drone strikes since 2009 in counter-terrorism efforts. This report came after President Obama signed an executive order intended to make the once-secret drone program more transparent and protective of citizens through the routine disclosure of civilian deaths. According to CBS News, the release claimed that 2,300 enemy combatants were killed and that anywhere from 64 to 116 civilian deaths occurred as collateral damage. Controversy quickly swirled around the U.S. government’s attempt at transparency in drone use. There was disagreement about the accuracy of the numbers reported. Critics also questioned whether the administration’s decision to disclose this information so soon after significantly expanding the counter-terrorism drone program was an attempt to mislead the public into thinking that they were being fully informed about drone warfare and its costs. Naureen Shah, an affiliate of the human rights organization Amnesty International, said, “We’re going to be asking really hard questions about these numbers. They’re incredibly low for the number of people killed who are civilians.”

Watchdog groups suggest the U.S. government’s estimates of civilian deaths are far lower than the real death toll. The Long War Journal, a blog run by the non-profit media organization Public Multimedia Incorporated, reported the civilian death toll at 207 from operations in Pakistan and Yemen alone. This was the lowest count provided by any watchdog group. The Bureau of Investigative Journalism placed the death toll closer to “a maximum of 801 civilian deaths,” with a possible range of “anywhere from 492 to about 1,100 civilians killed by drone strikes since 2002.” Additionally, the government report excludes casualties from Iraq, Afghanistan, and Syria, places where the U.S. has conducted thousands of drone operations. The Bureau of Investigative Journalism generated its numbers from local and international journalists, field investigations, NGO investigators, court documents, and leaked government files.

Speaking to the accuracy of the government report, some argue that government officials have access to information the public does not. The U.S. government asserts that it utilizes refined methodologies in calculating post-strike numbers, and that it has access to information that is generally unavailable to non-governmental organizations. This information bears both on the motivations for carrying out a drone strike and on the validity of the casualty counts reported by various sources. Many terrorist groups spread incorrect information about the U.S. as propaganda, which can skew watchdog groups’ statistics. The report from the Obama administration voices this worry, stating that “The U.S. Government may have reliable information that certain individuals are combatants, but are being counted as non-combatants by nongovernmental organizations.”

All of this poses a problem for news reporters who rely on the government to supply information necessary for their stories. A lack of clarity in how the government defines terms such as “civilian” or “enemy combatant” in its reports causes discrepancies in the interpretation of the information journalists then relay to the public. According to this view, the public deserves the unspun data on the costs of certain policies, no matter how bracing it may be. Josh Earnest, a spokesperson for the White House, countered: “There are obviously limitations to transparency when it comes to matters as sensitive as this.” According to such a position, government officials have access to the most accurate and thorough information, and are best equipped to make sense of it and to wisely use it in protecting national interests. For instance, the government may be best positioned to evaluate how many innocent civilians are worth putting at risk for the successful targeting of a combatant. Some ambiguity, or possibly opacity, in reporting casualties due to drone warfare would then seem to be the best way to protect national security. How are we to balance the needs of the press, the citizenry, and those conducting drone operations for national security in our journalistic information gathering and story writing?

Discussion Questions: 

  1. What are the values in conflict in the struggle over whether drone casualty figures are released?
  2. What are the concerns of the government, and what are the concerns of the press in reporting on drone casualties?
  3. Would there still be ethical worries if the government was obscuring or inflating only the combatant death estimates?
  4. Can you identify a creative way that the government can uphold some measure of transparency in its drone operations and still effectively pursue its military operations?

Further Information:

“Obama Administration Discloses Number of Civilian Deaths Caused by Drones.” CBS News, July 1, 2016. Available at:

“White House Tally of Civilian Drone Deaths Raises ‘Hard Questions.’” CBS News, July 5, 2016. Available at:

Director of National Intelligence. “Summary of Information regarding U.S. Counterterrorism Strikes outside Areas of Active Hostilities.” July 2016. Available at:

Jack Serle, “Obama Drone Casualty Numbers a Fraction of Those Recorded by the Bureau.” The Bureau of Investigative Journalism, July 1, 2016. Available at:

Scott Shane, “Drone Strike Statistics Answer Few Questions, and Raise Many.” The New York Times, July 3, 2016. Available at: drone-strike-statistics-answer-few-questions-and-raise-many.html


Sabrina Stoffels & Scott R. Stroud, Ph.D.
Media Ethics Initiative
University of Texas at Austin
April 27, 2018


Swipe Right to Expose

CASE STUDY: Journalism, Privacy, and Digital Information

Case Study PDF | Additional Case Studies

In a world where LGBTQ people still often lack full protection and equal rights, it can be a challenge for someone to be public about their sexuality. Some have taken to dating apps such as Grindr, Tinder, and Bumble, which allow for a more secure way for people to chat and potentially meet up outside of cyberspace. On such apps, one’s dating profile can often be seen by anyone else using the app, illustrating how these services blur the line between private and public information.

Nico Hines, a straight, married reporter for the news site The Daily Beast, decided to report on the use of dating apps in the Olympic Village during the 2016 Rio Olympics. Relying upon the public and revealing nature of profiles—at least to potential dates—Hines made profiles on different dating apps and used them to interact with a number of athletes. Most of his interactions were through Grindr, a dating app for gay men. The app works through geotagging so that people can match with others who are geographically near them. Profiles include information such as height, ethnicity, and age, which can often be used to identify a person even if a full name isn’t given. He eventually wrote up his experiences in the article “The Other Olympic Sport in Rio: Swiping.”

To preserve the anonymity of the individuals with whom he was interacting, Hines did not use specific athletes’ names in his story. He did, however, reveal details about those seeking dates, including their physical features, the sport they were competing in, and their home country. Readers and critics found that it was relatively easy to identify which athletes he was talking about using the information he provided. Since many of these athletes had not openly identified as LGBTQ, critics argued that he was “potentially outing” many of them by describing them in the course of his story. Amplifying this concern was the fact that in some of the home countries of the men who were potentially outed, it was dangerous or illegal to be openly gay.

In his defense, some pointed out that Hines didn’t intend to out or harm specific vulnerable individuals in the course of his story about the social lives of Olympic athletes. His published account didn’t include the names of any male athletes he interacted with on Grindr, and he only named some of the straight women whom he found on the Tinder app. The Daily Beast’s editor-in-chief, John Avlon, stated that Hines didn’t mean to focus mainly on the Grindr app, but since he “had many more responses on Grindr than apps that cater mostly to straight people,” Hines decided to write about that app. When Hines interacted with the athletes on the various dating apps, he didn’t lie about who he was and, as Avlon noted, Hines “immediately admitted that he was a journalist whenever he was asked who he was.”

The controversy eventually consumed Hines’ published story. After the wave of criticism crested, The Daily Beast first removed the names and descriptions of the athletes in the article. But by the end of the day, the news site had removed the article entirely, with Avlon replacing it with an editor’s note that concluded: “Our initial reaction was that the entire removal of the piece was not necessary. We were wrong. We’re sorry.” Regardless of the decisions reached by this news site, difficult questions remain about what kinds of stories—and methods—are ethically allowed in the brave new world of digital journalism.

Discussion Questions: 

  1. What are the ethical values or interests at stake in the debate over the story authored by Hines?
  2. Hines tried to preserve the anonymity of those he was writing about. How could he have done more for the subjects of his story, while still doing justice to the story he wanted to report on?
  3. There are strong reasons why journalists should ethically and legally be allowed to use publicly available information in their stories. Is the information shared through dating apps public information?
  4. How does Hines’ use of dating profile information differ, if at all, from long-standing practices of investigative or undercover journalism?

Further Information:

Frank Pallotta & Rob McLean, “Daily Beast removes Olympics Grindr article after backlash.” CNN, August 12, 2016. Available at: media/daily-beast-olympics-article-removal/index.html

John Avlon, “A Note from the Editors.” The Daily Beast, August 11, 2016. Available at:

Curtis M. Wong, “Straight Writer Blasted For ‘Outing’ Olympians In Daily Beast Piece.” Huffington Post, August 11, 2016. Available at: https://www.huffingtonpost.com/entry/daily-beast-grindr-article_us_57aca088e4b0db3be07d6581


Bailey Sebastian & Scott R. Stroud, Ph.D.
Media Ethics Initiative
University of Texas at Austin
April 24, 2018


What Exactly is Revenge Porn or Nonconsensual Pornography?

Scott R. Stroud & Jonathan Henson (University of Texas at Austin)*

In light of the recent Texas 12th Court of Appeals ruling striking down Texas Penal Code 21.16(b), otherwise known as the 2015 Texas Revenge Porn Law, it is useful to reconsider the complexity inherent in this awful online phenomenon. Is online revenge porn or nonconsensual porn as simple and straightforward as these laws and policy-makers lead one to believe? The court specifically criticized the law as “an invalid content-based restriction and overbroad in the sense that it violates rights of too many third parties by restricting more speech than the Constitution permits” (commentary here and here). How might third parties be implicated in such broad legal characterizations of revenge or nonconsensual porn? Being careful, honest, and precise about the various parameters of this online behavior is vital to constructively addressing it through legal, policy, and educational means.

All of the passionate rhetoric and calls for legislative action heard over the past few years assume that we know exactly what we mean when we issue calls against “revenge porn” or “nonconsensual pornography.” We want to argue that there is actually a plethora of behaviors potentially existing in the category of revenge porn, and not all of them entail the same harms, culpability, or, as we will discuss, the same issues of consent and privacy.

First, let us see how some leading advocates of criminalizing revenge porn—also discussed as nonconsensual pornography—define the term. Citron & Franks (2014, 1) stipulate that: “Nonconsensual pornography involves the distribution of sexually graphic images of individuals without their consent. This includes images originally obtained without consent (e.g., hidden recordings or recordings of sexual assaults) as well as images originally obtained with consent, usually within the context of a private or confidential relationship (e.g., images consensually given to an intimate partner who later distributes them without consent, popularly referred to as ‘revenge porn’).” This definition parses into two categories: images obtained through secret recordings or voyeurism, and those obtained with consent (but not, presumably, consent to publish outside of the relationship). There is no mention here of any diversity among posting behaviors online or on social media; it is simply assumed that the image content carries a consent-status, and that harm comes from distributing images with a negative consent-status. In another attempt at a definition, Franks (2015, August 17) states that “Nonconsensual pornography refers to sexually explicit images disclosed without consent and for no legitimate purpose. The term encompasses material obtained by hidden cameras, consensually exchanged within a confidential relationship, stolen photos, and recordings of sexual assaults.” Here the same idea is proffered—imagistic content that is simply “disclosed,” coming from knowing or unknowing subjects depicted within the content. A new aspect is added, however, that defines nonconsensual pornography by two criteria: the images lack consent (presumably to distribute to others) and they serve “no legitimate purpose.” Of course, the latter clause is tough to operationalize in practice, as individuals will often think of speech they disagree with as serving no legitimate purpose.

Both of these definitions of revenge porn might serve us well if we are fixated on the horrible case of one former relational partner attempting to harm another partner through the disclosure of sexual images online. They do not serve us well, however, when we start to ask specific questions about the different ways that this content can be posted, or how it can remain identifiable or linked to that real-life subject. In other words, these delineations of the category of revenge porn do not do justice to the various ways that the simple-sounding act of “disclosure” can happen online, and they do not seriously interrogate what it means for an image to have negative consequences through its connection with that identifiable person. These are, however, vital points, since the whole crux of controlling a phenomenon like revenge porn rests on having an understanding of what exactly one is trying to control with laws and moral approbation. Building on the actual and potential differences in revenge porn noted in previous studies (e.g., Stroud, 2014), let us divide up the category of revenge porn (or nonconsensual porn) into four variable dimensions: (1) the source of the content posted, (2) the consent-status of the material posted, (3) the intent of the agent doing the posting, and (4) identifying features resident in the imagistic content. These can be found in Table 1. We will then describe each one of these aspects in further detail to arrive at a clear specification of the range of potential revenge or nonconsensual pornography practices in the online world.

Table 1: Types of Revenge/Nonconsensual Porn Posting Behaviors

Content Source    Consent Status    Poster Intent       Identifying Content
Self              Granted           Praise subject      Known identifiers
Other             Not granted       Harm subject        Unknown identifiers
Online            Uncertain         Other intentions    No possible identifiers

The first dimension of revenge porn posting behavior involves where the content came from prior to a given act of posting. This can be labeled the source dimension. Did it come from the poster’s own actions, such as the use of their camera? Or was it sent to them by someone else—a relational partner or other conversant—who created that content? Thus, the source dimension can be divided into poster-created and other-created content. Some of this other-created content could come from a relational partner, or from online conversational partners who send one nude images, as appeared to be the case on some of the major revenge porn sites (Peterson, 2013, February 18). One could also stumble across such content online, with no attributable source evident. Call this material simply online content, since its story of authorship is unclear or hidden. This is what one might find if one googled a random term in an image search: who exactly knows where the resulting images came from?

The second dimension of revenge porn postings involves what we will call the consent-status of the images. We will discuss consent more in the following sections, as this is a vital point about the ethics of posting such material. For now, however, it will help any analysis of the issue to be clear about the range of consent-statuses such material can possess. Partisans tend to talk about revenge porn images as simply coming with or without consent to be shared beyond the original instance of sharing. This is misleading in its simplification of the issue. First of all, the image is simply the image. One cannot look at an image and see the consent granted to its use. Thus, consent-status is something behind and beyond the specific image, and it lies in the people involved in the transaction. An image can therefore be exchanged with consent granted for further distribution by the giving partner, or it can be exchanged without consent being granted for further distribution. Furthermore, the consent-status of a given image could be uncertain. This is likely to be the case when an image is found online, or when the parties do not openly talk about the limits of future distribution. We will return to these issues in the following sections.

The third dimension that varies in revenge porn posting behaviors is the intention of the posting agent. Why do they post this material? In many cases, it is to harm or shame a former relational partner (Stroud, 2014). Another possibility is that someone posts material to praise the subject, either in their actions, character, or, more likely, physical appearance. This appears to have been a common practice, at least early in the history of revenge porn sites; Hunter Moore, if we are to trust his early media pronouncements, indicated that around 50% of his site’s submissions were from individuals seeking their own quick internet fame (Hill, 2012, April 5). Some find this to be an incredible claim, and with good reason considering the recent evolution of how revenge porn is often used in courses of harassment and stalking. Regardless of the accuracy of Moore’s claims, we do see the conceptual room for the intention to praise the posted subject—especially when we move into content simply found on the internet (online sources) and shared. There can be other intentions as well, some connected with posting for entertainment value and some with financial gain (as seemed to be the motive behind many revenge porn sites).

The most pernicious effects of revenge porn, clearly acknowledged in cases such as those that involve a person seeking to harm an ex-relational partner, come not simply from the image existing or being seen by others. The worst harms—threats, stalking, targeted harassment, family embarrassment, and financial or job loss—all turn on damaging content being connected to an identifiable individual. Thus, we can discern another dimension of revenge porn posting: the presence and type of identifying content resident in the image or along with the image. There are millions of sexual or nude images online at any given moment, but when one is identified as connected to that image, and when an audience exists for that image, harm can occur. Usually this is through contacting real-life people connected to that individual and sharing the embarrassing image. Many posters want this outcome, so they post some amount of identifying material along with the image. This can include a range of information about the subject: first name, first name with last initial, full name, address, town, email address, employer’s address or contact information, family member information, and even their social security number. It is important to stress that various sites and webmasters post various amounts of this information; some are rather minimal in what identifying information they post (Stroud, 2014).

These bits of information can be called known identifiers, since they are attached to the image and connect the depicted person to an identifiable individual who putatively does not want the world to see them nude. As previous research has made clear, sometimes the crowd-sourced aspects of the internet communities surrounding the posting of revenge porn content supply the identifying information. In the usual story, the spurned relational partner posts the content and identifies the subject; in the wild west of the internet, however, content is often posted and the amorphous, unknown crowd then comments upon it, supplying more and more detail about the subject (Stroud, 2014). This is often due to others recognizing the individual in the picture, either through their face, objects in the background (e.g., a diploma on the wall, a name tag, etc.), or other identifying marks (distinctive tattoos, a recognizable dorm room, etc.). Often these marks become useful through the crowd-sourced, mass agency of online users combining their powers of identification and inference.

Thus, someone could post an image with no known identifying content but still leave room for future identification through others noticing heretofore unknown identifiers resident in the image. As facial “tagging” and recognition software advances, one could argue that many pictures contain such unrealized clues to the subject’s real-life identity. Conceivably, one could post a photo with no possible identifiers in it; perhaps this would be the case in a close-up shot of a body part with no identifying marks or background objects visible. Such non-identifying shots could still be used for the purposes of revenge porn by posting them next to a non-nude image of some identifiable person, thereby making the visual argument that the identifiable person is the same person depicted nude in the non-identifiable picture. This contingency, like many of the permutations mentioned previously, is ignored in most passionate discussions of revenge porn in favor of the standard “relationship gone bad” story discussed in the introduction. Yet one thing is clear: revenge porn is not one behavior, but a cluster of activities that vary along certain dimensions. What impact will the realization of its complexity have on our discussion of its harms?

*Excerpt adapted from Scott R. Stroud & Jonathan Henson, “Social Media, Online Sharing, and the Ethical Complexity of Consent in Revenge Porn,” The Dark Side of Social Media: A Consumer Psychology Perspective, Angeline Close Scheinbaum (ed.), Routledge, 2018.

Dr. Scott R. Stroud is an associate professor in the Department of Communication Studies at the University of Texas at Austin and director of the Media Ethics Initiative. Jonathan Henson is a doctoral student in the Department of Communication Studies at the University of Texas at Austin.


Citron, D. K., & Franks, M. A. (2014). Criminalizing Revenge Porn. Wake Forest Law Review, 49, 345-391. Online.

Franks, M. A. (2015, August 17). Drafting an effective “revenge porn” law: A guide for legislators. Online.

Hill, K. (2012, April 5). Why we find Hunter Moore and his “identity porn” site, IsAnyoneUp, so fascinating. Forbes. Online.

Peterson, H. (2013, February 18). Revenge porn website operator accused of “catfishing” to trick woman into sending him nude photos so he can upload to site. Daily Mail. Online.

Stroud, S. R. (2014). The Dark Side of the Online Self: A Pragmatist Critique of the Growing Plague of Revenge Porn. Journal of Mass Media Ethics, 29(3), 168-183.



Media Ethics and Virtual Reality

The Media Ethics Initiative Presents:

The Ethics of Virtual Reality

Dr. Donald Heider

Dean and Professor of Communication
Loyola University Chicago

April 10 | Moody College of Communication | University of Texas at Austin

Dr. Donald Heider is the Associate Provost for Strategy & Innovation and Founding Dean of the School of Communication at Loyola University Chicago, and founder of the Center for Digital Ethics & Policy. He is author, co-author, or editor of eight books, including two volumes of Ethics for a Digital Age, the latest of which is due out in 2018. Dr. Heider is a multiple Emmy Award-winning producer who spent ten years in news before entering the academy. He earned his B.A. from Colorado State University, his M.A. from American University, and his Ph.D. from the University of Colorado. He served previously as Associate Dean at the Philip Merrill College of Journalism at the University of Maryland and was on the faculty at the University of Texas at Austin, the University of Colorado, and the University of Mississippi. Co-sponsored by the School of Journalism, UT Austin.


Biometrics, Data, and Privacy

CASE STUDY: Facing the Challenges of Next Generation Identification: Biometrics, Data, and Privacy

Case Study PDF | Additional Case Studies

In 2011, the FBI signed a contract with the military developer Lockheed Martin to launch a pilot program called “Next Generation Identification” (NGI). This program was designed to utilize biometric data to assist the investigations of local and federal law enforcement agencies as part of anti-terrorism, anti-fraud, and national security programs. By late 2014, the program’s facial recognition sector became fully operational and was used regularly by the FBI and other law enforcement agencies to aid in the identification of persons of interest in open investigations. This program compares visuals from surveillance cameras or other imaging devices with the world’s largest database of photographs to find similarities in facial details and provide identification. When the program was first introduced, the FBI used biometric data from known and convicted criminals to compile the database. However, the database has since been expanded through a program run by the FBI’s Criminal Justice Information Services (CJIS) called FACE (facial analysis, comparison, and evaluation), and now includes driver license photos from individuals who have been issued a photo ID by participating states.

Objections to the inclusion of driver’s license photos of law-abiding citizens have been raised by many organizations, including the Government Accountability Office, the Electronic Frontier Foundation (EFF), and the Electronic Privacy Information Center. This controversy primarily stems from the perceived lack of disclosure by the FBI about the specifics of the NGI and FACE programs, and from citizens’ inability to consent to, or opt out of, the use of their images. Jennifer Lynch, Senior Staff Attorney for the EFF, raised concerns about the implications of such technology, as well as its legal validity. She notes that “data that’s being collected for one purpose is being used for a very different purpose.” Lynch argues that because of facial recognition technologies, “Americans cannot easily take precautions against the covert, remote, and mass capture of their images,” especially when they are not made aware that such capture and retention is taking place. These organizations argue that this practice violates federal law (the Privacy Act of 1974), which treats images of faces as protected personal information, and that it alters “the traditional presumption of innocence in criminal cases by placing more of a burden on the defendant to show he is not who the system identifies him to be.”

Those who are not worried about the NGI program or the inclusion of law-abiding citizens’ photographs in the database say that biometric data, including a person’s face, is no different from a person’s fingerprints, and that this information is crucial for national security. Such information has gained renewed importance in light of recent terror attacks at home and abroad. Stephen Morris, the assistant director of the CJIS, states that “new high-tech tools like facial recognition are in the best interests of national security” and argues that they aid law enforcement officials in identifying and capturing terrorists and other criminals. The FBI also “maintains that it searches only against criminal databases,” and that requests can be made to include other outside databases, such as various state and federal databases (including state driver’s license photo databases and the Department of Defense passport photo database), if and when the FBI deems it necessary for a specific criminal investigation. This highlights the fact that facial recognition technology cannot be considered independently of the databases it uses in its search for more information about imaged persons of interest. Against those who call for more human oversight of the database requests integral to facial recognition technology, Morris argues that those who think that “collecting biometrics is an invasion of folks’ privacy” should instead be concerned with how to best “identify…the right person.” How will our society face the conflicting interests at stake in the collection and use of biometric data in maintaining public safety and national security?

Discussion Questions: 

  1. What are the ethical values or interests at stake in the debate over using photo databases in the NGI program?
  2. Do you believe the government can use databases not intended for biometric identification purposes? If so, what limits would you place on these uses?
  3. As facial recognition technology gets more advanced, what sort of ethical limitations should we place on its use by government or private entities?
  4. What would the ethical challenges be to using extremely advanced facial recognition technology in situations not concerning national security—such as online image searches?

Further Information:

Rebecca Boyle, “Anti-Fraud Facial Recognition System Revokes the Wrong Person’s License.” Popular Science, July 18, 2011. Available at: 2011-07/anti-fraud-facial-recognition-system-generates-false-positives-revoking-wrong-persons-license

Eric Markowitz, “The FBI Now Has the Largest Biometric Database in the World. Will It Lead to More Surveillance?” International Business Times, April 23, 2016. Available at:

Sam Thielman, “FBI Using Vast Public Photo Data and Iffy Facial Recognition Tech to Find Criminals.” The Guardian, June 15, 2016. Available at:


Jason Head & Scott R. Stroud, Ph.D.
Media Ethics Initiative
University of Texas at Austin
April 17, 2018

Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

Dying to Be Online

CASE STUDY: The Ethics of Digital Death on Social Media

Case Study PDF | Additional Case Studies


Social media has become a feature of the daily lives of many people. How will it feature in the time after their daily lives end? Most people don’t stop to think about what happens to their online selves after they pass away. Who has the rights to the digital image or record of a person after they are dead? Social media platforms have been adapting to a growing presence of the online dead. When Facebook is alerted to a user’s death, it can freeze or “memorialize” the account. This serves two functions: first, it prevents anyone from hacking the account or gaining access to it. Second, it turns the profile into a memorial space that allows friends and family to publicly post their memories of the deceased individual.



Such memorialization prevents embarrassing “normal” online interactions with the account of the deceased, such as the sending of friend requests or invitations to shared events. But are these online memorials without controversy? David Myles and Florence Millerand point out that norms have yet to be established when it comes to online mourning, which leads to controversy “regarding what constitute acceptable ways of using SNS [social network sites] when performing mourning activities online.” For example, many people may view the act of posting on a deceased person’s Facebook page as immoral or as disrespectful to close family and friends trying to grieve privately. At the same time, this digital realm of mourning could provide closure for a geographically dispersed group of acquaintances and allow people to continue talking to their loved ones by honoring their legacy. As the Huffington Post notes, death surrounds us online, as “increasingly the announcements and subsequent mourning occur on social media.”

Facebook’s memorialization policy can be seen as a way to protect the privacy and dignity of the deceased. But the initiation of the memorialization process does not have to come from designated parties such as one’s lawyers, parents, or relational partners. This feature of Facebook’s memorialization process came into focus in the aftermath of the tragic death of a 15-year-old German girl in 2015. The girl died when a train struck her at a Berlin railway station, and a loyal friend quickly reported her death to Facebook, initiating the memorialization process. Only at a later point did the deceased girl’s parents wonder whether her Facebook messages could answer questions about her death being a suicide spurred on by online bullying. The girl’s mother reported having her daughter’s login information, but she could not access the account because of its “memorialized” status. Facebook refused to budge from its policy governing locked memorial accounts, since the conversations people have on Facebook Messenger are considered private and often involve third parties. Facebook stated that it was “making every effort to find a solution which helps the family at the same time as protecting the privacy of third parties who are also affected by this.” Revealing the daughter’s messages might prove the existence of online bullying, but it would surely reveal conversations between the girl and various parties—bullies and non-bullies—that all or some of the participants thought were private.

The German court system eventually agreed that this concern for privacy was paramount, ruling that the parents “had no claim to access her details or chat history,” and that this decision was “based on weighing up inheritance laws drawn up almost 120 years ago.” The parents’ need to know (and to respond to any bullying) was judged as less important than upholding a predictable and general principle of privacy governing the exchange of messages between third parties, a principle that affects the vast majority of social media users.

Even though this tragic case is unusual, it highlights the challenges Facebook and its users must navigate as more and more Facebook users pass away. How do we deal with the fact that social media is, in some ways, quickly becoming a virtual graveyard? How do we deal with the digital remains of friends, loved ones, and strangers in a way that respects the living and the dead?

Discussion Questions: 

  1. What is the best way to treat the digital remains of those who have died? How is this optimal for the deceased—and those still living on social media?
  2. Do you agree with Facebook’s policy on not allowing access to messages of the deceased? If you disagree, how would you protect the privacy interests of third parties who corresponded with the deceased individual?
  3. If the parents of the deceased girl wanted access to her messages for a less pressing matter—perhaps to gather more material to remember her by—would this alter your stance on the strong privacy position taken by Facebook concerning memorialized accounts?
  4. What do you want done to your social media accounts, if any, after you die? What ethical values do your choices emphasize?

Further Information:

Kate Connolly, “Parents Lose Appeal Over Access to Dead Girl’s Facebook Account.” The Guardian, May 31, 2017. Available at:

David Myles & Florence Millerand, “Mourning in a ‘Sociotechnically’ Acceptable Manner: A Facebook Case Study.” In A. Hajek, C. Lohmeier, & C. Pentzold (eds.), Memory in a Mediated World. London: Palgrave Macmillan, 2016.

Jaweed Kaleem, “Death on Facebook Now Common as ‘Dead Profiles’ Create Vast Virtual Cemetery.” The Huffington Post, December 6, 2017. Available at:


Anna Rose Isbell & Scott R. Stroud, Ph.D.
Media Ethics Initiative
University of Texas at Austin
April 12, 2018

Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.
