Media Ethics Initiative

First, Do No Harm?

CASE STUDY: Doctor-Patient Confidentiality and Protecting Third Parties

Case Study PDF | Additional Case Studies

As clients seeking help from a mental health professional, we expect to be able to be open and honest about our thoughts, feelings, and emotions. Mental health professionals, in turn, strive to create an environment that is safe and inviting for all conversations. When disclosing physical symptoms to our medical doctors, we typically do so without fear of those details being revealed to others because of the legal and ethical ideal of doctor-patient confidentiality. Should this feeling of security be any different when divulging personal details to a mental health professional?

The case of Volk v. DeMeerleer, decided by the Washington Supreme Court in 2016, made the application of these confidentiality guidelines more complex when a treating psychiatrist was held liable for a patient’s homicidal actions. One of the fundamental questions of the case concerned the obligation of mental health professionals to protect third parties from the violent acts of their patients. Jan DeMeerleer was a patient of the psychiatrist Dr. Howard Ashby on and off over the course of nine years, and he voiced thoughts of harming himself and others intermittently throughout his sessions during this period. At what would be his last visit with Dr. Ashby, DeMeerleer did not mention any thoughts of violence. But a few months after this final appointment, DeMeerleer murdered his ex-fiancée, Rebecca Schiering, and one of her children, and then committed suicide. Following their deaths, Schiering’s mother sued Dr. Ashby for not “mitigating DeMeerleer’s dangerousness or warning the victims.”

The case against Dr. Ashby was motivated by the idea that he should have tried to reduce the chances of harm to third parties or should have warned any targeted third parties of the possible danger they were in. As the American Medical Association put it, physicians face both an “ethical obligation to protect confidentiality with their patients and their legal responsibilities as a physician.” In his notes from the final session with DeMeerleer, Dr. Ashby mentioned the positive changes he had noticed in the patient. He observed the occasional suicidal thoughts that DeMeerleer experienced and noted that “at this point he’s not a real clinical problem, but that he’ll keep an eye out.” According to Baird Holm, “Volk is a somber reminder that mental health professionals should always be vigilant and, perhaps, err on the side of disclosing when a patient exhibits violent thoughts and behavior.” Should Dr. Ashby have blood on his hands if he did not truly see any potential danger during his last visit with DeMeerleer?

Those defending Dr. Ashby’s reluctance to share information about his patient place more importance on the norm of patient-doctor confidentiality. For doctors like Ashby, it is not uncommon to hear expressions of violent thoughts from patients, and telling the difference between vague violent musings and actionable threats is hard in practice. According to Troy Parks on the AMA Wire, the ruling against Dr. Ashby “requires a psychiatrist to do the impossible: predict imminent dangerousness in patients who have neither communicated recent threats, indicated intent to do harm, nor indicated a target for a potential threat.” Many believe that weighing this possible liability when taking on new clients could limit mental health professionals’ ability to treat patients. It may also make psychiatrists more reluctant to take on difficult cases involving clients with a history of violence.

If doctors begin to report more threats observed in client communication, many worry about the effect this would have on other patients. Troy Parks poses the question of what happens “if patients no longer feel safe sharing personal—yet crucial—information with their physicians?” The American Psychological Association emphasizes that patients need to feel comfortable talking about private and revealing information, and that therapy should be a safe place for patients to talk about anything without fear of the information leaving the room. A chilling effect on such disclosures might, in turn, undermine the effectiveness of mental health professionals. According to Jennifer Piel and Rejoice Opara, “The [AMA] Code strikes a balance in respecting confidentiality while providing an exception to allow disclosures of patient confidences under reasonable and narrow circumstances to protect identifiable third persons.” Yet the Ashby case throws this balance into question. How can doctors balance the need to create a level of confidence between the physician and the patient with the need to look out for the safety of others?

Discussion Questions: 

  1. What are the ethical values in conflict in the Ashby case? Why are these general values important?
  2. On what side would you err: toward preserving confidentiality in questionable cases, or toward warning others (and consequently violating patient confidence)?
  3. Are there creative ways to uphold both the value of confidentiality and the safety of third parties?

Further Information:

American Psychological Association, “Protecting your privacy: Understanding confidentiality.” Available at:

Gene Johnson, “Killer’s psychiatrist can be sued by victim’s family, Washington Supreme Court says.” The Seattle Times, December 24, 2016. Available at:

Troy Parks, “Court case threatens physician-patient confidentiality.” AMA Wire, October 14, 2015. Available at: 

Troy Parks, “Patient-psychiatrist confidentiality hampered in liability ruling.” AMA Wire, January 4, 2017. Available at:

Jennifer L. Piel, JD, MD, and Rejoice Opara, MD, “Does Volk v DeMeerleer Conflict with the AMA Code of Medical Ethics on Breaching Patient Confidentiality to Protect Third Parties?” AMA Journal of Ethics, January 20, 2018. Available at:


Haley Turner
Media Ethics Initiative
University of Texas at Austin
April 27, 2018

Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

The Ethics of Media Criticism

The Media Ethics Initiative Presents:

Media Criticism in Turbulent Times: A Panel Discussion

April 25, 2018 ¦ Moody College of Communication ¦ University of Texas at Austin

Dr. Rod Hart / Communication Studies ¦ Dr. Trish Roberts-Miller / Rhetoric & Writing
Dr. Barry Brummett / Communication Studies ¦ Dr. Michael Butterworth / Communication Studies
Moderated by Dr. Scott Stroud / Communication Studies



Finsta Fad

CASE STUDY: The Real Ethics of Fake Social Media Profiles

Instagram is the largest social media platform for teenagers. Not only do teens post images for their numerous followers on their primary Instagram account to see; many now also create a more selective, secondary account they refer to as their “Finsta,” short for “fake Instagram” account. Their real account is used to post pictures of their “best self” having various fun and exciting experiences, while their Finsta is often used to post pictures of their unfiltered—and perhaps more authentic—self. What ethical decisions come with the ability to have an Instagram account for a filtered self and a fake profile for a “real” self?


Template: Jim Covais

Many teens use their Finsta account to post pictures that would not appear on their “real” Instagram. Finsta posts might include pictures of teens’ goofy, emotional, or rebellious activities. Teens typically create Finstas under a name unrelated to their own so that the profile and its posts cannot easily be traced back to them; these accounts are often set to “private,” limiting them to approved followers. Many adults might see this as a red flag that their teen is engaging in scandalous or inappropriate behavior they would not want followers of their primary account to see. This is not always the case.

Instead of hiding harmful behavior, some Finstas seem to exist to escape the pressure to be perfect in the digital panopticon of the online world. Many Finsta owners see their second account as a way to post and like what they want without the judgment and criticism of friends, family, and co-workers. Teens see their Finstas as a release from crushing social media pressures to be popular and as a way to truly express themselves. The privacy of a Finsta creates a sense of freedom to show their silly and vulnerable self to their small group of friends they have approved to follow their Finsta.

Finstas hold the potential for harm, however. The idea that only a small circle of followers has access to these posts may encourage teens to post inappropriate, mean, or scandalous pictures and comments. This sense of privacy, however, can prove illusory. There is nothing to stop a follower—perhaps a friend or ex-friend of the account holder in real life—from taking a screenshot of a post and showing others this supposedly private content. In this way, comments and pictures can be shared far beyond the limited group of approved followers of one’s Finsta account. Beyond any negative comments made by the Finsta account holder, such shared content could be used in campaigns of bullying against the account holder themselves. In the search for a safe way to separate different images of who they are for different audiences, Finsta owners may find out the hard way that what they post is still traceable and potentially very public. In many ways, Finstas represent the Faustian bargain of the ability to create and remake our online selves—they can be a helpful outlet for teens and their processes of self-discovery, but they can quickly create real repercussions.

Discussion Questions: 

  1. What are the ethical issues with having both a “real” Instagram account and a Finsta?
  2. How might the use of Finstas create good consequences for teens? How might they go wrong?
  3. Do Finstas involve deception? If so, how do you balance this ethical concern against any good consequences they may enable?
  4. Are there ethical limits to how teens (or others) should use their Finstas? Or are the ethical limits mainly for their “public” and “real” account?

Further Information:

Joanne Orlando, “How Teens Use Fake Instagram Accounts to Relieve the Pressure of Perfection.” Available at:

Karena Tse, “Pro/Con: The Finsta Phenomenon.” April 10, 2017. Available at:

Savannah Dube, “Finstas and Self-Idealizing in the Digital World.” October 3, 2016. Available at:


Morgan Malouf
Media Ethics Initiative
University of Texas at Austin
April 24, 2018

Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

Collateral Damage to the Truth

CASE STUDY: Reporting Casualties in Drone Strikes

In a report released in July 2016, the Obama administration revealed the number of enemy combatants and civilians killed via drone strikes in counter-terrorism efforts since 2009. The report came after President Obama signed an executive order intended to make the once-secret drone program more transparent and protective of citizens through the routine disclosure of civilian deaths. According to CBS News, the release claimed that 2,300 enemy combatants were killed and that anywhere from 64 to 116 civilian deaths occurred as collateral damage. Controversy quickly swirled around the U.S. government’s attempt at transparency in drone use. There was disagreement about the accuracy of the numbers reported. Critics also questioned whether the administration’s decision to disclose this information so soon after significantly expanding the counter-terrorism drone program was an attempt to mislead the public into thinking that they were being fully informed about drone warfare and its costs. Naureen Shah, an affiliate of the human rights organization Amnesty International, said, “We’re going to be asking really hard questions about these numbers. They’re incredibly low for the number of people killed who are civilians.”

Watchdog groups suggest the U.S. government’s estimates of civilian deaths are far lower than the real death toll. The Long War Journal, a blog run by the non-profit media organization Public Multimedia Incorporated, reported the civilian death toll at 207 from operations in Pakistan and Yemen alone; this was the lowest count provided by any watchdog group. The Bureau of Investigative Journalism placed the death toll closer to “a maximum of 801 civilian deaths,” with a possible range of “anywhere from 492 to about 1,100 civilians killed by drone strikes since 2002.” Additionally, the government report excludes casualties from Iraq, Afghanistan, and Syria, places where the U.S. has conducted thousands of drone operations. The Bureau of Investigative Journalism generated its numbers from local and international journalists, field investigations, NGO investigators, court documents, and leaked government files.

Speaking for the accuracy of the government report, some argue that government officials have access to information the public does not. The U.S. government asserts that it utilizes refined methodologies in calculating post-strike numbers and has access to information that is generally unavailable to non-governmental organizations. This information bears both on the motivations for carrying out a drone strike and on the validity of the casualty numbers reported by various sources. Many terrorist groups spread incorrect information about the U.S. as propaganda, which can skew watchdog groups’ statistics. The report from the Obama administration voices this worry, stating that “The U.S. Government may have reliable information that certain individuals are combatants, but are being counted as non-combatants by nongovernmental organizations.”

All of this poses a problem for news reporters who rely on the government to supply information necessary for their stories. A lack of clarity in how the government defines terms such as “civilian” or “enemy combatant” in its reports causes discrepancies in the interpretation of the information journalists then relay to the public. According to this view, the public deserves the unspun data on the costs of certain policies, no matter how bracing it may be. Josh Earnest, a spokesperson for the White House, countered: “There are obviously limitations to transparency when it comes to matters as sensitive as this.” According to such a position, government officials have access to the most accurate and thorough information, and are best equipped to make sense of it and to wisely use it in protecting national interests. For instance, the government may be best positioned to evaluate how many innocent civilians are worth putting at risk for the successful targeting of a combatant. Some ambiguity, or possibly opacity, in reporting casualties due to drone warfare would then seem to be the best way to protect national security. How are we to balance the needs of the press, the citizenry, and those conducting drone operations for national security in our journalistic information gathering and story writing?

Discussion Questions: 

  1. What are the values in conflict in the struggle over whether drone casualty figures are released?
  2. What are the concerns of the government, and what are the concerns of the press in reporting on drone casualties?
  3. Would there still be ethical worries if the government was obscuring or inflating only the combatant death estimates?
  4. Can you identify a creative way that the government can uphold some measure of transparency in its drone operations and still effectively pursue its military operations?

Further Information:

“Obama Administration Discloses Number of Civilian Deaths Caused by Drones.” CBS News, July 1, 2016. Available at:

“White House Tally of Civilian Drone Deaths Raises ‘Hard Questions.’” CBS News, July 5, 2016. Available at:

Director of National Intelligence. “Summary of Information regarding U.S. Counterterrorism Strikes outside Areas of Active Hostilities.” July 2016. Available at:

Jack Serle, “Obama Drone Casualty Numbers a Fraction of Those Recorded by the Bureau.” The Bureau of Investigative Journalism, July 1, 2016. Available at:

Scott Shane, “Drone Strike Statistics Answer Few Questions, and Raise Many.” The New York Times, July 3, 2016. Available at: drone-strike-statistics-answer-few-questions-and-raise-many.html


Sabrina Stoffels & Scott R. Stroud, Ph.D.
Media Ethics Initiative
University of Texas at Austin
April 27, 2018

Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

Swipe Right to Expose

CASE STUDY: Journalism, Privacy, and Digital Information

In a world where LGBTQ people still often lack full protection and equal rights, it can be a challenge for someone to be public about their sexuality. Some have taken to dating apps such as Grindr, Tinder, and Bumble, which allow a more secure way for people to chat and potentially meet up outside of cyberspace. On such apps, one’s dating profile can often be seen by anyone else using the app, illustrating how these services blur the line between private and public information.

Nico Hines, a straight, married reporter for the news site The Daily Beast, decided to report on the use of dating apps in the Olympic Village during the 2016 Rio Olympics. Relying upon the public and revealing nature of profiles—at least to potential dates—Hines made profiles on different dating apps and used them to interact with a number of athletes. Most of his interactions were through Grindr, a dating app for gay men. The app works through geotagging so that users can match with others who are geographically near them. Profiles include information such as height, ethnicity, and age, which can often be used to identify a person even if a full name isn’t given. Hines eventually wrote up his experiences in the article “The Other Olympic Sport in Rio: Swiping.”

To preserve the anonymity of the individuals with whom he was interacting, Hines did not use specific athletes’ names in his story. He did, however, reveal details about those seeking dates, including their physical features, the sports they were competing in, and their home countries. Readers and critics found that it was relatively easy to identify which athletes he was talking about using the information he provided. Since many of these athletes were not openly identified as LGBTQ, critics argued that he was “potentially outing” many of the athletes by describing them in the course of his story. Amplifying this concern was the fact that in some of the home countries of the men who were potentially outed, it was dangerous or illegal to be openly gay.

In his defense, some pointed out that Hines didn’t intend to out or harm specific vulnerable individuals in the course of his story about the social lives of Olympic athletes. His published account didn’t include the names of any male athletes he interacted with on Grindr, and he only named some of the straight women he found on the Tinder app. The Daily Beast’s editor-in-chief, John Avlon, stated that Hines didn’t mean to focus mainly on the Grindr app, but since he “had many more responses on Grindr than apps that cater mostly to straight people,” Hines decided to write about that app. When Hines interacted with the athletes on the various dating apps, he didn’t lie about who he was and, as Avlon noted, Hines “immediately admitted that he was a journalist whenever he was asked who he was.”

The controversy eventually consumed Hines’ published story. After the wave of criticism crested, The Daily Beast first removed the names and descriptions of the athletes in the article. But by the end of the day, the news site had removed the article entirely, with Avlon replacing it with an editor’s note that concluded: “Our initial reaction was that the entire removal of the piece was not necessary. We were wrong. We’re sorry.” Regardless of the decisions reached by this news site, difficult questions remain about what kinds of stories—and methods—are ethically allowed in the brave new world of digital journalism.

Discussion Questions: 

  1. What are the ethical values or interests at stake in the debate over the story authored by Hines?
  2. Hines tried to preserve the anonymity of those he was writing about. How could he have done more for the subjects of his story, while still doing justice to the story he wanted to report on?
  3. There are strong reasons why journalists should ethically and legally be allowed to use publicly-available information in their stories. Is the information shared through dating apps public information?
  4. How does Hines’ use of dating profile information differ, if at all, from long-standing practices of investigative or undercover journalism?

Further Information:

Frank Pallotta & Rob McLean, “Daily Beast removes Olympics Grindr article after backlash.” CNN, August 12, 2016. Available at: media/daily-beast-olympics-article-removal/index.html

John Avlon, “A Note from the Editors.” The Daily Beast, August 11, 2016. Available at:

Curtis M. Wong, “Straight Writer Blasted For ‘Outing’ Olympians In Daily Beast Piece.” Huffington Post, August 11, 2016. Available at: https://www.huffingtonpost.com/entry/daily-beast-grindr-article_us_57aca088e4b0db3be07d6581


Bailey Sebastian & Scott R. Stroud, Ph.D.
Media Ethics Initiative
University of Texas at Austin
April 24, 2018

Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

What Exactly is Revenge Porn or Nonconsensual Pornography?

Scott R. Stroud & Jonathan Henson (University of Texas at Austin)*

In light of the recent Texas 12th Court of Appeals ruling striking down Texas Penal Code 21.16(b), otherwise known as the 2015 Texas Revenge Porn Law, it is useful to reconsider the complexity inherent in this awful online phenomenon. Is online revenge porn or nonconsensual porn as simple and straightforward as these laws and policy-makers lead one to believe? The court specifically criticized the law as “an invalid content-based restriction and overbroad in the sense that it violates rights of too many third parties by restricting more speech than the Constitution permits” (commentary here and here). How might third parties be implicated in such broad legal characterizations of revenge or nonconsensual porn? Being careful, honest, and precise about the various parameters of this online behavior is vital to constructively addressing it through legal, policy, and educational means.

All of the passionate rhetoric and calls for legislative action heard over the past few years assume that we know exactly what we mean when we issue calls against “revenge porn” or “nonconsensual pornography.” We want to argue that there is actually a plethora of behaviors potentially existing in the category of revenge porn, and not all of them entail the same harms, culpability, or, as we will discuss, the same issues of consent and privacy.

First, let us see how some leading advocates of criminalizing revenge porn—also discussed as nonconsensual pornography—define the term. Citron & Franks (2014, 1) stipulate that: “Nonconsensual pornography involves the distribution of sexually graphic images of individuals without their consent. This includes images originally obtained without consent (e.g., hidden recordings or recordings of sexual assaults) as well as images originally obtained with consent, usually within the context of a private or confidential relationship (e.g., images consensually given to an intimate partner who later distributes them without consent, popularly referred to as ‘revenge porn’).” This definition parses into two categories: images obtained through secret recordings or voyeurism, and images obtained with consent (but not, presumably, consent to distribute outside of the relationship). There is no mention here of any diversity among posting behaviors online or on social media; it is simply assumed that the image content carries a consent-status, and that harm comes from distributing images with a negative consent-status. In another attempt at a definition, Franks (2015, August 17) states that “Nonconsensual pornography refers to sexually explicit images disclosed without consent and for no legitimate purpose. The term encompasses material obtained by hidden cameras, consensually exchanged within a confidential relationship, stolen photos, and recordings of sexual assaults.” Here the same idea is proffered: imagistic content is simply “disclosed,” and it comes from knowing or unknowing subjects depicted within that content. A new aspect is added, however, in that nonconsensual pornography is defined by two criteria: the images lack consent (presumably to distribute to others) and they serve “no legitimate purpose.” Of course, the latter clause is tough to operationalize in practice, as individuals will often think of speech that they disagree with as serving no legitimate purpose.

Both of these definitions of revenge porn might serve us well if we are fixated on the horrible case of one former relational partner attempting to harm another partner through the disclosure of sexual images online. They do not serve us well, however, when we start to ask specific questions about the different ways that this content can be posted, or how it can remain identifiable or linked to a real-life subject. In other words, these delineations of the category of revenge porn do not do justice to the various ways that the simple-sounding act of “disclosure” can happen online, and they do not seriously interrogate what it means for an image to have negative consequences through its connection with an identifiable person. These are, however, vital points, since the whole crux of controlling a phenomenon like revenge porn rests on understanding what exactly one is trying to control with laws and moral opprobrium. Building on the actual and potential differences in revenge porn noted in previous studies (e.g., Stroud, 2014), let us divide up the category of revenge porn (or nonconsensual porn) into four variable dimensions: (1) the source of the content posted, (2) the consent-status of the material posted, (3) the intent of the agent doing the posting, and (4) identifying features resident in the imagistic content. These can be found in Table 1. We will then describe each of these aspects in further detail to arrive at a clear specification of the range of potential revenge or nonconsensual pornography practices in the online world.

Table 1: Types of Revenge/Nonconsensual Porn Posting Behaviors

Content Source | Consent Status | Poster Intent    | Identifying Content
Self           | Granted        | Praise subject   | Known identifiers
Other          | Not granted    | Harm subject     | Unknown identifiers
Online         | Uncertain      | Other intentions | No possible identifiers

The first dimension of revenge porn posting behavior involves where the content came from prior to a given act of posting. This can be labeled the source dimension. Did it come from the poster’s actions, such as the use of their own camera? Or was it sent to them by someone else—a relational partner or other conversant—who created that content? Thus, the source dimension can be divided into poster-created and other-created content. Some of this other-created content could come from a relational partner, or from online conversational partners who send one nude images, as appeared to be the case on some of the major revenge porn sites (Peterson, 2013, February 18). One could also stumble across such content online with no attributable source evident. Call this material simply online content, since its story of authorship is unclear or hidden. This is what one might find by googling a random term in an image search: who exactly knows where the resulting images came from?

The second dimension of revenge porn postings involves what we will call the consent status of the images. We will discuss consent more in the following sections, as this is a vital point about the ethics of posting such material. For now, however, it will help any analysis of the issue to be clear about the range of consent-statuses that such material can possess. Partisans tend to talk about revenge porn images as simply coming with or without consent to be shared beyond the original instance of sharing. This is misleading in its simplification of the issue. First of all, the image is simply the image; one cannot look at an image and see the consent granted to its use. Thus, consent-status is something behind and beyond the specific image, and it lies in the people involved in the transaction. An image can therefore be exchanged with consent granted for further distribution by the giving partner, or it can be exchanged without consent being granted for further distribution. Furthermore, the consent status of a given image could be uncertain. This is likely to be the case when an image is found online, or when the parties do not openly talk about the limits of future distribution. We will return to these issues in the following sections.

The third dimension that varies in revenge porn posting behaviors is the intention of the posting agent. Why do they post this material? In many cases, it is to harm or shame a former relational partner (Stroud, 2014). Another possibility is that someone posts material to praise the subject, whether for their actions, their character, or, more likely, their physical appearance. This appears to have been a common practice, at least early on, in the history of revenge porn sites; Hunter Moore, if we are to trust his early media pronouncements, indicated that around 50% of his site’s submissions came from individuals seeking their own quick internet fame (Hill, 2012, April 5). Some find this to be an incredible claim, and with good reason, considering the recent evolution of how revenge porn is often used in campaigns of harassment and stalking. Regardless of the accuracy or veracity of Moore’s claims, we do see the conceptual room for the intention to praise the posted subject—especially when we move to content simply found on the internet (online sources) and shared. There can be other intentions as well, some connected with posting for entertainment value and others with financial gain (as seemed to be the motive behind many revenge porn sites).

The most pernicious effects of revenge porn, clearly acknowledged in cases that involve a person seeking to harm an ex-relational partner, come not simply from the image existing or being seen by others. The worst harms—threats, stalking, targeted harassment, family embarrassment, and financial or job loss—all turn on damaging content being connected to an identifiable individual. Thus, we can discern another dimension of revenge porn posting: the presence and type of identifying content resident in the image or accompanying it. There are millions of sexual or nude images online at any given moment, but when a person is identified as connected to an image, and when an audience exists for that image, harm can occur. Usually this happens through contacting real-life acquaintances of that individual and sharing the embarrassing image. Many posters want this outcome, so they post some amount of identifying material along with the image. This can include a range of information about the subject: first name, first name with last initial, full name, address, town, email address, employer’s address or contact information, family member information, and even a social security number. It is important to stress that different sites and webmasters post different amounts of this information; some are rather minimal in what identifying information they post (Stroud, 2014).

These bits of information can be called known identifiers, since they are attached to the image and connect the depicted person to an identifiable individual who putatively does not want the world to see them nude. As previous research has made clear, sometimes the crowd-sourced aspects of the internet communities surrounding the posting of revenge porn content supply the identifying information. In the usual story, the spurned relational partner posts the content and identifies the subject; in the wild west of the internet, however, content is often posted and the amorphous, unknown crowd then comments upon it, supplying more and more detail about the subject (Stroud, 2014). This is often due to others recognizing the individual in the picture, either through their face, objects in the background (e.g., a diploma on the wall, name tag, etc.), or other identifying marks (distinctive tattoos, a recognizable dorm room, etc.). Often these marks become useful through the crowd-sourced, mass agency of online users combining their powers of identification and inference.

Thus, someone could post an image with no known identifying content, but still leave room for future identification through others noticing heretofore unknown identifiers resident in the image. As facial “tagging” and recognition software advances, one could argue that many pictures contain such unrealized clues to the subject’s real-life identity. Conceivably, one could post a photo with no possible identifiers in it; perhaps this would be the case in a close-up shot of a body part with no identifying marks or background objects visible. Such non-identifying shots could still be used for the purposes of revenge porn by posting them next to a non-nude image of some identifiable person, thereby making the visual argument that the identifiable person is the same person depicted nude in the second picture. This contingency, like many of the permutations mentioned previously, is ignored in most passionate discussions of revenge porn in favor of the standard “relationship gone bad” story discussed in the introduction. Yet one thing is clear: revenge porn is not one behavior, but instead a cluster of activities that vary along certain dimensions. What impact will the realization of its complexity have on our discussion of its harms?

*Excerpt adapted from Scott R. Stroud & Jonathan Henson, “Social Media, Online Sharing, and the Ethical Complexity of Consent in Revenge Porn,” The Dark Side of Social Media: A Consumer Psychology Perspective, Angeline Close Scheinbaum (ed.), Routledge, 2018.

Dr. Scott R. Stroud is an associate professor in the Department of Communication Studies at the University of Texas at Austin and director of the Media Ethics Initiative. Jonathan Henson is a doctoral student in the Department of Communication Studies at the University of Texas at Austin.


Citron, D. K., & Franks, M. A. (2014). Criminalizing revenge porn. Wake Forest Law Review, 49, 345–391. Online.

Franks, M. A. (2015, August 17). Drafting an effective “revenge porn” law: A guide for legislators. Online.

Hill, K. (2012, April 5). Why we find Hunter Moore and his “identity porn” site, IsAnyoneUp, so fascinating. Forbes. Online.

Peterson, H. (2013, February 18). Revenge porn website operator accused of “catfishing” to trick woman into sending him nude photos so he can upload to site. Daily Mail. Online.

Stroud, S. R. (2014). The dark side of the online self: A pragmatist critique of the growing plague of revenge porn. Journal of Mass Media Ethics, 29(3), 168–183.



Media Ethics and Virtual Reality

The Media Ethics Initiative Presents:

The Ethics of Virtual Reality

Dr. Donald Heider

Dean and Professor of Communication
Loyola University Chicago

April 10 | Moody College of Communication | University of Texas at Austin

Dr. Donald Heider is the Associate Provost for Strategy & Innovation and Founding Dean of the School of Communication at Loyola University Chicago, and Founder of the Center for Digital Ethics & Policy. He is author, co-author, or editor of eight books, including two volumes of Ethics for a Digital Age, the latest of which is due out in 2018. Dr. Heider is a multiple Emmy-award winning producer who spent ten years in news before entering the academy. He earned his B.A. from Colorado State University, his M.A. from American University, and his Ph.D. from the University of Colorado. He served previously as Associate Dean at the Philip Merrill College of Journalism at the University of Maryland and was on the faculty at the University of Texas at Austin, the University of Colorado, and the University of Mississippi. Co-sponsored by the School of Journalism, UT Austin.

