Media Ethics Initiative


“One Does Not Simply Create a Meme”

CASE STUDY: The Ethics of Internet Memes

Case Study PDF | Additional Case Studies

Most people never imagine that they could become a viral internet sensation overnight. Canadian teenager Ghyslain Raza never considered the possibility of digital fame until a video he had made of himself fighting imaginary enemies with a golf-ball retriever was uploaded to Kazaa, a peer-to-peer file-sharing network. According to the BBC, classmates discovered the video on a school computer and shared it; it went on to reach around 900 million views and was labeled the most viral video of 2006 (Tunison, 2017). The “Star Wars Kid” video continued to be shared, often remixed and edited with humorous music and visual effects. Ghyslain Raza had unintentionally become a meme.

There are many internet personalities who would relish the digital limelight that Raza inadvertently stepped into, but he did not. The sudden popularity of his video took a severe psychological toll on Raza, who immediately faced aggressive bullying at school. “In the common room, students climbed onto tabletops to insult me,” he shared in an interview with L’Actualite. “People made fun of my physical appearance and my weight. I was labeled the ‘Star Wars Kid’… [they] were telling me to commit suicide.” Raza eventually dropped out of school and spent time in a psychiatric institution for severe depression. His parents sued the classmates who uploaded the video without permission, which led to further bullying after some in the media claimed that the family was “greedy” (Zimmerman, 2013). Raza eventually overcame the negative repercussions of his unwanted celebrity, going on to obtain a law degree from McGill University and becoming a public advocate for victims of cyberbullying.

Raza’s experience with becoming a meme sparked debate over the ethical concerns of meme creation and sharing, especially when memes use images or videos that depict identifiable individuals. Many memes originate from videos or pictures that are “repurposed” for the goals of the meme creator, including political commentary, satire, and “lulz.” Whitney Phillips and Ryan M. Milner, co-authors of The Ambivalent Internet, emphasize that memes are never “just” memes: “The problem is that the ‘just’ framing (just joking, just a meme on the internet, just a new kind of hazing ritual) posits what we describe… as a fetishized gaze, one that obscures everything but the joke itself” (Phillips & Milner, 2017). They argue that regardless of the medium, real people are almost always affected by a meme, whether directly (as in Raza’s case) or indirectly (in the case of a general racist meme).

But in the age of the internet, who knows how the videos or images that we post or comment on will be taken by others? Screenshots and online archives extend the permanence of this content and others’ ability to comment on it, often in ways we can’t anticipate. The consequences of being immortalized in a popular meme are difficult to predict, given the ever-evolving use of the meme. Sometimes these unintended uses of images, videos, or other content seem to be a boon to the unsuspecting person depicted in the meme. Kyle Craven, better known as the subject of the “Bad Luck Brian” meme, has made “between $15,000 and $20,000 in [three years] between licensing deals and T-shirts” (Garsd, 2015). “Overly Attached Girlfriend,” a.k.a. Laina Morris, used her meme fame to launch her comedic acting career. Memes can even be considered part of modern language, signaling a way of communicating among a technological in-group using digital discourse. Above all, memes strive to be clever, creative, and humorous in their appropriation of content and images that most likely were not intended to be comedic in that specific way.

As with many internet phenomena, meme creation often foregrounds a conflict between the freedom of expression of creative meme makers and the privacy concerns of those who may find themselves featured in the meme. How much control should we have over our images and videos, and at what cost to the creativity of the digital world?

Discussion Questions:

  1. What are the ethical issues with taking a picture or video and making it into a humorous meme?
  2. What concerns about consent are implicated in making image-based memes? Are these consent concerns also present in other commentary on, or uses of, public images?
  3. Does the intention of the meme maker matter? Does it matter if they do not know (or care) about the subject depicted in the image or video content that the meme is based upon?
  4. What ethical guidelines would you propose for those creating image-based memes? How might these avoid harmful consequences or ethical transgressions—foreseen or unforeseen?

Further Information:

Garsd, Jasmine. “Internet Memes and ‘The Right To Be Forgotten.’” NPR, March 3, 2015. Available at:

Phillips, Whitney and Milner, Ryan. “The Harvard Case Shows a Meme Is Never ‘Just’ A Meme.” Motherboard, June 6, 2017. Available at:

Phillips, Whitney and Milner, Ryan. “The Complex Ethics of Online Memes.” The Ethics Centre, October 26, 2016. Available at:

Tunison, Mike. “The incredibly sad saga of Star Wars Kid.” The Daily Dot, August 6, 2017. Available at:

Weisblott, Marc. “‘Star Wars Kid’ goes on media blitz 10 years later.” May 9, 2013. Available at:

Zimmerman, Neetzan. “‘Star Wars Kid’ Breaks Silence, Says Online Fame Made Him Suicidal.” Gawker, May 10, 2013. Available at:


Alex Purcell & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
January 14, 2018

Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom or educational uses. Please email us and let us know if you found them useful! For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

Just Do It?

CASE STUDY: Nike, Social Justice, and the Ethics of Branding



ViktorCylo / CC BY 3.0 / Modified

In September of 2018, Nike unveiled their 30th anniversary “Just Do It” campaign, featuring prominent athletes such as Serena Williams, LeBron James, Lacey Baker, and Odell Beckham Jr. Also featured in the series is former San Francisco 49ers quarterback turned activist Colin Kaepernick, who has been a controversial figure since early August of 2016 when he protested racial injustice in America by sitting and later kneeling during the national anthem at the start of football games. Kaepernick’s Nike advertisement, which he posted to social media sites on September 3, 2018, displays a close-up image of his face with the words “Believe in something. Even if it means sacrificing everything” written across the image. Some have praised the advertisement as taking a stand in the nationwide debate over the state of minority rights while others have been concerned with Nike’s movement into the arena of political advocacy.

Gino Fisanotti, Nike’s vice president of brand marketing for North America, defended the company’s featuring of Kaepernick, who has not played in the NFL since the 2016 season when he refused a contract with the 49ers: “We believe Colin is one of the most inspirational athletes of this generation, who has leveraged the power of sport to help move the world forward.” Additionally, many high-profile athletes and celebrities have voiced their support for Nike and Kaepernick, including LeBron James and Serena Williams, both outspoken figures on social justice in their own right. “He’s done a lot for the African American community, and it’s cost him a lot. It’s sad,” Williams said of Kaepernick. “Having a huge company back him,” she continued, “could be a controversial reason for this company, but they’re not afraid. I feel like that was a really powerful statement to a lot of other companies.”

Other observers see Nike’s move from the commercial to the political as potentially concerning. Michael Serazio worries that this is just another sophisticated trick from a corporate powerhouse: “Getting us to think we’re making a statement by buying Nike is the long con advertising has played, and it has played it well.” Increasingly, brands are giving in to a recent demand for politicization, forcing consumers to question the political participation of various corporations. Some argue that Nike is using a popular movement to increase its own sales, and taking advantage of the prestige and celebrity status of its minority athletes while doing so. Another worry is that it distracts attention from how Nike products are made, often by workers in difficult working conditions in developing countries. As Serazio puts it, the new campaign risks diverting our focus from “the marginalized who make stuff rather than the posturing it affords those privileged enough to own it.”

The advertisement campaign is a risky move for Nike, which might garner heightened attention to its products and brand but also runs the risk of alienating part of its consumer base by becoming too politicized. Swaths of the football-watching public, and the public at large, are divided over the anthem protests carried out by Kaepernick and others. By featuring the originator of this series of protests, Nike may strike many fans as standing with black athletes and their concerns. Yet others may view the advertisement as an attempt to profit off of a protest that strikes at the heart of patriotic values that some hold dear. Some owners of Nike products even illustrated their disgust with the campaign by burning their shoes and then posting the flaming images on social media. So far, however, Nike has not sacrificed anything in the gamble that this advertising campaign represents: Nike stock is up 5% since the advertisement hit the public, representing a $6 billion increase in Nike’s market value.

Nike’s campaign was meant to garner attention and make a statement on its 30th anniversary. It succeeded at accomplishing these goals. But many are still wondering: was Nike primarily interested in taking a courageous stand on an important political issue of our time, or were they simply using Kaepernick as a clever ploy to sell more shoes?

Discussion Questions:

  1. Should a company like Nike get involved in matters of political controversy and social justice?
  2. Is Nike misusing Kaepernick and the NFL protests in its recent campaign? If you judge this to be the case, what else could Nike have done to bring attention to these issues and protests?
  3. Do you think that these advertisements will hurt Nike’s brand or bottom line? Do you think this is an important ethical consideration for Nike?
  4. Should companies take stands on controversial debates orbiting around justice and the public good in their advertisement campaigns? Why or why not?
  5. Nike clearly has the ability—and right—to take a stand on this issue. What should the virtuous consumer do in reacting to Nike’s campaign? What about if they disagree with Nike’s stance?

Further Information:

Anderson, Mae. “Good for business? Nike gets political with Kaepernick ad.” September 4, 2018. Available at:

Belvedere, Matthew J. “Sorkin: Nike’s Kaepernick ad decision was based on ‘attracting big name athletes’ who side with his cause.” September 7, 2018. Available at:

Boren, Cindy. “As Trump tweets, Colin Kaepernick shares new Nike ad that reportedly will air during NFL opener.” Washington Post. September 5, 2018. Available at:

Reints, Renae. “Colin Kaepernick Pushes Nike’s Market Value Up $6 Billion, to an All-Time High.” Fortune. September 23, 2018. Available at:

Rovell, Darren. “Colin Kaepernick part of Nike’s 30th anniversary of ‘Just Do It’ campaign.” ESPN. September 3, 2018. Available at:

Serazio, Michael. “Nike isn’t trying to be ‘woke.’ It’s trying to sell shoes.” Washington Post. September 5, 2018. Available at:


Holland J. Smith & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
September 24, 2018


Does the Photo Fit the News?

CASE STUDY: The Ethics of Powerful Images in the Immigration Debate


Sharon Lauricella, Ph.D., University of Ontario Institute of Technology


Photo: John Moore / Modified

In June 2018, discourse around immigration to the U.S. came to a peak when award-winning Getty Images photographer John Moore captured an image of a distraught, crying, two-year-old Honduran child beside her mother and a U.S. border agent. The image of the toddler, with her shoelaces removed and mouth agape in a wail, came to represent Trump’s “zero tolerance” policy toward undocumented immigrants; thousands of children, parents, and family members were separated at the U.S. border as a form of punishment for attempting to cross illegally.

The photo galvanized the public to contact both Republican and Democratic representatives with their objections, and also inspired people to donate to legal defense services for refugees. The emotional photograph had such a significant public impact that shortly after its publication, Trump issued an unusual retreat and ended the hardline policy of separating families.



Just over a week after the photo was taken, and subsequent to its viral spread, Time magazine’s cover of June 21, 2018 featured a photo illustration of the child, without her mother or the border agent, opposite an image of a menacing Trump staring down at her. The caption read, “Welcome to America.”

When Moore took the photo, it was unknown whether the mother, Sandra Sanchez, and her daughter, Yanela, would be separated during the immigration process. Subsequent to the publication of the original photo, it was discovered that the child and her mother were detained without being separated. Carlos Ruiz, the border patrol agent whose legs are in the original photo, told CBS news that upon Sanchez’s illegal crossing, he detained her and her daughter for a formal search, which took “less than two minutes.” Ruiz reported that as soon as the brief search was completed, Sanchez picked up her daughter, who immediately stopped crying.

The Trump administration harshly criticized Time, accusing it of running “fake news” given that the child in the photo illustration and her mother were not separated at the time that the photograph was taken, nor were they separated afterward. Former Speaker of the House Newt Gingrich called for Time to apologize to Trump, and Press Secretary Sarah Sanders tweeted that Democrats and the media were guilty of exploiting the little girl to “push their agenda” to revise Trump’s hardline approach to immigration. Trump also weighed in, tweeting that the Democrats have no intentions of resolving this “decades old problem,” and that “we can pass great legislation [on immigration] after the Red Wave,” referring to the then upcoming elections in November 2018. Other news media outlets argued that printing the photo was a blunder on the part of Time, and that the intended effect of the image “oversold” the problem of families detained and separated at the border.

Time issued a correction which made clear that the child was not taken away from her mother by border agents. However, Time defended its decision to run the cover’s photo illustration, indicating that the image of the distraught child is representative of the thousands of immigrant children separated from their parents and of whom there are not any photographs. Time’s editor-in-chief, Edward Felsenthal, issued a statement arguing that the photo captured the terror of what was indeed happening at the border and in detainee centers:

The June 12 photograph of the 2-year-old Honduran girl became the most visible symbol of the ongoing immigration debate in America for a reason: Under the policy enforced by the administration, prior to its reversal this week, those who crossed the border illegally were criminally prosecuted, which in turn resulted in the separation of children and parents. Our cover and our reporting capture the stakes of this moment.

Felsenthal and those supporting publication of the cover argue that the image of the toddler on its cover was representative of immigration-related political discourse, and that the photo illustration conveyed contemporary treatment of undocumented immigrants. Alongside government-issued photographs of children in cage-like pens in detainee centers, the photo on the cover of Time served as a poignant representation of the difficulties experienced by undocumented immigrants.

Moore is an experienced photojournalist and had covered immigration issues previously. He expressed that he captured a raw and honest image that did much to publicize the terror experienced by many undocumented immigrants. He said he believes that his job as a photojournalist is to inform and report what is happening, and that when he took the photo, he feared that the child and her mother would be separated in the detainment process. Moore also argues that in his work, “it is important to humanize an issue that is often reported in statistics.” He issued no objections to the use of his photograph as it appeared on the Time cover.

Discussion Questions:

  1. Moore’s photo served to educate and galvanize the public in relation to immigration issues. Does the fact that the toddler was not separated from her mother matter, given that thousands of other children were?
  2. Viewers of the Time cover could assume that the toddler was separated from her mother while being detained at the U.S. border. Given that this was not the case, should Time have run the cover with the photo illustration featuring this child?
  3. What ethical values serve as the basis for photojournalism? What interests are often in conflict in photojournalism?
  4. What values are in conflict in the use of Moore’s photo? How might you best balance the conflicting interests if you were an editor?

Further Information:

Kirby, J. (2018, June 22). “Time’s crying girl photo controversy, explained.” Vox. Retrieved from

Blake, A. (2018, June 22). “Time magazine’s major mistake on the crying-girl cover.” The Washington Post. Retrieved from

CBS News. (2018, June 22). “Crying girl in iconic image was never separated from mother, ICE says.”

Dwyer, C. (2018, June 22). “Crying toddler on widely shared Time cover was not separated from mother.” NPR. Retrieved from

Holson, L. M. & Garcia, S. E. (2018, June 22). “She became a face of family separation at the border. But she’s still with her mother.” The New York Times. Retrieved from

Time Staff. (21 June 2018). “The story behind Time’s Trump ‘Welcome to America’ cover.” Time. Retrieved from:

Schmidt, S. & Phillips, K. (2018, June 22). “The crying Honduran girl on the cover of Time was not separated from her mother.” The Washington Post. Retrieved from

Cases produced in cooperation with the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative, the author, and the University of Texas at Austin. They are produced for educational uses only. They can be used in unmodified PDF form without permission or cost for classroom or educational uses. For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

Digital Immortality on an App

CASE STUDY: The Ethics of Legacy Chatbots



Diego / Unsplash / Modified

The death of a loved one is a traumatic experience in a person’s life. But does it really signal the end of one’s interactions with the deceased? Some companies are looking to technology to give us a new hope of talking to the dead. Eternime wants to create a platform that allows friends and relatives of deceased loved ones to hold conversations with an artificial-intelligence-powered “chatbot” and avatar that resembles the deceased. Eugenia Kuyda, a co-founder of the AI startup Luka, created a digital “version” of her deceased friend Roman, as well as an app-based chatbot version of the celebrity Prince (MacDonald, 2016). These legacy chatbot programs function by applying computing algorithms to a range of digital records of the person they are simulating, including Twitter posts, Facebook posts, text messages, Fitbit data, and more. While technology has not yet advanced enough to create chatbots that resemble specific people with complete accuracy, companies like Eternime and Luka are working hard to perfect it. Eternime is allowing interested customers to sign up now and provide information so that the company will have all it needs to create a digital copy of a user in the future. When the technology catches up with the idea, people will be able to communicate with their deceased friends and relatives. Will this concept ease the loss of a loved one, or will it damage those grieving?
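The basic mechanism at work here — mining a person’s archived messages to produce replies — can be illustrated with a deliberately simple retrieval sketch in Python. This is a toy illustration only, not Eternime’s or Luka’s actual system (real legacy chatbots rely on far more sophisticated language models); the archive contents and function names are invented for the example.

```python
def tokenize(text):
    """Lowercase a message and strip simple punctuation from each word."""
    return [w.strip(".,!?").lower() for w in text.split()]

def best_reply(prompt, archive):
    """Reply with the archived message sharing the most words with the prompt."""
    prompt_words = set(tokenize(prompt))
    return max(archive, key=lambda msg: len(prompt_words & set(tokenize(msg))))

# Hypothetical archive of the deceased person's past messages.
archive = [
    "I spent the whole weekend hiking in the mountains.",
    "Nothing beats a quiet evening with a good book.",
    "The new coffee place downtown is fantastic.",
]

print(best_reply("Do you like hiking?", archive))
# Echoes the archived message about hiking back to the mourner.
```

Even this crude word-overlap version shows why the quality of such a bot depends entirely on the breadth of the digital records it can draw on — a point the companies above are counting on.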

One of the hardest parts of losing a loved one is the sense of absence that follows. Mourners are sometimes left with little to remind them of the deceased beyond a few photographs, possessions, or memories. While these things can be valuable to people who have lost someone close, they may not feel like enough. This is where Eternime is hoping to assist. If the chatbots emulate deceased persons as well as some expect, conversations with them might feel as authentic as chat conversations with the real person. Some even argue that an artificial-intelligence-powered chatbot may provide a more truthful representation of a person’s life than the person him- or herself. Simon Parkin believes that such services “counter the fallibility of human memory; they offer a way to fix the details of a life as time passes.” Being able to learn the experiences of a person now deceased could be invaluable to a grieving person. Eternime’s co-creator Marius Ursache believes that Eternime “could be a virtual library of humanity” (Parkin, 2015).

Leaving a legacy is important to many people because it ensures that their lives retain meaning even after they die. Most people achieve this by having children who will share their stories or by creating something that will be used or remembered after they are gone. Eternime, however, provides a new avenue for storing a legacy. Ursache asserts that the purpose of Eternime is “about creating an interactive legacy, a way to avoid being totally forgotten in the future” (Parkin, 2015). Critics argue, however, that this may not be a legacy you want to leave. Laura Parker writes that “an avatar with an approximation of your voice and bone structure, who can tell your great-grandchildren how many Twitter followers you had, doesn’t feel like the same thing [as the traditional ways of leaving a legacy]” (Parker, 2014).

Others worry that platforms like Eternime will not assist those dealing with grief, but rather prevent them from accepting the loss and moving on. Joan Berzoff, a specialist in end-of-life issues, believes that “a post-death avatar goes against all we know about bereavement” (Parker, 2014). Living life with an algorithm that is merely attempting to mimic a deceased person could distance a user from reality, a reality that includes the fact that when a person dies, their biological presence and form are gone forever. Applications like Eternime could prevent users from accepting the reality of their situation and cause the users additional grief.

Another issue with chatbots of deceased people is that such a program would have to access and synthesize large amounts of private and personal information. Algorithms that mimic a person may be very advanced and realistic, but they will never be perfect. For example, people often keep certain information secret or share it only with certain people. Will the AI chatbot know how to discreetly continue such practices of strategic information presentation, or to maintain lies or secrets that the deceased lived with? The chatbot may pick up information or behavior from a text to a close friend and share it with a child. While this could lead to an honest-talking and transparent version of the deceased individual, it would not represent the deceased person in the way they would wish to be represented, judging by the information-sharing practices of their past interactions. Would we find that we judge a deceased individual differently based upon how a less-nuanced, but perhaps more truthful, chatbot simulates them after they are dead?
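One conceivable safeguard for this audience problem is to tag each archived message with the circle it was originally shared with, and let the bot draw only on messages appropriate to its current listener. The sketch below is purely hypothetical — the tags, messages, and function are invented for illustration and describe no real product.

```python
# Each archived message is tagged with the audience it was originally shared with.
# (Hypothetical data: real systems would need far richer provenance metadata.)
archive = [
    {"text": "Work is going great -- promotion soon!", "audience": "family"},
    {"text": "Honestly, I am thinking of quitting my job.", "audience": "close_friend"},
]

def corpus_for(listener, archive):
    """Return only the messages the deceased actually shared with this kind of
    listener, so private confidences never feed the wrong conversation."""
    return [m["text"] for m in archive if m["audience"] == listener]

# A chatbot speaking to a family member never sees the private confession.
print(corpus_for("family", archive))
```

Even so, the filter only reproduces past sharing patterns; it cannot know which new questions would have prompted the deceased to break, or keep, a confidence — which is precisely the ethical residue the paragraph above identifies.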

The technology to achieve the goal of legacy chatbots that are indistinguishable from real humans is not yet a reality, but it may not be far off. The idea of deceased people being immortalized on the internet or in an AI-driven program gives us a digital way to talk to the dead. Will this technology lessen the burden of losing a loved one, or make it harder to accept the loss? Will it create legacies to be passed down to future generations, or tarnish our perception of people of the past?

Discussion Questions:

  1. Do you think it is realistic, given enough personal data, to create an interactive simulation of a person in chatbot form?
  2. Would legacy chatbots be a helpful thing for those mourning the loss of a loved one? Might they help new people learn about the deceased person?
  3. Does the deceased person have a right to any privacy of their information? Should someone make a chatbot of a deceased person without their permission?
  4. Could a deceased individual be harmed by truthful things that their legacy chatbot says on their behalf?
  5. What ethical interests are in conflict with legacy chatbots? How might you balance these issues, should people and companies want to build such bots?

Further Information:

Hamilton, Isobel. “These 2 tech founders lost their friends in tragic accidents. Now they’ve built AI chatbots to give people life after death,” Business Insider, November 17, 2018. Available at:

MacDonald, Cheyenne. “Would you resurrect your dead friend as an AI? Try out ‘memorial’ chatbot app – and you can even talk to a virtual version of Prince,” Daily Mail, October 6, 2016. Available at:

Parker, Laura. “How to become virtually immortal,” The New Yorker, April 4, 2014. Available at:

Parkin, Simon. “Back-up brains: The era of digital immortality,” BBC, January 23, 2015. Available at:


Colin Frick & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
December 4, 2018


Is It Only A Game?

CASE STUDY: The Ethics of First-Person Shooter Video Games



Screen capture from Active Shooter

Eric Harris and Dylan Klebold slaughtered thirteen people at Columbine High School with two shotguns, a semi-automatic pistol, and a Hi-Point carbine in 1999. Adam Lanza murdered twenty children and six adult staff members at Sandy Hook Elementary School with a rifle and two handguns in 2012. Nikolas Cruz killed seventeen students and staff members at Marjory Stoneman Douglas High School with a semi-automatic rifle in 2018. Several qualities connect these killers with each other, but for some, one fact sticks out: they were all avid video gamers.

Many of the games that these individuals enjoyed are called “first-person” shooters, or games in which the player assumes a first-person perspective making it appear as if the player is, in fact, using the character’s weapon themselves. In an article exploring concerns with such games, Emma Lindsay says that it is “hard to find a mass shooter who didn’t play violent video games.” After a shooting resulted in two deaths at a high school, Kentucky governor Matt Bevin claimed that “we can’t celebrate death in video games… and then expect that things like this are not going to happen” (Ray & Schreiner, 2018). Charles N. Cox, a former video game developer at Microsoft and Sierra Studios, wrote a blog post explaining that he would never work on a first-person shooter again, stating that “money isn’t an acceptable stand-in for ethical behavior… just as legality doesn’t equal morality.”

Others disagree that violent video games share much of the blame in actual violence. Christopher J. Ferguson argues that any scientific evidence asserting violence in video games as a precedent to real-world violence is usually “methodologically messy and often contradictory.” His personal research focuses on finding biases in scholarly journals, which led to his discovery that research articles finding a relationship between video games and violence were “more likely to be published than studies that had found none” (Ferguson, 2018). A meta-analysis done by Johannes Breuer found that any statistical relationship between violent video games and real-life aggression was at a level that could be “considered trivial” (Breuer, 2015). Emma Lindsay, however, says that “people who debate the question ‘do violent video games make people violent?’ are missing the mark… the question is, ‘why do we want to play violent video games?’… What is appealing about performing 83,000 virtual murders?” 83,000 is the total number of kills Adam Lanza, the Sandy Hook shooter, racked up on his favorite first-person shooter game, Combat Arms.

Some first-person shooter games have gained extra notoriety despite aesthetics similar to those of other shooter games. One such game is Active Shooter, an independent title created by Ata Berdiyev, who has since been deemed a “troll.” The game is billed as a “dynamic SWAT simulator” and allows the player to play the role of either “a SWAT team attempting to disarm the shooter, or the shooter themselves” (Molina, 2018). Steam, a video game marketplace where independent game developers can test their game designs on a real audience, was scheduled to release Active Shooter on its platform, but it quickly pulled the game after lawmakers and families of shooting victims expressed their outrage.

While the ethical battle goes on, some video game companies are taking it upon themselves to create something of a middle ground. 2K Games has developed a game called Spec Ops, which initially looks like a typical warzone first-person shooter but serves as an exploration of the player’s own moral limits. Brian Crecente writes that “there was a moment in the game when I was asked to decide the fate of two men… instead, I opened fire on the people asking me to make the decision and it didn’t punish me.” The game offers a wide range of options for the player, and not all of them include killing someone. Lead game designer Cory Davis says, “Our goal is to invite you to have a deeper emotional reaction.” Each decision comes with a different consequence, some of which affect your fellow soldiers’ impressions of you, allowing you to choose your rewards and punishments. “The unspoken choice here,” Walt Williams, lead writer for 2K Games, says, “is ‘are you going to obey the video game? Are you going to do what you’re told to do?’” But the deeper ethical worry with any of these games is this: are we giving these games too much power by placing ourselves into the role of a “shooter” in the first place?

Discussion Questions:

  1. Is it ethically problematic to play (or to enjoy) first-person shooter video games? Why or why not?
  2. Is establishing a link between actual violence and enjoyment of these shooter games important to their ethical evaluation? Might it still be problematic to enjoy something that doesn’t have many real world effects?
  3. What is particularly worrisome about the Active Shooter game? How is it different from other violent first-person shooter games?
  4. Is there a way to design and enjoy first-person shooter games in an ethical fashion? Explain where you think the moral lines should be drawn.

Further Information:

Cox, Charles N. “Why I’ll Never Work on First-Person Shooters Again.” Charles N. Cox- On the 1’s and 2’s, December 26, 2012. Available at

Crecente, Brian. “What If a War Game Made You Question the Morality of Killing?” Kotaku, November 22, 2011. Available at

Ferguson, Christopher J. “It’s time to end the debate about video games and violence.” The Conversation, February 16, 2018. Available at

Lindsay, Emma. “The Trouble with First Person Shooters is Deeper than First Person Shooting.” Medium, March 5, 2018. Available at

Molina, Bret. “’Active Shooter’ video game simulating school shootings developed by ‘a troll,’ pulled from platform.” USA Today, May 30, 2018. Available at

Ray, Robert & Schreiner, Bruce. “Kentucky governor says school shootings are a ‘cultural problem,’ declares day of prayer.” Chicago Tribune, January 26, 2018. Available at

Wenner Moyer, Melinda. “Yes, Violent Video Games Trigger Aggression, but Debate Lingers.” Scientific American, October 2, 2018. Available at


Alex Purcell & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
December 1, 2018

Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom or educational use. Please email us and let us know if you found them useful! For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

“Some of My Best Friends are Internet Vigilantes”

CASE STUDY: Racists Getting Fired and the Ethics of Fighting Racism

Case Study PDF | Additional Case Studies


Screen capture from Racists Getting Fired

The accessibility of the internet allows nearly anyone to say nearly anything, regardless of its wisdom. On the web, power seems to rest in the hands of the people posting content. Emerging from this chaos of expression and speech is a digital system of checks and balances, in which those speaking so freely online can be called out for their negative utterances by any other denizen of the online world. This criticism of what is said online can easily travel into the offline lives and jobs of those involved. Take the Tumblr account Racists Getting Fired as an example. The page is “dedicated to publicly revealing the identities of people who use social media to express racist viewpoints, and then contacting their employers” (Haggerty, 2014). The blog originated in response to the heated reactions and racially-charged postings that followed the shooting of Michael Brown in Ferguson, Missouri. Eventually, the website’s wider potential for sharing and shaming racists became apparent. One posting screen-captured and listed on the site was a tweet by an individual who said of Baltimore rioters (after a controversial death in police custody), “I’m sure all those pieces of shit went right to the pharmacy and stole every drug in that CVS.” Another site visitor commented: “So this woman think she can not lose her job. It’s crazy in Baltimore right now and She saying racists things. Please call her job […]. Please REBLOG! We came together before, let’s put a end to ignorance! I will be calling the corporate office Monday morning!” Another signaled their support with the succinct command to those listening to “DESTROY HER INCOME.” When University of Oklahoma fraternity members were caught on video singing a racist song, their names and family information were quickly posted on the site, along with comments such as “The expelled OU racists’ names are […]. Spread their names. Make them exist in infamy.
Make sure they can never get jobs beyond (maybe) working for their parents.” Successful campaigns are listed in the section labeled “gotten,” dedicated to cases in which the racists featured on the site were indeed fired from their jobs. Many of these “vigilantes” are proud of the blog’s victories, as it “provides a gleeful jolt of dopamine when there are actual results” (McDonald, 2014).


“Code word” tags for submitted images and posts from Racists Getting Fired

While it does not completely dismantle structural racism in our society, defenders point to the blog as providing a method of delivering justifiable consequences to outed individual racists. Speech has consequences, such defenders say, and the Racists Getting Fired blog merely coordinates individual efforts to point out hateful utterances to the alleged racists’ employers and schools. People are free to be racists online, but they should not expect to be free of the criticism and negative publicity that this expression might bring with it.

But critics worry that crowd-sourced vengeance for societal wrongs might go too far. One concern is that by going beyond mere criticism to the organized contacting of an alleged racist’s employer or educational institution, the anti-racist activists transgress the limits of ethical speech. Accountability in such campaigns is compromised, and there are often few checks on the quality of claims that an individual is actually a racist. Take the example of Brianna Rivera. Rivera’s vengeful ex-boyfriend created a fake profile posing as Rivera, posted a number of racist remarks, and then sent screenshots of the faux posts to the Racists Getting Fired Tumblr blog (Levine, 2014). It didn’t take long for the blog’s community to begin their attacks on her and her employer. Thanks to observant users, the post was revealed to be fake and the blog ceased to share it, although versions of the original post remained on Tumblr. Rivera was able to keep her job, though she could never fully reclaim her reputation, as her name would always be associated with the original Tumblr post. This case shows that even with screenshot “evidence,” innocent people can be harmed. Furthermore, even if an individual actually made a racist post or comment, one might still object that being a racist shouldn’t automatically result in being fired or losing one’s economic livelihood. After all, there are many professions where workers can do their jobs adequately while subscribing to incorrect—or even hateful—views.

Motivating supporters of Racists Getting Fired is the sense that they are creating social sanctions and economic harms for those who deserve such punishment for holding racist views; this seems to be part of a strategy to disincentivize the holding of racist views in our communities. Meredith Haggerty doubts this tactic will work, arguing that these blogs aren’t going to truly change anything in society: “while watching [a racist] apologize for his comments is cathartic, and honestly, hilarious, it seems naive and unreasonable to think that there is now less hate in his heart” (Haggerty, 2014). She declares that public outings like this are “dangerous impulses.” Asam Ahmad argues for the practice of “calling in” instead of “calling out” or shaming those who make racist remarks: “calling in means speaking privately with an individual who has done some wrong, in order to address the behaviour without making a spectacle of the address itself.” Ahmad advocates this manner of reacting to racists because it seems to hold the best chance of fixing racist attitudes—public shaming simply harms racists and makes them all the more defensive against the activists causing this harm.

As activism grows in dispersed but unified online publics, the question becomes all the more pressing: Do these crowd-sourced sites and tactics assist in the battle against hate and social injustice, or do they simply represent a new means of abuse and create further animosity among fellow citizens?

Discussion Questions:

  1. What ethical values are in conflict in sites such as Racists Getting Fired?
  2. Do you agree with the “naming and shaming” approach to those who make racist (or sexist) comments online? Why or why not?
  3. If false accusations (such as that made against Rivera) could be eliminated, would such sites be ethical means of anti-racist activism?
  4. Do such crowd-sourced tactics stand a chance of creating a less racist society?
  5. How are such sites ethically different from in-person forms of activism such as picketing and boycotts?

Further Information:

Ahmad, Asam. “A Note on Call-Out Culture,” Briarpatch Magazine, March 2, 2015. Available at:

Haggerty, Meredith. “‘Racists Getting Fired’ and the Desire to Do Something About Ferguson.” WNYC, December 2, 2014. Available from

Levine, Sara. “Vengeful Ex Posts Fake Rant to Racists Getting Fired Tumblr, Gets Busted By Internet Sleuths.” Bustle, December 2, 2014. Available from

McDonald, Soraya Nadia. “‘Racists Getting Fired’ exposes weaknesses of Internet vigilantism, no matter how well-intentioned.” The Washington Post, December 2, 2014. Available from


Alex Purcell & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
November 30, 2018


Filtering Out Cyberbullying

CASE STUDY: The Ethics of Instagram’s Anti-Bullying Filters

Case Study PDF | Additional Case Studies



Of the many new ethical challenges raised by social media, one of the more worrisome is cyberbullying. A recent Pew survey found that 59% of American teens say they have experienced some type of cyberbullying online or on their cell phone. Bullying has been especially prominent on one of the most popular social media platforms, the image-intensive site Instagram. On October 9, 2018, Instagram announced that it would introduce a feature that uses machine learning techniques to detect bullying in images and captions posted on the platform. Flagged images are then reviewed by an Instagram employee, who determines whether they should be deleted. This new feature builds on a tool implemented the previous year that detects hurtful or offensive comments.
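Instagram has not disclosed how its detection model works, but the flag-then-review flow described above can be sketched in outline. Everything here is a hypothetical stand-in: the scoring function below is a toy substitute for a trained classifier, and the threshold is invented for illustration.

```python
# Hypothetical sketch of a flag-then-review moderation pipeline.
# The scoring model is a toy stand-in; Instagram's actual classifier
# and threshold are undisclosed.

def bullying_score(caption: str) -> float:
    """Toy substitute for a trained classifier: returns a score in [0, 1]."""
    hostile_terms = {"loser", "ugly", "kill yourself"}
    hits = sum(term in caption.lower() for term in hostile_terms)
    return min(1.0, hits / 2)

def triage(posts: list[str], threshold: float = 0.5) -> tuple[list[str], list[str]]:
    """Split posts into a human-review queue and a published list."""
    review_queue, published = [], []
    for post in posts:
        if bullying_score(post) >= threshold:
            review_queue.append(post)   # held for an employee's decision
        else:
            published.append(post)      # posted without intervention
    return review_queue, published
```

The key design point this illustrates is that the machine-learning step only triages: nothing is deleted automatically, and a human moderator makes the final call on everything the model flags.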

Instagram’s new tools are intended to prevent bullying among users of its platform, a response to criticisms that it has failed to do so—in a recent study, for instance, “Instagram was highlighted as having become the vehicle most used for mean comments” (The Annual Bullying Survey 2017). Taylor Lorenz claims that “Teenagers have always been cruel to one another. But Instagram provides a uniquely powerful set of tools to do so” (2018). The head of Instagram, Adam Mosseri, hopes to limit users’ ability to wield this “powerful set of tools” in a negative manner. In a blog post announcing the release of the new anti-bullying measures, he commented that they will “help us protect our youngest community members since teens experience higher rates of bullying online than others” (Mosseri, 2018). Many observers are hopeful that features like these will reduce the number of negative posts and comments and help the many young Instagram users who struggle with bullying on the Internet.

As laudable as these goals are, the introduction of Instagram’s anti-bullying measures has been met with concern from many of its users. Some critics believe that these features are another instance of social networks limiting free speech on their sites to make their platforms more desirable, thereby increasing the number of users and the site’s profit (Blakely & Balaish, 2017). Many believe that companies like Facebook, which owns Instagram, have too much power when it comes to deciding who gets to say what on social media. In discussing the new anti-bullying filters, Kalev Leetaru argues that Silicon Valley is transitioning from the “early days of embracing freedom of speech at all costs” to becoming “the party of moderation and mindful censorship” (2018).

Critics of Instagram’s latest moves are also concerned about the way these changes were planned and implemented. Instagram has not been fully transparent in revealing the techniques used to filter out negative images and comments: “no detail is given beyond that ‘machine learning’ is being used, no reassuring statistics on how much training data was used or the algorithm’s accuracy rate when it entered service. Subsequent public statements are typically vague, claiming ‘successes’ or misleadingly worded statements that are not corrected when the media reports them wrong and rarely include any kind of accuracy statistics” (Leetaru, 2018). This lack of transparency, explicit standards, and accuracy data is a cause for concern. Because of it, many worry about implicit bias in the anti-bullying filters—or in their human moderators—that could eliminate posts which are not truly bullying in nature. Instagram, however, argues that allowing external oversight of its filters and explaining exactly how they work would let malicious users “game” the system and evade the anti-bullying controls.

Another worry centers on the issue of whether image content can be reliably identified as “bullying” in nature. For instance, some “split-screen” images compare a picture of a user to another photo in a negative fashion, but this isn’t always the function of comparative collages. What makes an image—whether individually or as part of a collage—an essential part of an act of cyberbullying? Would posting or tagging an unflattering picture or undesirable image constitute bullying behavior? What about a user who posts an image mocking a public figure, politician, or famous celebrity? In more general terms, what differentiates an act of bullying from harsh criticism?

Instagram’s new bullying filters could help to protect younger users from harmful attacks and hateful comments. However, these preventative measures come at a cost to the freedom of speech on the Internet and give more power and control of online content to large social media companies like Facebook. How much filtering do we need to purify our online ecosystems of cyberbullying?

Discussion Questions:

  1. Should social media websites be held responsible for the content that is posted on their platforms by individual users?
  2. How would you define cyberbullying? What features must a post have to count as cyberbullying? Can you think of another sort of non-bullying post that might have some or all of these features?
  3. What ethical values are in conflict in Instagram’s attempt to fight cyberbullying? How might you balance these values in dealing with cyberbullying?
  4. What are the drawbacks to having human moderators determine which comments and posts are harmful and which should be allowed to remain? What are the drawbacks to machines or programs doing most of this work?

Further Information:

“The Annual Bullying Survey 2017: What 10,000 People Told Us About Bullying.” Ditch the Label, July 2017. Available at:

Blakely, Jonathan, and Timor Balaish. “Is Instagram going too far to protect our feelings?” CBS News, August 14, 2017. Available at:

Leetaru, Kalev. “Why Instagram’s New Anti-Bullying Filter Is So Dangerous.” Forbes, October 11, 2018. Available at:

Lorenz, Taylor. “Teens Are Being Bullied ‘Constantly’ on Instagram.” The Atlantic, October 10, 2018. Available at:

“A Majority of Teens Have Experienced Some Form of Cyberbullying.” Pew Research Center, Washington, D.C. (September 17, 2018). Available at:

Mosseri, Adam. “New Tools to Limit Bullying and Spread Kindness on Instagram.” October 9, 2018. Available at:

Wakefield, Jane. “Instagram tops cyber-bullying study.” BBC News, July 19, 2017. Available at:

Yurieff, Kaya. “Instagram says it will now detect bullying in photos.” CNN Business, October 9, 2018. Available at:


Colin Frick & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
November 20, 2018

