Media Ethics Initiative


To Catfish or Not to Catfish?

CASE STUDY: The Ethics of Online Deception



Thanks to director, producer, and writer Nev Schulman, most of us know the dangers of “catfishing” because his popular MTV television show, Catfish, highlights how quickly online deception can go wrong. “Catfishing” has gained enough traction in popular usage to warrant its addition to our lexicon. One newly-added definition of “catfishing” is “to deceive (someone) by creating a false personal profile online” (Merriam-Webster, 2018).

Why would anyone engage in such digital deception? The person doing the catfishing—the catfish, in other words—may lie for a number of reasons, but in most cases this deception comes from a desire to create and sustain a relationship (usually romantic) with someone they believe would reject them were they to act in a fully transparent manner. A catfish is often motivated by, and overwhelmed with, feelings of uncertainty, loneliness, and dejection. By creating a persona far from their own, they feel more confident connecting with others online; some even develop lasting relationships with their “catfishee.” Online identities are easily fabricated because there is no fact-checker peering over our shoulders to make sure we present ourselves as accurately as possible. Sharon Coen of the University of Salford says that catfishing “offers an opportunity for people to try on different identities, and interact with others on the basis of that identity.” She emphasizes that younger people often experience this need to experiment with their identities, including catfishing “to express parts of one’s identity which are not acceptable according to social norms.” This could include a fear of publicizing one’s sexuality, race, physical appearance, health issues, or other such factors open to social pressures. For these reasons, many people like Nev Schulman empathize with the catfish. Beyond this, he even claims that there is some benefit to the catfishee—they are flattered and take part in a meaningful relationship that, while not real in some respects, is a rewarding fantasy in other ways. In his book, Schulman says that “addicted, emotionally dependent, and in too deep, the catfished hopefuls end up turning a blind eye… the catfish makes them feel special… they make them feel loved” (Schulman, 2014).

Many still believe the ethical problem with catfishing is clear. It’s a deceptive presentation of self in relationships that are supposed to be very close and trusting. Even though many catfishers do not intend to act maliciously with their lies, some utilize this deception to gain monetary favors and gifts from the deceived catfishee. Furthermore, the catfishee is robbed of the time and effort they have invested in a relationship with someone who isn’t who they said they were. As shown on Catfish, the catfishees express a number of emotions when they discover the true identity of their online relational partner: some display feelings of anger and betrayal, others feel worthless for not being told the truth, and some are left even more insecure as a result of this deception. The deception hurts, but the illusion could be comforting if it’s maintained. Things that could be verified or falsified with a quick Google search are ignored. “Finding out that the thing that makes you happy and distracts you from all your problems isn’t real is not what people want,” Schulman says, “so they choose not to” (Horn, 2014).

Catfishing is only becoming easier—and more worrisome—with the increasing popularity of social media and online dating apps like Tinder and Bumble. Some may even consider sprinkling white lies across one’s dating profile a form of catfishing. This further blurs the line between what constitutes catfishing and what is simply image management in a complex online environment. How much truth should there be in our relationships and identities in the digital realm?

Discussion Questions:

  1. What is the ethical problem with catfishing? What values are in conflict in this case study?
  2. Does it matter if the catfish does not hope for an “offline” relationship, or if the relationship was composed of only online interactions?
  3. Assuming that the act of catfishing was not connected to a plan of defrauding the catfishee of money or goods, is the act of catfishing morally worrisome? Why or why not?
  4. How much of your real identity do you owe to people online? What principles should guide us in our interactions with others who may not fully know who we are?
  5. Why is transparency of identity ethically good in online interactions? Can you see times when hiding one’s identity online—or changing it—is an ethically good thing?

Further Information:

Coen, Sharon. “Not all online catfish are bad, but strong communities can net the ones that are.” The Conversation, September 28, 2015. Available at: https://theconversation.com/not-all-online-catfish-are-bad-but-strong-communities-can-net-the-ones-that-are-47981

Horn, Leslie. “The Psychology of Catfishing, From Its First Public Victim.” Gizmodo, September 2, 2014. Available at: https://gizmodo.com/the-psychology-of-catfishing-from-its-most-public-vict-1629558216

Martin, Denise. “Here’s How MTV’s Catfish Actually Works.” Vulture, May 21, 2014. Available at: http://www.vulture.com/2014/05/catfish-mtv-casting-production-process.html

Ossad, Jordana. “’Catfish’ Detective Nev Schulman’s Top Digital Dating Tips: Less Emojis, More Face Time.” MTV, June 26, 2014. Available at: http://www.mtv.com/news/1855372/nev-schulman-catfish-dating-tips/

Puppyteeth, Jaik. “Is it OK to Be a Casual Catfish?” Vice, February 5, 2017. Available at: https://www.vice.com/en_us/article/qkzz45/is-it-ok-to-be-a-casual-catfish

Schulman, Nev. In Real Life: Love, Lies & Identity in the Digital Age. New York: Grand Central Publishing, 2014.

Authors:

Alex Purcell & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
November 5, 2018

www.mediaethicsinitiative.org


Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom or educational use. Please email us and let us know if you found them useful! For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

Values Preserved in Stone

CASE STUDY: The Ethics of Confederate Memorials



Photo: Jesus Nazario

In the dead of night on August 20, 2017, the University of Texas removed four statues from the South Mall of the campus. Three of these statues depicted the Confederate leaders Robert E. Lee, Albert Sidney Johnston, and John Reagan. The statues were removed in response to violent protests over Confederate monuments in Charlottesville, Virginia. Two years before these statues were removed, the university took down a prominent statue of Jefferson Davis, the Confederate president. The University of Texas president, Gregory Fenves, stated that these statues were removed because they are depictions “that run counter to the university’s core values.” The men portrayed by the statues were important figures in the history of the Civil War and the United States, but many argue that they stood for a cause that is contrary to the values of our current society. Much debate has arisen as to whether figures like these should be presented and honored in our public areas.

Some argue that these statues must remain to remind us of our nation’s history, whether that history is good or bad. They claim that these monuments need not stand as a celebration of slavery or a commemoration of the Confederacy, but rather as a reminder of our past. For instance, Lawrence Kuznar, writing in the Washington Post, argues that “when racists revere these monuments, those of us who oppose racism should double our efforts to use these monuments as tools for education. Auschwitz and Dachau stand as mute testimonials to a past that Europeans would never want to forget or repeat. Why not our Confederate monuments?” In an attempt to shift the focus of these monuments to education, many local governments have chosen to supplement Confederate monuments with additional context and information to increase their educational value. For example, a historical commission in Raleigh, North Carolina, voted to supplement its Confederate statues “with adjacent signs about ‘the consequences of slavery’ and the ‘subsequent oppressive subjugation of African American people.’” Such supplemental information is meant to help relieve the negative effect of the statues while maintaining or increasing their educational value.

To many other observers, however, these statues can only be perceived as a symbolic celebration of white power and racial inequality because of the people they depict. To some, the symbolic significance of the statues outweighs any historical significance. Ilya Somin, a professor at George Mason University, believes that “no one claims that we should erase the Confederacy and its leaders from the historical record. Far from it. We should certainly remember them and continue to study their history. We just should not honor them.” The men commemorated by the statues in question fought to defend slavery. Although the institution of slavery was abolished in the United States, racial divides still dominate much of the social and political discourse. For opponents of these statues on Texas’s campus, honoring white men who risked their lives and the unity of their country to protect their right to enslave others does not present a positive image for furthering the contemporary cause of racial equality. The historical lesson depicted by these statues could, some argue, be preserved in a museum without the implications of placing a negative symbol in a public area.

Photo: Colin Frick

Proponents of the removal of Confederate statues often challenge the primary motivations for building these statues. Typically, the statues in question were not created during the Civil War or even during the Reconstruction era. The four Confederate statues at UT were placed in 1933, nearly 70 years after the end of the Civil War. Some argue that the decision to commemorate the Confederate leaders may reveal more about their builders’ desire to maintain segregation in the 1930s than about the commemorated men’s fight to protect slavery in the 1860s. The University of Texas president, Gregory Fenves, defended the university’s decision to remove the statues by referencing the historical context in which the statues were built. He argued that because the statues were “erected during the period of Jim Crow laws and segregation,” “the statues represent the subjugation of African Americans.” The statues were commissioned by George Littlefield, a man who supported segregation so strongly that “the inscription on his namesake fountain honors the South’s fight for secession.” Supporters of the removal of the statues contend that the statues do not create a welcoming educational environment for African American students, past or present.

Statues of notable figures such as those that dot campuses like the University of Texas at Austin provide a powerful way to integrate the history of regions and institutions into everyday spaces. But they also enshrine histories that are value-laden, and our moral values change over time. Who should be honored in our monuments, and when should we knock certain figures off their literal pedestals in our public spaces?

Discussion Questions:

  1. What are the various values in conflict when it comes to monuments honoring confederate leaders?
  2. What role do the motives or intentions of those who installed the statues play in the ethics of removing or leaving in place such statues? What role do our contemporary reactions to the memorialized figures play in these disputes?
  3. To what lengths can communities go in their quest to remove these statues? Are vandalism and destruction of these statues ethically problematic? Why or why not?
  4. Is there any hope for these statues being “reframed” in public spaces? Or is removal the only option?
  5. Are there any ethical issues or considerations that must be dealt with if the removed statues are displayed elsewhere, such as in a museum?

Further Information:

Nelson, Sophia. “Opinion: Don’t Take Down Confederate Monuments. Here’s Why.” NBC News, June 1, 2017. Available at: https://www.nbcnews.com/think/news/opinion-why-i-feel-confederate-monuments-should-stay-ncna767221

Kuznar, Lawrence. “I detest our Confederate monuments. But they should remain.” The Washington Post, August 18, 2017. Available at: https://www.washingtonpost.com/opinions/i-detest-our-confederate-monuments-but-they-should-remain/2017/08/18/13d25fe8-843c-11e7-902a-2a9f2d808496_story.html

Watkins, Matthew. “UT-Austin removes Confederate statues in the middle of the night.” Texas Tribune, August 21, 2017. Available at: https://www.texastribune.org/2017/08/20/ut-austin-removing-confederate-statues-middle-night/

Somin, Ilya. “The case for taking down Confederate monuments.” The Washington Post, May 17, 2017. Available at: https://www.washingtonpost.com/news/volokh-conspiracy/wp/2017/05/17/the-case-for-taking-down-confederate-monuments/

Waggoner, Martha & Robertson, Gary. “North Carolina will keep 3 Confederate monuments at Capitol.” Associated Press, August 22, 2018. Available at: https://apnews.com/c0cb1c1ed22a4302bb638748a4e62217

Fenves, Gregory. “Confederate Statues on Campus” [transcript]. The University of Texas at Austin, 2017. Available at: https://president.utexas.edu/messages/confederate-statues-on-campus

McCann, Mac. “Written in Stone: History of racism lives on in UT monuments.” Austin Chronicle, May 29, 2015. Available at: https://www.austinchronicle.com/news/2015-05-29/written-in-stone/

Authors:

Colin Frick & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
October 31, 2018

www.mediaethicsinitiative.org



Gaming the System?

CASE STUDY: Digital Ethics and Cheating in Fortnite



Since the creation of online multiplayer video games, conniving gamers have lurked in the dark corners of the digital realm, waiting to pounce on unsuspecting players with software and codes designed to give them a competitive advantage. Such cheats manipulate glitches or bugs in the game design to create “advantages beyond normal gameplay,” like gaining in-game money or increasing the likelihood of hitting an enemy target. “Massively Multiplayer Online” (MMO) games are a growing category of multiplayer gaming that permits a “large number of players to participate simultaneously over an internet connection” (Hardy, 2009). Fortnite is one of the newest MMOs to take the world by storm, allowing up to 100 players to play together in a single round and making it one of the most competitive online games in existence. Launched in July 2017, Fortnite has attracted over 125 million players and earns about $300 million every month from in-game purchases. It also creates an ideal environment for hackers and sophisticated cheaters to make an appearance.

Cheating in MMOs creates an uneven playing field for the devoted gamers who just want to enjoy their favorite pastime. Video game companies across the country are attempting to crack down on MMO cheaters by issuing severe real-life consequences. However, in the attempt to create a friendly, cheating-free gaming atmosphere, many organizations are finding themselves involved in legal battles, as in the case between Epic Games (the creators of Fortnite) and Caleb Rogers, a 14-year-old boy. The video game company filed a lawsuit against Rogers after he was caught using cheating software while playing Fortnite: Battle Royale that made it easier for him to aim weapons at oblivious players. Rogers knew what he was doing: he also created and shared a YouTube video that gave step-by-step instructions on accessing and using this “aimbot” software. While it is not clear whether Epic Games knew of the boy’s age before filing the lawsuit, a spokesperson for the company stated that “Epic is not okay with ongoing cheating or copyright infringement from anyone at any age… we’ll pursue all available options to make sure our games are fun, fair, and competitive for players.” The company argued that the YouTube video encouraged others to download the cheating software, further filling Fortnite with cheating players. In response to the lawsuit, Rogers refused to take down the how-to video and instead proudly “admitted to using the software, live streaming himself cheating” in another post on YouTube (Statt, 2017).

While the company’s intentions to sustain a fair gaming environment might seem justifiable, the 14-year-old’s mother would argue otherwise. Lauren Rogers sent a letter relaying her anger to the U.S. District Court for the Eastern District of North Carolina. In it she asserted that her son did not develop or distribute the cheating software; he only downloaded it for his own use. She contended that Epic Games, like many other video game companies, forces on users a vague end-user license agreement (EULA) so complex that Caleb, as a minor, could not meaningfully agree to it. Further, there was no parental consent agreement, meaning that neither she nor any other adult was required to read the terms and conditions. Lauren Rogers asserted that regardless of Caleb’s age, the EULA was vague and has probably never been read in its entirety by any gamer. Nick Statt, a writer at The Verge, says that “nearly every piece of technology… carries with it some type of murky agreement regulating the behavior of consumers… We agree to these contracts without reading them or even understanding what types of behavior scale from prohibited to illegal.” She also argued that even if Caleb weren’t a minor, Epic Games would have difficulty demonstrating that cheating software could harm the company’s revenue, as Fortnite is a free game whose only profits come from purchases users make within the game. She charged that the possible $150,000 fine against a 14-year-old is just a way for a major corporation to make money from a minor infraction.

Cheating in an online video game might not be glaringly harmful, but hacking can cause frustration and anger in an environment designed to offer users an escape from the stressors of reality. In the end, using cheating software subverts the purpose of the video game’s social elements and undermines the artistic intent to entertain. Should video game companies move toward drastic legal action in the persisting battle against MMO cheaters?

Discussion Questions:

  1. What is the ethical problem with cheating in games? Does the meaning of cheating change in digital realms such as that created in Fortnite?
  2. Did Rogers commit one or two ethically problematic actions when he used an aimbot to gain an advantage and then explained his use in a YouTube video? Would your reasoning change if he had only used the aimbot but not helped others use such programs?
  3. How far can companies go in enforcing fairness in their games? What principle would you argue for that governs ethical game companies and the behaviors they allow on their platforms?
  4. If fairness is an important value, what other values might matter in the social worlds created or implied by games? Should game companies also worry about other senses of equality and inclusion beyond fair competition?

Further Information:

Gilbert, Ben. “One crazy statistic shows how incredibly big ‘Fortnite’ has become in less than a year.” Business Insider, June 13, 2018. Available at: https://www.businessinsider.com/fortnite-size-statistics-players-worldwide-2018-6

Shaffer, Josh. “Video game maker goes after cheaters, including a 14-year-old boy.” The News & Observer, November 27, 2017. Available at: http://www.newsobserver.com/news/local/news-columns-blogs/josh-shaffer/article186695018.html

Statt, Nick. “Epic Games receives scathing legal rebuke from 14-year-old Fortnite cheater’s mom.” The Verge, November 27, 2017. Available at: https://www.theverge.com/2017/11/27/16707562/epic-games-fortnite-cheating-lawsuit-debate-14-year-old-kid

Techopedia. “Massively Multiplayer Online Games (MMOG).” Available at: https://vtechworks.lib.vt.edu/handle/10919/31881

Thomsen, Michael. “Cheating: Video Games’ Moral Imperative.” Fanzine, January 8, 2013. Available at: http://thefanzine.com/cheating-video-games-moral-imperative/

Author:

Alex Purcell
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
November 2, 2018

www.mediaethicsinitiative.org



Boldly Going Where Studios Have Gone Before

CASE STUDY: Fan Fiction and the Ethics of Imitation



Photo: Axanar Productions / Screencapture from Kickstarter / Modified

The famous saying “Imitation is the sincerest form of flattery,” often attributed to Oscar Wilde, is frequently invoked as a way to skitter away from accusations of plagiarism or copyright infringement. But when does imitation cease to be a mere compliment and become something that crosses an ethical line? This question arises in the case of Axanar, a fan-made short film inspired by the revered Star Trek franchise. Written by Alec Peters, Axanar was at the center of one of the most highly-debated court cases surrounding the ethical issues created when copyright protections clash with fans’ freedom of speech and expression.

This particular case began with the creation of Prelude to Axanar, a 21-minute documentary-style film. It is set in the Star Trek universe and recounts a clash, popularly known as the Battle of Axanar, between the Federation and the aggressive Klingons. Although it began with a Kickstarter funding goal of only $10,000, fans’ anticipation and interest allowed Axanar Productions to raise $101,000 for the film’s production, ensuring a high-profile release at the San Diego Comic-Con in July 2014. Axanar Productions had to rely on Kickstarter donations because of strict guidelines from Paramount Studios prohibiting sales of tickets and merchandise in support of completed fanworks. The Prelude to Axanar fan film was a success, generating over 3.4 million views on YouTube. This clearly showed fan interest in the production of the planned—but still unfilmed—feature-length fan film, Axanar.

The popularity of Prelude to Axanar caught the attention of the executives of CBS Studios and Paramount Pictures. They filed a lawsuit in 2015 alleging copyright infringement in a federal court in California. Included in their list of infringed elements were characters such as “Garth of Izar,” “Richard Robau,” and “Soval.” Also named were races and species such as “Klingons” and “Vulcans.” Distinctive costumes were also claimed as protected. This came as a shock to Axanar Productions, as there had been previous fanworks inspired by Star Trek that did not attract copyright lawsuits. Peters, the head of Axanar Productions, claimed that “We violate CBS copyright less than any other fan film” (United States District Court, 2016). Axanar Productions further defended their use of Star Trek elements by stating on their website that “Axanar is an independent project that uses the intellectual property of CBS under the provision that Axanar is totally non-commercial. That means we can never charge for anything featuring their marks or intellectual property and we will never sell the movie, DVD/Blu-ray copies, T-shirts, or anything which uses CBS owned marks or intellectual property” (Geuss, March 15, 2016). In other words, Axanar was claiming that it did nothing to take anything of value from the owners of the copyrighted story and characters.

Fans of the original Star Trek franchise and of the fan film flooded the internet with their protests against the legal actions taken by Paramount Studios and CBS. Those who support the prequel argued that some of the most popular movies and books of today began as fan productions. The successful and highly-commercialized film Fifty Shades of Grey began as fanfiction inspired by Twilight, and the acclaimed musical Wicked can technically be called a fan musical based upon The Wizard of Oz. Others argued that the studios are using this case as a way to dissuade others from producing professional-quality fanworks. The lawsuit eventually ended with a settlement in 2017 allowing Axanar Productions to film, but under even stricter guidelines. The rules that Axanar and all future creators of Star Trek-inspired fan productions must adhere to include the following strictures:

  1. Any fan film has to be 15 minutes or less for a standalone film or up to two episodes or parts not exceeding a total of thirty minutes.
  2. Star Trek cannot be part of the film title but “A STAR TREK FAN PRODUCTION” must be included in a plain typeface as a subtitle.
  3. No imitation or bootleg merchandise can be used. Props and costumes must be official merchandise that is commercially available.
  4. The cast cannot be professional actors or have been affiliated with the original Star Trek universe in any way. Neither can they be compensated for their work.
  5. A maximum of $50,000 can be raised through fundraising and the production must be non-commercial. The fan production must not generate revenue in any way. (CBS Entertainment, “Fan Films”)

These guidelines are just some of the new rules in place regulating fan productions. Those on both sides feel unsatisfied, claiming that the settlement does nothing to settle the ethical debate over what counts as imitation or plagiarism and what counts as inspiration. Do these new guidelines uphold the values of freedom of expression and creativity, or do they unduly stifle the imaginations of fans who just want to play a part in furthering their favorite works?

Discussion Questions:

  1. Why is it ethically good that original creative works are protected by measures such as copyright?
  2. What are the ethical values in conflict in cases of fan fiction based upon copyrighted works?
  3. Do you believe that Prelude to Axanar crossed any ethical lines in how it used Star Trek elements? Why or why not?
  4. What do you think about the rules enunciated for fan fiction? Do they unjustifiably constrain fan creativity?
  5. What would be the best way to protect companies’ and artists’ interests in original works and allow for related fan-created works?

Further Information:

CBS Entertainment. “Fan Films.” Star Trek, CBS Studios Inc. Available at: https://www.startrek.com/fan-films

Devenish, Alan. “The Quest to Make a Studio-Quality Star Trek Movie on a Kickstarter Budget.” Wired, June 3, 2017. Available at: https://www.wired.com/2014/07/star-trek-axanar-fan-film/

Geuss, Megan. “Judge: Star Trek Fanfic Creators Must Face CBS, Paramount Copyright Lawsuit.” Ars Technica, May 10, 2016. Available at: https://arstechnica.com/tech-policy/2016/05/judge-star-trek-fanfic-creators-must-face-cbs-paramount-copyright-lawsuit/

Geuss, Megan. “Paramount, CBS List the Ways Star Trek Fanfic Axanar Infringes Copyright.” Ars Technica, March 15, 2016. Available at: https://arstechnica.com/tech-policy/2016/03/paramount-cbs-list-the-ways-star-trek-fanfic-axanar-infringes-copyright/

Mele, Christopher. “‘Star Trek’ Copyright Settlement Allows Fan Film to Proceed.” The New York Times, January 21, 2017. Available at: https://www.nytimes.com/2017/01/21/movies/star-trek-axanar-fan-film-paramount-cbs-settlement.html

United States District Court, Central District of California. Paramount Pictures Corporation and CBS Studios Inc. v. Axanar Productions, Inc. March 11, 2016. Available at: https://cdn.arstechnica.net/wp-content/uploads/2016/03/Axanar-Klingon.pdf

Authors:

Oluwasemilore Adeoluwa & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
October 31, 2018

www.mediaethicsinitiative.org



AI and Online Hate Speech

CASE STUDY: From Tay AI to Automated Content Moderation



Screencapture: Twitter.com

Hate speech is a growing problem online. Optimists believe that if we continue to improve the capabilities of our programs, we could push back the tides of hate speech. Facebook CEO Mark Zuckerberg holds this view, testifying to Congress that he is “optimistic that over a five-to-10-year period, we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate in flagging content for our systems, but today we’re just not there on that” (Pearson, 2018). Others are not as hopeful that artificial intelligence (AI) will lead to reductions in bias and hate in online communication.

Worries about AI and machine learning gained traction with the recent introduction—and quick deactivation—of Microsoft’s Tay AI chatbot. Once this program was unleashed upon the Twittersphere, unpredicted results emerged: as James Vincent noted, “it took less than 24 hours for Twitter to corrupt an innocent AI chatbot.” As other Twitter users started to tweet serious or half-joking hateful speech to Tay, the program began to engage in copycat behaviors and “learned” how to use the same words and phrases in its novel responses. Over 96,000 Tweets later, it was clear that Tay communicated not just copycat utterances, but novel statements such as “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism,” “I love feminism now,” and “gender equality = feminism.” Users tweeting “Bruce Jenner” at Tay evoked a wide spectrum of responses, from “caitlyn jenner is a hero & is a stunning, beautiful woman!” to the more worrisome “caitlyn jenner isn’t a real woman yet she won woman of the year?” (Vincent, 2016). In short, the technology designed to adapt and learn started to embody “the prejudices of society,” highlighting to some that “big corporations like Microsoft forget to take any preventative measures against these problems” (Vincent, 2016). In the Tay example, AI did nothing to reduce hate speech on social media—it merely reflected “the worst traits of humanity.”
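Microsoft has not published Tay’s underlying architecture, so no short program can capture what actually happened. Still, the dynamic described above (a bot that treats every user message as training data, and can therefore be “poisoned” by hostile input) can be sketched in a toy word-chain bot. All names and sample data below are invented for illustration:

```python
import random
from collections import defaultdict

class CopycatBot:
    """Toy chatbot that 'learns' word patterns from every message it sees."""

    def __init__(self):
        # Maps each word to all the words users have followed it with.
        self.transitions = defaultdict(list)

    def learn(self, message: str) -> None:
        # No filtering or curation: every user message becomes training data.
        words = message.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.transitions[current].append(nxt)

    def reply(self, seed: str, max_words: int = 10) -> str:
        # Chains learned transitions to produce a "novel" response.
        word = seed.lower()
        output = [word]
        for _ in range(max_words):
            options = self.transitions.get(word)
            if not options:
                break
            word = random.choice(options)
            output.append(word)
        return " ".join(output)

bot = CopycatBot()
bot.learn("have a wonderful day")          # benign input
bot.learn("have a hateful horrible day")   # hostile users supply 'training' data
print(bot.reply("have"))  # replies may now echo the hostile phrasing
```

Even in this toy, a single hostile “training” message is enough for the hostile phrasing to resurface in generated replies; coordinated users exploited the same basic dynamic, at vastly greater scale and sophistication, with Tay.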

The Tay bot example highlights the enduring concerns over uses of AI to make important, but nuanced, distinctions in policing online content and in interacting with humans. Many still extol the promise of AI and machine learning algorithms, an optimism bolstered by Facebook’s limited use of AI to combat fake news and disinformation: Facebook has claimed that AI has helped to “remove thousands of fake accounts and ‘find suspicious behaviors,’ including during last year’s special Senate race in Alabama, when AI helped spot political spammers from Macedonia, a hotbed of online fraud” (Harwell, 2018). Facebook is also scaling up uses of AI in tagging user faces in uploaded photos, optimizing item placement and advertisements to maximize user clicks, and “scanning posts and suggesting resources when the AI assesses that a user is threatening suicide” (Harwell, 2018).

But the paradox of AI remains: such technologies offer a way to escape the errors, imperfections, and biases of limited human judgment, but they seem to always lack the creativity and contextual sensitivity needed to reasonably engage the incredible and constantly-evolving range of human communication online. Additionally, some of the best machine learning programs have yet to escape from dominant forms of social prejudice; for example, Jordan Pearson (2016) explained one lackluster use of AI that exhibited “a strong tendency to mark white-sounding names as ‘pleasant’ and black-sounding ones as ‘unpleasant.’” AI might catch innocuous uses of terms flagged elsewhere as hateful, or miss creative misspellings or versions of words encapsulating hateful thoughts. Even more worrisome, our AI programs may unwittingly inherit prejudicial attributes from their human designers, or be “trained” in problematic ways by intentional human interaction, as was the case in Tay AI’s short foray into the world. Should we be optimistic that AI can understand humans and human communication, with all of our flaws and ideals, and still perform in an ethically appropriate way?
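The over- and under-blocking problem described above is easy to see even in a toy example. The sketch below uses a hypothetical blocklist and function, not any platform’s actual system, to show how naive keyword matching simultaneously flags innocuous uses and misses simple misspellings:

```python
# Hypothetical blocklist: real systems use far larger lexicons plus ML classifiers.
BLOCKLIST = {"hateful", "slur"}

def is_flagged(comment: str) -> bool:
    """Naive moderation: flag a comment if any token exactly matches the blocklist."""
    return any(token in BLOCKLIST for token in comment.lower().split())

print(is_flagged("that comment was hateful"))           # True: exact match caught
print(is_flagged("that comment was h8teful"))           # False: misspelling slips through
print(is_flagged("critics called the policy hateful"))  # True: flagged despite reportive use
```

Production moderation systems layer machine learning classifiers on top of such lexicons, but as the Tay and name-bias examples suggest, those models carry blind spots of their own.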

Discussion Questions:

  1. What ethical problems in the online world are AI programs meant to alleviate?
  2. The designers of the Tay AI bot did not intend for it to be racist or sexist, but many of its Tweets fit these labels. Should we hold the designers and programs ethically accountable for this machine learning Twitter bot?
  3. In general, when should programmers and designers of autonomous devices and programs be held accountable for the learned behaviors of their creations?
  4. What ethical concerns revolve around giving AI programs a significant role in content moderation on social media sites? What kind of biases might our machines risk possessing?

Further Information:

Drew Harwell, “AI will solve Facebook’s most vexing problems, Mark Zuckerberg says. Just don’t ask when or how.” The Washington Post, April 11, 2018. Available at: https://www.washingtonpost.com/news/the-switch/wp/2018/04/11/ai-will-solve-facebooks-most-vexing-problems-mark-zuckerberg-says-just-dont-ask-when-or-how/

James Vincent, “Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day.” The Verge, March 24, 2016. Available at: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

Jordan Pearson, “Mark Zuckerberg Says Facebook Will Have AI to Detect Hate Speech In ‘5-10 years.’” Motherboard, April 10, 2018. Available at: https://motherboard.vice.com/en_us/article/7xd779/mark-zuckerberg-says-facebook-will-have-ai-to-detect-hate-speech-in-5-10-years-congress-hearing

Jordan Pearson, “It’s Our Fault That AI Thinks White Names Are More ‘Pleasant’ Than Black Names.” Motherboard, August 26, 2016. Available at: https://motherboard.vice.com/en_us/article/z43qka/its-our-fault-that-ai-thinks-white-names-are-more-pleasant-than-black-names

Authors:

Anna Rose Isbell & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
September 24, 2018

www.mediaethicsinitiative.org



Google Assistant and the Ethics of AI

CASE STUDY: The Ethics of Artificial Intelligence in Human Interaction



Artificial intelligence (AI) is increasingly prevalent in our daily activities. In May 2018, Google demonstrated a new AI technology known as “Duplex.” Designed for our growing array of smart devices and specifically for the Google Assistant feature, this system has the ability to sound like a human on the phone while accomplishing tasks like scheduling a hair appointment or making a dinner reservation. Duplex is able to navigate various misunderstandings along the course of a conversation and possesses the ability to acknowledge when the conversation has exceeded its ability to respond. As Duplex gets perfected in phone-based applications and beyond, it will introduce a new array of ethical issues into questions concerning AI in human communication.

There are obvious advantages to integrating AI systems into our smart devices. Advocates for human-sounding communicative AI systems such as Duplex argue that these are logical extensions of our urge to make our technologies do more with less effort. In this case, it’s an intelligent program that saves us time by doing mundane tasks such as making dinner reservations and calling to inquire about holiday hours at a particular store. If we don’t object to using online reservation systems as a way to speed up a communicative transaction, why would we object to using a human-sounding program to do the same thing?

Yet Google’s May 2018 demonstration created an immediate backlash. Many agreed with Zeynep Tufekci’s passionate reaction: “Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding ‘ummm’ and ‘aaah’ to deceive the human on the other end with the room cheering it… [is] horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing” (Meyer, 2018). What many worried about was the deception involved in the interaction, since only one party (the caller) knew that the call was being made by a program designed to sound like a human. Even though there was no harm to the restaurant and hair salon employees involved in the early demonstration of Duplex, the deception of human-sounding AI was still there. In a more recent test of the Duplex technology, Google responded to critics by including a notice in the call itself: “Hi, I’m calling to make a reservation. I’m Google’s automated booking service, so I’ll record the call. Uh, can I book a table for Sunday the first?” (Bohn, 2018). Such announcements are added begrudgingly, since it’s highly likely that a significant portion of the humans called will hang up once they realize that this is not a “real” human interaction.

While this assuaged some worries, it is notable that the voice accompanying this AI notice was more human-like than ever. Might this technology be hacked or used without such a warning that the voice one is hearing is AI-produced? Furthermore, others worry about the data integration that is enabled when an AI program such as Duplex is employed by a company as far-reaching as Google. Given the bulk of consumer data Google possesses as one of the most popular search engines, critics fear future developments that would allow the Google Assistant to impersonate users at a deeper, more personal level through access to a massive data profile for each user. This is simply a more detailed, personal version of the basic worry about impersonation: “If robots can freely pose as humans the scope for mischief is incredible; ranging from scam calls to automated hoaxes” (Vincent, 2018). While critics differ about whether industry self-regulation or legislation will be the way to address technologies such as Duplex, the basic question remains: do we create more ethical conundrums by making our AI systems sound like humans, or is this another streamlining of our everyday lives that we will simply have to get used to over time?

Discussion Questions:

  1. What are the ethical values in conflict in this debate over human-sounding AI systems?
  2. Do you think the Duplex AI system is deceptive? Is this use harmful? If it’s one and not the other, does this make it less problematic?
  3. Do the ethical concerns disappear after Google added a verbalized notice that the call was being made by an AI program?
  4. How should companies develop and implement AI systems like Duplex? What limitations should they install in these systems?
  5. Do you envision a day when we no longer assume that human-sounding voices come from a specific living human interacting with us? Does this matter now?

Further Information:

Bohn, Dieter. “Google Duplex really works and testing begins this summer.” The Verge, 27 June 2018. Available at: https://www.theverge.com/2018/6/27/17508728/google-duplex-assistant-reservations-demo

McMullan, Thomas. “Google Duplex: Google responds to ethical concerns about AI masquerading as humans.” ALPHR, 11 May 2018. Available at: https://www.alphr.com/google/1009290/google-duplex-assistant-io-ai-impersonating-humans

Meyer, David. “Google Should Have Thought About Duplex’s Ethical Issues Before Showing It Off.” Fortune, 11 May 2018. Available at: https://www.fortune.com/2018/05/11/google-duplex-virtual-assistant-ethical-issues-ai-machine-learning/

Vincent, James. “Google’s AI sounds like a human on the phone — should we be worried?” The Verge, 9 May 2018. Available at: https://www.theverge.com/2018/5/9/17334658/google-ai-phone-call-assistant-duplex-ethical-social-implications

Authors:

Sabrina Stoffels & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
September 24, 2018

www.mediaethicsinitiative.org



No Comment

CASE STUDY: Online Comment Sections and Democratic Discourse



The online world seems defined by interaction. We converse with known and unknown others on social media sites, blogs, and even at the bottom of stories on news sites. For journalism, this interaction is fairly new. In the early 2000s, only about 8 to 30 percent of news sites had comment sections. About 10 years later, that number rose to 85 percent. As of 2010, Pew reported that 32 percent of Internet users had posted a comment on an online news article (Stroud et al., 2014). Newspapers and news websites realized the value of online comment sections: they can facilitate reader interaction and the sharing of opinions about journalistic content. But having the ability to comment doesn’t necessarily mean that readers will leave constructive or thoughtful comments; sites such as NPR, Popular Science, VICE News, MSN, and the Guardian decided to abandon comment sections altogether. Many cited the hateful, irrelevant, or crude content that so often populates comment sections as the primary reason for eliminating them.

In 2013, Popular Science got rid of its comment sections due to “trolls and spambots” who overwhelmed those who were actually “committed to fostering intellectual debate.” Some evidence seemed to show that uncivil comments can “skew a reader’s perception” or even change a reader’s mind altogether about the scientific information presented in the article. As Natalie Stroud and co-authors (2014) argue, “instead of being a forum for learning and discovery, comment sections can devolve into a dark cave of name-calling and ad hominem attacks.” Following Popular Science, NPR announced its decision to eliminate comment sections in 2016. The network explained that the comment sections attached to its stories were “not providing a useful experience for the vast majority of [their] users” (Montgomery, 2016). Comments could be moderated, but this slows down discourse and increases costs; it also puts a news company in the uncomfortable position of judging and potentially censoring certain opinions. Eliminating comment sections avoids the worries and costs of selecting which comments are worthwhile to post, while avoiding the distractions and harms of uncivil, irrelevant, or hateful posts.

Instances such as these evoked both support and condemnation from those interested in creating more vibrant journalistic practices. Some worried that the hateful and uncivil actions of a small percentage of users were dictating the communication opportunities of the majority of news audiences. If the interaction of citizens through reason-based debate and discussion is vital for a democracy’s flourishing, removing the immediate space for such discussion on a specific newsworthy story is seen as problematic. Additionally, some may argue that the ability to put up with or tolerate disagreement—or, in the worst case, hateful opinions—is an important skill for democratic citizens living in a community that often has to live with difference and disagreement among its members. Disabling the airing of such strong and controversial views may also discourage this habit in citizens, especially when connected with the pro-democratic institution of journalism.

Others applaud the closing down of comment sections that have become a magnet for trolls seeking to provoke others to no real-world end. They point out the harm that giving a platform to extreme and hateful viewpoints creates; clearly, a democracy cannot be totally free of disagreement or hateful views, but should it place a spotlight on these by allowing any and all to comment on the news of the day? If relevant comments get overwhelmed by comments filled with irrelevant, false, or even derogatory content (often stoked by the ability to post anonymously), the real conversational value of comment sections disappears. Beyond this, newspapers worry about what a hostile and vitriolic comment section may do to their image as an objective news source.

The supporters of eliminating comment sections place the hope for civil discourse in other, more controlled, contexts; for them, the cost to democratic community and to the information-conveying function of the news is too high. Alternatively, supporters of comment sections see these areas of sometimes-wild discourse as a vital part of democracy. Perhaps one can clean up the comments—by using software that requires individuals to log in with social media accounts—but this may raise the cost of risky or unpopular comments too high and stifle speech that would otherwise be uttered if protected by the veil of anonymity. How free should our discussions be on websites that offer us the news we need to be a flourishing democracy?

Discussion Questions:

  1. Are comment sections integral for the functioning of digital journalism in a democracy? Why or why not?
  2. Is eliminating comment sections the right decision when they become targets of trolling, hateful comments, irrelevant discussion, or personal attacks?
  3. How ought we to react to comments we judge as hateful or vile? Should we respond to them, ignore them, or find a way to get them removed from the discussion section?
  4. What are the ethical obligations of a citizen of a democratic state to others who hold differing views? What are the ethical obligations of news corporations and social media platforms to those who hold unpopular or even hateful views?

Further Information:

Scott Montgomery, “Beyond Comments: Finding Better Ways To Connect With You.” National Public Radio, August 17, 2016. Available at: https://www.npr.org/sections/npr-extra/2016/08/17/490208179/beyond-comments-finding-better-ways-to-connect-with-you

Suzanne LaBarre, “Why We’re Shutting Off Our Comments.” Popular Science, September 24, 2013. Available at: https://www.popsci.com/science/article/2013-09/why-were-shutting-our-comments

Clothilde Goujard, “Why news websites are closing their comments sections.” Medium: Global Editors Network, September 8, 2016. Available at: https://medium.com/global-editors-network/why-news-websites-are-closing-their-comments-sections-ea31139c469d

Maranda Jones, “Why Traditional Comment Sections Won’t Work in 2018.” SquareOffs, March 29, 2018. Available at: https://blog.squareoffs.com/why-traditional-comment-sections-wont-work-in-2018

Natalie Jomini Stroud, Ashley Muddiman, Joshua Scacco, Alex Curry, and Cynthia Peacock, “Journalist Involvement in Comment Sections.” The Engaging News Project, September 10, 2014. Available at: https://mediaengagement.org/wp-content/uploads/2014/04/ENP_Comments_Report.pdf

Authors:

Bailey Sebastian & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
August 31, 2018

www.mediaethicsinitiative.org


Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. Please email us and let us know if you found them useful! For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.
