
Misinformation, Social Media, and the Price of Political Persuasion

CASE STUDY: Examining Twitter’s Decision to Ban Purchased Political Advertisements

Case Study PDF | Additional Case Studies



Kon Karampelas / Unsplash / Modified

Social media platforms such as Facebook and Twitter are increasingly used to promote specific causes or political candidates, influence voting patterns, and target individuals based on their political beliefs (Conger, 2019). Social media's impact on politics can be seen in paid political advertising, where organizations and campaigns pay to spread their messages across multiple networks. In crafting finely targeted messages, campaigns sometimes engage in practices such as buying or selling users' data for targeted advertising (Halpern, 2019).

These micro-targeted ads are often a vehicle for fake news or misinformation. Twitter and Facebook both revealed that Russian operatives used their platforms in attempts to influence the outcome of the 2016 U.S. presidential election. In 2017, Facebook turned over thousands of Russian-bought ads to the U.S. Congress. The ads showed that "operatives worked off evolving lists of racial, religious, political and economic themes. They used these to create pages, write posts and craft ads that would appear in users' news feeds — with the apparent goal of appealing to one audience and alienating another" (Entous et al., 2017). All of this was enabled by the targeting power and data collection practices of social media platforms that financially depend on selling advertising to paying individuals and companies.

In response to growing concerns over the misuse of online political ads, Twitter CEO Jack Dorsey announced that his platform would ban all political advertisements starting in November 2019 (Conger, 2019). The move is an effort to curtail paid political speech without affecting political discourse in general. "The reach of political messages should be earned, not bought," Dorsey wrote in a series of Tweets. Dorsey laid out the challenges he thinks paid political advertisements present to civic discourse: over-dependence on technological optimization, the spread of misinformation, and the promotion of manipulated media such as "deep fake" videos (Conger, 2019). Another worrisome example is the "bots," or automated social media accounts, that were created during the 2016 presidential election to inflate social media traffic around specific presidential campaigns and influence "the democratic process behind the scenes" (Guilbeault & Woolley, 2016). These programs magnified the reach of misinformation that seemed to run rampant over social media during the campaign.

Twitter's restriction on political advertising is not absolute, however. Candidates and users can still post videos of campaign ads, as long as they do not pay Twitter to promote the ads and enhance their audience. For example, 2020 presidential candidate Joe Biden released a viral campaign ad in December 2019 critiquing President Donald Trump's reputation with foreign leaders (Chiu, 2019). Because the Biden campaign posted the video to Biden's account and did not pay Twitter to promote it through one of the site's paid promotion products, the campaign video was permitted and could be shared by other users. The platform also announced that it will still allow some paid issue-based or cause-based ads pertaining to subjects such as the economy, the environment, and other social causes. However, "candidates, elected officials and political parties will be banned from buying any ads at all, including ones about issues or causes" (Wagner, 2019).

Twitter's decision to ban all paid political advertising is not celebrated by all parties. Some worry that social media platforms may go too far in the search for accuracy and an uncertain ideal of truth in political discourse in their spaces. Facebook's approach to paid advertising is diametrically opposed to Twitter's: "In Zuckerberg's view, Facebook, though a private company, is the public square where such ideas can be debated," writes Sue Halpern, who covers politics and technology for The New Yorker (Halpern, 2019). Facebook has indicated that it will continue to allow politicians to post or purchase ads, even if they contain content that some may judge false or misleading. Facebook CEO Mark Zuckerberg's argument is that, as a tool for freedom of expression, Facebook should not censor political ads or act as an arbiter of truth. Instead of getting rid of political ads, the platform said it will focus on combating micro-targeting, where "campaigns tap Facebook's vast caches of data to reach specific audiences with pinpoint accuracy, going after voters in certain neighborhoods, jobs and age ranges, or even serving up ads only to fans of certain television shows or sports teams" (Scola, 2019).

Google, owner of the video-sharing platform YouTube, also made commitments to reduce targeted political ads, pledging that political advertisers "will only be able to target ads based on a user's age, gender, and postcode." However, the tech company's policy for removing ads is still murky: "In the policy update, Google focused on misleading ads related to voter suppression and election integrity, not claims targeting candidates" (O'Sullivan & Fung, 2019). Still, some maintain that truth in political advertising is a worthy and attainable ideal. Criticizing Facebook's stance, U.S. Representative Alexandria Ocasio-Cortez voiced her support for paid advertising bans, saying, "If a company cannot or does not wish to run basic fact-checking on paid political advertising, then they should not run paid political ads at all" (Conger, 2019).

Since Twitter and other social media platforms are companies and not government entities, they are not strictly governed by the First Amendment’s protections of speech and expression (Hudson, 2018). The debate over curtailing paid political advertising on social media brings up a related normative issue: how much free expression or speech should anyone—including social media platforms—foster or encourage in a democratic community, even if the messages are judged as untrue or skewed by many, or if they are connected to monetary interests? The challenge remains: who will decide what constitutes unacceptably false and misleading ads, whose monetary influence or interests are unallowable, and what constitutes allowable and effective partisan attempts at persuasion?

Discussion Questions:

  1. Should paid political advertisements be banned from social media? Should all political ads, paid or freely posted, be forbidden? Why or why not?
  2. What could be possible alternatives to Twitter and others banning paid political advertisements?
  3. Do you believe that focusing on combating microtargeting is a better approach than banning paid political advertisements?
  4. What is ethically worrisome about microtargeting? Do you think these concerns would also extend to sophisticated campaigns run by traditional advertising agencies, or even to public health campaigns targeting specific behaviors?
  5. Might there be ethically worthy values to microtargeting political or commercial messages?
  6. Is curtailing certain kinds of political speech on social media good or bad for democracy? Explain your answer, and any distinctions you wish to draw.

Further Information:

Chiu, A. “‘The world is laughing at President Trump’: Joe Biden highlights viral NATO video in campaign ad.” The Washington Post, December 5, 2019. Available at: https://www.washingtonpost.com/nation/2019/12/05/joe-biden-campaign-ad-laughing-trump-trudeau-macron-nato/

Conger, K. “Twitter Will Ban All Political Ads, C.E.O. Jack Dorsey Says.” The New York Times, October 30, 2019. Available at: https://www.nytimes.com/2019/10/30/technology/twitter-political-ads-ban.html

Entous, A., Timberg, C., & Dwoskin, E. “Russian operatives used Facebook ads to exploit America’s racial and religious divisions.” The Washington Post, September 25, 2017. Available at: https://www.washingtonpost.com/business/technology/russian-operatives-used-facebook-ads-to-exploit-divisions-over-black-political-activism-and-muslims/2017/09/25/4a011242-a21b-11e7-ade1-76d061d56efa_story.html

Guilbeault, D. & Woolley, S. “How Twitter Bots Are Shaping the Election.” The Atlantic, November 1, 2016. Available at: https://www.theatlantic.com/technology/archive/2016/11/election-bots/506072/

Halpern, S. “The Problem of Political Advertising on Social Media.” The New Yorker, October 24, 2019. Available at: https://www.newyorker.com/tech/annals-of-technology/the-problem-of-political-advertising-on-social-media

Hudson, D. “In the Age of Social Media, Expand the Reach of the First Amendment.” Human Rights Magazine, October 20, 2018. Available at: https://www.americanbar.org/groups/crsj/publications/human_rights_magazine_home/the-ongoing-challenge-to-define-free-speech/in-the-age-of-socia-media-first-amendment/

O’Sullivan, D. & Fung, B. “Google’s updated ad policy will still allow politicians to run false ads.” CNN, November 20, 2019. Available at: https://www.cnn.com/2019/11/20/tech/google-political-ads-update/index.html

Roose, K. “Digital Hype Aside, Report Says Political Campaigns Are Mostly Analog Affairs.” The New York Times, March 21, 2019. Available at: https://www.nytimes.com/2019/03/21/business/political-campaigns-digital-advertising.html

Wagner, K. “Twitter’s Political Ad Ban Will Exempt Some ‘Issue’ Ads.” Bloomberg, November 15, 2019. Available at: https://www.bloomberg.com/news/articles/2019-11-15/twitter-s-political-ad-ban-will-exempt-some-issue-ads

Authors:

Allyson Waller & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
March 25, 2020


This case study is supported by funding from the John S. and James L. Knight Foundation. Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the Center for Media Engagement. They can be used in unmodified PDF form for classroom or educational settings. For use in publications such as textbooks, readers, and other works, please contact the Center for Media Engagement at sstroud (at) austin.utexas.edu.

 

Press Freedom and Alternative Facts

The Center for Media Engagement and Media Ethics Initiative Present:


Press Freedom in the Age of Alternative Facts

David McCraw
Senior VP & Deputy General Counsel
The New York Times

March 13, 2020 (Friday) | 1:30PM-2:30PM | Belo Center for New Media (BMC) 5.208


David McCraw / Earl Wilson

What is the role of journalism and the press in a time when the truth is malleable, fake news is on the rise, and journalistic freedom is being called into question? How can our democratic society find real and sustainable solutions to our problems of disinformation, political polarization, and new media technologies? In this talk, David McCraw will draw on his experiences as a lawyer and as Deputy General Counsel for The New York Times to explore the ethical challenges awaiting journalists and the American public at the intersection of law, ethics, and technology.

David McCraw is Deputy General Counsel of The New York Times Company and serves as the company’s principal newsroom lawyer. He is the author of the recent book “Truth in Our Times: Inside the Fight for Press Freedom in the Age of Alternative Facts,” a first-person account of the legal battles that helped shape The New York Times’s coverage of Donald Trump, Harvey Weinstein, conflicts abroad, and Washington’s divisive politics. He has been at The Times since 2002. He is a visiting lecturer at Harvard Law School and an adjunct professor at the NYU Law School. He has done pro bono work in support of free expression in Yemen, Russia, Montenegro, Bahrain, and other countries around the world.

This talk is co-sponsored by the LBJ School of Public Affairs and the UT Ethics Project. The Media Ethics Initiative is part of the Center for Media Engagement at the University of Texas at Austin. Follow Media Ethics Initiative and Center for Media Engagement on Facebook for more information on future events. This presentation will be introduced by Rebecca Taylor (UT Austin Ethics Project).

This event is free and open to the public.

Parking Information (nearest garages: SAG and TSG) can be found at: https://parking.utexas.edu/parking/ut-parking-garages


 

The Next Wave of Disinformation

The Center for Media Engagement and Media Ethics Initiative Present:


The Next Wave of Disinformation

Dr. Samuel Woolley
School of Journalism
University of Texas at Austin

February 27, 2020 (Thursday) | 3:30PM-4:30PM | CMA 5.136


Online disinformation stormed our political process in 2016 and has only worsened since. Yet as Samuel Woolley shows in his new book The Reality Game, it may pale in comparison to what's to come: humanlike automated voice systems, machine learning, "deepfake" AI-edited videos and images, interactive memes, virtual reality, and more. These technologies have the power not just to manipulate our politics, but to make us doubt our eyes and ears and even our feelings. Woolley describes the profound impact these technologies will have on our lives, with an eye toward how each new invention built without regard for its consequences edges us further into digital authoritarianism. In response, Woolley argues for a new culture of innovation, one built around accountability and transparency.

Dr. Samuel Woolley is an Assistant Professor in the School of Journalism at the University of Texas at Austin. As the Program Director of Propaganda Research at the Center for Media Engagement, he studies how automated online tools such as bots and algorithms are used to enable both democracy and civic control. Woolley’s latest book, The Reality Game: How the Next Wave of Technology Will Break the Truth (PublicAffairs), came out in January 2020. His academic work has appeared in a number of peer-reviewed journals and edited volumes. He has published work on politics and social media in venues including Wired, Motherboard, TechCrunch, Slate, the Guardian, and the Atlantic.

The Media Ethics Initiative is part of the Center for Media Engagement at the University of Texas at Austin. Follow Media Ethics Initiative and Center for Media Engagement on Facebook for more information.

Media Ethics Initiative events are open and free to the public.


 

How Vigilant Should Cable News Be About Political Messages?

CASE STUDY: CNN’s Rejection of Trump’s Political Advertisements

Case Study PDF | Additional Case Studies



YouTube.com / Modified

In October 2019, CNN declined to air two 30-second paid advertisements from President Donald Trump’s re-election campaign. The two advertisements, titled “Biden Corruption” and “Coup,” came in the aftermath of Nancy Pelosi, Speaker of the U.S. House of Representatives, announcing the launch of a formal impeachment inquiry against Trump.

The inquiry was the result of revelations about a July 2019 phone call between President Donald Trump and Ukrainian President Volodymyr Zelensky, in which Trump insinuated that the Ukrainian government should investigate former Vice President Joe Biden and his son Hunter Biden. Hunter Biden worked for a Ukrainian energy company whose owner was being investigated by Ukrainian prosecutor general Viktor Shokin (Vogel, 2019). In 2016, Shokin was ousted as prosecutor general in light of accusations of corruption (Kramer, 2016). Trump and his supporters argue that Biden used his power as vice president to protect his son's employer and influence Shokin's dismissal. The New York Times reports, however, that "no evidence has surfaced that the former vice president intentionally tried to help his son by pressing for the prosecutor's dismissal" (Vogel, 2019).

It was also revealed that, days before the July 2019 phone call, Trump directed his chief of staff "to place a hold on about $400 million in military aid for Ukraine" (Przybyla & Edelman, 2019), leading some to believe the president had issued a quid pro quo in order to tarnish a political rival. The pro-Trump advertisements that CNN refused to air attack network journalists such as Don Lemon and Chris Cuomo, calling them "media lapdogs," label the House impeachment inquiry a "coup," and accuse Joe Biden of promising $1 billion to Ukraine if Shokin was ousted (Grynbaum & Hsu, 2019). CNN rejected the advertisements on the grounds that they "contained inaccuracies and unfairly attacked the network's journalists" and stated that one ad in particular made "assertions that have been proven demonstrably false" (Grynbaum & Hsu, 2019).

Despite partisan pushback, the network's move is not an unusual one. Unlike broadcast channels such as NBC, CBS, and ABC, cable networks are not obliged to follow Federal Communications Commission regulations pertaining to political advertisements. Because cable networks do not broadcast over "public" airwaves (Tompkins, 2019), they have more latitude to choose which political advertisements they air. Broadcast channels, by contrast, are required "to air ads that come from legally qualified political candidates, according to the FCC" (Vranica & Horwitz, 2019), even if those ads are laden with what the channel might consider spin, inaccuracies, and fabrications.

CNN wasn't the only network that decided not to air the two Trump advertisements. NBCUniversal pulled the "Biden Corruption" ad after it aired once on the company's cable network channels MSNBC and Bravo (Vranica & Horwitz, 2019). The controversy over these advertisements leads many to ask: Should national cable news networks have the power to pick and choose the political advertisements their viewers are exposed to? For those who agree that news networks should be able to reject political ads, one argument is that rejecting ads with false information protects voters from believing and spreading misinformation. Despite the rise of social media, television has remained the main source of election news for a majority of people in the United States. A Pew Research Center survey found that during the 2016 presidential election, 24% of respondents said they learned about the election through cable news, followed by social media and local television (Gottfried et al., 2016). CNN's decision could be seen as a way to protect the public from an ad that entangles political persuasion with unconfirmed accusations.

Objections to CNN's decision boiled down to two points: partisanship and free speech. Trump has denounced CNN on multiple occasions, labeling the network the "fake news media" and accusing it of partisan bias in favor of Democrats (Samuels, 2019). Tim Murtaugh, communications director for the Trump campaign, argued that the campaign's "Biden Corruption" ad was "entirely accurate and was reviewed by counsel" and that CNN was "protecting Joe Biden in their programming" (Grynbaum & Hsu, 2019). The notion that CNN's rejection was a partisan ploy also connects to fears about the suppression of political speech. Facebook's decision to allow the "Biden Corruption" ad on its social media platform was based on a "fundamental belief in free expression, respect for the democratic process, and the belief that, in mature democracies with a free press, political speech is already arguably the most scrutinized speech there is" (Lima, 2019). Those who supported access to the advertisement seem concerned that when political advertisements are censored and viewers are not equitably presented with arguments across multiple media platforms, democracy suffers.

As the 2020 presidential election increasingly dominates news media, more controversial advertisements are likely to be released to the public for mass consumption on radio, print, social media, and television. Some might perceive an advertisement as well-argued and appropriately passionate, whereas others might accuse the same message of containing spin, emotional manipulation, or downright falsehoods. Outlets like CNN will continue to face a choice: should they allow any kind of political advertisement on their channels, leaving conclusions and judgments about veracity to viewers, or should they reject advertisements they believe contain misinformation in an effort to shield viewers' political opinions from what they judge as abuse? How much power should a cable network wield in influencing, directly or indirectly, its audience's exposure to a candidate's message?

Discussion Questions:

  1. What are the ethical implications of CNN's refusal to air Trump's two advertisements on its network? What values are in conflict?
  2. How else could CNN have handled the situation, short of refusing to air the advertisements?
  3. Facebook said its decision to keep Trump’s two advertisements on its platform was based on the “belief in free expression” and “respect for the democratic process.” In what ways, if any, did CNN’s decision corroborate or contradict these beliefs?
  4. How does partisanship play a role in this scenario? Does partisanship affect the ways audience members consume media?
  5. Should news outlets be in the habit of fact-checking advertisements before they accept them? What is workable or problematic about this proposal?

Further Information:

Gottfried, J., Barthel, M., Shearer, E., & Mitchell, A. “The 2016 Presidential Campaign – a News Event That’s Hard to Miss.” Pew Research Center, February 4, 2016. Available at: https://www.journalism.org/2016/02/04/the-2016-presidential-campaign-a-news-event-thats-hard-to-miss/

Grynbaum, M. & Hsu, T. “CNN Rejects 2 Trump Campaign Ads, Citing Inaccuracies.” The New York Times, Oct. 3, 2019. Available at: https://www.nytimes.com/2019/10/03/business/media/cnn-trump-campaign-ad.html

Kramer, A. “Ukraine Ousts Viktor Shokin, Top Prosecutor, and Political Stability Hangs in the Balance.” The New York Times, March 29, 2016. Available at: https://www.nytimes.com/2016/03/30/world/europe/political-stability-in-the-balance-as-ukraine-ousts-top-prosecutor.html

Lima, C. “How Trump is winning the fight on ‘baseless’ ads.” POLITICO, Oct. 11, 2019. Available at: https://www.politico.com/news/2019/10/11/trump-joe-biden-ad-social-media-misinformation-044267

Przybyla, H. & Edelman, A. “Nancy Pelosi announces formal impeachment inquiry of Trump.” NBC News, September 24, 2019. Available at: https://www.nbcnews.com/politics/trump-impeachment-inquiry/pelosi-announce-formal-impeachment-inquiry-trump-n1058251

Samuels, B. “Trump campaign threatens to sue CNN, citing Project Veritas videos.” The Hill, October 18, 2019. Available at: https://thehill.com/homenews/campaign/466473-trump-campaign-threatens-to-sue-cnn-citing-project-veritas-videos

Tompkins, A. “Do the networks have to give equal time? In a word, no.” Poynter, January 8, 2019. Available at: https://www.poynter.org/ethics-trust/2019/do-the-networks-have-to-give-equal-time-in-a-word-no/

Vogel, K. “Trump, Biden and Ukraine: Sorting Out the Accusations.” The New York Times, September 22, 2019. Available at: https://www.nytimes.com/2019/09/22/us/politics/biden-ukraine-trump.html

Vranica, S. & Horwitz, J. “NBCU Cable Networks Refuse to Air Trump Campaign Ad Aimed at Joe Biden.” The Wall Street Journal, Oct. 10, 2019. Available at: https://www.wsj.com/articles/nbcu-refuses-to-air-trump-campaign-ad-aimed-at-joe-biden-11570746377

Authors:

Allyson Waller & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
February 3, 2020


Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the Center for Media Engagement. They can be used in unmodified PDF form without permission for classroom or educational uses. Please email us at sstroud (at) austin.utexas.edu and let us know if you found them useful! For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

 

Fake News and Real Tensions

CASE STUDY: The Impacts of Misinformation in South Asia

Case Study PDF| Additional Case Studies


India and Pakistan are two nuclear powers locked in a decades-long border conflict with spurts of cross-national violence ranging from terrorist operations to full-blown war. In February 2019, a suicide bombing by a Pakistan-based terrorist organization killed 40 paramilitary police officers in Kashmir. Two weeks after the attack, India launched retaliatory action against what it claimed was a terrorist camp within the rival state’s borders. Pakistan then shot down an Indian fighter plane and captured the pilot in response. The pilot was later returned to India.

Running through the rising tensions and violence of this episode was a relatively new element of the contemporary Indo-Pakistani conflict: fake news. Both national governments, news media in both countries, and social media accounts operated by citizens and trolls alike engaged in the widespread and unchecked dissemination of misinformation. The fake news included inflated body counts, mislabeled photographs from previous conflicts, and manufactured gore, such as an image of a "bucket filled with mutilated body parts, mislabeled as the remains of a dead Indian soldier" (Bagri, 2019).

According to Govindraj Ethiraj, a journalist for the fact-checking site Boom, "the wave of misinformation after the Pulwama attack was driven by inflamed emotions, overanxious media on all sides, [and] the desire to use this as a political weapon" (Bagri, 2019). The influx of fake news was made worse because "unfortunately mainstream media organizations both in India and Pakistan have failed to play their roles responsibly," said Ashok Swain of Uppsala University, describing the back and forth between state agencies and media organizations (Thaker, 2019). Speaking to the platform's role in the explosion of fake news, a Twitter spokesperson said that "everyone has a role to play in ensuring misinformation doesn't spread on the internet" (Phartiyal, 2019). Regardless of who is responsible for the misinformation, Pratik Sinha, co-founder of Alt News, argues, "during such a time, when people's sensitivities are involved, people are more vulnerable and gullible" (Bagri, 2019).

However, this gullibility may be advantageous in preventing the two nations from going to war. Sadanand Dhume of the American Enterprise Institute explains that "the fact that both governments have effectively been able to create these information bubbles" where "everybody's willing to believe their own version … means that both sides can declare victory to their people and go home" (Bagri, 2019). Of course, this is all taking place in a climate where misinformation has "led to mass beatings and mob lynchings" among communities in South Asia (Phartiyal, 2019). It is then unsurprising that a spokesperson for the Central Reserve Police Force said fake news during such a situation "had the potential to create communal tension and lead to violence" (Bagri, 2019).

At the same time, the identification of fake news is a war in and of itself. Those fighting fake news in South Asia have even been described as “a line of defense for this fifth generation warfare” by Shahzad Ahmed of “Bytes for All,” a Pakistani digital rights group (Jorgic & Pal, 2019). But the best outcome of that war on the Indian subcontinent remains unclear. Despite the risks of misinformation, Dhume holds firm on the positive side effects of certain kinds of fake news. He says, “paradoxically, the over-zealous Indian media and cowed Pakistani media may help prevent escalation of conflict” because “Indian TV is happy to run giddy stories about ‘hundreds’ of terrorists killed” and “Pakistani journalists won’t question their army’s claim that nothing much was hit,” meaning that “everyone gets to save face, and in south Asia face matters—a lot” (Thaker, 2019). If fake news involves illusion, might there be some illusions that are helpful in South Asia’s dreams for peace—or do they all eventually lead India and Pakistan to wake up at war? 

Discussion Questions: 

  1. What are the challenges of defining "fake news"? What values are in conflict in this case?
  2. Is it ever ethically acceptable to spread misinformation? What if one spreads it unknowingly?
  3. Who is responsible for ensuring the information environment is as accurate as possible? To what lengths must they go to do this?
  4. What conflicts may arise if social media companies or governments attempt to control the spread of fake news on social media?

Further Information:

Drazen Jorgic and Alasdair Pal, “Facebook, Twitter sucked into India-Pakistan information war.” Reuters, April 2, 2019. Available at: https://www.reuters.com/article/us-india-pakistan-socialmedia/facebook-twitter-sucked-into-india-pakistan-information-war-idUSKCN1RE18N

Sankalp Phartiyal, “Social media fake news fans tension between India and Pakistan.” Reuters, February 28, 2019. Available at: https://www.reuters.com/article/us-india-kashmir-socialmedia/social-media-fake-news-fans-tension-between-india-and-pakistan-idUSKCN1QH1NY

Aria Thaker, “Indian media trumpeting about Pakistani fake news should look in the mirror first.” Quartz India, February 27, 2019. Available at: https://qz.com/india/1561178/media-in-india-and-pakistan-share-fake-news-amid-border-conflict/

Neha Thirani Bagri, “When India and Pakistan clashed, fake news won.” Los Angeles Times, March 15, 2019. Available at: https://www.latimes.com/world/la-fg-india-pakistan-fake-news-20190315-story.html

Authors:

Dakota Park-Ozee & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
October 15, 2019


This case is supported with funding from the South Asia Institute at the University of Texas at Austin. Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom or educational uses. Please email us and let us know if you found them useful! For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

Disrupting Journalism Ethics

The Center for Media Engagement and Media Ethics Initiative Present:


Journalism Ethics amid Irrational Publics: Disrupt and Redesign

Dr. Stephen J. A. Ward

Distinguished Lecturer, University of British Columbia
Founding Director, Center for Journalism Ethics at the University of Wisconsin

November 5 (Monday) | 3:00-4:30PM | BMC 5.208


Stephen J. A. Ward

How can journalism ethics meet the new challenges to democracy in the era of fake news and real political problems? In this engaging talk, prominent media ethicist Stephen J. A. Ward argues that journalism ethics must be radically rethought to defend democracy against irrational publics, demagogues, and extreme populism. In an age of intolerance and global disinformation, Ward recommends an engaged journalism which is neither neutral nor partisan. He proposes guidelines for covering extremism as part of a "macro-resistance" by society to a toxic public sphere.

Dr. Stephen J. A. Ward is an internationally recognized media ethicist, author, and educator living in Canada. He is a Distinguished Lecturer on Ethics at the University of British Columbia, founding director of the Center for Journalism Ethics at the University of Wisconsin, and director of the UBC School of Journalism. He was a war correspondent, foreign reporter, and newsroom manager for 14 years and has received a lifetime award for service to professional journalism in Canada. He is editor-in-chief of the forthcoming Springer Handbook for Global Media Ethics and was associate editor of the Journal of Media Ethics. Dr. Ward is the author of nine media ethics books, including two award-winning books, Radical Media Ethics and The Invention of Journalism Ethics. He is also the author of Global Journalism Ethics, Ethics and the Media, and Global Media Ethics: Problems and Perspectives. His two new books, Disrupting Journalism Ethics and Ethical Journalism in a Populist Age, were published in 2018.

The Media Ethics Initiative is part of the Center for Media Engagement at the University of Texas at Austin. Follow MEI and CME on Facebook for more information.

Media Ethics Initiative events are open and free to the public.

Co-sponsored by the University of Texas at Austin’s School of Journalism


 

Fake News, Fake Porn, and AI

CASE STUDY: “Deepfakes” and the Ethics of Faked Video Content

Case Study PDF | Additional Case Studies


The Internet has a way of both refining techniques and technologies by pushing them to their limits—and of bending them toward less altruistic uses. For instance, artificial intelligence is increasingly being used to push the boundaries of what appears to be reality in faked videos. The premise of the phenomenon is straightforward: use artificial intelligence to seamlessly graft the faces of people (usually celebrities or public figures) from authentic video onto bodies in other, pre-existing videos.


Photo: Geralt / CC0

While some uses of this technology can be beneficial or harmless, the potential for real damage is also present. This recent phenomenon, often called "Deepfakes," gained media attention after early adopters and programmers used it to place the faces of female celebrities onto the bodies of actresses in unrelated adult film videos. A celebrity therefore appears to be participating in a pornographic video even though, in reality, she has not done so. The actress Emma Watson was one of the first targets of this technology, finding her face cropped onto an explicit porn video without her consent. She is currently embroiled in a lawsuit filed against the producer of the faked video. While the Emma Watson case is still in progress, the difficulty of getting videos like these taken down cannot be overstated. Law professor Eric Goldman points out the difficulty of pursuing such cases. He notes that while defamation and slander laws may apply to Deepfake videos, there is no straightforward or clear legal path for getting the videos taken down, especially given their tendency to reappear once uploaded to the internet. While pornography is protected as a form of expression of its producer, Deepfake technology creates the possibility of producing adult films without the consent of those "acting" in them. Making matters more complex is the increasing availability of the technology: forums exist where users offer advice on making faked videos, and a downloadable phone app allows nearly anyone to make a Deepfake video using little more than a few celebrity images.

Part of the challenge presented by Deepfakes concerns a conflict between aesthetic values and issues of consent. Celebrities or targets of faked videos did not consent to be portrayed in this manner, a fact which has led prominent voices in the adult film industry to condemn Deepfakes. One adult film company executive characterized the problem with Deepfakes in a Variety article: "it's f[**]ed up. Everything we do … is built around the word consent. Deepfakes by definition runs contrary to consent." It is unwanted and potentially embarrassing to be placed in a realistic porn video in which one didn't actually participate. These concerns over consent are important, but Deepfakes muddies the waters by involving fictional creations and situations. Pornography, including fantasy satires based upon real-life figures such as the disgraced politician Anthony Weiner, is protected under the First Amendment as a type of expressive activity, regardless of whether those depicted or satirized approve of its ideas and activities. Nudity and fantasy situations play a range of roles in expressive activity, some in private contexts and some in public contexts. For instance, 2016 saw the installation of several unauthorized—and nude—statues of then-candidate Donald Trump across the United States. Whether or not we judge the message or use of these statues to be laudatory, they do seem to evoke aesthetic values of creativity and expression that conflict with a focus on consent to be depicted in a created (and possibly critical) artifact. Might Deepfakes, especially those of celebrities or public figures, ever be a legitimate form of aesthetic expression of their creators, in a similar way that a deeply offensive pornographic video is still a form of expression of its creators? Furthermore, not all Deepfakes are publicly exhibited and used in connection with their target's name, thereby removing most, if not all, of the public harm that would be created by their exhibition. When does private fantasy become a public problem?

Beyond their employment in fictional but realistic adult videos, the Deepfakes phenomenon raises a more politically concerning issue. Many worry that Deepfakes have the potential to damage the world's political climate through the spread of realistic faked video news. If seeing is believing, might our concerns about misinformation, propaganda, and fake news gain a new depth if all or part of the "news" item in question is a realistic video clip serving as evidence for some fictional claim? Law professors Robert Chesney and Danielle Citron consider a range of scenarios in which Deepfake technology could prove disastrous when utilized in fake news: "false audio might convincingly depict U.S. officials privately 'admitting' a plan to commit this or that outrage overseas, exquisitely timed to disrupt an important diplomatic initiative," or "a fake video might depict emergency officials 'announcing' an impending missile strike on Los Angeles or an emergent pandemic in New York, provoking panic and worse." Such uses of faked video could create compelling, and potentially harmful, viral stories with the capacity to travel quickly across social media. Yet, as with the licentious employments in forged adult footage, one can see the potential aesthetic value of Deepfakes as a form of expression, trolling, or satire in some political employments. The fairly crude "bad lip reading" videos of the recent past, which placed new audio into real videos for humorous effect, will soon give way to more realistic Deepfakes involving political and celebrity figures saying humorous, satirical, false, or frightening things. Given AI's advances and Deepfake technology's supercharging of how we can reimagine and realistically depict the world, how do we legally and ethically renegotiate the balance among the values of creative expression, the concerns over the consent of others, and our pursuit of truthful content?

Discussion Questions:

  1. Beyond the legal worries, what is the ethical problem with Deepfake videos? Does this problem change if the targeted individual is a public or private figure?
  2. Do your concerns about the ethics of Deepfake videos depend on their being made public rather than kept private by their creators?
  3. Do the ethical and legal concerns raised about Deepfakes matter for more traditional forms of art that use nude and non-nude depictions of public figures? Why or why not?
  4. How might artists use Deepfakes as part of their art? Can you envision ways that politicians and celebrities could be legitimately criticized through the creation of biting but fake videos?
  5. How would you balance the need to protect artists (and others’) interest in expressing their views with the public’s need for truthful information? In other words, how can we control the spread of video-based fake news without unduly infringing on art, satire, or even trolling?

Further Information:

Robert Chesney & Danielle Citron, “Deep fakes: A looming crisis for national security, democracy and privacy?” Lawfare, February 26, 2018. Available at: https://www.lawfareblog.com/deep-fakes-looming-crisis-national-security-democracy-and-privacy

Megan Farokhmanesh, “Is it legal to swap someone’s face into porn without consent?” The Verge, January 30, 2018. Available at: https://www.theverge.com/2018/1/30/16945494/deepfakes-porn-face-swap-legal

James Felton, “‘Deep fake’ videos could be used to influence future global politics, experts warn.” IFLScience, March 13, 2018. Available at: http://www.iflscience.com/technology/deep-fake-videos-could-be-used-to-influence-future-global-politics-experts-warn/

Janko Roettgers, “Porn producers offer to help Hollywood take down deepfake videos.” Variety, February 21, 2018. Available at: http://variety.com/2018/digital/news/deepfakes-porn-adult-industry-1202705749/

Authors:

James Hayden & Scott R. Stroud, Ph.D.
Media Ethics Initiative
University of Texas at Austin
March 21, 2018

www.mediaethicsinitiative.org


Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.
