Media Ethics Initiative


Tag Archives: AI

Changing Comments or Changing Minds?

CASE STUDY: Protesting Censorship in Pakistan through Algorithmic Alterations

Case Study PDF | Additional Case Studies


On April 17, 2016, the Daily Times—a major English-language newspaper in Pakistan—published an editorial on the nation's controversial blasphemy law. The law, according to Human Rights Watch, carries a mandatory death penalty for blaspheming against Islam. The editorial was published in the wake of renewed controversy over Aasia Bibi, the first woman ever convicted and sentenced to death under the blasphemy law; she was sentenced in November 2010 for an incident in 2009. To date, no person convicted of blasphemy has been executed, but 53 people have been killed in extra-judicial violence (Ijaz, 2016).

Given the controversial nature of the law and the intentionally provocative publication of the editorial, the Daily Times anticipated a deluge of comments from online readers, making the article ideal for launching the "Free My Voice" campaign. Most contemporary news outlets have websites, and the majority of story webpages include a comment feature where users can post feedback, opinions, or questions below a piece of reporting. While some sites moderate comments—deleting those deemed inappropriate or editing for clarity—it is not common practice to meaningfully alter a reader's message. Yet that is exactly what happened on the Daily Times editorial: the paper teamed up with ad agency Grey Singapore and the free speech organization ARTICLE 19 to launch "Free My Voice" as a protest against restrictions on press liberty.

When users tried to comment on the blasphemy editorial, an algorithm automatically reversed the meaning of their posts. For example, a comment written to say "the author is correct. The current laws in Pakistan are a distortion of Islam" would instead read "the author is wrong. The current laws in Pakistan aren't a distortion of Islam" (Smith, 2016). No matter how many times a comment was retyped, its meaning was reversed; readers could not leave a comment that accurately reflected their position. Eventually, potential posters were "led to a landing page to sign a petition or donate toward the Free My Voice campaign" (Hicks, 2016).
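How such real-time inversion might work can be conveyed with a deliberately naive sketch. Grey Singapore never published its actual algorithm, so the Python below is purely hypothetical: a toy antonym-substitution routine that flips the first polarity-bearing word in each sentence, just enough to illustrate the idea of programmatically reversing a comment's stance.

```python
# Hypothetical toy sketch of stance reversal in the spirit of "Free My Voice".
# The real algorithm was not published; genuine meaning reversal would require
# NLP far beyond this small word list.
import re

ANTONYMS = {
    "correct": "wrong", "wrong": "correct",
    "is": "isn't", "isn't": "is",
    "are": "aren't", "aren't": "are",
    "should": "shouldn't", "shouldn't": "should",
}

def reverse_meaning(comment: str) -> str:
    """Flip the apparent stance of each sentence by swapping the first
    polarity-bearing word it contains; leave everything else untouched."""
    sentences = re.split(r"(?<=[.!?])\s+", comment)
    flipped = []
    for sentence in sentences:
        words = sentence.split()
        for i, word in enumerate(words):
            key = word.lower().strip(".,!?")
            if key in ANTONYMS:
                words[i] = ANTONYMS[key] + word[len(key):]  # keep punctuation
                break  # one flip per sentence avoids double negation
        flipped.append(" ".join(words))
    return " ".join(flipped)

print(reverse_meaning("The author is correct. The current laws are a distortion of Islam."))
# -> The author isn't correct. The current laws aren't a distortion of Islam.
```

A reader typing into such a comment box would see only the flipped output no matter how many times the comment was resubmitted, which is precisely the experience the campaign engineered.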

According to the Daily Times, the "statements were altered, real-time, on the medium they would least expect censorship to happen—the comment box." They added that the point of the program was to make commenters "experience censorship first-hand to make them feel the frustration of what it is like to lose [a] fundamental right" (Smith, 2016). In a story published about the project, the Daily Times explained that censorship is common in Pakistan, which the International Federation of Journalists ranks as the fourth most dangerous country for journalists. The paper also deems the blasphemy law one of the "country's biggest threats to free speech" (Smith, 2016).

In addition to the blasphemy law, the project was a response to a newly approved Electronic Crimes Bill, which gave the Pakistani government greater, and potentially arbitrary, power to block the spread of information. Tahmina Rahman, Regional Director for ARTICLE 19 Bangladesh and South Asia, expressed hope that the "collaboration will demonstrate the pernicious nature of censorship that so often goes unseen but has untold consequences for society" (ARTICLE 19, 2016). Shehryar Taseer, the Daily Times publisher, justified the project, saying "we all have to take a stance against censorship," which is becoming "a global issue" (Smith, 2016).

The idea was to make citizens who might take their freedom of speech for granted experience the same censorship that stifles the press, motivating them to fight back on behalf of journalists. However, there is a potentially unacknowledged irony in censoring citizens engaged with the news in order to highlight the problem of censoring the press. Citizens are not being directly silenced—prevented from speaking—but censored through forced alteration of their words, words still attributed to them through the username and photo next to the posted comment. The project thus risks falling into the same ethical trap it claims to oppose, alienating citizens who might otherwise support it, especially in a climate where some comments, or their engineered opposites, may be politically controversial. The project could also put citizens in risky social or legal territory. "Free My Voice" potentially strips readers of their autonomy in an effort to regain journalistic integrity. While many would agree that the pursuit of a free press is admirable, some might worry about the ethical value of the tactics used by the Daily Times and its collaborators. To what extent can actual and unsuspecting readers be used in a news outlet's effort to bring public attention to common, but sometimes unnoticed, curtailments of media freedom?

Discussion Questions:

  1. Does the "Free My Voice" campaign harmfully limit discussion of the blasphemy law? What are potential worries about using an algorithm to change the meaning of reader comments?
  2. Should press organizations like the Daily Times protest censorship so directly if it risks further restrictions of their freedoms and ability to provide the public with information?
  3. What obligation, if any, do citizens have to donate, sign petitions, or otherwise participate in protests by journalists?
  4. Is censoring, or forcefully altering, citizen comments an ethical way to critique government censorship?
  5. Do acts of protest have to have the consent of all involved, or all affected by the protest?

Further Information:

ARTICLE 19, “Pakistan: ARTICLE 19 and Daily Times bring censorship home for online readers.” ARTICLE 19, April 17, 2016. Available at: https://www.article19.org/resources/pakistan-article-19-and-daily-times-bring-censorship-home-for-online-readers/

Robin Hicks, “Agency’s algorithm flips sentiment of comments in article about Pakistan’s blasphemy law.” Mumbrella Asia, May 30, 2016. Available at: https://www.mumbrella.asia/2016/05/grey-singapore-idea-for-pakistan-newspaper-inverts-sentiment-of-comments-for-press-freedom-message

Saroop Ijaz, “Facing the death penalty for blasphemy in Pakistan.” Human Rights Watch, October 12, 2016. Available at: https://www.hrw.org/news/2016/10/12/facing-death-penalty-blasphemy-pakistan

Sydney Smith, “Pakistani newspaper alters readers’ comments to make censorship point.” iMediaEthics, May 31, 2016. Available at: https://www.imediaethics.org/pakistani-newspaper-alters-readers-comments-make-censorship-point/

Authors:

Dakota Park-Ozee & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
February 20, 2020


This case is supported with funding from the South Asia Institute at the University of Texas at Austin. Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom or educational uses. Please email us and let us know if you found them useful! For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

Designing Ethical AI Technologies

The Center for Media Engagement and Media Ethics Initiative Present:


Good Systems: Designing Values-Driven AI Technologies Is Our Grand Challenge

Dr. Kenneth R. Fleischmann

Professor in the School of Information
University of Texas at Austin

September 24, 2019 (Tuesday) | 3:30-4:30 PM | CMA 5.136 (LBJ Room)


"Technology is neither good nor bad; nor is it neutral." This is the first law of technology, outlined by historian Melvin Kranzberg in 1985. It means that technology is good or bad only insofar as we perceive it that way through our own value systems. At the same time, because the people who design technology value some things more than others, their values influence their designs. Michael Crichton's Jurassic Park chaos theorist, Ian Malcolm, notes: "Your scientists were so preoccupied with whether or not they could, they didn't stop to think about if they should." These are the questions we have to ask now: Should we increasingly automate various aspects of society? How can we ensure that advances in AI are beneficial to humanity, not detrimental? How can we develop technology that makes life better for all of us, not just some? And what unintended consequences are we overlooking or ignoring by developing technology that can be manipulated and misused, from undermining elections to exacerbating racial inequality?

The Inaugural Chair of the Good Systems Grand Challenge, Ken Fleischmann, will present the eight-year research mission of Good Systems, as well as its educational and outreach activities. Specifically, he will discuss the upcoming Good Systems launch events and ways that faculty, researchers, staff, and students can become involved in the Good Systems Grand Challenge.

The Media Ethics Initiative is part of the Center for Media Engagement at the University of Texas at Austin. Follow Media Ethics Initiative and Center for Media Engagement on Facebook for more information. Media Ethics Initiative events are open and free to the public.


 

The Ethics of Computer-Generated Actors

CASE STUDY: The Ethical Challenges of CGI Actors in Films

Case Study | Additional Case Studies


Photo: Lucasfilm

Long-dead actors continue to achieve a sort of immortality in their films. Now a new controversy over dead actors is coming to life, driven by novel uses of visual effects and computer-generated imagery (CGI). Instead of simply using CGI to create stunning action sequences, gorgeous backdrops, and imaginary monsters, filmmakers have started to use its technological wonders to bring back actors from the grave. What ethical problems circle around the use of digital reincarnations in filmmaking?

The use of CGI to change the look of actors is nothing new. Many films have used CGI to digitally de-age actors with striking results (as in the Marvel films) or to create spectacular creatures with little physical reality (such as Gollum in The Lord of the Rings series). But what happens when CGI places an actor into a film entirely through the intervention of technology? A recent example of digital reincarnation in the film industry is Furious 7, in which Paul Walker had to be digitally recreated after his untimely death in the middle of the film's production. Walker's brothers stepped in to provide a physical form for the visual effects artists to complete Walker's character, and the results drew mixed reviews: some viewers found it "odd" to see a digitally recreated, deceased actor on screen. Many argued, however, that this was the best course of action to complete production and honor Paul Walker's work and character.

Other recent films have continued to bet on CGI to recreate characters on the silver screen. For instance, 2016's Rogue One: A Star Wars Story used advanced CGI techniques that hint at the ethical problems lying ahead for filmmakers. Peter Cushing first appeared as Grand Moff Tarkin in 1977's Star Wars: A New Hope. In the Star Wars timeline, the events of Rogue One lead directly into A New Hope, so the writers behind Rogue One felt compelled to include Grand Moff Tarkin as a key character in the events leading up to that film. There was one problem, however: Peter Cushing died in 1994. The producers ultimately decided to use CGI to digitally resurrect Cushing to reprise his role as the Imperial officer. Tarkin's appearance in the final cut sent shockwaves across the Star Wars fandom. Some defended the decision, claiming that "actors don't own characters" (The Tylt) and that keeping the character's appearance consistent across the fictional timeline enhanced the films' aesthetic. Others, like Catherine Shoard, were more critical. She condemned the film's risky choice, saying, "though Cushing's estate approved his use in Rogue One, I'm not convinced that if I had built up a formidable acting career, I'd then want to turn in a performance I had bupkis to do with." Rich Haridy of New Atlas was similarly critical, writing, "there is something inherently unnerving about watching such a perfect simulacrum of someone you know cannot exist."

This use of CGI to bring back dead actors and place them into films raises troubling questions about consent. Assuming that actors should appear only in films they choose to, how can we be assured that such post-mortem uses are consistent with the actor's wishes? Is gaining permission from the relatives of the deceased enough to use an actor's image or likeness? Additionally, CGI increases the possibility of bringing unwilling figures into a film. Many films have employed look-alikes to bring presidents or historical figures into a narrative; using CGI to insert exact likenesses of actors and celebrities does not seem that different from this tactic. This filmic use of CGI actors also extends our worries over "deepfakes" (AI-created fake videos) and falsified videos into the murkier realm of fictional products and narratives. While we like continuity in actors as a way to preserve our illusion of reality in films, what ethical pitfalls await us as we CGI the undead—or the unwilling—into our films and artworks?

Discussion Questions:

  1. What values are in conflict when filmmakers want to use CGI to place a deceased actor into a film?
  2. What is different about placing a currently living actor into a film through the use of CGI? How does the use of CGI differ from using realistic “look-alike” actors?
  3. What sort of limits would you place on the use of CGI versions of deceased actors? How would you prevent unethical use of deceased actors?
  4. How should society balance concerns with an actor’s (or celebrity’s) public image with an artist’s need to be creative with the tools at their disposal?
  5. What ethical questions would be raised by using CGI to insert “extras,” and not central characters, into a film?

Further Information:

Rich Haridy, "Star Wars: Rogue One and Hollywood's trip through the uncanny valley." New Atlas, December 19, 2016. Available at: https://newatlas.com/star-wars-rogue-one-uncanny-valley-hollywood/47008/

M. Langshaw, "8 Disturbing Times Actors Were Brought Back From The Dead By CGI." WhatCulture, August 2, 2017. Available at: http://whatculture.com/film/8-disturbing-times-actors-were-brought-back-from-the-dead-by-cgi

Catherine Shoard, "Peter Cushing is dead. Rogue One's resurrection is a digital indignity." The Guardian, December 21, 2016. Available at: https://www.theguardian.com/commentisfree/2016/dec/21/peter-cushing-rogue-one-resurrection-cgi

The Tylt, "Should Hollywood use CGI to replace dead actors in movies?" Available at: https://thetylt.com/entertainment/should-hollywood-use-cgi-to-replace-dead-actors-in-movies

Authors:

William Cuellar & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
February 12, 2019

www.mediaethicsinitiative.org


Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom or educational uses. Please email us and let us know if you found them useful! For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

Robots, Algorithms, and Digital Ethics

The Media Ethics Initiative Presents:


How to Survive the Robot Apocalypse

Dr. David J. Gunkel

Distinguished Teaching Professor | Department of Communication
Northern Illinois University

April 3, 2018 | Moody College of Communication | University of Texas at Austin



Dr. David J. Gunkel is an award-winning educator, scholar, and author, specializing in the study of information and communication technology with a focus on ethics. Formally educated in philosophy and media studies, he synthesizes in his teaching and research the hype of high technology with the rigor and insight of contemporary critical analysis. He is the author of over 50 scholarly journal articles and book chapters and has published seven books, including The Machine Question: Critical Perspectives on AI, Robots, and Ethics (MIT Press), Of Remixology: Ethics and Aesthetics after Remix (MIT Press), Heidegger and the Media (Polity), and Robot Rights (MIT Press). He is the managing editor and co-founder of the International Journal of Žižek Studies and co-editor of the Indiana University Press series in Digital Game Studies. He currently holds the position of Professor in the Department of Communication at Northern Illinois University, and his teaching has been recognized with numerous awards.


Fake News, Fake Porn, and AI

CASE STUDY: “Deepfakes” and the Ethics of Faked Video Content

Case Study PDF | Additional Case Studies


The Internet has a way of refining techniques and technologies by pushing them to their limits—and of bending them toward less-altruistic uses. For instance, artificial intelligence is increasingly being used to push the boundaries of what appears to be reality in faked videos. The premise of the phenomenon is straightforward: use artificial intelligence to seamlessly graft one person's face (usually a celebrity's or public figure's), learned from authentic images or video, onto someone else's body in a pre-existing video.


Photo: Geralt / CC0
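A brief technical aside helps make the premise concrete. Early Deepfake tools were widely reported to rely on autoencoders; the PyTorch sketch below is a minimal, hypothetical illustration of that idea: a single shared encoder paired with one decoder per identity. The layer sizes, dimensions, and training details are invented for brevity and do not reproduce any actual tool.

```python
# Conceptual sketch of the shared-encoder / per-identity-decoder autoencoder
# scheme reported to underlie early Deepfake face swaps. Architecture and
# dimensions are illustrative only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # shared latent "face code"
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training (omitted) reconstructs person A's faces through encoder+decoder_a
# and person B's faces through encoder+decoder_b, so the encoder learns a
# representation shared by both identities.
face_of_a = torch.rand(1, 3, 64, 64)      # stand-in for a real video frame
swapped = decoder_b(encoder(face_of_a))   # A's pose and expression, B's face
```

Because both identities pass through the same encoder, the latent code captures pose and expression in a shared way; decoding person A's code with person B's decoder renders B's face performing A's expression, which is the essence of the swap.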

While some uses of this technology can be beneficial or harmless, the potential for real damage is also present. The phenomenon, often called "Deepfakes," gained media attention after early adopters and programmers used it to place the faces of female celebrities onto the bodies of actresses in unrelated adult film videos. A celebrity therefore appears to be participating in a pornographic video even though, in reality, she has not done so. The actress Emma Watson was one of the first targets of this technology, finding her face grafted onto an explicit video without her consent; she is currently embroiled in a lawsuit against the producer of the faked video. While the Emma Watson case is still in progress, the difficulty of getting videos like these taken down cannot be overstated. Law professor Eric Goldman points out the difficulty of pursuing such cases. He notes that while defamation and slander laws may apply to Deepfake videos, there is no straightforward or clear legal path for getting such videos taken down, especially given their ability to reappear once uploaded to the internet. While pornography is generally protected as a form of expression of its producer, Deepfake technology creates the possibility of making adult films without the consent of those "acting" in them. Making matters more complex is the increasing ease with which the technology is available: forums exist where users offer advice on making faked videos, and a downloadable phone app lets virtually anyone make a Deepfake video using little more than a few celebrity images.

Part of the challenge presented by Deepfakes concerns a conflict between aesthetic values and issues of consent. Celebrities and other targets of faked videos did not consent to be portrayed in this manner, a fact which has led prominent voices in the adult film industry to condemn Deepfakes. One adult film company executive characterized the problem in a Variety article: "it's f[**]ed up. Everything we do … is built around the word consent. Deepfakes by definition runs contrary to consent." It is unwanted and potentially embarrassing to be placed in a realistic porn video in which one didn't actually participate. These concerns over consent are important, but Deepfakes muddy the waters by involving fictional creations and situations. Pornography, including fantasy satires based upon real-life figures such as the disgraced politician Anthony Weiner, is protected under the First Amendment as a type of expressive activity, regardless of whether those depicted or satirized approve of its ideas and activities. Nudity and fantasy situations play a range of roles in expressive activity, some in private contexts and some in public contexts. For instance, 2016 saw the installation of several unauthorized—and nude—statues of then-candidate Donald Trump across the United States. Whether or not we judge the message or use of these statues to be laudatory, they do seem to evoke aesthetic values of creativity and expression that conflict with a focus on consent to be depicted in a created (and possibly critical) artifact. Might Deepfakes, especially those of celebrities or public figures, ever be a legitimate form of aesthetic expression of their creators, in the same way that a deeply offensive pornographic video is still a form of expression of its creators? Furthermore, not all Deepfakes are publicly exhibited or used in connection with their target's name, removing much, if not all, of the public harm their exhibition would create. When does private fantasy become a public problem?

Beyond its employment in fictional but realistic adult videos, the Deepfakes phenomenon raises a more politically concerning issue. Many worry that Deepfakes could damage the world's political climate through the spread of realistic faked video news. If seeing is believing, might our concerns about misinformation, propaganda, and fake news gain a new depth when all or part of the "news" item in question is a realistic video clip serving as evidence for some fictional claim? Law professors Robert Chesney and Danielle Citron consider a range of scenarios in which Deepfake technology could prove disastrous when utilized in fake news: "false audio might convincingly depict U.S. officials privately 'admitting' a plan to commit this or that outrage overseas, exquisitely timed to disrupt an important diplomatic initiative," or "a fake video might depict emergency officials 'announcing' an impending missile strike on Los Angeles or an emergent pandemic in New York, provoking panic and worse." Such uses of faked video could create compelling, and potentially harmful, viral stories with the capacity to travel quickly across social media. Yet, as with its licentious employment in forged adult footage, one can see the potential aesthetic value of Deepfakes as a form of expression, trolling, or satire in some political employments. The fairly crude "bad lip reading" videos of the recent past, which placed new audio into real videos for humorous effect, will soon give way to more realistic Deepfakes of political and celebrity figures saying humorous, satirical, false, or frightening things. Given AI's advances and Deepfake technology's supercharging of how we can reimagine and realistically depict the world, how do we legally and ethically renegotiate the balance among the values of creative expression, the concerns over the consent of others, and our pursuit of truthful content?

Discussion Questions:

  1. Beyond the legal worries, what is the ethical problem with Deepfake videos? Does this problem change if the targeted individual is a public or private figure?
  2. Do your concerns about the ethics of Deepfakes videos depend upon them being made public, and not being kept private by their creator?
  3. Do the ethical and legal concerns raised concerning Deepfakes matter for more traditional forms of art that use nude and non-nude depictions of public figures? Why or why not?
  4. How might artists use Deepfakes as part of their art? Can you envision ways that politicians and celebrities could be legitimately criticized through the creation of biting but fake videos?
  5. How would you balance the need to protect artists (and others’) interest in expressing their views with the public’s need for truthful information? In other words, how can we control the spread of video-based fake news without unduly infringing on art, satire, or even trolling?

Further Information:

Robert Chesney & Danielle Citron, “Deep fakes: A looming crisis for national security, democracy and privacy?” Lawfare, February 26, 2018. Available at: https://www.lawfareblog.com/deep-fakes-looming-crisis-national-security-democracy-and-privacy

Megan Farokhmanesh, "Is it legal to swap someone's face into porn without consent?" The Verge, January 30, 2018. Available at: https://www.theverge.com/2018/1/30/16945494/deepfakes-porn-face-swap-legal

James Felton, "'Deep fake' videos could be used to influence future global politics, experts warn." IFLScience, March 13, 2018. Available at: http://www.iflscience.com/technology/deep-fake-videos-could-be-used-to-influence-future-global-politics-experts-warn/

Janko Roettgers, "Porn producers offer to help Hollywood take down deepfake videos." Variety, February 21, 2018. Available at: http://variety.com/2018/digital/news/deepfakes-porn-adult-industry-1202705749/

Authors:

James Hayden & Scott R. Stroud, Ph.D.
Media Ethics Initiative
University of Texas at Austin
March 21, 2018

www.mediaethicsinitiative.org


Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.
