Media Ethics Initiative


Can Artificial Intelligence Reprogram the Newsroom?

CASE STUDY: Trust, Transparency, and Ethics in Automated Journalism

Case Study PDF | Additional Case Studies


Could a computer program write engaging news stories? In the Reuters Institute's recent Technology Trends and Predictions report, 78% of the 200 digital leaders, editors, and CEOs surveyed said investing in artificial intelligence (AI) technologies would help secure the future of journalism (Newman, 2018). Exploring these new methods of reporting, however, has introduced a wide array of unforeseen ethical concerns for those already struggling to understand the complex dynamics between human journalists and computational work. Implementing automated storytelling in newsrooms presents journalists with questions of how to preserve and encourage accuracy and fairness in their reporting, as well as transparency with the audiences they serve.

 

Artificial intelligence in the newsroom has progressed from an idea to a reality. In 1998, computer scientist Sung-Min Lee predicted the use of artificial intelligence in the newsroom, whereby “robot agents” would work alongside, or sometimes in place of, human journalists (Latar, 2015). In 2010, Narrative Science became the first commercial venture to use artificial intelligence to turn data into narrative text. Automated Insights and others would follow Narrative Science in bringing Lee’s “robot agent” to life in the newsroom through automated storytelling. While today’s newsrooms use AI to streamline a variety of processes, from tracking breaking news and collecting and interpreting data to fact-checking online content and even running chatbots that suggest personalized content to users, the ability to automatically generate text and video stories has encouraged an industry-wide shift toward automated journalism, or “the process of using software or algorithms to automatically generate news stories without human intervention” (Graefe, 2016). Forbes, the New York Times, the Washington Post, ProPublica, and Bloomberg are just some of the newsrooms that now use AI in their news reporting. Heliograf, the Washington Post’s “in-house automated storytelling technology,” is just one of many examples of how newsrooms are using artificial intelligence to expand their coverage in beats like sports and finance that rely heavily on structured data, “allowing journalists to focus on in-depth reporting” (WashPostPR, 2017).

Artificial intelligence has the potential to make journalism better both for those in the newsroom and for those at the newsstand. Journalism think-tank Polis revealed in its 2019 Journalism AI Report that the main motivation for newsrooms to use artificial intelligence was to “help the public cope with a world of news overload and misinformation and to connect them in a convenient way to credible content that is relevant, useful and stimulating for their lives” (Beckett, 2019). Through automation, an enormous amount of news coverage can now be produced at a speed that was once unimaginable. The journalism industry’s AI pioneer, the Associated Press, began using Automated Insights’ Wordsmith program in 2014 to automatically generate corporate earnings reports, increasing its coverage from just 300 to 4,000 companies in one year alone (Hall, 2018).

With automation now freeing up an estimated 20% of the time once spent on manual tasks (Graefe, 2016), automated storytelling may also allow journalists to focus on areas that require more detailed work, such as investigative or explanatory journalism. In this way, automation in the reporting process may provide much-needed assistance for newsrooms looking to expand their coverage amid limited funding and small staffs (Grieco, 2019). Although these automated stories are said to contain “far fewer errors” than human-written news stories, it is important to note that AI’s abilities are confined to the limitations set by its coders. As a direct product of human creators, even robots are prone to mistakes, and these inaccuracies can be disastrous. When an error in United Robots’ sports reporting algorithm changed 1-0 losses to 10-1, misleading headlines of a “humiliating loss for football club” were generated before being corrected by editors (Granger, 2019). Despite the speed and skill that automation brings to the reporting process, the need for human oversight remains. Automated storytelling cannot analyze society, initiate questioning of implied positions and judgments, or provide a deep understanding of complex issues. As Latar stated in 2015, “no robot journalist can become a guardian of democracy and human rights.”
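To make the mechanism concrete, the sketch below (in Python, entirely hypothetical and not based on United Robots’, the Post’s, or any newsroom’s actual code) shows how template-based automated storytelling turns one row of structured match data into a publishable sentence, and how a single corrupted value can escalate the framing to a “humiliating loss” unless a plausibility check holds the item for editor review.

```python
from dataclasses import dataclass

@dataclass
class MatchResult:
    home: str
    away: str
    home_goals: int
    away_goals: int

def write_story(result: MatchResult) -> str:
    """Turn one row of structured match data into a short news item."""
    if result.home_goals == result.away_goals:
        return (f"{result.home} and {result.away} played to a "
                f"{result.home_goals}-{result.away_goals} draw on Saturday.")
    margin = abs(result.home_goals - result.away_goals)
    loser = result.away if result.home_goals > result.away_goals else result.home
    angle = "suffered a humiliating loss" if margin >= 5 else "lost a close match"
    return (f"{loser} {angle}, falling "
            f"{max(result.home_goals, result.away_goals)}-"
            f"{min(result.home_goals, result.away_goals)} on Saturday.")

def looks_plausible(result: MatchResult, max_goals: int = 8) -> bool:
    """Hold implausible scores for human review instead of auto-publishing."""
    return 0 <= result.home_goals <= max_goals and 0 <= result.away_goals <= max_goals

# A data-entry error turns a 1-0 result into 10-1; the template dutifully
# escalates the framing unless the sanity check catches the bad value.
feed = [MatchResult("Rovers", "City", 1, 0),    # correct feed entry
        MatchResult("Rovers", "City", 10, 1)]   # corrupted feed entry
for match in feed:
    status = "PUBLISH" if looks_plausible(match) else "HOLD FOR EDITOR"
    print(f"[{status}] {write_story(match)}")
```

The point of the sketch is simply that the generator has no judgment of its own: whatever quality checks exist must be designed in, or supplied by human editors downstream.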

Others might see hope in AI’s march into newsrooms. Amid concerns about fake news, misinformation, and polarization online, a growing number of Americans struggle to trust a news ecosystem that many perceive as deeply biased (Jones, 2018). Might AI hold a way to restore this missing trust? Founded in 2015, the tech start-up Knowhere News believes its AI could actually eliminate the bias that many connect to this lack of trust. After discovering trending topics on the web, Knowhere News’ AI system collects data from thousands of news stories of various leanings and perspectives to create an “impartial” news article in under 60 seconds (DeGeurin, 2018). The aggregated and automated nature of this story formation holds out the hope of AI journalism being free of the prejudices, limitations, or interests that may unduly affect a specific newsroom or reporter.

Even though AI may seem immune to the sorts of conscious or unconscious bias affecting humans, critics argue that automated storytelling can still be subject to algorithmic bias. An unethical, biased algorithm was one of the top concerns relayed by respondents in the Polis report mentioned above, as “bad use of data could lead to editorial mistakes such as inaccuracy or distortion and even discrimination against certain social groups or views” (Beckett, 2019). Bias seeps into algorithms in two main ways, according to the Harvard Business Review. First, algorithms can learn bias from their creators and from the historical decisions they are trained on. For instance, Amazon’s hiring system was trained to favor words like “executed” and “captured,” which were later found to be used more frequently by male applicants; the result was that Amazon’s AI unfairly favored and excluded applicants based on their sex. Second, a lack of diversity and limited representation within an algorithm’s training data, which teaches the system how to function, often results in faulty performance in future applications. Apple’s facial recognition system, for example, has been criticized for failing to recognize darker skin tones once implemented in technologies used by large populations, because of a supposed overrepresentation of white faces in the initial training data (Manyika, Silberg, & Presten, 2019).
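The first mechanism can be illustrated with a toy sketch (invented data, not Amazon’s actual system; it assumes scikit-learn is installed): a classifier trained on past hiring decisions that happened to favor resumes containing certain words will learn to reward those words themselves, scoring otherwise similar applicants differently.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical resumes and past hiring outcomes (1 = hired).
# "executed"/"captured" happen to appear only in resumes that were hired.
resumes = [
    "executed product launch and captured new market share",
    "captured requirements and executed migration plan",
    "led outreach programs and organized community volunteers",
    "organized mentoring initiative and led training workshops",
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Two similarly qualified new applicants, differing only in word choice.
candidates = ["executed and captured key projects",
              "led and organized key projects"]
scores = model.predict_proba(vectorizer.transform(candidates))[:, 1]
for text, score in zip(candidates, scores):
    print(f"{score:.2f}  {text}")  # the 'executed/captured' phrasing scores higher
```

The model has learned nothing about merit; it has only learned which words co-occurred with past hires, which is exactly how historical bias gets encoded and reproduced.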

Clearly, these AI agents are not capable of the same sort of introspection about their decision-making processes that humans are. This lack of understanding can be particularly challenging when collecting and interpreting data and can ultimately lead to biased reporting. At Bloomberg, for example, journalists had to train the newsroom’s automated company earnings reporting system, Cyborg, to avoid misleading data from companies that strategically “choose figures in an effort to garner a more favourable portrayal than the numbers warrant” (Peiser, 2019). To produce complete and truthful automated accounts, journalists must work to ensure that the algorithms feeding these narrative storytelling systems are supplied with clean and accurate data.

Developments in AI technologies continue to expand at “a speed and complexity which makes it hard for politicians, regulators, and academics to keep up,” and this lack of familiarity can provoke great concern among both journalists and news consumers (Newman, 2018). According to Polis, 60% of journalists surveyed from 71 news organizations across 32 countries currently using AI in their newsrooms reported concerns about how AI would impact the nature of their work (Beckett, 2019). Uncertain of how to introduce these new technologies to the average consumer, many newsrooms have failed to attribute their automated stories to a clear author or program. This raises concerns regarding the responsibility of journalists to maintain transparency with readers. Should individuals consuming these AI stories have a right to know which stories were automatically generated and which were not? Some media ethicists argue that audiences should be informed when bots are being used and told where to turn for more information; however, author bylines should be held by those with “editorial authority,” a longstanding tool for accountability and follow-up with curious readers (McBride & Prasad, 2016).

Whether automated journalism will allow modern newsrooms to connect with readers through coverage of greater quantity and higher quality is still an unfinished story. It is certain, however, that AI will take an increasingly prominent place in the newsrooms of the future. How can innovative newsrooms use these advanced programs to produce high-quality journalistic work that better serves their audiences while balancing standards of journalistic practice that once assumed a purely human author?

Discussion Questions:

  1. What is the ethical role of journalism in society? Are human writers essential to that role, or could sophisticated programs fulfill it?
  2. How might the increased volume and speed of automated journalism’s news coverage impact the reporting process?
  3. What are the ethical responsibilities of journalists and editors if automated storytelling is used in their newsroom?
  4. Are AI programs more or less prone to error and bias, over the long run, compared to human journalists?
  5. Who should be held responsible when an algorithm produces stories that are thought to be discriminatory by some readers? How should journalists and editors address these criticisms?
  6. How transparent should newsrooms be in disclosing their AI use with readers?

Further Information:

Beckett, C. (2019, November). New powers, new responsibilities. A global survey of journalism and artificial intelligence. The London School of Economics and Political Science. Available at https://blogs.lse.ac.uk/polis/2019/11/18/new-powers-new-responsibilities/

DeGeurin, M. (2018, April 4). “A Startup Media Site Says AI Can Take Bias Out of News”. Vice. Available at https://www.vice.com/en_us/article/zmgza5/knowhere-ai-news-site-profile 

Graefe, A. (2016, January 7). “Guide to Automated Journalism”. Columbia Journalism Review. Available at https://www.cjr.org/tow_center_reports/guide_to_automated_journalism.php

Granger, J. (2019, March 28). What happens when bots become editor? Journalism.co.uk. Available at: https://www.journalism.co.uk/news/nordic-news-organisations-shows-that-robot-journalism-can-make-personalisation-pay/s2/a736534/

Grieco, E. (2019, July 9). “U.S. newsroom employment has dropped by a quarter since 2008, with greatest decline at newspapers”. Pew Research Center. Available at https://www.pewresearch.org/fact-tank/2019/07/09/u-s-newsroom-employment-has-dropped-by-a-quarter-since-2008/

Jones, J. (2018, June 20). “Americans: Much Misinformation, Bias, Inaccuracy in News”. Gallup. Available at: https://news.gallup.com/opinion/gallup/235796/americans-misinformation-bias-inaccuracy-news.aspx

Latar, N. (2015, January). “The Robot Journalist in the Age of Social Physics: The End of Human Journalism?” The New World of Transitioned Media: Digital Realignment and Industry Transformation. Available at: www.researchgate.net

Manyika, J., Silberg, J., & Presten, B. (2019, October 25). “What Do We Do About the Biases in AI?” Harvard Business Review. Available at: https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai

McBride, K. & Prasad, S. (2016, August 11). “Ask the ethicist: Should bots get a byline?” Poynter. Available at https://www.poynter.org/ethics-trust/2016/ask-the-ethicist-should-bots-get-a-byline/

Newman, N. (2018) Journalism, Media and Technology Trends and Predictions 2018. Reuters Institute for the Study of Journalism & Google. Available at https://ora.ox.ac.uk/objects/uuid:45381ce5-19d7-4d1c-ba5e-3f2d0e923b32/download_file?safe_filename=Newman%2BPredictions%2B2018%2BFINAL.pdf

Peiser, J. (2019, February 5). “The Rise of the Robot Reporter”. The New York Times. Available at https://www.nytimes.com/2019/02/05/business/media/artificial-intelligence-journalism-robots.html

WashPostPR. (2017, September 1). “The Washington Post leverages automated storytelling to cover high school football”. The Washington Post. Available at https://www.washingtonpost.com/pr/wp/2017/09/01/the-washington-post-leverages-heliograf-to-cover-high-school-football/

Authors:

Chloe Young & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
April 27, 2020

www.mediaengagement.org

Image: The People Speak! / CC BY-NC 2.0


This case study is supported by funding from the John S. and James L. Knight Foundation. Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the Center for Media Engagement. They can be used in unmodified PDF form for classroom or educational settings. For use in publications such as textbooks, readers, and other works, please contact the Center for Media Engagement at sstroud (at) austin.utexas.edu.

 

The Ethics of Computer-Generated Actors

CASE STUDY: The Ethical Challenges of CGI Actors in Films

Case Study | Additional Case Studies


Photo: Lucasfilm

Long-dead actors continue to achieve a sort of immortality in their films. A new controversy over dead actors is coming to life based upon new uses of visual effects and computer-generated imagery (CGI). Instead of simply using CGI to create stunning action sequences, gorgeous backdrops, and imaginary monsters, filmmakers have started to use its technological wonders to bring actors back from the grave. What ethical problems circle around the use of digital reincarnations in filmmaking?

The use of CGI to change the look of actors is nothing new. For instance, many films have used CGI methods to digitally de-age actors with striking results (as in the Marvel films) or to create spectacular creatures without much physical reality (such as Gollum in The Lord of the Rings series). But what happens when CGI places an actor into a film entirely through the intervention of technology? A recent example of digital reincarnation in the film industry is found in Fast and Furious 7, in which Paul Walker had to be digitally recreated after his untimely death in the middle of the film’s production. Walker’s brothers stepped in to provide a physical form for the visual effects artists to complete Walker’s character in the movie, and the results brought mixed reviews, as some viewers thought it was “odd” to see a deceased actor recreated digitally on screen. However, many argue that this was the best course of action to complete film production and honor Paul Walker’s work and character.

Other recent films have continued to bet on CGI to help recreate characters on the silver screen. For instance, 2016’s Rogue One: A Star Wars Story used advanced CGI techniques that hint at the ethical problems that lie ahead for filmmakers. Peter Cushing was first featured in 1977’s Star Wars: A New Hope as Grand Moff Tarkin. In the Star Wars timeline, the events of Rogue One lead directly into A New Hope, so the writers behind Rogue One felt compelled to include Grand Moff Tarkin as a key character in the events leading up to the next film. There was one problem, however: Peter Cushing died in 1994. Faced with this dilemma, the film’s producers ultimately decided to use CGI to digitally resurrect Cushing to reprise his role as the Imperial officer. The appearance of Grand Moff Tarkin in the final cut of the film sent shockwaves across the Star Wars fandom. Some defended adding Cushing’s character to the film by claiming that “actors don’t own characters” (Tylt.com) and that keeping the character’s appearance consistent across the fictional timeline enhanced the aesthetic effect of the movies. Others, like Catherine Shoard, were more critical. She condemned the film’s risky choice, saying, “though Cushing’s estate approved his use in Rogue One, I’m not convinced that if I had built up a formidable acting career, I’d then want to turn in a performance I had bupkis to do with.” Rich Haridy of New Atlas also expressed criticism over the use of Peter Cushing in the recent Star Wars film, writing, “there is something inherently unnerving about watching such a perfect simulacrum of someone you know cannot exist.”

This use of CGI to bring back dead actors and place them into films raises troubling questions about consent. Assuming that actors should only appear in films they choose to, how can we be assured that such post-mortem uses are consistent with the actor’s wishes? Is gaining permission from the relatives of the deceased enough to use an actor’s image or likeness? Additionally, CGI increases the possibility that unwilling figures can be brought into a film. Many films have employed look-alikes to bring presidents or historical figures into a narrative; using CGI to bring exact versions of actors and celebrities into films does not seem that different from this tactic. This filmic use of CGI actors also extends our worries over “deepfakes” (AI-created fake videos) and falsified videos into the murkier realm of fictional products and narratives. While we like continuity in actors as a way to preserve our illusion of reality in films, what ethical pitfalls await us as we CGI the undead—or the unwilling—into our films and artworks?

Discussion Questions:

  1. What values are in conflict when filmmakers want to use CGI to place a deceased actor into a film?
  2. What is different about placing a currently living actor into a film through the use of CGI? How does the use of CGI differ from using realistic “look-alike” actors?
  3. What sort of limits would you place on the use of CGI versions of deceased actors? How would you prevent unethical use of deceased actors?
  4. How should society balance concerns with an actor’s (or celebrity’s) public image with an artist’s need to be creative with the tools at their disposal?
  5. What ethical questions would be raised by using CGI to insert “extras,” and not central characters, into a film?

Further Information:

Haridy, R. (2016, December 19). “Star Wars: Rogue One and Hollywood’s trip through the uncanny valley.” Available at: https://newatlas.com/star-wars-rogue-one-uncanny-valley-hollywood/47008/

Langshaw, M. (2017, August 02). “8 Disturbing Times Actors Were Brought Back From The Dead By CGI.” Available at: http://whatculture.com/film/8-disturbing-times-actors-were-brought-back-from-the-dead-by-cgi

Shoard, C. (2016, December 21). “Peter Cushing is dead. Rogue One’s resurrection is a digital indignity“. The Guardian. Available at: https://www.theguardian.com/commentisfree/2016/dec/21/peter-cushing-rogue-one-resurrection-cgi

The Tylt. Should Hollywood use CGI to replace dead actors in movies? Available at: https://thetylt.com/entertainment/should-hollywood-use-cgi-to-replace-dead-actors-in-movies

Authors:

William Cuellar & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
February 12, 2019

www.mediaethicsinitiative.org


Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom or educational uses. Please email us and let us know if you found them useful! For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

Digital Immortality on an App

CASE STUDY: The Ethics of Legacy Chatbots

Case Study PDF | Additional Case Studies



Diego / Unsplash / Modified

The death of a loved one is a traumatic experience in a person’s life. But does it really signal the end of one’s interactions with the deceased? Some companies are looking to technology to give us a new hope of talking to the dead. Eternime wants to create a platform that allows friends and relatives of deceased loved ones to hold conversations with an artificial intelligence-powered “chatbot” and avatar that resembles the deceased person. Eugenia Kuyda, a co-founder of the AI startup Luka, created a digital “version” of her deceased friend Roman, as well as an app-based chatbot version of the celebrity Prince (MacDonald, 2016). These legacy chatbot programs function by using advanced computing algorithms and access to a range of digital records of the person they are simulating, including Twitter posts, Facebook posts, text messages, Fitbit data, and more. While technology has not yet advanced enough to create chatbots that resemble specific people with complete accuracy, companies like Eternime and Luka are working hard to perfect it. Eternime is allowing interested customers to sign up now and provide information to the website so that the company will have everything it needs to create a carbon copy of a user in the future. When the technology catches up with the idea, people will be able to communicate with versions of their deceased friends and relatives. Will this concept ease the loss of a loved one, or will it harm those who are grieving?
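As a rough illustration of the general idea, the sketch below is a minimal, hypothetical retrieval “chatbot” that answers a prompt by returning whichever archived message shares the most words with it. Neither Eternime nor Luka has published its actual methods, and real systems would rely on far more sophisticated language modeling; the archive contents here are invented.

```python
import re

# Invented stand-in for a deceased person's archived messages.
archive = [
    "I always loved hiking in the fall, the colors were unbelievable",
    "Call your grandmother more often, she worries about you",
    "My favorite thing about Sundays was pancakes and the crossword",
]

def tokenize(text: str) -> set:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def reply(prompt: str, messages) -> str:
    """Return the archived message sharing the most words with the prompt."""
    prompt_words = tokenize(prompt)
    return max(messages, key=lambda msg: len(prompt_words & tokenize(msg)))

print(reply("tell me about hiking in the fall", archive))
# -> "I always loved hiking in the fall, the colors were unbelievable"
```

Even this crude version makes the underlying trade-off visible: the bot can only recombine what the person left behind, so the richness and the privacy risks of the simulation both scale with how much personal data it is given.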

One of the challenges people face when losing a loved one is the sense of absence they feel. Mourners are sometimes left with little to remind them of the deceased beyond a few photographs, possessions, or memories. While these things can be valuable to those who have lost someone close, they may not feel like enough. This is where Eternime hopes to assist. If the chatbots emulate deceased persons as well as some expect, these conversations might feel as authentic as chat conversations with the real person. Some even argue that an AI-powered chatbot may provide a more truthful representation of a person’s life than the person him- or herself. Simon Parkin believes “Eterni.me and other similar services counter the fallibility of human memory; they offer a way to fix the details of a life as time passes.” Being able to learn about the experiences of a person now deceased could be invaluable to a grieving person. Eternime’s co-creator, Marius Ursache, believes that Eternime “could be a virtual library of humanity” (Parkin, 2015).

Leaving a legacy is important to many people because it ensures their life has significance even after they die. Most people achieve this by having children to share their stories or by creating something that will be used or remembered after they are gone. Eternime, however, provides a new avenue for storing a legacy. Ursache asserts that the purpose of Eternime is “about creating an interactive legacy, a way to avoid being totally forgotten in the future” (Parkin, 2015). Yet critics argue that this may not be a legacy one would want to leave. Laura Parker argues that “an avatar with an approximation of your voice and bone structure, who can tell your great-grandchildren how many Twitter followers you had, doesn’t feel like the same thing [as the traditional ways of leaving a legacy]” (Parker, 2014).

Others worry that platforms like Eternime will not assist those dealing with grief, but rather prevent them from accepting the loss and moving on. Joan Berzoff, a specialist in end-of-life issues, believes that “a post-death avatar goes against all we know about bereavement” (Parker, 2014). Living with an algorithm that merely attempts to mimic a deceased person could distance a user from reality, a reality that includes the fact that when a person dies, their biological presence and form are gone forever. Applications like Eternime could prevent users from accepting the reality of their situation and cause them additional grief.

Another issue with chatbots of deceased people is that a program would have to access and synthesize large amounts of private and personal information. Algorithms that mimic the user may be very advanced and realistic, but they will never be perfect. For example, people often keep certain information secret or share it only with certain people. Will the AI chatbot know how to discreetly continue such practices of strategic information presentation, or maintain the lies or secrets that the deceased lived with? The chatbot might pick up information or behavior from a text to a close friend and share it with a child. While this could lead to an honest and transparent version of the deceased individual, it would not represent the deceased person in the way they might wish to be represented, judging by the information-sharing practices of their past interactions. Would we find that we judge a deceased individual differently based upon how a less nuanced, but perhaps more truthful, chatbot simulates them after they are dead?

The technology to achieve the goal of legacy chatbots that are indistinguishable from real humans is not yet a reality, but it may not be far off. The idea of deceased people being immortalized on the internet or in an AI-driven program gives us a digital way to talk to the dead. Will this technology lessen the burden of losing a loved one or make it harder to accept the loss? Will it create legacies to be passed down to future generations, or tarnish our perception of people of the past?

Discussion Questions:

  1. Do you think it is realistic, given enough data about a person, to create an interactive simulation of that person in chatbot form?
  2. Would legacy chatbots be a helpful thing for those mourning the loss of a loved one? Might they help new people learn about the deceased person?
  3. Does the deceased person have a right to any privacy of their information? Should someone make a chatbot of a deceased person without their permission?
  4. Could a deceased individual be harmed by truthful things that their legacy chatbot says on their behalf?
  5. What ethical interests are in conflict with legacy chatbots? How might you balance these issues, should people and companies want to build such bots?

Further Information:

Hamilton, Isobel. “These 2 tech founders lost their friends in tragic accidents. Now they’ve built AI chatbots to give people life after death,” Business Insider, November 17, 2018. Available at: https://www.businessinsider.com/eternime-and-replika-giving-life-to-the-dead-with-new-technology-2018-11

MacDonald, Cheyenne. “Would you resurrect your dead friend as an AI? Try out ‘memorial’ chatbot app – and you can even talk to a virtual version of Prince,” Daily Mail, October 6, 2016. Available at: https://www.dailymail.co.uk/sciencetech/article-3826208/Would-resurrect-dead-friend-AI-Try-memorial-chatbot-app-talk-virtual-version-Prince.html

Parker, Laura. “How to become virtually immortal,” The New Yorker, April 4, 2014. Available at: https://www.newyorker.com/tech/annals-of-technology/how-to-become-virtually-immortal

Parkin, Simon. “Back-up brains: The era of digital immortality,” BBC, January 23, 2015. Available at: http://www.bbc.com/future/story/20150122-the-secret-to-immortality

Authors:

Colin Frick & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
December 4, 2018

www.mediaethicsinitiative.org


Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They are produced for educational use only. They can be used in unmodified PDF form without permission for classroom or educational uses. Please email us and let us know if you found them useful! For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

AI and Online Hate Speech

CASE STUDY: From Tay AI to Automated Content Moderation

Case Study PDF | Additional Case Studies



Screencapture: Twitter.com

Hate speech is a growing problem online. Optimists believe that if we continue to improve the capabilities of our programs, we could push back the tides of hate speech. Facebook CEO Mark Zuckerberg holds this view, testifying to Congress that he is “optimistic that over a five-to-10-year period, we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate in flagging content for our systems, but today we’re just not there on that” (Pearson, 2018). Others are not as hopeful that artificial intelligence (AI) will lead to reductions in bias and hate in online communication.

Worries about AI and machine learning gained traction with the recent introduction—and quick deactivation—of Microsoft’s Tay AI chatbot. Once this program was unleashed upon the Twittersphere, unpredicted results emerged: as James Vincent noted, “it took less than 24 hours for Twitter to corrupt an innocent AI chatbot.” As other Twitter users started to tweet serious or half-joking hateful speech at Tay, the program began to engage in copycat behaviors and “learned” how to use the same words and phrases in its novel responses. Over 96,000 tweets later, it was clear that Tay communicated not just copycat utterances but novel statements such as “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism,” “I love feminism now,” and “gender equality = feminism.” Users tweeting “Bruce Jenner” at Tay evoked a wide spectrum of responses, from “caitlyn jenner is a hero & is a stunning, beautiful woman!” to the more worrisome “caitlyn jenner isn’t a real woman yet she won woman of the year” (Vincent, 2016). In short, the technology designed to adapt and learn started to embody “the prejudices of society,” highlighting to some that “big corporations like Microsoft forget to take any preventative measures against these problems” (Vincent, 2016). In the Tay example, AI did nothing to reduce hate speech on social media—it merely reflected “the worst traits of humanity.”
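The dynamic that undid Tay can be shown with a deliberately naive sketch (hypothetical, not Microsoft’s actual architecture): a bot that adds every incoming user message to its pool of possible responses, with no filtering, will eventually echo whatever coordinated users feed it.

```python
import random

class NaiveLearningBot:
    """A toy 'repeat after me' bot with no content filtering."""

    def __init__(self):
        self.phrases = ["hello there!", "humans are so interesting"]

    def observe(self, user_message: str):
        # Every incoming message becomes a candidate response; nothing is
        # moderated, so the bot's vocabulary is only as good as its users.
        self.phrases.append(user_message)

    def respond(self) -> str:
        return random.choice(self.phrases)

bot = NaiveLearningBot()
for msg in ["you are great",
            "<a hateful phrase copied from a user>",
            "<another hateful phrase copied from a user>"]:
    bot.observe(msg)

# Malicious input now makes up a growing share of the response pool.
print(bot.respond())
```

A production system would need filtering before learning from input, which is precisely the kind of preventative measure critics say was missing.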

The Tay bot example highlights enduring concerns over using AI to make important but nuanced distinctions in policing online content and in interacting with humans. Many still extol the promise of AI and machine learning algorithms, an optimism bolstered by Facebook’s limited use of AI to combat fake news and disinformation: Facebook has claimed that AI has helped to “remove thousands of fake accounts and ‘find suspicious behaviors,’ including during last year’s special Senate race in Alabama, when AI helped spot political spammers from Macedonia, a hotbed of online fraud” (Harwell, 2018). Facebook is also scaling up uses of AI in tagging user faces in uploaded photos, optimizing item placement to maximize user clicks, and optimizing advertisements, as well as in “scanning posts and suggesting resources when the AI assesses that a user is threatening suicide” (Harwell, 2018).

But the paradox of AI remains: such technologies offer a way to escape the errors, imperfections, and biases of limited human judgment, yet they seem to lack the creativity and contextual sensitivity needed to reasonably engage the incredible and constantly evolving range of human communication online. Additionally, some of the best machine learning programs have yet to escape dominant forms of social prejudice; for example, Jordan Pearson (2016) described one lackluster use of AI that exhibited “a strong tendency to mark white-sounding names as ‘pleasant’ and black-sounding ones as ‘unpleasant.’” AI might flag innocuous uses of terms deemed hateful elsewhere, or miss creative misspellings and variants of words encapsulating hateful thoughts. Even more worrisome, our AI programs may unwittingly inherit prejudicial attributes from their human designers, or be “trained” in problematic ways by intentional human interaction, as was the case in Tay AI’s short foray into the world. Should we be optimistic that AI can understand humans and human communication, with all of our flaws and ideals, and still perform in an ethically appropriate way?
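Both failure modes show up in even the simplest moderation approach. The sketch below is a toy keyword filter (not any platform’s real system, and using a mild stand-in term): it flags an innocuous sentence that merely mentions a listed word while missing a creatively misspelled attack.

```python
# Stand-in for a list of flagged slurs or abusive terms.
BLOCKLIST = {"idiot"}

def flag(post: str) -> bool:
    """Flag a post if any word, stripped of punctuation, is on the blocklist."""
    return any(word.strip(".,!?") in BLOCKLIST for word in post.lower().split())

posts = [
    "Calling someone an idiot is never a good argument.",  # innocuous mention -> flagged (false positive)
    "what an id1ot you are",                                # misspelled attack -> missed (false negative)
]
for post in posts:
    print(flag(post), "|", post)
```

Context-blind matching of this sort is why platforms keep promising more “linguistically nuanced” AI, and why critics doubt that nuance will arrive soon.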

Discussion Questions:

  1. What ethical problems in the online world are AI programs meant to alleviate?
  2. The designers of the Tay AI bot did not intend for it to be racist or sexist, but many of its Tweets fit these labels. Should we hold the designers and programs ethically accountable for this machine learning Twitter bot?
  3. In general, when should programmers and designers of autonomous devices and programs be held accountable for the learned behaviors of their creations?
  4. What ethical concerns revolve around giving AI programs a significant role in content moderation on social media sites? What kind of biases might our machines risk possessing?

Further Information:

Drew Harwell, “AI will solve Facebook’s most vexing problems, Mark Zuckerberg says. Just don’t ask when or how.” The Washington Post, April 11, 2018. Available at: https://www.washingtonpost.com/news/the-switch/wp/2018/04/11/ai-will-solve-facebooks-most-vexing-problems-mark-zuckerberg-says-just-dont-ask-when-or-how/

James Vincent, “Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day.” The Verge, March 24, 2016. Available at: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

Jordan Pearson, “Mark Zuckerberg Says Facebook Will Have AI to Detect Hate Speech In ‘5-10 years.’” Motherboard, April 10, 2018. Available at: https://motherboard.vice.com/en_us/article/7xd779/mark-zuckerberg-says-facebook-will-have-ai-to-detect-hate-speech-in-5-10-years-congress-hearing

Jordan Pearson, “It’s Our Fault That AI Thinks White Names Are More ‘Pleasant’ Than Black Names.” Motherboard, August 26, 2016. Available at: https://motherboard.vice.com/en_us/article/z43qka/its-our-fault-that-ai-thinks-white-names-are-more-pleasant-than-black-names

Authors:

Anna Rose Isbell & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
September 24, 2018

www.mediaethicsinitiative.org


Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. Please email us and let us know if you found them useful! For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

Google Assistant and the Ethics of AI

CASE STUDY: The Ethics of Artificial Intelligence in Human Interaction

Case Study PDF  | Additional Case Studies


Artificial intelligence (AI) is increasingly prevalent in our daily activities. In May 2018, Google demonstrated a new AI technology known as “Duplex.” Designed for our growing array of smart devices and specifically for the Google Assistant feature, this system is able to sound like a human on the phone while accomplishing tasks like scheduling a hair appointment or making a dinner reservation. Duplex can navigate various misunderstandings over the course of a conversation and can acknowledge when the conversation has exceeded its ability to respond. As Duplex is perfected in phone-based applications and beyond, it will introduce a new array of ethical issues into questions concerning AI in human communication.

There are obvious advantages to integrating AI systems into our smart devices. Advocates for human-sounding communicative AI systems such as Duplex argue that these are logical extensions of our urge to make our technologies do more with less effort. In this case, it is an intelligent program that saves us time by doing mundane tasks such as making dinner reservations or calling to inquire about holiday hours at a particular store. If we don’t object to using online reservation systems as a way to speed up a communicative transaction, why would we object to using a human-sounding program to do the same thing?

Yet Google’s May 2018 demonstration created an immediate backlash. Many agreed with Zeynep Tufekci’s passionate reaction: “Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding ‘ummm’ and ‘aaah’ to deceive the human on the other end with the room cheering it… [is] horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing” (Meyer, 2018). What many worried about was the deception involved in the interaction, since only one party (the caller) knew that the call was being made by a program designed to sound like a human. Even though there was no harm to the restaurant and hair salon employees involved in the early demonstration of Duplex, the deception of human-sounding AI was still there. In a more recent test of the Duplex technology, Google responded to critics by including a notice in the call itself: “Hi, I’m calling to make a reservation. I’m Google’s automated booking service, so I’ll record the call. Uh, can I book a table for Sunday the first?” (Bohn, 2018). Such announcements are added begrudgingly, since it is highly likely that a significant portion of the humans called will hang up once they realize that this is not a “real” human interaction.

While this assuaged some worries, it is notable that the voice delivering this AI notice was more human-like than ever. Might this technology be hacked or used without such a warning that the voice one is hearing is AI-produced? Furthermore, others worry about the data integration enabled when an AI program like Duplex is employed by a company as far-reaching as Google. Given the bulk of consumer data Google possesses as one of the most popular search engines, critics fear future developments that would allow the Google Assistant to impersonate users at a deeper, more personal level through access to a massive data profile for each user. This is simply a more detailed, personal version of the basic worry about impersonation: “If robots can freely pose as humans the scope for mischief is incredible; ranging from scam calls to automated hoaxes” (Vincent, 2018). While critics differ about whether industry self-regulation or legislation is the way to address technologies such as Duplex, the basic question remains: do we create more ethical conundrums by making our AI systems sound like humans, or is this another streamlining of our everyday lives that we will simply have to get used to over time?

Discussion Questions:

  1. What are the ethical values in conflict in this debate over human-sounding AI systems?
  2. Do you think the Duplex AI system is deceptive? Is this use harmful? If it’s one and not the other, does this make it less problematic?
  3. Do the ethical concerns disappear after Google added a verbalized notice that the call was being made by an AI program?
  4. How should companies develop and implement AI systems like Duplex? What limitations should they install in these systems?
  5. Do you envision a day when we no longer assume that human-sounding voices come from a specific living human interacting with us? Does this matter now?

Further Information:

Bohn, Dieter. “Google Duplex really works and testing begins this summer.” The Verge, 27 June 2018. Available at: https://www.theverge.com/2018/6/27/17508728/google-duplex-assistant-reservations-demo

McMullan, Thomas. “Google Duplex: Google responds to ethical concerns about AI masquerading as humans.” ALPHR, 11 May 2018. Available at: https://www.alphr.com/google/1009290/google-duplex-assistant-io-ai-impersonating-humans

Meyer, David. “Google Should Have Thought About Duplex’s Ethical Issues Before Showing It Off.” Fortune, 11 May 2018. Available at:  https://www.fortune.com/2018/05/11/google-duplex-virtual-assistant-ethical-issues-ai-machine-learning/

Vincent, James. “Google’s AI sounds like a human on the phone — should we be worried?” The Verge, 9 May 2018. Available at: https://www.theverge.com/2018/5/9/17334658/google-ai-phone-call-assistant-duplex-ethical-social-implications

Authors:

Sabrina Stoffels & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
September 24, 2018

www.mediaethicsinitiative.org


Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. Please email us and let us know if you found them useful! For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

Robots, Algorithms, and Digital Ethics

The Media Ethics Initiative Presents:


How to Survive the Robot Apocalypse

Dr. David J. Gunkel

Distinguished Teaching Professor | Department of Communication
Northern Illinois University

April 3, 2018 | Moody College of Communication | University of Texas at Austin



Dr. David J. Gunkel is an award-winning educator, scholar and author, specializing in the study of information and communication technology with a focus on ethics. Formally educated in philosophy and media studies, his teaching and research synthesize the hype of high-technology with the rigor and insight of contemporary critical analysis. He is the author of over 50 scholarly journal articles and book chapters and has published 7 books, including The Machine Question: Critical Perspectives on AI, Robots, and Ethics (MIT Press), Of Remixology: Ethics and Aesthetics after Remix (MIT Press), Heidegger and the Media (Polity), and Robot Rights (MIT Press). He is the managing editor and co-founder of the International Journal of Žižek Studies and co-editor of the Indiana University Press series in Digital Game Studies. He currently holds the position of Professor in the Department of Communication at Northern Illinois University, and his teaching has been recognized with numerous awards.


Fake News, Fake Porn, and AI

CASE STUDY: “Deepfakes” and the Ethics of Faked Video Content

Case Study PDF | Additional Case Studies


The Internet has a way of both refining techniques and technologies by pushing them to their limits—and of bending them toward less altruistic uses. For instance, artificial intelligence is increasingly being used to push the boundaries of what appears to be reality in faked videos. The premise of the phenomenon is straightforward: use artificial intelligence to seamlessly graft the faces of other people (usually celebrities or public figures) from authentic footage onto bodies in other pre-existing videos.


Photo: Geralt / CC0

While some uses of this technology can be beneficial or harmless, the potential for real damage is also present. This recent phenomenon, often called “Deepfakes,” has gained media attention because early adopters and programmers used it to place the faces of female celebrities onto the bodies of actresses in unrelated adult film videos. A celebrity therefore appears to be participating in a pornographic video even though, in reality, she has not done so. The actress Emma Watson was one of the first targets of this technology, finding her face cropped onto an explicit porn video without her consent. She is currently embroiled in a lawsuit filed against the producer of the faked video. While the Emma Watson case is still in progress, the difficulty of getting videos like these taken down cannot be overstated. Law professor Eric Goldman points out the difficulty of pursuing such cases: he notes that while defamation and slander laws may apply to Deepfake videos, there is no straightforward or clear legal path for getting videos like these taken down, especially given their ability to reappear once uploaded to the internet. While pornography is protected as a form of expression or art of its producer, Deepfake technology creates the possibility of making adult films without the consent of those “acting” in them. Making matters more complex is the increasing ease with which this technology is available: forums exist where users offer advice on making faked videos, and a phone app is available for download that can be used by basically anyone to make a Deepfake video with little more than a few celebrity images.

Part of the challenge presented by Deepfakes concerns a conflict between aesthetic values and issues of consent. Celebrities or other targets of faked videos did not consent to being portrayed in this manner, a fact which has led prominent voices in the adult film industry to condemn Deepfakes. One adult film company executive characterized the problem with Deepfakes in a Variety article: “it’s f[**]ed up. Everything we do … is built around the word consent. Deepfakes by definition runs contrary to consent.” It is unwanted and potentially embarrassing to be placed in a realistic porn video in which one didn’t actually participate. These concerns over consent are important, but Deepfakes muddies the waters by involving fictional creations and situations. Pornography, including fantasy satires based upon real-life figures such as the disgraced politician Anthony Weiner, is protected under the First Amendment as a type of expressive activity, regardless of whether those depicted or satirized approve of its ideas and activities. Nudity and fantasy situations play a range of roles in expressive activity, some in private contexts and some in public contexts. For instance, 2016 saw the installation of several unauthorized—and nude—statues of then-candidate Donald Trump across the United States. Whether or not we judge the message or use of these statues to be laudatory, they do seem to evoke the aesthetic values of creativity and expression that conflict with a focus on consent to be depicted in a created (and possibly critical) artifact. Might Deepfakes, especially those of celebrities or public figures, ever be a legitimate form of aesthetic expression by their creators, in a similar way that a deeply offensive pornographic video is still a form of expression by its creators? Furthermore, not all Deepfakes are publicly exhibited and used in connection with their target’s name, thereby removing most, if not all, of the public harm that would be created by their exhibition. When does private fantasy become a public problem?

Beyond their employment in fictional but realistic adult videos, the Deepfakes phenomenon raises a more politically concerning issue. Many worry that Deepfakes have the potential to damage the world’s political climate through the spread of realistic faked video news. If seeing is believing, might our concerns about misinformation, propaganda, and fake news gain a new depth if all or part of the “news” item in question is a realistic video clip serving as evidence for some fictional claim? Law professors Robert Chesney and Danielle Citron consider a range of scenarios in which Deepfake technology could prove disastrous when utilized in fake news: “false audio might convincingly depict U.S. officials privately ‘admitting’ a plan to commit this or that outrage overseas, exquisitely timed to disrupt an important diplomatic initiative,” or “a fake video might depict emergency officials ‘announcing’ an impending missile strike on Los Angeles or an emergent pandemic in New York, provoking panic and worse.” Such uses of faked video could create compelling, and potentially harmful, viral stories with the capacity to travel quickly across social media. Yet in a similar fashion to the licentious employments in forged adult footage, one can see the potential aesthetic value of Deepfakes as a form of expression, trolling, or satire in some political employments. The fairly crude “bad lip reading” videos of the recent past, which placed new audio into real videos for humorous effect, will soon give way to more realistic Deepfakes of political and celebrity figures saying humorous, satirical, false, or frightening things. Given AI’s advances and Deepfake technology’s supercharging of how we can reimagine and realistically depict the world, how do we legally and ethically renegotiate the balance among the values of creative expression, the concerns over the consent of others, and our pursuit of truthful content?

Discussion Questions:

  1. Beyond the legal worries, what is the ethical problem with Deepfake videos? Does this problem change if the targeted individual is a public or private figure?
  2. Do your concerns about the ethics of Deepfakes videos depend upon them being made public, and not being kept private by their creator?
  3. Do the ethical and legal concerns raised concerning Deepfakes matter for more traditional forms of art that use nude and non-nude depictions of public figures? Why or why not?
  4. How might artists use Deepfakes as part of their art? Can you envision ways that politicians and celebrities could be legitimately criticized through the creation of biting but fake videos?
  5. How would you balance the need to protect artists (and others’) interest in expressing their views with the public’s need for truthful information? In other words, how can we control the spread of video-based fake news without unduly infringing on art, satire, or even trolling?

Further Information:

Robert Chesney & Danielle Citron, “Deep fakes: A looming crisis for national security, democracy and privacy?” Lawfare, February 26, 2018. Available at: https://www.lawfareblog.com/deep-fakes-looming-crisis-national-security-democracy-and-privacy

Megan Farokhmanesh, “Is it legal to swap someone’s face into porn without consent?” The Verge, January 30, 2018. Available at: https://www.theverge.com/2018/1/30/16945494/deepfakes-porn-face-swap-legal

James Felton, “‘Deep fake’ videos could be used to influence future global politics, experts warn.” IFLScience, March 13, 2018. Available at: http://www.iflscience.com/technology/deep-fake-videos-could-be-used-to-influence-future-global-politics-experts-warn/

Janko Roettgers, “Porn producers offer to help Hollywood take down deepfake videos.” Variety, February 21, 2018. Available at: http://variety.com/2018/digital/news/deepfakes-porn-adult-industry-1202705749/

Authors:

James Hayden & Scott R. Stroud, Ph.D.
Media Ethics Initiative
University of Texas at Austin
March 21, 2018

www.mediaethicsinitiative.org


Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.
