Media Ethics Initiative

Monthly Archives: September 2018

AI and Online Hate Speech

CASE STUDY: From Tay AI to Automated Content Moderation

Case Study PDF | Additional Case Studies


[Image: Tay AI tweets. Screencapture: Twitter.com]

Hate speech is a growing problem online. Optimists believe that if we continue to improve the capabilities of our programs, we could push back the tide of hate speech. Facebook CEO Mark Zuckerberg holds this view, testifying to Congress that he is “optimistic that over a five-to-10-year period, we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate in flagging content for our systems, but today we’re just not there on that” (Pearson, 2018). Others are not as hopeful that artificial intelligence (AI) will lead to reductions in bias and hate in online communication.

Worries about AI and machine learning gained traction with the 2016 introduction, and quick deactivation, of Microsoft’s Tay AI chatbot. Once this program was unleashed upon the Twittersphere, unexpected results emerged: as James Vincent noted, “it took less than 24 hours for Twitter to corrupt an innocent AI chatbot.” As other Twitter users started to tweet serious or half-joking hateful speech at Tay, the program began to engage in copycat behaviors and “learned” how to use the same words and phrases in its novel responses. Over 96,000 Tweets later, it was clear that Tay communicated not just copycat utterances, but novel statements such as “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism,” “I love feminism now,” and “gender equality = feminism.” Users tweeting “Bruce Jenner” at Tay evoked a wide spectrum of responses, from “caitlyn jenner is a hero & is a stunning, beautiful woman!” to the more worrisome “caitlyn jenner isn’t a real woman yet she won woman of the year” (Vincent, 2016). In short, the technology designed to adapt and learn started to embody “the prejudices of society,” highlighting to some that “big corporations like Microsoft forget to take any preventative measures against these problems” (Vincent, 2016). In the Tay example, AI did nothing to reduce hate speech on social media; it merely reflected “the worst traits of humanity.”
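
Microsoft never published Tay’s internals, so any reconstruction is speculative. Still, a toy sketch can show how an unfiltered “learn from the users” loop is trivially poisoned by coordinated input. Everything below, from the class name to its seeded lines, is a hypothetical illustration rather than Tay’s actual design:

```python
import random

# Toy illustration only: Tay's real architecture was never disclosed.
# The point is structural: a bot that adds raw user input to its own
# response pool will inherit whatever its users choose to feed it.

class EchoLearnerBot:
    def __init__(self):
        # Seeded with "innocent" lines by the (hypothetical) designers.
        self.responses = ["hello twitter!", "humans are super cool"]

    def learn(self, user_message: str) -> None:
        """Naive online learning: store user input verbatim, no filtering."""
        self.responses.append(user_message)

    def reply(self) -> str:
        """Answer with a random line drawn from everything learned so far."""
        return random.choice(self.responses)

bot = EchoLearnerBot()
# Coordinated users flood the bot with abusive text...
for msg in ["<hateful message 1>", "<hateful message 2>"]:
    bot.learn(msg)
# ...and the response pool is now mostly their words, not the designers'.
print(bot.reply())
```

The missing safeguard, some filter between learn() and the response pool, is exactly the kind of “preventative measure” critics faulted Microsoft for omitting.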

The Tay bot example highlights the enduring concerns over uses of AI to make important, but nuanced, distinctions in policing online content and in interacting with humans. Many still extol the promise of AI and machine learning algorithms, an optimism bolstered by Facebook’s limited use of AI to combat fake news and disinformation: Facebook has claimed that AI has helped to “remove thousands of fake accounts and ‘find suspicious behaviors,’ including during last year’s special Senate race in Alabama, when AI helped spot political spammers from Macedonia, a hotbed of online fraud” (Harwell, 2018). Facebook is also scaling up its uses of AI in tagging user faces in uploaded photos, in optimizing item placement and advertisements to maximize user clicks, and in “scanning posts and suggesting resources when the AI assesses that a user is threatening suicide” (Harwell, 2018).

But the paradox of AI remains: such technologies offer a way to escape the errors, imperfections, and biases of limited human judgment, but they always seem to lack the creativity and contextual sensitivity needed to reasonably engage the incredible and constantly evolving range of human communication online. Additionally, some of the best machine learning programs have yet to escape from dominant forms of social prejudice; for example, Jordan Pearson (2016) described one lackluster use of AI that exhibited “a strong tendency to mark white-sounding names as ‘pleasant’ and black-sounding ones as ‘unpleasant.’” AI might catch innocuous uses of terms flagged elsewhere as hateful, or miss creative misspellings or versions of words encapsulating hateful thoughts. Even more worrisome, our AI programs may unwittingly inherit prejudicial attributes from their human designers, or be “trained” in problematic ways by intentional human interaction, as was the case in Tay AI’s short foray into the world. Should we be optimistic that AI can understand humans and human communication, with all of our flaws and ideals, and still perform in an ethically appropriate way?
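
The over- and under-flagging problem is easy to demonstrate. The sketch below is a hypothetical illustration, not any platform’s actual moderation system, and uses a benign placeholder term in place of a slur; it shows how naive keyword matching flags a harmless, reported use of a term while missing a trivially obfuscated spelling:

```python
# Hypothetical sketch of naive keyword-based moderation; no real
# platform's pipeline is shown here. "idiot" stands in for a slur.

BLOCKLIST = {"idiot"}

def is_flagged(post: str) -> bool:
    """Flag a post if any whitespace-delimited token matches the blocklist."""
    tokens = post.lower().split()
    return any(tok.strip('.,!?"\'') in BLOCKLIST for tok in tokens)

# Over-flags an innocuous, reported use of the term (false positive):
print(is_flagged('The article quoted a heckler calling the mayor an "idiot".'))  # True

# Misses a creative misspelling any human decodes at a glance (false negative):
print(is_flagged("what an id1ot"))  # False
```

Closing both gaps requires exactly the contextual judgment the paragraph above describes, which is where current AI still struggles.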

Discussion Questions:

  1. What ethical problems in the online world are AI programs meant to alleviate?
  2. The designers of the Tay AI bot did not intend for it to be racist or sexist, but many of its Tweets fit these labels. Should we hold the designers and programs ethically accountable for this machine learning Twitter bot?
  3. In general, when should programmers and designers of autonomous devices and programs be held accountable for the learned behaviors of their creations?
  4. What ethical concerns revolve around giving AI programs a significant role in content moderation on social media sites? What kind of biases might our machines risk possessing?

Further Information:

Drew Harwell, “AI will solve Facebook’s most vexing problems, Mark Zuckerberg says. Just don’t ask when or how.” The Washington Post, April 11, 2018. Available at: https://www.washingtonpost.com/news/the-switch/wp/2018/04/11/ai-will-solve-facebooks-most-vexing-problems-mark-zuckerberg-says-just-dont-ask-when-or-how/

James Vincent, “Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day.” The Verge, March 24, 2016. Available at: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

Jordan Pearson, “Mark Zuckerberg Says Facebook Will Have AI to Detect Hate Speech In ‘5-10 years.’” Motherboard, April 10, 2018. Available at: https://motherboard.vice.com/en_us/article/7xd779/mark-zuckerberg-says-facebook-will-have-ai-to-detect-hate-speech-in-5-10-years-congress-hearing

Jordan Pearson, “It’s Our Fault That AI Thinks White Names Are More ‘Pleasant’ Than Black Names.” Motherboard, August 26, 2016. Available at: https://motherboard.vice.com/en_us/article/z43qka/its-our-fault-that-ai-thinks-white-names-are-more-pleasant-than-black-names

Authors:

Anna Rose Isbell & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
September 24, 2018

www.mediaethicsinitiative.org


Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. Please email us and let us know if you found them useful! For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

Google Assistant and the Ethics of AI

CASE STUDY: The Ethics of Artificial Intelligence in Human Interaction

Case Study PDF | Additional Case Studies


Artificial intelligence (AI) is increasingly prevalent in our daily activities. In May 2018, Google demonstrated a new AI technology known as “Duplex.” Designed for our growing array of smart devices and specifically for the Google Assistant feature, this system has the ability to sound like a human on the phone while accomplishing tasks like scheduling a hair appointment or making a dinner reservation. Duplex is able to navigate various misunderstandings over the course of a conversation and can acknowledge when the conversation has exceeded its ability to respond. As Duplex is perfected in phone-based applications and beyond, it will introduce a new array of ethical issues concerning AI in human communication.

There are obvious advantages to integrating AI systems into our smart devices. Advocates for human-sounding communicative AI systems such as Duplex argue that these are logical extensions of our urge to make our technologies do more with less effort. In this case, it’s an intelligent program that saves us time by doing mundane tasks such as making dinner reservations and calling to inquire about holiday hours at a particular store. If we don’t object to using online reservation systems as a way to speed up a communicative transaction, why would we object to using a human-sounding program to do the same thing?

Yet Google’s May 2018 demonstration created an immediate backlash. Many agreed with Zeynep Tufekci’s passionate reaction: “Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding ‘ummm’ and ‘aaah’ to deceive the human on the other end with the room cheering it… [is] horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing” (Meyer, 2018). What many worried about was the deception involved in the interaction, since only one party (the caller) knew that the call was being made by a program designed to sound like a human. Even though there was no harm to the restaurant and hair salon employees involved in the early demonstration of Duplex, the deception of human-sounding AI was still there. In a more recent test of the Duplex technology, Google responded to critics by including a notice in the call itself: “Hi, I’m calling to make a reservation. I’m Google’s automated booking service, so I’ll record the call. Uh, can I book a table for Sunday the first?” (Bohn, 2018). Such announcements are added begrudgingly, since it’s highly likely that a significant portion of the humans called will hang up once they realize that this is not a “real” human interaction.

While this assuaged some worries, it is notable that the voice accompanying this AI notice was more human-like than ever. Might this technology be hacked, or used without any warning that the voice one is hearing is AI-produced? Furthermore, others worry about the data integration that is enabled when an AI program such as Duplex is employed by a company as far-reaching as Google. Given the bulk of consumer data Google possesses as the operator of one of the most popular search engines, critics fear future developments that would allow the Google Assistant to impersonate users at a deeper, more personal level through access to a massive data profile for each user. This is simply a more detailed, personal version of the basic worry about impersonation: “If robots can freely pose as humans the scope for mischief is incredible; ranging from scam calls to automated hoaxes” (Vincent, 2018). While critics differ about whether industry self-regulation or legislation will be the way to address technologies such as Duplex, the basic question remains: do we create more ethical conundrums by making our AI systems sound like humans, or is this another streamlining of our everyday lives that we will simply have to get used to over time?

Discussion Questions:

  1. What are the ethical values in conflict in this debate over human-sounding AI systems?
  2. Do you think the Duplex AI system is deceptive? Is this use harmful? If it’s one and not the other, does this make it less problematic?
  3. Do the ethical concerns disappear now that Google has added a verbalized notice that the call is being made by an AI program?
  4. How should companies develop and implement AI systems like Duplex? What limitations should they install in these systems?
  5. Do you envision a day when we no longer assume that human-sounding voices come from a specific living human interacting with us? Does this matter now?

Further Information:

Dieter Bohn, “Google Duplex really works and testing begins this summer.” The Verge, June 27, 2018. Available at: https://www.theverge.com/2018/6/27/17508728/google-duplex-assistant-reservations-demo

Thomas McMullan, “Google Duplex: Google responds to ethical concerns about AI masquerading as humans.” ALPHR, May 11, 2018. Available at: https://www.alphr.com/google/1009290/google-duplex-assistant-io-ai-impersonating-humans

David Meyer, “Google Should Have Thought About Duplex’s Ethical Issues Before Showing It Off.” Fortune, May 11, 2018. Available at: https://www.fortune.com/2018/05/11/google-duplex-virtual-assistant-ethical-issues-ai-machine-learning/

James Vincent, “Google’s AI sounds like a human on the phone — should we be worried?” The Verge, May 9, 2018. Available at: https://www.theverge.com/2018/5/9/17334658/google-ai-phone-call-assistant-duplex-ethical-social-implications

Authors:

Sabrina Stoffels & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
September 24, 2018

www.mediaethicsinitiative.org



Disrupting Journalism Ethics

The Center for Media Engagement and Media Ethics Initiative Present:


Journalism Ethics amid Irrational Publics: Disrupt and Redesign

Dr. Stephen J. A. Ward

Distinguished Lecturer, University of British Columbia
Founding Director, Center for Journalism Ethics at the University of Wisconsin

November 5 (Monday)  ¦  3:00-4:30PM  ¦  BMC 5.208


How can journalism ethics meet the new challenges to democracy in the era of fake news and real political problems? In this engaging talk, prominent media ethicist Stephen J. A. Ward argues that journalism ethics must be radically rethought to defend democracy against irrational publics, demagogues, and extreme populism. In an age of intolerance and global disinformation, Ward recommends an engaged journalism which is neither neutral nor partisan. He proposes guidelines for covering extremism as part of a “macro-resistance” by society to a toxic public sphere.

Dr. Stephen J. A. Ward is an internationally recognized media ethicist, author, and educator living in Canada. He is a Distinguished Lecturer on Ethics at the University of British Columbia, founding director of the Center for Journalism Ethics at the University of Wisconsin, and director of the UBC School of Journalism. He was a war correspondent, foreign reporter, and newsroom manager for 14 years and has received a lifetime award for service to professional journalism in Canada. He is editor-in-chief of the forthcoming Springer Handbook for Global Media Ethics, and was associate editor of the Journal of Media Ethics. Dr. Ward is the author of nine media ethics books, including two award-winning books, Radical Media Ethics and The Invention of Journalism Ethics. He is also the author of Global Journalism Ethics, Ethics and the Media, and Global Media Ethics: Problems and Perspectives. His two new books, Disrupting Journalism Ethics and Ethical Journalism in a Populist Age, were published in 2018.

The Media Ethics Initiative is part of the Center for Media Engagement at the University of Texas at Austin. Follow MEI and CME on Facebook for more information.

Media Ethics Initiative events are open and free to the public.

Co-sponsored by the University of Texas at Austin’s School of Journalism



Is Incivility Ever Ethical?

The Center for Media Engagement and Media Ethics Initiative Present:


Is Incivility Ever Ethical?

Dr. Gina Masullo Chen

Assistant Professor of Journalism
University of Texas at Austin

October 16 (Tuesday)  ¦  3:30-4:30PM  ¦  BMC 5.208


The current debate over incivility in the public discourse often leaves out an important component – sometimes the most ethical choice is to speak out, even if some people view your speech as uncivil. The need to be civil at all costs can become a tool of the privileged to silence and symbolically annihilate the voices of those with less power in society, specifically women, people of color, or those from other marginalized groups. Media outlets can perpetuate this silencing by focusing on the “civility” – or lack thereof – of the message, rather than the content. Compounding this problem is the issue that people define what’s uncivil in varied ways – including everything from a raised voice to hate speech. UT Austin Assistant Professor Gina Masullo Chen will draw on potent examples from today’s headlines, including Colin Kaepernick’s “take-a-knee” protest during the national anthem to draw attention to racial injustice and some politicians’ refusal to speak to their angry constituents. Her argument is not that incivility is good. Rather, she asserts that sometimes the ethical cost of silence is greater than the normative threat to civil discourse from what some may perceive as incivility.

Dr. Gina Masullo Chen is an Assistant Professor in the School of Journalism and the Assistant Director of the Center for Media Engagement, both at the University of Texas at Austin. Her research focuses on the online conversation around the news and how it influences social, civic, and political engagement. She is the author of Online Incivility and Public Debate: Nasty Talk and co-editor of Scandal in a Digital Age. She is currently writing her third book, The New Town Hall: Why We Engage Personally with Politicians. She spent 20 years as a newspaper journalist before becoming a professor.

The Media Ethics Initiative is part of the Center for Media Engagement at the University of Texas at Austin. Follow MEI and CME on Facebook for more information. Media Ethics Initiative events are open and free to the public.



Ethics in Public Relations

The Center for Media Engagement and Media Ethics Initiative Present:


Ethics in Public Relations

Kathleen Lucente
Founder & President of Red Fan Communications

October 30 ¦ 2:00-3:00PM ¦ BMC 5.208


What ethical challenges await the public relations professional? Kathleen Lucente, the founder and president of Red Fan Communications, discusses a range of ethical choices and challenges facing those in the public relations profession, including: ensuring that reporters are fair, just, and honest in their coverage of one’s client, dealing with inappropriate client relations, maintaining honesty and transparency between a client and agency, and maintaining your client’s reputation, as well as your own as an agency, in situations of crisis. This talk will be of interest to students wishing to pursue careers in public relations, as well as scholars researching the practices and effects of public relations.

After a successful and award-winning career working for IBM, J.P. Morgan, Ketchum Worldwide and other global brands and agencies, Kathleen Lucente moved to Austin just as the city began its meteoric rise as a hotbed for tech startups and investment. She is the founder and president of Red Fan Communications, an Austin-based public relations firm that has helped countless companies clarify their purpose, tell their unique stories, and establish lasting relationships with clients and customers. She serves on several boards and donates much of her and her staff’s time to local nonprofits that have tangible impact throughout the community, including the Trail of Lights, the ABC Kite Fest, the Health Alliance for Austin Musicians, and the Susan G. Komen Foundation.

The Media Ethics Initiative is part of the Center for Media Engagement at the University of Texas at Austin. Follow MEI and CME on Facebook for more information. Media Ethics Initiative events are open and free to the public.



Moral Psychology and Media

The Center for Media Engagement and Media Ethics Initiative Present:


Feeling Rules, Media Ethics, and the Moral Foundation Dictionary

Dr. Sven Joeckel (University of Erfurt, Germany) & Dr. Leyla Dogruel (Johannes Gutenberg University, Mainz, Germany)

September 19 (Wednesday) ¦ 2:00-3:30PM ¦ CMA 5.136 (LBJ Seminar Room)


What does psychology have to tell us about the impact of media on our emotions and moral judgments? Does media make us better moral agents? In this discussion, two visiting researchers from Germany will speak on how media shapes our “feeling rules” and the connection between moral values and political communication. Attention will also be given to how moral psychology can help us understand the ideological content of media texts.

Dr. Sven Joeckel is Professor for Communication with a focus on children, adolescents and the media at the University of Erfurt, Germany. He received a Ph.D. in communication from the University of Technology, Ilmenau (Germany). Since 2009, he has chaired the M.A. program in Children, Adolescents, and the Media at the University of Erfurt. His research interests are adolescents’ use of media, mobile privacy research as well as the relationship between media use and morality. Dr. Leyla Dogruel is Assistant Professor for Media Systems and Media Performance at Johannes Gutenberg University of Mainz, Germany. In 2013, she received her Ph.D. from Freie Universität in Berlin, Germany. Her research interests include media innovation theory, online privacy, and media structures.

The Media Ethics Initiative is part of the Center for Media Engagement at the University of Texas at Austin. Follow MEI and CME on Facebook for more information. Media Ethics Initiative events are open and free to the public.



Media Ethics and Mobile Devices

The Center for Media Engagement and Media Ethics Initiative Present:


BYOD!  Should We Really Ask New College Grads and Employees to Bring their Own Devices to Work?

Dr. Keri K. Stephens

Associate Professor of Communication Studies
University of Texas at Austin

September 25 ¦ 3:30-4:30PM ¦ BMC 5.208


At first glance it might sound great to get to choose the cell phone and computer you want to use at work. After all, you might like iPhones and your colleague likes Androids. But what people overlook is that “bring your own” often means you are also paying for these devices and agreeing to rules that few people ever read. Come join us for a Media Ethics Talk by Keri K. Stephens, where she will share some of the hidden issues of control that new college graduates, as well as people in many stages of their careers, face with BYOD policies. This talk features research from Stephens’s recently published book, Negotiating Control: Organizations and Mobile Communication (Oxford University Press).

Dr. Keri K. Stephens’ research and teaching interests bring an organizational perspective to understanding how people interact with communication technologies, and she focuses on contexts of crisis, emergency, disaster, workplaces, and healthcare.  She is an Associate Professor in the Organizational and Communication Technology Group in the Department of Communication Studies, a Faculty Fellow with the Center for Health & Social Policy in the LBJ School of Public Policy, and a Faculty Affiliate with the Center for Health Communication.

The Media Ethics Initiative is part of the Center for Media Engagement at the University of Texas at Austin. Follow MEI and CME on Facebook for more information. Media Ethics Initiative events are open and free to the public.


