Media Ethics Initiative


Designing Ethical AI Technologies

The Center for Media Engagement and Media Ethics Initiative Present:


Good Systems: Designing Values-Driven AI Technologies Is Our Grand Challenge

Dr. Kenneth R. Fleischmann

Professor in the School of Information
University of Texas at Austin

September 24, 2019 (Tuesday) | 3:30PM-4:30PM | CMA 5.136 (LBJ Room)


“Technology is neither good nor bad; nor is it neutral.” This is the first law of technology, outlined by historian Melvin Kranzberg in 1985. It means that technology is judged good or bad only through the lens of our own value systems. At the same time, because the people who design technology value some things more than others, their values influence their designs. Michael Crichton’s “Jurassic Park” chaos theorist, Ian Malcolm, notes: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think about if they should.” That’s the question we have to ask now: Should we increasingly automate various aspects of society? How can we ensure that advances in AI are beneficial to humanity, not detrimental? How can we develop technology that makes life better for all of us, not just some? And what unintended consequences are we overlooking or ignoring by developing technology that can be manipulated and misused, from undermining elections to exacerbating racial inequality?

The Inaugural Chair of the Good Systems Grand Challenge, Ken Fleischmann, will present the eight-year research mission of Good Systems, as well as its educational and outreach activities. Specifically, he will discuss the upcoming Good Systems launch events and ways that faculty, researchers, staff, and students can become involved in the Good Systems Grand Challenge.

The Media Ethics Initiative is part of the Center for Media Engagement at the University of Texas at Austin. Follow Media Ethics Initiative and Center for Media Engagement on Facebook for more information. Media Ethics Initiative events are open and free to the public.


 

AI and Online Hate Speech

CASE STUDY: From Tay AI to Automated Content Moderation

Case Study PDF | Additional Case Studies



Screencapture: Twitter.com

Hate speech is a growing problem online. Optimists believe that if we continue to improve the capabilities of our programs, we could push back the tides of hate speech. Facebook CEO Mark Zuckerberg holds this view, testifying to Congress that he is “optimistic that over a five-to-10-year period, we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate in flagging content for our systems, but today we’re just not there on that” (Pearson, 2018). Others are not as hopeful that artificial intelligence (AI) will lead to reductions in bias and hate in online communication.

Worries about AI and machine learning gained traction with the recent introduction—and quick deactivation—of Microsoft’s Tay AI chatbot. Once this program was unleashed upon the Twittersphere, unexpected results emerged: as James Vincent noted, “it took less than 24 hours for Twitter to corrupt an innocent AI chatbot.” As other Twitter users started to tweet serious or half-joking hateful speech to Tay, the program began to engage in copycat behaviors and “learned” how to use the same words and phrases in its novel responses. Over 96,000 tweets later, it was clear that Tay communicated not just copycat utterances but novel statements such as “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism,” “I love feminism now,” and “gender equality = feminism.” Users tweeting “Bruce Jenner” at Tay evoked a wide spectrum of responses, from “caitlyn jenner is a hero & is a stunning, beautiful woman!” to the more worrisome “caitlyn jenner isn’t a real woman yet she won woman of the year” (Vincent, 2016). In short, the technology designed to adapt and learn started to embody “the prejudices of society,” highlighting to some that “big corporations like Microsoft forget to take any preventative measures against these problems” (Vincent, 2016). In the Tay example, AI did nothing to reduce hate speech on social media—it merely reflected “the worst traits of humanity.”

The Tay bot example highlights the enduring concerns over using AI to make important, but nuanced, distinctions in policing online content and in interacting with humans. Many still extol the promise of AI and machine learning algorithms, an optimism bolstered by Facebook’s limited use of AI to combat fake news and disinformation: Facebook has claimed that AI has helped to “remove thousands of fake accounts and ‘find suspicious behaviors,’ including during last year’s special Senate race in Alabama, when AI helped spot political spammers from Macedonia, a hotbed of online fraud” (Harwell, 2018). Facebook is also scaling up its use of AI to tag user faces in uploaded photos, optimize item placement to maximize user clicks, and target advertisements, as well as in “scanning posts and suggesting resources when the AI assesses that a user is threatening suicide” (Harwell, 2018).

But the paradox of AI remains: such technologies offer a way to escape the errors, imperfections, and biases of limited human judgment, yet they seem to lack the creativity and contextual sensitivity needed to reasonably engage the incredible and constantly evolving range of human communication online. Additionally, some of the best machine learning programs have yet to escape dominant forms of social prejudice; for example, Jordan Pearson (2016) described one lackluster use of AI that exhibited “a strong tendency to mark white-sounding names as ‘pleasant’ and black-sounding ones as ‘unpleasant.’” AI might flag innocuous uses of terms deemed hateful elsewhere, or miss creative misspellings and variants of words encapsulating hateful thoughts, as the sketch below illustrates. Even more worrisome, our AI programs may unwittingly inherit prejudicial attributes from their human designers, or be “trained” in problematic ways by intentional human interaction, as was the case in Tay AI’s short foray into the world. Should we be optimistic that AI can understand humans and human communication, with all of our flaws and ideals, and still perform in an ethically appropriate way?
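To make this limitation concrete, here is a minimal, hypothetical Python sketch of keyword-based flagging. The blocklist and example messages are invented for illustration and do not reflect any actual platform’s moderation system; the point is simply that a literal word filter can flag an innocuous message while missing a creatively misspelled hateful one.

# Hypothetical illustration: naive keyword-based content flagging.
# The blocklist and example messages are invented for this sketch only.
FLAGGED_TERMS = {"trash", "vermin"}

def naive_flag(message: str) -> bool:
    """Flag a message if any blocklisted term appears as an exact word."""
    words = message.lower().split()
    return any(term in words for term in FLAGGED_TERMS)

print(naive_flag("remember to take out the trash tonight"))  # True: an innocuous use is wrongly flagged
print(naive_flag("those people are v3rm1n"))                 # False: a misspelled dehumanizing term slips through

Real moderation systems rely on far more sophisticated machine learning models than this, but the underlying tension is the same: literal pattern matching over-flags benign context and under-flags adversarial variants, which is why contextual judgment remains so hard to automate.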

Discussion Questions:

  1. What ethical problems in the online world are AI programs meant to alleviate?
  2. The designers of the Tay AI bot did not intend for it to be racist or sexist, but many of its Tweets fit these labels. Should we hold the designers and programs ethically accountable for this machine learning Twitter bot?
  3. In general, when should programmers and designers of autonomous devices and programs be held accountable for the learned behaviors of their creations?
  4. What ethical concerns revolve around giving AI programs a significant role in content moderation on social media sites? What kind of biases might our machines risk possessing?

Further Information:

Drew Harwell, “AI will solve Facebook’s most vexing problems, Mark Zuckerberg says. Just don’t ask when or how.” The Washington Post, April 11, 2018. Available at: https://www.washingtonpost.com/news/the-switch/wp/2018/04/11/ai-will-solve-facebooks-most-vexing-problems-mark-zuckerberg-says-just-dont-ask-when-or-how/

James Vincent, “Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day.” The Verge, March 24, 2016. Available at: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

Jordan Pearson, “Mark Zuckerberg Says Facebook Will Have AI to Detect Hate Speech In ‘5-10 years.’” Motherboard, April 10, 2018. Available at: https://motherboard.vice.com/en_us/article/7xd779/mark-zuckerberg-says-facebook-will-have-ai-to-detect-hate-speech-in-5-10-years-congress-hearing

Jordan Pearson, “It’s Our Fault That AI Thinks White Names Are More ‘Pleasant’ Than Black Names.” Motherboard, August 26, 2016. Available at: https://motherboard.vice.com/en_us/article/z43qka/its-our-fault-that-ai-thinks-white-names-are-more-pleasant-than-black-names

Authors:

Anna Rose Isbell & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
September 24, 2018

www.mediaethicsinitiative.org


Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. Please email us and let us know if you found them useful! For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

Google Assistant and the Ethics of AI

CASE STUDY: The Ethics of Artificial Intelligence in Human Interaction

Case Study PDF  | Additional Case Studies


Artificial intelligence (AI) is increasingly prevalent in our daily activities. In May 2018, Google demonstrated a new AI technology known as “Duplex.” Designed for our growing array of smart devices and specifically for the Google Assistant feature, this system has the ability to sound like a human on the phone while accomplishing tasks like scheduling a hair appointment or making a dinner reservation. Duplex is able to navigate various misunderstandings over the course of a conversation and to acknowledge when the conversation has exceeded its ability to respond. As Duplex is refined for phone-based applications and beyond, it will introduce a new array of ethical issues into questions concerning AI in human communication.

There are obvious advantages to integrating AI systems into our smart devices. Advocates for human-sounding communicative AI systems such as Duplex argue that these are logical extensions of our urge to make our technologies do more with less effort. In this case, it’s an intelligent program that saves us time by handling mundane tasks such as making dinner reservations or calling to ask about a store’s holiday hours. If we don’t object to using online reservation systems as a way to speed up a communicative transaction, why would we object to using a human-sounding program to do the same thing?

Yet Google’s May 2018 demonstration created an immediate backlash. Many agreed with Zeynep Tufekci’s passionate reaction: “Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding ‘ummm’ and ‘aaah’ to deceive the human on the other end with the room cheering it… [is] horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing” (Meyer, 2018). What many worried about was the deception involved in the interaction, since only one party (the caller) knew that the call was being made by a program designed to sound like a human. Even though there was no harm to the restaurant and hair salon employees involved in the early demonstration of Duplex, the deception of human-sounding AI was still there. In a more recent test of the Duplex technology, Google responded to critics by including a notice in the call itself: “Hi, I’m calling to make a reservation. I’m Google’s automated booking service, so I’ll record the call. Uh, can I book a table for Sunday the first?” (Bohn, 2018). Such announcements are added begrudgingly, since it’s highly likely that a significant portion of the humans called will hang up once they realize that this is not a “real” human interaction.

While this assuaged some worries, it is notable that the voice delivering this notice sounded more human-like than ever. Might this technology be hacked, or used without any warning that the voice one is hearing is AI-produced? Furthermore, others worry about the data integration that becomes possible when an AI program such as Duplex is employed by a company as far-reaching as Google. Given the wealth of consumer data Google possesses as the operator of one of the most popular search engines, critics fear future developments that would allow the Google Assistant to impersonate users at a deeper, more personal level by drawing on a massive data profile for each user. This is simply a more detailed, personal version of the basic worry about impersonation: “If robots can freely pose as humans the scope for mischief is incredible; ranging from scam calls to automated hoaxes” (Vincent, 2018). While critics differ about whether industry self-regulation or legislation is the best way to address technologies such as Duplex, the basic question remains: do we create more ethical conundrums by making our AI systems sound like humans, or is this another streamlining of our everyday lives that we will simply have to get used to over time?

Discussion Questions:

  1. What are the ethical values in conflict in this debate over human-sounding AI systems?
  2. Do you think the Duplex AI system is deceptive? Is this use harmful? If it’s one and not the other, does this make it less problematic?
  3. Do the ethical concerns disappear now that Google has added a verbalized notice that the call is being made by an AI program?
  4. How should companies develop and implement AI systems like Duplex? What limitations should they build into these systems?
  5. Do you envision a day when we no longer assume that human-sounding voices come from a specific living human interacting with us? Does this matter now?

Further Information:

Bohn, Dieter. “Google Duplex really works and testing begins this summer.” The Verge, 27 June 2018. Available at: https://www.theverge.com/2018/6/27/17508728/google-duplex-assistant-reservations-demo

McMullan, Thomas. “Google Duplex: Google responds to ethical concerns about AI masquerading as humans.” ALPHR, 11 May 2018. Available at: https://www.alphr.com/google/1009290/google-duplex-assistant-io-ai-impersonating-humans

Meyer, David. “Google Should Have Thought About Duplex’s Ethical Issues Before Showing It Off.” Fortune, 11 May 2018. Available at:  https://www.fortune.com/2018/05/11/google-duplex-virtual-assistant-ethical-issues-ai-machine-learning/

Vincent, James. “Google’s AI sounds like a human on the phone — should we be worried?” The Verge, 9 May 2018. Available at: https://www.theverge.com/2018/5/9/17334658/google-ai-phone-call-assistant-duplex-ethical-social-implications

Authors:

Sabrina Stoffels & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
September 24, 2018

www.mediaethicsinitiative.org


Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. Please email us and let us know if you found them useful! For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

Robots, Algorithms, and Digital Ethics

The Media Ethics Initiative Presents:


How to Survive the Robot Apocalypse

Dr. David J. Gunkel

Distinguished Teaching Professor | Department of Communication
Northern Illinois University

April 3, 2018 | Moody College of Communication | University of Texas at Austin



Dr. David J. Gunkel is an award-winning educator, scholar and author, specializing in the study of information and communication technology with a focus on ethics. Formally educated in philosophy and media studies, his teaching and research synthesize the hype of high-technology with the rigor and insight of contemporary critical analysis. He is the author of over 50 scholarly journal articles and book chapters and has published 7 books, including The Machine Question: Critical Perspectives on AI, Robots, and Ethics (MIT Press), Of Remixology: Ethics and Aesthetics after Remix (MIT Press), Heidegger and the Media (Polity), and Robot Rights (MIT Press). He is the managing editor and co-founder of the International Journal of Žižek Studies and co-editor of the Indiana University Press series in Digital Game Studies. He currently holds the position of Professor in the Department of Communication at Northern Illinois University, and his teaching has been recognized with numerous awards.


Robots, Algorithms, and Digital Ethics

The Media Ethics Initiative Presents:

How to Survive the Robot Apocalypse


Photo: Media Ethics Initiative

 

Dr. David J. Gunkel

Distinguished Teaching Professor
Department of Communication
Northern Illinois University

April 3 — 2:00-3:30PM — BMC 5.208

[Video of talk here]

 

Whether we recognize it or not, we are in the midst of a robot invasion. The machines are now everywhere and doing virtually everything. We chat with them online. We play with them in digital games. We collaborate with them at work. And we rely on their capabilities to help us manage all aspects of our increasingly data-rich, digital lives. As these increasingly capable devices come to occupy influential positions in contemporary culture—positions where they are not just tools or instruments of human action but social actors in their own right—we will need to ask ourselves some intriguing but rather difficult questions: At what point might a robot, an algorithm, or other autonomous system be held responsible for the decisions it makes or the actions it deploys? When, in other words, would it make sense to say “It’s the computer’s fault”? Likewise, at what point might we have to seriously consider extending something like rights—civil, moral, or legal standing—to these socially active devices? When, in other words, would it no longer be considered nonsense to suggest something like “the rights of robots”? In this engaging talk, David Gunkel will demonstrate why it not only makes sense to talk about these things but also why avoiding this subject could have significant social consequences.

Dr. David J. Gunkel is an award-winning educator, scholar and author, specializing in the study of information and communication technology with a focus on ethics. Formally educated in philosophy and media studies, his teaching and research synthesize the hype of high-technology with the rigor and insight of contemporary critical analysis. He is the author of over 50 scholarly journal articles and book chapters and has published 7 books. He is the managing editor and co-founder of the International Journal of Žižek Studies and co-editor of the Indiana University Press series in Digital Game Studies. He currently holds the position of Professor in the Department of Communication at Northern Illinois University, and his teaching has been recognized with numerous awards.

Free and open to the UT community and general public –  Follow us on Facebook

