Media Ethics Initiative


Tag Archives: big data

Designing Ethical AI Technologies

The Center for Media Engagement and the Media Ethics Initiative Present:


Good Systems: Designing Values-Driven AI Technologies Is Our Grand Challenge

Dr. Kenneth R. Fleischmann

Professor in the School of Information
University of Texas at Austin

September 24, 2019 (Tuesday) | 3:30PM-4:30PM | CMA 5.136 (LBJ Room)


“Technology is neither good nor bad; nor is it neutral.” This is the first law of technology, outlined by historian Melvin Kranzberg in 1985. It means that technology is good or bad only insofar as we perceive it to be so through our own value systems. At the same time, because the people who design technology value some things more than others, their values influence their designs. In Michael Crichton’s “Jurassic Park,” the chaos theorist Ian Malcolm observes: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” That is the question we have to ask now: Should we increasingly automate various aspects of society? How can we ensure that advances in AI are beneficial to humanity, not detrimental? How can we develop technology that makes life better for all of us, not just some? And what unintended consequences are we overlooking or ignoring by developing technology that can be manipulated and misused, from undermining elections to exacerbating racial inequality?

The Inaugural Chair of the Good Systems Grand Challenge, Ken Fleischmann, will present the eight-year research mission of Good Systems, as well as its educational and outreach activities. Specifically, he will discuss the upcoming Good Systems launch events and the ways that faculty, researchers, staff, and students can become involved in the Good Systems Grand Challenge.

The Media Ethics Initiative is part of the Center for Media Engagement at the University of Texas at Austin. Follow the Media Ethics Initiative and the Center for Media Engagement on Facebook for more information. Media Ethics Initiative events are open and free to the public.



Swipe Right to Expose

CASE STUDY: Journalism, Privacy, and Digital Information

Case Study PDF | Additional Case Studies


In a world where LGBTQ people still often lack full protection and equal rights, being public about one’s sexuality can be a challenge. Some have turned to dating apps such as Grindr, Tinder, and Bumble, which offer a more secure way for people to chat and potentially meet up outside of cyberspace. On such apps, however, one’s dating profile can often be seen by anyone else using the app, illustrating how these services blur the line between private and public information.

Nico Hines, a straight, married reporter for the news site The Daily Beast, decided to report on the use of dating apps in the Olympic Village during the 2016 Rio Olympics. Relying upon the public and revealing nature of profiles—at least to potential dates—Hines made profiles on different dating apps and used them to interact with a number of athletes. Most of his interactions took place on Grindr, a dating app for gay men. The app works through geolocation, matching users with others who are physically nearby. Profiles include information such as height, ethnicity, and age, which can often be used to identify a person even if a full name isn’t given. He eventually wrote up his experiences in the article “The Other Olympic Sport in Rio: Swiping.”

To preserve the anonymity of the individuals with whom he was interacting, Hines did not use specific athletes’ names in his story. He did, however, reveal details about those seeking dates, including their physical features, the sport they competed in, and their home country. Readers and critics found that it was relatively easy to identify which athletes he was describing from the information he provided. Since many of these athletes were not openly LGBTQ, critics argued that he was “potentially outing” them by describing them in the course of his story. Amplifying this concern was the fact that in some of the home countries of the men involved, it was dangerous or even illegal to be openly gay.
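The mechanics of this kind of re-identification are simple. As a minimal sketch in Python using invented data (the roster, names, and attributes below are hypothetical, not drawn from the case), combining just a home country, a sport, and one physical detail can shrink a pool of thousands to a single candidate:

```python
# Hypothetical roster; a real Olympic Village houses roughly 11,000 athletes.
athletes = [
    {"name": "Athlete A", "country": "X", "sport": "rowing",   "height_cm": 190},
    {"name": "Athlete B", "country": "X", "sport": "handball", "height_cm": 190},
    {"name": "Athlete C", "country": "Y", "sport": "rowing",   "height_cm": 178},
    # ... thousands more entries
]

# "Anonymous" details of the sort published in the story: no name given.
published = {"country": "X", "sport": "rowing", "height_cm": 190}

# Filter the roster on the published attributes alone.
matches = [a for a in athletes
           if all(a[key] == value for key, value in published.items())]
print([a["name"] for a in matches])  # -> ['Athlete A']: a single match
```

Because national Olympic rosters are public, each added attribute cuts the candidate pool sharply, which is why withholding names alone did not anonymize the story’s subjects.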

In his defense, some pointed out that Hines didn’t intend to out or harm specific vulnerable individuals in the course of his story about the social lives of Olympic athletes. His published account didn’t include the names of any male athletes he interacted with on Grindr, and he named only some of the straight women he found on the Tinder app. The Daily Beast’s editor-in-chief, John Avlon, stated that Hines hadn’t meant to focus mainly on the Grindr app, but since he “had many more responses on Grindr than apps that cater mostly to straight people,” Hines decided to write about that app. When Hines interacted with the athletes on the various dating apps, he didn’t lie about who he was and, as Avlon noted, Hines “immediately admitted that he was a journalist whenever he was asked who he was.”

The controversy eventually consumed Hines’ published story. After the wave of criticism crested, The Daily Beast first removed the names and descriptions of the athletes from the article. But by the end of the day, the news site had removed the article completely, with Avlon replacing it with an editor’s note that concluded: “Our initial reaction was that the entire removal of the piece was not necessary. We were wrong. We’re sorry.” Regardless of the decisions reached by this news site, difficult questions remain about what kinds of stories—and methods—are ethically permissible in the brave new world of digital journalism.

Discussion Questions: 

  1. What are the ethical values or interests at stake in the debate over the story authored by Hines?
  2. Hines tried to preserve the anonymity of those he was writing about. How could he have done more for the subjects of his story, while still doing justice to the story he wanted to report on?
  3. There are strong reasons why journalists should ethically and legally be allowed to use publicly available information in their stories. Is the information shared through dating apps public information?
  4. How does Hines’ use of dating profile information differ, if at all, from long-standing practices of investigative or undercover journalism?

Further Information:

Frank Pallotta & Rob McLean, “Daily Beast removes Olympics Grindr article after backlash.” CNN, August 12, 2016. Available at: http://money.cnn.com/2016/08/12/media/daily-beast-olympics-article-removal/index.html

John Avlon, “A Note from the Editors.” The Daily Beast, August 11, 2016. Available at: https://www.thedailybeast.com/a-note-from-the-editors

Curtis M. Wong, “Straight Writer Blasted For ‘Outing’ Olympians In Daily Beast Piece.” Huffington Post, August 11, 2016. Available at: https://www.huffingtonpost.com/entry/daily-beast-grindr-article_us_57aca088e4b0db3be07d6581

Authors:

Bailey Sebastian & Scott R. Stroud, Ph.D.
Media Ethics Initiative
University of Texas at Austin
April 24, 2018

www.mediaethicsinitiative.org


Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

Biometrics, Data, and Privacy

CASE STUDY: Facing the Challenges of Next Generation Identification: Biometrics, Data, and Privacy

Case Study PDF | Additional Case Studies


In 2011, the FBI signed a contract with the military developer Lockheed Martin to launch a pilot program called “Next Generation Identification” (NGI). This program was designed to use biometric data to assist the investigations of local and federal law enforcement agencies as part of anti-terrorism, anti-fraud, and national security programs. By late 2014, the program’s facial recognition sector had become fully operational and was used regularly by the FBI and other law enforcement agencies to help identify persons of interest in open investigations. The program compares images from surveillance cameras or other imaging devices against the world’s largest database of photographs, finding similarities in facial details in order to provide an identification. When the program was first introduced, the FBI compiled the database from the biometric data of known and convicted criminals. However, the database has since been expanded through a program run by the FBI’s Criminal Justice Information Services (CJIS) division called FACE (Facial Analysis, Comparison, and Evaluation), and it now includes driver’s license photos of individuals who have been issued a photo ID by participating states.
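In outline, this kind of one-to-many search reduces each photo to a numerical “faceprint” and ranks every enrolled photo by its similarity to the probe image. The Python sketch below is illustrative only; the FBI’s actual pipeline is not public, and the embedding function, threshold, and data structures here are invented for the example:

```python
import numpy as np

def embed(face_pixels: np.ndarray) -> np.ndarray:
    # Stand-in for a trained face-embedding model: a fixed random
    # projection of pixel values down to a 128-dimensional unit vector.
    rng = np.random.default_rng(0)  # fixed seed: the same projection every call
    projection = rng.standard_normal((128, face_pixels.size))
    vector = projection @ face_pixels.ravel()
    return vector / np.linalg.norm(vector)

def identify(probe_image: np.ndarray, gallery: dict, threshold: float = 0.9):
    """One-to-many search: score every enrolled photo by cosine
    similarity to the probe and return candidates above the threshold."""
    probe = embed(probe_image)
    scores = {pid: float(embed(img) @ probe) for pid, img in gallery.items()}
    return sorted(((pid, s) for pid, s in scores.items() if s >= threshold),
                  key=lambda pair: -pair[1])
```

In a production system the embedding would come from a trained neural network and the gallery would be an index of tens of millions of vectors; the ethical debate in this case turns less on the matching arithmetic than on whose photos populate the gallery and who sets the threshold for declaring a “match.”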

Objections to the inclusion of the driver’s license photos of law-abiding citizens have been raised by many organizations, including the Government Accountability Office, the Electronic Frontier Foundation (EFF), and the Electronic Privacy Information Center. The controversy stems primarily from the perceived lack of disclosure by the FBI about the specifics of the NGI and FACE programs, and from citizens’ inability to consent to, or opt out of, the use of their images. Jennifer Lynch, Senior Staff Attorney for the EFF, has raised concerns about the implications of such technology, as well as its legal validity. She notes that “data that’s being collected for one purpose is being used for a very different purpose.” Lynch argues that because of facial recognition technologies, “Americans cannot easily take precautions against the covert, remote, and mass capture of their images,” especially when they are never made aware that such capture and retention is taking place. These organizations argue that the program runs against federal law (the Privacy Act of 1974), under which images of faces are protected personal information, and that it alters “the traditional presumption of innocence in criminal cases by placing more of a burden on the defendant to show he is not who the system identifies him to be.”

Those who are not worried about the NGI program or the inclusion of law-abiding citizens’ photographs in the database say that biometric data, including a person’s face, is no different from fingerprints, and that this information is crucial for national security. Such information has gained renewed importance in light of recent terror attacks, both domestic and foreign. Stephen Morris, assistant director of the CJIS division, states that “new high-tech tools like facial recognition are in the best interests of national security” and argues that they aid law enforcement officials in identifying and capturing terrorists and other criminals. The FBI also “maintains that it searches only against criminal databases” and that requests can be made to include other outside databases, such as various state and federal databases (including state driver’s license photo databases and the Department of Defense passport photo database), if and when the FBI deems it necessary for a specific criminal investigation. This highlights the fact that facial recognition technology cannot be considered independently of the databases it uses in its search for more information about imaged persons of interest. Against those who call for more human oversight of the database requests integral to facial recognition technology, Morris argues that those who think that “collecting biometrics is an invasion of folks’ privacy” should instead be concerned with how best to “identify…the right person.” How will our society face the conflicting interests at stake in the collection and use of biometric data for public safety and national security?

Discussion Questions: 

  1. What are the ethical values or interests at stake in the debate over using photo databases in the NGI program?
  2. Do you believe the government should be able to use databases not intended for biometric identification purposes? If so, what limits would you place on these uses?
  3. As facial recognition technology gets more advanced, what sort of ethical limitations should we place on its use by government or private entities?
  4. What would the ethical challenges be to using extremely advanced facial recognition technology in situations not concerning national security—such as online image searches?

Further Information:

Rebecca Boyle, “Anti-Fraud Facial Recognition System Revokes the Wrong Person’s License.” Popular Science, July 18, 2011. Available at: www.popsci.com/gadgets/article/2011-07/anti-fraud-facial-recognition-system-generates-false-positives-revoking-wrong-persons-license

Eric Markowitz, “The FBI Now Has the Largest Biometric Database in the World. Will It Lead to More Surveillance?” International Business Times, April 23, 2016. Available at: www.ibtimes.com/fbi-now-has-largest-biometric-database-world-will-it-lead-more-surveillance-2345062

Sam Thielman, “FBI Using Vast Public Photo Data and Iffy Facial Recognition Tech to Find Criminals.” The Guardian, June 15, 2016. Available at: www.theguardian.com/us-news/2016/jun/15/fbi-facial-recognition-software-photo-database-privacy

Authors:

Jason Head & Scott R. Stroud, Ph.D.
Media Ethics Initiative
University of Texas at Austin
April 17, 2018

www.mediaethicsinitiative.org


Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the University of Texas at Austin. They can be used in unmodified PDF form without permission for classroom use. For use in publications such as textbooks, readers, and other works, please contact the Media Ethics Initiative.

danah boyd on Digital Ethics

The Media Ethics Initiative Presents:


Hacking Big Data:

Discovering Vulnerabilities in a Sociotechnical Society

Dr. danah boyd

Principal Researcher at Microsoft Research | Founder of Data & Society Institute

March 6, 2018 | Moody College of Communication | University of Texas at Austin




Dr. danah boyd is a Principal Researcher at Microsoft Research, the founder and president of Data & Society, and a Visiting Professor at New York University. Her research is focused on addressing social and cultural inequities by understanding the relationship between technology and society. Her most recent books – “It’s Complicated: The Social Lives of Networked Teens” and “Participatory Culture in a Networked Age” – examine the intersection of everyday practices and social media. She is a 2011 Young Global Leader of the World Economic Forum, a member of the Council on Foreign Relations, a Director of both Crisis Text Line and the Social Science Research Council, and a Trustee of the National Museum of the American Indian. She received a bachelor’s degree in computer science from Brown University, a master’s degree from the MIT Media Lab, and a Ph.D. in Information from the University of California, Berkeley. This event was co-sponsored by the Global Media Industry Speaker Series.

Follow the Media Ethics Initiative on Facebook and Twitter (@EthicsOfMedia)


Robots, Algorithms, and Digital Ethics

The Media Ethics Initiative Presents:

How to Survive the Robot Apocalypse

Photo: Media Ethics Initiative


Dr. David J. Gunkel

Distinguished Teaching Professor
Department of Communication
Northern Illinois University

April 3 — 2:00-3:30PM — BMC 5.208

[Video of talk here]


Whether we recognize it or not, we are in the midst of a robot invasion. The machines are now everywhere and doing virtually everything. We chat with them online. We play with them in digital games. We collaborate with them at work. And we rely on their capabilities to help us manage all aspects of our increasingly data-rich, digital lives. As these increasingly capable devices come to occupy influential positions in contemporary culture—positions where they are not just tools or instruments of human action but social actors in their own right—we will need to ask ourselves some intriguing but rather difficult questions: At what point might a robot, an algorithm, or other autonomous system be held responsible for the decisions it makes or the actions it takes? When, in other words, would it make sense to say, “It’s the computer’s fault”? Likewise, at what point might we have to seriously consider extending something like rights—civil, moral, or legal standing—to these socially active devices? When, in other words, would it no longer be considered nonsense to speak of “the rights of robots”? In this engaging talk, David Gunkel will demonstrate why it not only makes sense to talk about these things but also why avoiding this subject could have significant social consequences.

Dr. David J. Gunkel is an award-winning educator, scholar, and author, specializing in the study of information and communication technology with a focus on ethics. Formally educated in philosophy and media studies, his teaching and research synthesize the hype of high technology with the rigor and insight of contemporary critical analysis. He is the author of over 50 scholarly journal articles and book chapters and has published 7 books. He is the managing editor and co-founder of the International Journal of Žižek Studies and co-editor of the Indiana University Press series in Digital Game Studies. He is currently a Professor in the Department of Communication at Northern Illinois University, and his teaching has been recognized with numerous awards.

Free and open to the UT community and general public – Follow us on Facebook


Hacking Big Data

The Media Ethics Initiative Presents:


Hacking Big Data: Discovering Vulnerabilities in a Sociotechnical Society

Dr. danah boyd

Principal Researcher at Microsoft Research and the Founder of Data & Society

March 6, 2018 — 5:00-6:30PM — BMC 1.202 — [Video of talk here]



Data-driven and algorithmic systems increasingly underpin many decision-making processes, shaping where law enforcement officers are stationed and what news you are shown on social media. The design of these systems is inscribed with organizational and cultural values. Often, these systems depend on the behavior of everyday people, who may not act as expected. Meanwhile, adversarial actors seek to manipulate the data upon which these systems are built for personal, political, and economic reasons. In this talk, danah will unpack some of the unique cultural challenges presented by “big data” and machine learning, raising critical questions about fairness and accountability. She will describe how those who manipulate media for lulz are discovering the attack surfaces of new technical systems, and how their exploits may undermine many aspects of society that we hold dear. Above all, she will argue that we need to develop more sophisticated ways of thinking about technology before jumping to hype and fear.
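The data manipulation the abstract alludes to is often described as “data poisoning.” As a toy Python sketch (invented data, not an example from the talk), a handful of mislabeled points injected into a classifier’s training set can flip its prediction for a targeted input:

```python
import numpy as np

def nearest_label(x, train_X, train_y):
    # Minimal 1-nearest-neighbor "classifier": the label of the
    # training point closest to x.
    return train_y[np.argmin(np.linalg.norm(train_X - x, axis=1))]

rng = np.random.default_rng(1)
# Clean training data: class 0 clustered near (0, 0), class 1 near (3, 3).
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(3.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

probe = np.array([0.1, 0.0])       # clearly sits in class 0's cluster
print(nearest_label(probe, X, y))  # -> 0

# Poisoning: an adversary slips three mislabeled points in right
# next to the targeted input.
X_bad = np.vstack([X, probe + rng.normal(0.0, 0.01, (3, 2))])
y_bad = np.append(y, [1, 1, 1])
print(nearest_label(probe, X_bad, y_bad))  # -> now almost certainly 1
```

Real-world manipulation is subtler, working through coordinated posting, fake engagement, or seeded search terms, but the structural weakness is the same: systems trust the data they are fed.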

Dr. danah boyd is a Principal Researcher at Microsoft Research, the founder and president of Data & Society, and a Visiting Professor at New York University. Her research is focused on addressing social and cultural inequities by understanding the relationship between technology and society. Her most recent books – “It’s Complicated: The Social Lives of Networked Teens” and “Participatory Culture in a Networked Age” – examine the intersection of everyday practices and social media. She is a 2011 Young Global Leader of the World Economic Forum, a member of the Council on Foreign Relations, a Director of both Crisis Text Line and the Social Science Research Council, and a Trustee of the National Museum of the American Indian. She received a bachelor’s degree in computer science from Brown University, a master’s degree from the MIT Media Lab, and a Ph.D. in Information from the University of California, Berkeley.


Co-sponsored by the Global Media Industry Speaker Series

Free and open to all — Follow us on Facebook

