Media Ethics Initiative


Can Artificial Intelligence Reprogram the Newsroom?

CASE STUDY: Trust, Transparency, and Ethics in Automated Journalism


Could a computer program write engaging news stories? In a recent Technology Trends and Predictions report from Reuters, 78% of the 200 digital leaders, editors, and CEOs surveyed said investing in artificial intelligence (AI) technologies would help secure the future of journalism (Newman, 2018). Exploring these new methods of reporting, however, has introduced a wide array of unforeseen ethical concerns for those already struggling to understand the complex dynamics between human journalists and computational work. Implementing automated storytelling in the newsroom confronts journalists with questions of how to preserve accuracy and fairness in their reporting, and transparency with the audiences they serve.


Artificial intelligence in the newsroom has progressed from an idea to a reality. In 1998, computer scientist Sung-Min Lee predicted the use of artificial intelligence in the newsroom, whereby “robot agents” would work alongside, or sometimes in place of, human journalists (Latar, 2015). In 2010, Narrative Science became the first commercial venture to use artificial intelligence to turn data into narrative text. Automated Insights and others would follow Narrative Science in bringing Lee’s “robot agent” to life in the newsroom through automated storytelling. While today’s newsrooms use AI to streamline a variety of processes, from tracking breaking news, collecting and interpreting data, and fact-checking online content to creating chatbots that suggest personalized content to users, the ability to automatically generate text and video stories has encouraged an industry-wide shift toward automated journalism, or “the process of using software or algorithms to automatically generate news stories without human intervention” (Graefe, 2016). Forbes, the New York Times, the Washington Post, ProPublica, and Bloomberg are just some of today’s newsrooms that use AI in their news reporting. The Heliograf, the Washington Post’s “in-house automated storytelling technology,” is just one of many examples of how newsrooms are utilizing artificial intelligence to expand their coverage in beats like sports and finance that rely heavily on structured data, “allowing journalists to focus on in-depth reporting” (WashPostPR, 2017).

Artificial intelligence has the potential to make journalism better both for those in the newsroom and for those at the newsstand. Journalism think-tank Polis revealed in its 2019 Journalism AI Report that the main motivation for newsrooms to use artificial intelligence was to “help the public cope with a world of news overload and misinformation and to connect them in a convenient way to credible content that is relevant, useful and stimulating for their lives” (Beckett, 2019). Through automation, an enormous amount of news coverage can now be produced at a speed that was once unimaginable. The journalism industry’s AI pioneer, the Associated Press, began using Automated Insights’ Wordsmith program in 2014 to automatically generate corporate earnings reports, increasing its coverage from just 300 to 4,000 companies in one year alone (Hall, 2018).

With an estimated 20% of the time once spent on manual tasks now freed up by automation (Graefe, 2016), automated storytelling may also allow journalists to focus on areas that require more detailed work, such as investigative or explanatory journalism. In this way, automation in the reporting process may provide much-needed assistance for newsrooms looking to expand their coverage amidst limited funding and small staffs (Grieco, 2019). Although these automated stories are said to have “far fewer errors” than human-written news stories, it is important to note that AI’s abilities are confined to the limitations set by its coders. As a direct product of human creators, even robots are prone to mistakes, and these inaccuracies can be disastrous. When an error in United Bots’ sports reporting algorithm changed 1-0 losses to 10-1, misleading headlines of a “humiliating loss for football club” were generated before being corrected by editors (Granger, 2019). Despite the speed and skill that automation brings to the reporting process, the need for human oversight remains. Automated storytelling cannot analyze society, initiate questioning of implied positions and judgments, or provide deep understanding of complex issues. As Latar stated in 2015, “no robot journalist can become a guardian of democracy and human rights.”

Others might see hope in AI’s march into newsrooms. Amidst concerns of fake news, misinformation, and polarization online, many Americans struggle to trust a news ecosystem they perceive as deeply biased (Jones, 2018). Might AI hold a way to restore this missing trust? Founded in 2015, the tech start-up Knowhere News believes its AI could actually eliminate the bias that many connect to this lack of trust. Once a topic begins trending on the web, Knowhere News’ AI system collects data from thousands of news stories of various leanings and perspectives to create an “impartial” news article in under 60 seconds (DeGeurin, 2018). The aggregated and automated nature of this story formation holds out the hope of AI journalism being free of the prejudices, limitations, or interests that may unduly affect a specific newsroom or reporter.

Even though AI may seem immune to certain sorts of conscious or unconscious bias affecting humans, critics argue that automated storytelling can still be subject to algorithmic bias. An unethical, biased algorithm was one of the top concerns relayed by respondents in the Polis report previously mentioned, as “bad use of data could lead to editorial mistakes such as inaccuracy or distortion and even discrimination against certain social groups or views” (Beckett, 2019). Bias seeps into algorithms in two main ways, according to the Harvard Business Review. First, algorithms can be trained to be biased through the biases of their creators. For instance, Amazon’s hiring system was trained to favor words like “executed” and “captured,” which were later found to be used more frequently by male applicants. The result was that Amazon’s AI unfairly favored some applicants and excluded others on the basis of sex. Second, a lack of diversity and limited representation within an algorithm’s training data, which teaches the system how to function, often results in faulty functioning in future applications. Apple’s facial recognition system has been criticized for failing to recognize darker skin tones once implemented in technologies used by large populations, because of a supposed overrepresentation of white faces in the initial training data (Manyika, Silberg, & Presten, 2019).

Clearly, these AI agents do not engage in the same sort of introspection in their decision-making processes that humans do. This lack of understanding can be particularly challenging when collecting and interpreting data, and can ultimately lead to biased reporting. At Bloomberg, for example, journalists had to train the newsroom’s automated company earnings reporting system, Cyborg, to avoid misleading data from companies that strategically “choose figures in an effort to garner a more favourable portrayal than the numbers warrant” (Peiser, 2019). To produce complete and truthful automated accounts, journalists must work to ensure that algorithms are feeding these narrative storytelling systems clean and accurate data.

Developments in AI technologies continue to expand at “a speed and complexity which makes it hard for politicians, regulators, and academics to keep up,” and this lack of familiarity can invoke great concern among both journalists and news consumers (Newman, 2018). According to Polis, 60% of journalists from 71 news organizations across 32 countries currently using AI in their newsrooms reported concerns about how AI would impact the nature of their work (Beckett, 2019). Uncertain of how to introduce these new technologies to the common consumer, many newsrooms have failed to attribute their automated stories to a clear author or program. This raises concerns regarding the responsibility of journalists to maintain transparency with readers. Should individuals consuming these AI stories have a right to know which stories were automatically generated and which were not? Some media ethicists argue that audiences should be informed of when bots are being used and where to turn for more information; however, author bylines should be held by those with “editorial authority,” a longstanding tool for accountability and follow-up with curious readers (McBride & Prasad, 2016).

Whether automated journalism will allow modern newsrooms to connect with readers in greater numbers and with higher quality is still an unfinished story. It is certain, however, that AI will take an increasingly prominent place in the newsrooms of the future. How can innovative newsrooms utilize these advanced programs to produce high-quality journalistic work that better serves their audiences while upholding the standards of journalistic practice that assumed a purely human form in the past?

Discussion Questions:

  1. What is the ethical role of journalism in society? Are human writers essential to that role, or could sophisticated programs fulfill it?
  2. How might the increased volume and speed of automated journalism’s news coverage impact the reporting process?
  3. What are the ethical responsibilities of journalists and editors if automated storytelling is used in their newsroom?
  4. Are AI programs more or less prone to error and bias, over the long run, compared to human journalists?
  5. Who should be held responsible when an algorithm produces stories that are thought to be discriminatory by some readers? How should journalists and editors address these criticisms?
  6. How transparent should newsrooms be in disclosing their AI use with readers?

Further Information:

Beckett, C. (2019, November). “New Powers, New Responsibilities: A Global Survey of Journalism and Artificial Intelligence.” The London School of Economics and Political Science. Available at:

DeGeurin, M. (2018, April 4). “A Startup Media Site Says AI Can Take Bias Out of News.” Vice. Available at:

Graefe, A. (2016, January 7). “Guide to Automated Journalism.” Columbia Journalism Review. Available at:

Granger, J. (2019, March 28). “What Happens When Bots Become Editor?” Available at:

Grieco, E. (2019, July 9). “U.S. Newsroom Employment Has Dropped by a Quarter Since 2008, with Greatest Decline at Newspapers.” Pew Research Center. Available at:

Jones, J. (2018, June 20). “Americans: Much Misinformation, Bias, Inaccuracy in News.” Gallup. Available at:

Latar, N. (2015, January). “The Robot Journalist in the Age of Social Physics: The End of Human Journalism?” The New World of Transitioned Media: Digital Realignment and Industry Transformation. Available at:

Manyika, J., Silberg, J., & Presten, B. (2019, October 25). “What Do We Do About the Biases in AI?” Harvard Business Review. Available at:

McBride, K., & Prasad, S. (2016, August 11). “Ask the Ethicist: Should Bots Get a Byline?” Poynter. Available at:

Newman, N. (2018). “Journalism, Media and Technology Trends and Predictions 2018.” Reuters Institute for the Study of Journalism & Google. Available at:

Peiser, J. (2019, February 5). “The Rise of the Robot Reporter.” The New York Times. Available at:

WashPostPR. (2017, September 1). “The Washington Post Leverages Automated Storytelling to Cover High School Football.” The Washington Post. Available at:


Chloe Young  & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
April 27, 2020

Image: The People Speak! / CC BY-NC 2.0

This case study is supported by funding from the John S. and James L. Knight Foundation. Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the Center for Media Engagement. They can be used in unmodified PDF form for classroom or educational settings. For use in publications such as textbooks, readers, and other works, please contact the Center for Media Engagement at sstroud (at)


Are there Bigger Fish to Fry in the Struggle for Animal Rights?

CASE STUDY: PETA’s Campaign against Anti-Animal Language




Idioms are everywhere in the English language. To most, they are playful and innocuous additions to our conversations. People for the Ethical Treatment of Animals (PETA), however, disagrees. On December 4, 2018, the group took to the internet to let English speakers know the error of their ways. The high-profile—and highly controversial—animal welfare group used their Twitter and Instagram accounts to advocate for a removal of “speciesism from… daily conversations” (PETA, 2018). “Speciesism” is the unreasonable and harmful privileging of one species—typically humans—over other species of animals. Like racism or sexism, it is a bias that PETA wants us to shun and abandon in our actions. Their campaign, however, took the new step of targeting anti-animal language in common idioms and suggesting animal-friendly alternatives. When knocking out tasks, one should say they were able to “feed two birds with one scone,” rather than “kill two birds with one stone.” “Take the bull by the horns” was replaced with “take the flower by the thorns,” and “more than one way to skin a cat” was replaced by “more than one way to peel a potato.” Instead of being the member of the household to “bring home the bacon,” a so-called breadwinner might “bring home the bagels” (PETA, 2018). As one Twitter user questioned, “surely [PETA] [has] bigger fish to fry than this” (Wang, 2018)?

The organization argued that “our society has worked hard to eliminate racist, homophobic, and ableist language and the prejudices that usually accompany it” (PETA, 2018). To those offering bigger fish, they responded that “suggesting that there are more pressing social justice issues that require more immediate attention is selfish” (PETA, 2018). Since certain language may perpetuate discriminatory ideals, PETA encouraged the public to understand the harms and values implied by speciesist language. This type of language “denigrates and belittles nonhuman animals, who are interesting, feeling individuals” (PETA, 2018). To remove speciesist language from your daily conversation is potentially a simple change and, PETA would claim, a far kinder way to use language.

However, some would argue that PETA’s choice to liken anti-animal language to other problematic language—slurs based on race, sexuality, or ability—is a step too far. Many in the public voiced concerns with PETA’s efforts, particularly the implicit equation of violence against humans with violence against animals. A journalist from The Root, Monique Judge, explained, “racist language is inextricably tied to racism, racial terrorism, and racial violence… it is not the same thing as using animals in a turn of phrase or enjoying a BLT” (Chow, 2018). Political consultant Shermichael Singleton agreed with Judge, calling PETA’s statement “extremely ignorant” and “blatantly irresponsible” given the direct ties between racist language and physical violence (Chow, 2018).

Furthermore, others critiqued PETA’s suggested idioms as either still harmful to animals or themselves harmful to other groups. For example, the Washington Post wondered if scones were really a healthy option for birds (Wang, 2018). Similarly, one Twitter user contested that feeding a horse that’s already fed would be bad for the horse, that it was potentially hypocritical to hold animals sacred but not plants, and that “bring home the bagels” could be anti-Semitic (Chow, 2018). Another Twitter user urged—tongue firmly in cheek—that taking the flower by the thorns “sounds like some blatant anti-plantism … which is just more speciesism” (Moye, 2018).

As the public voiced both serious and sarcastic disapproval of PETA’s campaign, the animal welfare organization stood strong on its message. In response to criticism of likening anti-animal language to racist, homophobic, and ableist language, PETA spokeswoman Ashley Byrne said that “encouraging people to be kind” was not “a competition.” Furthermore, Byrne commented that “our compassion does not need to be limited,” asserting that “teaching people to be kind to animals only helps in terms of encouraging them to practice kindness in general” (Wang, 2018). As society becomes more progressive about animal welfare in other ways, PETA wants the public to use language that encourages kindness to animals. As PETA put it in their original tweet, “words matter” (Moye, 2018). Broadening the recognition of this truth, PETA argues, could help alleviate the suffering of humans, not just animals.

While PETA’s campaign seems to focus on an individualized and language-conscious approach toward improving animal welfare, its comparison of so-called anti-animal language to the struggles of marginalized groups provoked wary reactions. The organization has a long history of radical and controversial campaigns promoting animal welfare, making it difficult to broadly assess its methods as helpful or counterproductive. Perhaps breaking into the social media attention economy was part of this campaign’s purpose. Regardless of its intentions, PETA’s latest campaign has prompted its audience to consider the nature of harmful language in social conventions, and whether they are living up to their own standards. Leaving talk of skinning cats aside, perhaps PETA has succeeded in getting us to realize there’s more than one way to turn a phrase.

Discussion Questions:

  1. Are speakers obligated to make relatively small changes to improve the world through language, even if those improvements seem minor?
  2. What are the ethical issues with equating anti-animal and, for example, anti-Black language? Are these the same or different than those of comparing other forms of human oppression?
  3. Is there a way to differentiate racist/sexist language use and anti-animal language use without valuing human life over that of animal life?
  4. What responsibility do humans have to animals? Should this responsibility manifest itself through language or other means first?
  5. When, if ever, is it ethical to intentionally provoke controversy to draw attention to a political or moral issue?

Further Information:

Chow, Lorraine. “PETA Wants Us to Stop Using ‘Anti-Animal Language’.” EcoWatch, 5 December 2018. Available at:

PETA. “‘Bring Home the Bagels’: We Suggest Anti-Speciesist Language—Many Miss the Point.” PETA, 7 December 2018. Available at:

Moye, David. “PETA Gets Dogged For Tweet Demanding End To ‘Anti-Animal Language’.” HuffPost, 5 December 2018. Available at:

Wang, Amy B. “PETA Wants to Change ‘Anti-Animal’ Sayings, but the Internet Thinks They’re Feeding a Fed Horse.” The Washington Post, 6 December 2018. Available at:


Sophia Park, Dakota Park-Ozee, & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
April 13, 2020



Images of Death in the Media

CASE STUDY: Journalism and the Ethics of Using the Dead


The wonders of 21st-century technology seem less wonderful when you find yourself looking into the eyes of a dead body on a handheld screen. Gruesome images can serve as a powerful journalistic tool to influence the emotions of viewers and inspire action, but they can also be deeply disturbing. Some argue that such shocking images shouldn’t be used to increase attention to a story. Others claim that only shocking images can illustrate the intensity of an event, a vital part of moving and educating the public. Where do we draw the line regarding what is appropriate in publishing images capturing death?

Images of death have long been controversial in American journalism. For example, photos of horrific lynchings provide modern viewers with an accurate depiction of the normalization of brutal death for African Americans, with crowds of white townspeople surrounding the hangings with refreshments and a “picnic-like atmosphere” (Zuckerman, 2017). These photos portray a lack of humanity: while African Americans were burned and dismembered, everyone else was merrymaking, as if at a neighborhood carnival. One could argue that photos speak louder than words in this case, as historical news reports described the atrocities as spectacles yet failed to convey their celebratory atmosphere. For example, following a headline announcing the lynching of John Hartfield at 5 o’clock that afternoon, a sub-headline read “Thousands of People Are Flocking into Ellisville to Attend the Event.” Gruesome photos seem useful now, however, for understanding the environment that encouraged these killings. Though these images are prone to disturb, many maintain that it is necessary to confront them in order to prevent anything of their kind in the future: “there is no comfort in looking at this history—and little hope save the courage of those who survived it … there may be no justice… until we stop looking away” (Zuckerman, 2017). This is the same reasoning that 60 Minutes relied upon when it decided to show historical lynching photos in a segment produced by Oprah Winfrey on lynching and racist violence in America. Of course, the challenge is that these pictures might cause psychological harm among those related to the victims of lynching or those fearful of racist violence. There is also the possibility that airing such photos might cause enjoyment or encouragement among racist viewers; critics might remind us that many of these shocking photos were taken and celebrated in their day precisely because they captured an event prized by violent racists.

Some of the most controversial imagery in the contemporary media comes from school massacres. The mass shooting at Columbine High School in 1999 remains an unforgettable moment in the nation’s history, and current students continue to battle what they see as the causes of such tragedies. As part of the #MyLastShot campaign, students are encouraged to place a sticker on the back of their driver’s license indicating their desire to have photos of their bodies published in the event they are killed by gun violence, much like the marks indicating organ donor status (Baldacci & Andone, 2019). Campaign founder and Columbine student Kaylee Tyner said, “it’s about bringing awareness to the actual violence, and to start conversations.” She also referenced the impact that the viral video of students hiding under their desks during the school shooting at Marjory Stoneman Douglas High School had on her: “I remember seeing people on Twitter saying it was so horrible… but what I was thinking was, no, that’s the reality of a mass shooting.” While such school shootings are comparatively rare, they are clearly horrible. The #MyLastShot campaign represents an attempt to get individuals to agree in advance to the use of shocking imagery of their deaths to showcase these horrors to others. If one is killed, the hope is that the image of one’s dead body might bring about a better understanding of the problem of school shootings, and perhaps move readers to action. Even if one is not killed in a school shooting, agreeing to the conditions of the #MyLastShot campaign still illustrates a willingness to do something that many perceive as shocking, and thereby functions as a plea for heightened attention to issues such as school shootings and gun control measures. At the same time, viewing the photos and videos of school shootings may be so traumatizing as to make children afraid of attending school and, for that matter, make parents afraid of sending them. Most children will not be directly affected by a school shooting. How much fear and imagery is needed to motivate the right amount of concern and action on the part of readers?

The #MyLastShot campaign secures consent before one becomes a victim of violent crime. Not all newsworthy images involve subjects who could give such consent in advance. In cases where the subjects of photographs cannot consent to the use of their images or likenesses in news stories, the lines separating the newsworthy from the private begin to blur. The National Press Photographers Association’s Code of Ethics instructs photographers to “intrude on private moments of grief only when the public has an overriding and justifiable need to see” (Code of Ethics). Determining when this “need to see” applies, however, is the difficult part of death photography. This is especially true when covering wars and other violent conflicts. The photo of Alan Kurdi, a two-year-old Syrian refugee whose dead body washed up on the Turkish coast after his family attempted to flee to Europe, overwhelmed the world with despair. Many believed the photo was vital to understanding the terrors of the refugee crisis, while others were disgusted at the exhibition made of his body throughout the media. Kurdi did not—or possibly could not, given his status as a child—consent to such a use of his corpse for news purposes. Regardless, the powerful image of Kurdi caused change; the British prime minister was so moved by it that the United Kingdom began to accept more refugees (Walsh & TIME Photo, 2015). Yet Peter Bouckaert of Human Rights Watch, the photo’s initial distributor, said, “when I see the image now on people’s social media profiles, I contact them and I ask them to take it down. I think it is important that we give Alan Kurdi back his privacy and his dignity. It’s important to let this little boy rest now.”

Images of death tell a certain truth, but their reception often differs based upon the intention behind their use and the meanings imputed by their viewers. It is not uncommon for photojournalists to be accused of non-journalistic advocacy in their most shocking photographs. For many, though, the only message conveyed through a shocking image of death is the truth, one that happens to hit us harder than most written stories. But telling a shocking truth does not exhaust all that journalists owe their subjects and readers. When can images of death be used in a way that respects the deceased and contributes to a laudable social goal of the journalist conveying this information?

Discussion Questions:

  1. What are the different sorts of images of death mentioned in this case, and what differentiates each type?
  2. Under what conditions, if any, is it acceptable to photograph the dead for news purposes?
  3. What responsibility do journalists and news organizations have to the deceased? How does one respect an individual who no longer is alive?
  4. If journalism is meant to inform on important public issues, could a shocking photograph be too moving? When does powerful photojournalism shade into unethical manipulation?
  5. What ethical obligations do photojournalists owe their readers? Do these considerations change when violence or other human agents are involved?

Further Information:

Baldacci, Marlena, & Andone, Dakin. “Columbine Students Start Campaign to Share Images of Their Death If They’re Killed in Gun Violence.” CNN, 30 March 2019. Available at:

“Code of Ethics.” National Press Photographers Association. Available at:

Farmer, Brit McCandless. “Why 60 Minutes Aired Photos of Lynchings in Report by Oprah.” 60 Minutes Overtime, 27 April 2018. Available at:

Lewis, Helen. “How Newsrooms Handle Graphic Images of Violence.” Nieman Reports, 5 January 2016. Available at:

Waldman, Paul. “What the Parkland Students Wanted the World to See—But the Media Didn’t.” The American Prospect, 2018. Available at:

Walsh, Bryan, and TIME Photo. “Drowned Syrian Boy Alan Kurdi’s Story: Behind the Photo.” Time, 29 December 2015. Available at:

Zuckerman, Michael. “Telling the Truth about American Terror.” Harvard Law Today, 4 May 2015. Available at:


Page Trotter, Dakota Park-Ozee, & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
April 2, 2020


