Media Ethics Initiative


Tag Archives: democracy

Politics in the Age of Digital Information Overload

CASE STUDY: Facebook’s Policy to Allow Misleading Political Ads

Case Study PDF | Additional Case Studies


On September 24, 2019, Facebook's Vice President of Global Affairs and Communications, Nick Clegg, announced during a speech at the Atlantic Festival in Washington, DC, that Facebook would not fact-check or censor political advertising on the social media platform. Speaking on behalf of the tech company, he noted: "We don't believe that it's an appropriate role for us to referee political debates and prevent a politician's speech from reaching its audience and being subject to public debate and scrutiny" (Clegg, 2019).


With the 2020 presidential election in the United States approaching, Facebook immediately faced criticism for this decision, especially since it closely followed other controversial decisions involving the tech company's refusal to remove misleading content, namely a doctored video that made House Speaker Nancy Pelosi appear drunk and a Donald Trump ad that accused candidate Joe Biden of bribing the Ukrainian government to fire a prosecutor investigating the former Vice President's son. Despite the Pelosi video's misleading editing techniques and the Biden-focused ad's lack of evidence to back up its serious claims, Facebook stood firm in its decision (Stewart, 2019). More misinformation campaigns will likely be run by a range of parties in the next election, raising worries about the prospects of a free and informed electorate. For example, Roose recounts a Facebook ad "run by [the group] North Dakota Democrats [which] warned North Dakotans that they could lose their out-of-state hunting licenses if they voted in the midterm elections" – an assertion that was utterly false (Roose, 2018).

On October 17, 2019, Facebook founder and CEO Mark Zuckerberg spoke publicly at Georgetown University, explaining his reasoning for the policy of not fact-checking political advertisements and using the speech to appeal to the First Amendment. Zuckerberg emphasized that he is concerned about misinformation, but ultimately believes it is dangerous to give a private entity the power to determine which forms of non-truthful speech deserve censorship. Instead, he stressed the importance of the credibility of the individual behind a post, rather than the post itself. Zuckerberg hopes to accomplish this through another policy in which Facebook requires users to provide a government ID and prove their location in order to purchase and run political ads on the site (Zuckerberg, 2019).

Zuckerberg maintains that through the transparency of identity, accountability will be achieved and "people [can] decide what's credible, not tech companies" (Zuckerberg, 2019). Appealing to John Stuart Mill's ideas of free speech, Zuckerberg believes that the truth is always bound to come out. The unfiltered speech of politicians provides an opportunity for claims to be publicly evaluated and contested. If deception is revealed, an opportunity for correction arises as others refute the false speech. If an unpopular source turns out to be right about an unpopular point, others have the opportunity to learn from that truth. In either case, the hope is that the political community can use the identity of candidates or speakers in making judgments about who they deem credible and which arguments are worthy of belief. Censoring political information, the argument goes, only deprives people of the ability to see who their representatives really are.

Many find Zuckerberg's free-speech defense of Facebook's stance too idealized to be realistic. Of particular importance is the evolving role that social media plays in society. Social media platforms were once used mainly for catching up with friends; now many Americans get their news from social media sites rather than traditional print or televised news (Suciu, 2019). Additionally, as Facebook's algorithm "gets to know our interests better, it also gets better at serving up the content that reinforces those interests, while also filtering out those things we generally don't like" (Pariser, 2016). Based on user data such as what we "like" or what our friends share, Facebook may facilitate the creation of an "echo chamber" by providing news to our feeds that is increasingly one-sided or identifiably partisan. Such arrangements, in which people only engage with those who share like-minded views, contribute heavily to confirmation bias – a logical error that occurs when people "stop gathering information when the evidence gathered so far confirms the views or prejudices one would like to be true" (Heshmat, 2015). If politicians are able to mislead in their purchased advertising, they could use such a platform to encourage confirmation bias by feeding individuals information tailored to match their data, which those individuals may never critically examine or weigh against opposing information (Heshmat, 2015). Furthermore, those hoping that Facebook will crack down on paid political advertising are disheartened by the conflict of interest surrounding this issue. Political advertisers pay top dollar to advertise on social media sites. In fact, the Trump campaign alone has already spent more than $27 million on Facebook's platform, and the Wall Street Journal predicts that "in 2020, digital political ad spending [will] increase to about $2.8 billion" (Isaac & Kang, 2020; Bruell, 2019). The economics of political advertising revenue make Facebook's decision even harder for critics to swallow.
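To see how the filtering dynamic Pariser describes can compound, consider a minimal sketch of an engagement-driven feed ranker. This is a hypothetical illustration, not Facebook's actual algorithm; the stories, topics, and scoring below are invented for the example.

```python
# Hypothetical sketch of an engagement-driven feed ranker, NOT Facebook's
# actual system: stories most similar to past "likes" float to the top.
from collections import Counter

def interest_profile(liked_stories):
    """Build a simple interest profile: topic -> frequency among liked stories."""
    return Counter(topic for story in liked_stories for topic in story["topics"])

def rank_feed(candidate_stories, profile):
    """Order candidates by overlap with the user's existing interests."""
    def score(story):
        return sum(profile[topic] for topic in story["topics"])
    return sorted(candidate_stories, key=score, reverse=True)

liked = [{"topics": ["party_a", "taxes"]}, {"topics": ["party_a"]}]
candidates = [
    {"headline": "Party A rally draws crowds", "topics": ["party_a"]},
    {"headline": "Party B unveils tax plan", "topics": ["party_b", "taxes"]},
]
for story in rank_feed(candidates, interest_profile(liked)):
    print(story["headline"])
# Stories matching prior likes rank first; dissenting views sink, and each new
# "like" of a top-ranked story reinforces the profile -- an echo chamber loop.
```

Even this toy ranker shows the feedback loop: whatever a user engages with today narrows what they are shown tomorrow.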

The larger question of whether platforms like Facebook should become the arbiters of truthful and informative political speech on their sites presents one of the most pressing ethical dilemmas of the information age. On one hand, it is a dangerous and possibly slippery slope to place private tech companies in the position of deciding what counts as untruthful speech deserving of censorship. Some might worry that the distinction between truthful and untruthful political speech isn't one that could be enforced – political ads often make questionable inferences from cherry-picked evidence, or purposefully extract specific phrases, images, or statements out of context to render their opponents especially undesirable to audience members. How could anyone – including Zuckerberg – be tasked with evaluating anything but blatant falsehoods among the sea of claims that seem questionable or badly reasoned to only some on the political spectrum? Given the difficulty of determining what a lie is (as opposed to strategic presentation, lies of omission, or simple mistakes), the task of eliminating purposefully untruthful speech becomes that much more challenging. Many would argue that political actors, just like everyday voters, should be able to express opinions and arguments that don't seem particularly well-reasoned to all. On the other hand, the classic conception of free expression and the marketplace of ideas that grounds this reluctance to eliminate untruthful speech on social media may not be so realistic in our age of technology and self-selecting groups and political communities.

Between information overload and confirmation bias, it may be unreasonable to assume everyone can and will look into every news story they see on Facebook. And, as some critics would point out, many of the most vulnerable in our society, such as women and minorities, suffer the brunt of harassment online when absolute expression is valued. With so much at stake on both sides, it is worthwhile to consider what has the most potential to enhance or inhibit the democratic process: reducing interference in personal expression or reducing misinformation in political advertising?

Discussion Questions: 

  1. What are the central values in conflict in Facebook’s decision to not fact-check political advertisements?
  2. Has the evolution of technology and the overload of information in our era undermined John Stuart Mill's arguments for unrestrained free speech?
  3. Do social media companies like Facebook owe the public fact-checking services? Why or why not?
  4. Who is responsible for accurate political information: Producers, consumers, or disseminators of advertisements? 

Further Information:

Bruell, A. (2019, June 4). "Political Ad Spending Will Approach $10 Billion in 2020, New Forecast Predicts." The Wall Street Journal. Available at: https://www.wsj.com/articles/political-ad-spending-will-approach-10-billion-in-2020-new-forecast-predicts-11559642400

Clegg, N. (2019, September 24). "Facebook: Elections and Political Speech." Facebook Newsroom. Available at: https://about.fb.com/news/2019/09/elections-and-political-speech/

Heshmat, S. (2015, April 23). “What Is Confirmation Bias?” Available at: https://www.psychologytoday.com/us/blog/science-choice/201504/what-is-confirmation-bias

Isaac, M., & Kang, C. (2020, January 9). “Facebook Says It Won’t Back Down from Allowing Lies in Political Ads.” Available at: https://www.nytimes.com/2020/01/09/technology/facebook-political-ads-lies.html

Pariser, E. (2016, July 24). "The Reason Your Feed Became An Echo Chamber — And What To Do About It." NPR. Available at: https://www.npr.org/sections/alltechconsidered/2016/07/24/486941582/the-reason-your-feed-became-an-echo-chamber-and-what-to-do-about-it

Roose, K. (2018, November 4). “We Asked for Examples of Election Misinformation. You Delivered.” Available at: https://www.nytimes.com/2018/11/04/us/politics/election-misinformation-facebook.html

Stewart, E. (2019, October 9). “Facebook is refusing to take down a Trump ad making false claims about Joe Biden.” Available at: https://www.vox.com/policy-and-politics/2019/10/9/20906612/trump-campaign-ad-joe-biden-ukraine-facebook

Suciu, P. (2019, October 11). “More Americans Are Getting Their News from Social Media.” Available at: https://www.forbes.com/sites/petersuciu/2019/10/11/more-americans-are-getting-their-news-from-social-media/#6bc8516f3e17

Zuckerberg, M. (2019, October 17). "Watch live: Facebook CEO Zuckerberg speaks at Georgetown University." The Washington Post [video]. Available at: https://www.youtube.com/watch?v=2MTpd7YOnyU

Authors:

Kat Williams & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin

May 21, 2020


This case study is supported by funding from the John S. and James L. Knight Foundation. It was produced as part of a cooperative endeavor by the Media Ethics Initiative and the Project on Ethics in Political Communication. This case can be used in unmodified PDF form for classroom or educational uses. For use in publications such as textbooks, readers, and other works, please contact the Center for Media Engagement.


Campaigning for Your Enemies

CASE STUDY: Deceptive Motives in Political Advertising

Case Study PDF | Additional Case Studies


In 2012, Democratic Missouri Senator Claire McCaskill purchased $1.7 million in television advertisements focusing on one of her Republican rivals, Rep. Todd Akin. Instead of tearing him down, the ads surprisingly made claims that would endear him to Republican voters. One of McCaskill's purchased television commercials called Akin a "crusader against bigger government" and referenced his "pro-family agenda," finally concluding that "Akin alone says President Obama is 'a complete menace to our civilization'" (McCaskill for Missouri 2012, 2012a).

McCaskill also ran advertisements meant to question the integrity and conservative credentials of Akin's Republican rivals. Her advertisements attacked businessman John Brunner for an inconsistent history of voting in elections and said he "can't even say where he would cut the federal budget." Another ad called former state Treasurer Sarah Steelman "more pay-to-play" and "just more of the same" (McCaskill for Missouri 2012, 2012b). Steelman's campaign said the ad "further shows that Sarah Steelman is the candidate that the status quo fears the most," while the Senate Conservatives Fund (which opposed Akin but had not yet chosen one of the other candidates) said "Akin isn't weak because he's too conservative. He's weak because he's too liberal on spending and earmarks." The Akin campaign also declined to comment on whether the ad was meant to help them: "While there is much speculation about Claire McCaskill's strategy, what is clear is that Todd Akin has honestly and directly answered questions and unabashedly articulates a vision for the path ahead. McCaskill and other Democrats may see this as a liability; voters see this as integrity" (Catanese, 2012).

McCaskill later stated that her campaign's goal in running these commercials for a possible opponent was to boost Akin among his Republican opponents running for the nomination: "Running for reelection to the U.S. Senate as a Democrat from Missouri, I had successfully manipulated the Republican primary so that in the general election I would face the candidate I was most likely to beat" (McCaskill, 2015). NPR matter-of-factly referred to Akin as "the Republican [McCaskill] preferred to face" (James, 2012). POLITICO reported that "If there was any doubt which Republican Sen. Claire McCaskill wants to run against this fall, a trio of ads she released Thursday put it to rest" (Catanese, 2012). The few critiques of the strategy focus on its savviness (or lack thereof). For example, Roll Call wrote that "Missouri political insiders saw the McCaskill tactic as too-cute-by-half, unless it worked, and it did" (Miller, 2012).

Debate about this strategy's ethical dimensions was hard to come by, though the effort to aid Akin led to a campaign finance complaint: in 2015, a watchdog group filed a complaint against McCaskill for spending $40,000 polling Republican voters in Missouri and sharing the findings with the Akin campaign, in excess of the $2,500 limit for an in-kind (non-monetary) campaign contribution (Scott, 2015). While the complaint was dismissed, it underscores that some found McCaskill's treatment of the Akin campaign in the Republican primary questionable (Hunter et al., 2017).

McCaskill acknowledged that this tactic of helping weaker political opponents succeed brought with it political worries: "I was fully aware of the risk and would have felt terrible if Todd Akin had become a United States senator. On the other hand, if you went down the list of issues, there was not a dime's worth of difference among the three primary candidates on how they would have voted if they had become senators" (McCaskill, 2015). Yet McCaskill's campaign team thought there was something different about Akin, something that they could exploit to win the election for the Senate seat. McCaskill went on to defeat him in the general election after he made comments about "legitimate rape" rarely resulting in pregnancy, bolstering the extreme image McCaskill's campaign had set out to create (Cohen, 2012).

Some are not convinced that this is an ethical strategy for a political candidate to utilize. Even if the ads were factually accurate, the motivations behind them were not as transparent as those of most political advertisements ("I'm candidate X, and I approve this message because I want to defeat the person I am attacking"). If democracy is concerned with placing the best leaders in positions of power through advocacy and votes, McCaskill could have been indirectly helping to elect someone less qualified than his rivals. Even if the gamble paid off with her victory, some would worry that the tactic involved deception about her true motives: her ads were not really meant to echo her or her campaign's views of Akin's liabilities (such as abortion, on which she later campaigned aggressively and effectively against Akin), but instead used appeals targeted at Republican primary voters so that they would pick a candidate against their electoral interests. Critics could worry that, by pressing the right Republican "buttons" with praise, McCaskill's ads represent the kind of manipulative communication that we want to discourage in political discourse. In the eyes of skeptics, this sort of strategic communication was an ethical risk that might reduce voter trust in any message from McCaskill's campaign.

Beyond this particular election, McCaskill's strategy of bolstering her most beatable opponent raises larger ethical issues in political communication. Candidates are free to produce persuasive ads, including ones that are emotionally powerful in their provocative wording, claims, and imagery. But advocating for someone one believes would be a disastrous senator, should they win, seems deceptive. Even if the information in the advertisement was accurate, running an advertisement subtly encouraging voters to vote for Todd Akin for a purpose other than electing Todd Akin to the U.S. Senate could undermine voters' faith in both the information they've been given and the political process itself. If voters knew that the McCaskill campaign's motivation for attacking Akin was to help Akin win the Republican primary, would they receive the message the same way, and would this tactic feed cynicism about the political process?

Are political advertisements best thought of as a reflection of what a candidate believes, and what they want us to believe about them or their opponents? Or are they simply tools to motivate or demotivate certain behaviors by voters? With our enhanced ability to distance messages from one speaker (through real or fake organizations, super PACs, and so forth), and with our technological ability to crunch data to target specific messages to specific groups, what are the ethical limits on the creative and strategic messaging a campaign uses? 

Discussion Questions: 

  1. Should campaigns devote resources to advancing or helping prospective opponents who hold views they truly believe are dangerous or extreme?
  2. If you were a campaign manager, could you live with the prospect (however unexpected) of this candidate being elected?
  3. How extreme would an opponent have to be before you could not entertain the possibility of "helping" them?
  4. There is always the possibility that your decision will become publicly known. In the event “your” candidate is successful and you face them in a general election, are your attacks against them less credible if it becomes known that you helped put them closer to elected office?
  5. If you were in a competitive primary and one of your opponents was helped by the opposing party, would you have the same reaction to the ethics of the situation?

Further Information:

Catanese, David. 2012, July 19. “McCaskill meddles in GOP primary.” POLITICO. Available at: https://www.politico.com/story/2012/07/mccaskill-meddles-in-gop-primary-078737

Cohen, David. 2012, August 19. "Akin: 'Legitimate rape' rarely leads to pregnancy." POLITICO. Available at: https://www.politico.com/story/2012/08/akin-legitimate-rape-victims-dont-get-pregnant-079864

Hunter, Caroline C., Lee E. Goodman, and Matthew S. Petersen. 2017. Statement of Reasons of Vice Chair Caroline C. Hunter and Commissioners Lee E. Goodman And Matthew S. Petersen. Washington, DC: Federal Election Commission, https://eqs.fec.gov/eqsdocsMUR/17044406199.pdf

James, Frank. 2012, August 8. “Missouri’s Claire McCaskill Gets Clarity On Her Opponent, If Not Her Future.” National Public Radio. Available at: https://www.npr.org/sections/itsallpolitics/2012/08/08/158413422/missouris-claire-mccaskill-gets-clarity-on-opponent-if-not-her-future

McCaskill, Claire. 2015, August 11. “How I Helped Todd Akin Win — So I Could Beat Him Later.” POLITICO. Available at: https://www.politico.com/magazine/story/2015/08/todd-akin-missouri-claire-mccaskill-2012-121262

McCaskill for Missouri 2012. 2012a, July 19. “Three of a kind, one and the same: Todd Akin.” YouTube. Available at: https://www.youtube.com/watch?v=ec4t_3vaBMc

—. 2012b, July 19. “Three of a kind, one and the same: Sarah Steelman.” YouTube. Available at: https://www.youtube.com/watch?v=JwPWqg_z4EQ

Miller, Jonathan. 2012, August 7. “Missouri: Todd Akin Wins GOP Nod to Face Claire McCaskill.” Roll Call. Available at: https://www.rollcall.com/2012/08/07/missouri-todd-akin-wins-gop-nod-to-face-claire-mccaskill

Murphy, Kevin. 2012, November 6. “Missouri Republican Akin loses after comments on rape.” Reuters. Available at: https://www.reuters.com/article/us-usa-campaign-senate-missouri/missouri-republican-akin-loses-after-comments-on-rape-idUSBRE8A60HJ20121107

Scott, Eugene. 2015, August 14. “Group accuses McCaskill of violating federal campaign finance laws.” CNN. Available at: https://www.cnn.com/2015/08/13/politics/claire-mccaskill-todd-akin/index.html

Authors:

Conor Kilgore & Scott R. Stroud, Ph.D.
Project on Ethics in Political Communication / Center for Media Engagement
George Washington University / University of Texas at Austin

May 7, 2020


This case study was produced by the Media Ethics Initiative and the Project on Ethics in Political Communication. It remains the intellectual property of these two organizations. It is supported by funding from the John S. and James L. Knight Foundation. This case can be used in unmodified PDF form for classroom or educational uses. For use in publications such as textbooks, readers, and other works, please contact the Center for Media Engagement.


Can Artificial Intelligence Reprogram the Newsroom?

CASE STUDY: Trust, Transparency, and Ethics in Automated Journalism

Case Study PDF | Additional Case Studies


Could a computer program write engaging news stories? In a recent Technology Trends and Predictions report from Reuters, 78% of the 200 digital leaders, editors, and CEOs surveyed said investing in artificial intelligence (AI) technologies would help secure the future of journalism (Newman, 2018). Exploring these new methods of reporting, however, has introduced a wide array of unforeseen ethical concerns for those already struggling to understand the complex dynamics between human journalists and computational work. Implementing automated storytelling in the newsroom presents journalists with questions of how to preserve and encourage accuracy and fairness in their reporting and transparency with the audiences they serve.


Artificial intelligence in the newsroom has progressed from an idea to a reality. In 1998, computer scientist Sung-Min Lee predicted the use of artificial intelligence in the newsroom, whereby "robot agents" would work alongside, or sometimes in place of, human journalists (Latar, 2015). In 2010, Narrative Science became the first commercial venture to use artificial intelligence to turn data into narrative text. Automated Insights and others would follow Narrative Science in bringing Lee's newsroom "robot agent" to life through automated storytelling. Today's newsrooms use AI to streamline a variety of processes, from tracking breaking news and collecting and interpreting data to fact-checking online content and even creating chatbots that suggest personalized content to users. Above all, the ability to automatically generate text and video stories has encouraged an industry-wide shift toward automated journalism, or "the process of using software or algorithms to automatically generate news stories without human intervention" (Graefe, 2016). Forbes, the New York Times, the Washington Post, ProPublica, and Bloomberg are just some of today's newsrooms that use AI in their news reporting. Heliograf, the Washington Post's "in-house automated storytelling technology," is just one of many examples of how newsrooms are utilizing artificial intelligence to expand their coverage in beats like sports and finance that rely heavily on structured data, "allowing journalists to focus on in-depth reporting" (WashPostPR, 2017).

Artificial intelligence has the potential to make journalism better both for those in the newsroom and for those at the newsstand. Journalism think-tank Polis revealed in its 2019 Journalism AI Report that the main motivation for newsrooms to use artificial intelligence was to "help the public cope with a world of news overload and misinformation and to connect them in a convenient way to credible content that is relevant, useful and stimulating for their lives" (Beckett, 2019). Through automation, an incredible amount of news coverage can now be produced at a speed that was once unimaginable. The journalism industry's AI pioneer, the Associated Press, began using Automated Insights' Wordsmith program in 2014 to automatically generate corporate earnings reports, increasing its coverage from just 300 to 4,000 companies in one year alone (Hall, 2018).
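The basic data-to-text mechanics behind such earnings coverage can be illustrated with a short sketch. This is a hypothetical, minimal template system, not Wordsmith itself (whose templates and logic are proprietary); the company name and figures below are invented.

```python
# Minimal, hypothetical sketch of template-based automated reporting in the
# style of earnings-story generators; real systems are far richer, but the
# structured-data-to-narrative-text principle is the same.

def earnings_story(record):
    """Render one structured earnings record as a short news sentence."""
    change = record["eps"] - record["eps_prior"]
    direction = "rose" if change > 0 else "fell" if change < 0 else "held steady"
    return (
        f"{record['company']} reported earnings of ${record['eps']:.2f} per "
        f"share for {record['quarter']}, as profit {direction} from "
        f"${record['eps_prior']:.2f} a year earlier."
    )

record = {"company": "Acme Corp", "quarter": "Q3", "eps": 1.42, "eps_prior": 1.10}
print(earnings_story(record))
# One template plus a structured data feed lets a newsroom cover thousands of
# companies instead of the few hundred a human staff could write up by hand.
```

The scale advantage follows directly from the design: once the template is written, each additional story costs only one more row of data.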

With roughly 20% of the time once spent on manual tasks now freed up by automation (Graefe, 2016), automated storytelling may also allow journalists to focus on areas that require more detailed work, such as investigative or explanatory journalism. In this way, automation in the reporting process may provide much-needed assistance for newsrooms looking to expand their coverage amidst limited funding and small staffs (Grieco, 2019). Although these automated stories are said to have "far fewer errors" than human-written news stories, it is important to note that AI's abilities are confined to the limitations set by its coders. As a direct product of human creators, even robots are prone to mistakes, and these inaccuracies can be disastrous. When an error in United Bots' sports reporting algorithm changed 1-0 losses to 10-1, misleading headlines of a "humiliating loss for football club" were generated before being corrected by editors (Granger, 2019). Despite the speed and skill that automation brings to the reporting process, the need for human oversight remains. Automated storytelling cannot analyze society, initiate questioning of implied positions and judgments, or provide deep understanding of complex issues. As Latar stated in 2015, "no robot journalist can become a guardian of democracy and human rights."
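One modest safeguard the United Bots episode suggests is an automated plausibility check that routes suspicious outputs to a human editor before publication. The sketch below is hypothetical; the threshold and function names are assumptions for illustration, not the vendor's actual pipeline.

```python
# Hypothetical plausibility check for an automated sports-story pipeline:
# scores outside a sane range are held for human review, not auto-published.

MAX_PLAUSIBLE_GOALS = 9  # assumption: football scores above this are suspect

def review_needed(home_goals, away_goals):
    """Flag results that look like data or parsing errors (e.g., 1-0 read as 10-1)."""
    return not (0 <= home_goals <= MAX_PLAUSIBLE_GOALS
                and 0 <= away_goals <= MAX_PLAUSIBLE_GOALS)

for result in [(1, 0), (10, 1)]:
    status = "hold for editor" if review_needed(*result) else "auto-publish"
    print(f"{result[0]}-{result[1]}: {status}")
# A 10-1 scoreline isn't impossible, so the check cannot replace editors --
# it only routes outliers to the human oversight the case study calls for.
```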

Others might see hope in AI's march into newsrooms. Amidst concerns about fake news, misinformation, and polarization online, there is a growing cry among Americans struggling to trust a news ecosystem that many perceive as greatly biased (Jones, 2018). Might AI hold a way to restore this missing trust? Founded in 2015, the tech start-up Knowhere News believes its AI could actually eliminate the bias that many connect to this lack of trust. After trending topics on the web are discovered, Knowhere News' AI system collects data from thousands of news stories of various leanings and perspectives to create an "impartial" news article in under 60 seconds (DeGeurin, 2018). The aggregated and automated nature of this story formation holds out the hope of AI journalism being free of the prejudices, limitations, or interests that may unduly affect a specific newsroom or reporter.

Even though AI may seem immune to certain sorts of conscious or unconscious bias affecting humans, critics argue that automated storytelling can still be subject to algorithmic bias. An unethical, biased algorithm was one of the top concerns relayed by respondents in the Polis report previously mentioned, as "bad use of data could lead to editorial mistakes such as inaccuracy or distortion and even discrimination against certain social groups or views" (Beckett, 2019). Bias seeps into algorithms in two main ways, according to the Harvard Business Review. First, algorithms can be trained to be biased by the biases of their creators. For instance, Amazon's hiring system was trained to favor words like "executed" and "captured," which were later found to be used more frequently by male applicants. The result was that Amazon's AI unfairly favored and excluded applicants based on their sex. Second, a lack of diversity and limited representation within an algorithm's training data, which teaches the system how to function, often results in faulty functioning in future applications. Apple's facial recognition system has been criticized for failing to recognize darker skin tones once implemented in technologies used by large populations, because of a supposed overrepresentation of white faces in the initial training data (Manyika, Silberg, & Presten, 2019).
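The training-data pathway can be made concrete with a toy example. The data below is entirely invented, and this is not Amazon's system; it only shows how weights learned from a skewed history reproduce that skew on new inputs.

```python
# Toy illustration of training-data bias with made-up data -- not Amazon's
# system. Weights learned from a skewed history favor the wording of the
# overrepresented group, regardless of actual qualifications.
from collections import Counter

past_hires = [  # hypothetical training set dominated by one writing style
    "executed roadmap and captured market share",
    "executed launch plan",
    "captured new accounts",
]
weights = Counter(word for resume in past_hires for word in resume.split())

def score(resume):
    """Score a resume by how often its words appeared in past hires."""
    return sum(weights[w] for w in resume.split())

print(score("executed and captured key projects"))  # wording mirrors training set
print(score("led and delivered key projects"))      # equally strong, scores lower
# Both candidates may be equally qualified, but the learned weights reward
# whoever resembles the historical data -- the core mechanism of biased AI.
```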

Clearly, these AI agents are not capable of the same sort of introspection about their decision-making processes that humans are. This lack of understanding can be particularly challenging when collecting and interpreting data, and can ultimately lead to biased reporting. At Bloomberg, for example, journalists had to train the newsroom's automated company earnings reporting system, Cyborg, to avoid misleading data from companies that strategically "choose figures in an effort to garner a more favourable portrayal than the numbers warrant" (Peiser, 2019). To produce complete and truthful automated accounts, journalists must work to ensure that algorithms are feeding these narrative storytelling systems clean and accurate data.

Developments in AI technologies continue to expand at "a speed and complexity which makes it hard for politicians, regulators, and academics to keep up," and this lack of familiarity can provoke great concern among both journalists and news consumers (Newman, 2018). According to Polis, 60% of journalists surveyed from 71 news organizations across 32 countries currently using AI in their newsrooms reported concerns about how AI would impact the nature of their work (Beckett, 2019). Uncertain of how to introduce these new technologies to the common consumer, many newsrooms have failed to attribute their automated stories to a clear author or program. This raises concerns regarding the responsibility of journalists to maintain transparency with readers. Should individuals consuming these AI stories have a right to know which stories were automatically generated and which were not? Some media ethicists argue that audiences should be informed when bots are being used and told where to turn for more information; however, author bylines should be held by those with "editorial authority," a longstanding tool for accountability and follow-up with curious readers (McBride & Prasad, 2016).

Whether automated journalism will allow modern newsrooms to connect with readers in greater numbers and with higher quality is still an unfinished story. It is certain, however, that AI will take an increasingly prominent place in the newsrooms of the future. How can innovative newsrooms utilize these advanced programs to produce high-quality journalistic work that better serves their audiences while balancing the standards of journalistic practice that assumed a purely human author in the past?

Discussion Questions:

  1. What is the ethical role of journalism in society? Are human writers essential to that role, or could sophisticated programs fulfill it?
  2. How might the increased volume and speed of automated journalism’s news coverage impact the reporting process?
  3. What are the ethical responsibilities of journalists and editors if automated storytelling is used in their newsroom?
  4. Are AI programs more or less prone to error and bias, over the long run, compared to human journalists?
  5. Who should be held responsible when an algorithm produces stories that are thought to be discriminatory by some readers? How should journalists and editors address these criticisms?
  6. How transparent should newsrooms be in disclosing their AI use with readers?

Further Information:

Beckett, C. (2019, November). New powers, new responsibilities. A global survey of journalism and artificial intelligence. The London School of Economics and Political Science. Available at https://blogs.lse.ac.uk/polis/2019/11/18/new-powers-new-responsibilities/

DeGeurin, M. (2018, April 4). “A Startup Media Site Says AI Can Take Bias Out of News”. Vice. Available at https://www.vice.com/en_us/article/zmgza5/knowhere-ai-news-site-profile 

Graefe, A. (2016, January 7). "Guide to Automated Journalism." Columbia Journalism Review. Available at https://www.cjr.org/tow_center_reports/guide_to_automated_journalism.php

Granger, J. (2019, March 28). What happens when bots become editor? Journalism.co.uk. Available at: https://www.journalism.co.uk/news/nordic-news-organisations-shows-that-robot-journalism-can-make-personalisation-pay/s2/a736534/

Grieco, E. (2019, July 9). “U.S. newsroom employment has dropped by a quarter since 2008, with greatest decline at newspapers”. Pew Research Center. Available at https://www.pewresearch.org/fact-tank/2019/07/09/u-s-newsroom-employment-has-dropped-by-a-quarter-since-2008/

Jones, J. (2018, June 20). “Americans: Much Misinformation, Bias, Inaccuracy in News”. Gallup. Available at: https://news.gallup.com/opinion/gallup/235796/americans-misinformation-bias-inaccuracy-news.aspx

Latar, N. (2015, January). “The Robot Journalist in the Age of Social Physics: The End of Human Journalism?” The New World of Transitioned Media: Digital Realignment and Industry Transformation. Available at: www.researchgate.net

Manyika, J., Silberg, J., & Presten, B. (2019, October 25). “What Do We Do About the Biases in AI?” Harvard Business Review. Available at: https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai

McBride, K. & Prasad, S. (2016, August 11). “Ask the ethicist: Should bots get a byline?” Poynter. Available at https://www.poynter.org/ethics-trust/2016/ask-the-ethicist-should-bots-get-a-byline/

Newman, N. (2018) Journalism, Media and Technology Trends and Predictions 2018. Reuters Institute for the Study of Journalism & Google. Available at https://ora.ox.ac.uk/objects/uuid:45381ce5-19d7-4d1c-ba5e-3f2d0e923b32/download_file?safe_filename=Newman%2BPredictions%2B2018%2BFINAL.pdf

Peiser, J. (2019, February 5). “The Rise of the Robot Reporter”. The New York Times. Available at https://www.nytimes.com/2019/02/05/business/media/artificial-intelligence-journalism-robots.html

WashPostPR. (2017, September 1). “The Washington Post leverages automated storytelling to cover high school football”. The Washington Post. Available at https://www.washingtonpost.com/pr/wp/2017/09/01/the-washington-post-leverages-heliograf-to-cover-high-school-football/

Authors:

Chloe Young & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
April 27, 2020

www.mediaengagement.org

Image: The People Speak! / CC BY-NC 2.0


This case study is supported by funding from the John S. and James L. Knight Foundation. Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the Center for Media Engagement. They can be used in unmodified PDF form for classroom or educational settings. For use in publications such as textbooks, readers, and other works, please contact the Center for Media Engagement at sstroud (at) austin.utexas.edu.


Are there Bigger Fish to Fry in the Struggle for Animal Rights?

CASE STUDY: PETA’s Campaign against Anti-Animal Language

Case Study PDF | Additional Case Studies



PETA / TeachKind / Modified

Idioms are everywhere in the English language. To most, they are playful and innocuous additions to our conversations. People for the Ethical Treatment of Animals (PETA), however, disagrees. On December 4, 2018, the group took to the internet to let English speakers know the error of their ways. The high-profile (and highly controversial) animal welfare group used its Twitter and Instagram accounts to advocate for a removal of "speciesism from… daily conversations" (PETA, 2018). "Speciesism" is the unreasonable and harmful privileging of one species, typically humans, over other species of animals. Like racism or sexism, it is a bias that PETA wants us to shun and abandon in our actions. The group's campaign, however, took the new step of targeting anti-animal language in common idioms and suggesting animal-friendly alternatives. When knocking out tasks, one should say they were able to "feed two birds with one scone," rather than "kill two birds with one stone." "Take the bull by the horns" was replaced with "take the flower by the thorns," and "more than one way to skin a cat" was replaced by "more than one way to peel a potato." Instead of being the member of the household to "bring home the bacon," a so-called breadwinner might "bring home the bagels" (PETA, 2018). As one Twitter user questioned, "surely [PETA] [has] bigger fish to fry than this?" (Wang, 2018).

The organization argued that “our society has worked hard to eliminate racist, homophobic, and ableist language and the prejudices that usually accompany it” (PETA, 2018). To those offering bigger fish, they responded that “suggesting that there are more pressing social justice issues that require more immediate attention is selfish” (PETA, 2018). Since certain language may perpetuate discriminatory ideals, PETA encouraged the public to understand the harms and values implied by speciesist language. This type of language “denigrates and belittles nonhuman animals, who are interesting, feeling individuals” (PETA, 2018). To remove speciesist language from your daily conversation is potentially a simple change and, PETA would claim, a far kinder way to use language.

However, some would argue that PETA’s choice to liken anti-animal language to other problematic language—slurs based on race, sexuality, or ability—is a step too far. Many in the public voiced concerns with PETA’s efforts, particularly the implicit equation of violence against humans with violence against animals. A journalist from The Root, Monique Judge, explained, “racist language is inextricably tied to racism, racial terrorism, and racial violence… it is not the same thing as using animals in a turn of phrase or enjoying a BLT” (Chow, 2018). Political consultant Shermichael Singleton agreed with Judge, calling PETA’s statement “extremely ignorant” and “blatantly irresponsible” given the direct ties between racist language and physical violence (Chow, 2018).

Furthermore, others critiqued PETA's suggested idioms as either still harmful to animals or themselves harmful to other groups. For example, the Washington Post wondered if scones were really a healthy option for birds (Wang, 2018). Similarly, one Twitter user contended that feeding a horse that's already fed would be bad for the horse, that it was potentially hypocritical to value animals but not plants, and that "bring home the bagels" could be anti-Semitic (Chow, 2018). Another Twitter user urged, tongue firmly in cheek, that taking the flower by the thorns "sounds like some blatant anti-plantism … which is just more speciesism" (Moye, 2018).

As the public voiced both serious and sarcastic disapproval of PETA's campaign, the animal welfare organization stood strong on its message. In response to criticism of likening anti-animal language to racist, homophobic, and ableist language, PETA spokeswoman Ashley Byrne said that encouraging people to be kind is not "a competition." Furthermore, Byrne commented that "our compassion does not need to be limited," asserting that "teaching people to be kind to animals only helps in terms of encouraging them to practice kindness in general" (Wang, 2018). As society becomes more progressive about animal welfare in other ways, PETA wants the public to use language that encourages kindness to animals. As PETA put it in its original tweet, "words matter" (Moye, 2018). Broadening the recognition of this truth, PETA argues, could help alleviate the suffering of humans, not just animals.

While PETA's campaign seems to focus on an individualized and language-conscious approach toward improving animal welfare, its comparison of so-called anti-animal language to the struggles of marginalized groups provoked wary reactions. The organization has a long line of radical and controversial campaigns promoting animal welfare, making it difficult to broadly assess its methods as helpful or counterproductive. Perhaps breaking into the social media attention economy was part of this campaign's purpose. Regardless of its intentions, PETA's latest campaign has prompted its audience to consider the nature of harmful language in social conventions and whether that audience is living up to its own standards. Leaving talk of skinning cats aside, perhaps PETA has succeeded in getting us to realize there's more than one way to turn a phrase.

Discussion Questions:

  1. Are speakers obligated to make relatively small changes to improve the world through language, even if those improvements seem minor?
  2. What are the ethical issues with equating anti-animal and, for example, anti-Black language? Are these the same or different than those of comparing other forms of human oppression?
  3. Is there a way to differentiate racist/sexist language use and anti-animal language use without valuing human life over that of animal life?
  4. What responsibility do humans have to animals? Should this responsibility manifest itself through language or other means first?
  5. When, if ever, is it ethical to intentionally provoke controversy to draw attention to a political or moral issue?

Further Information:

Chow, Lorraine. “PETA Wants Us to Stop Using ‘Anti-Animal Language’.” EcoWatch, 5 December 2018. Available at: https://www.ecowatch.com/peta-twitter-anti-animal-language-2622492677.html

PETA. "'Bring Home the Bagels': We Suggest Anti-Speciesist Language—Many Miss the Point." PETA, 7 December 2018. Available at: https://www.peta.org/blog/bring-home-the-bagels/

Moye, David. “PETA Gets Dogged For Tweet Demanding End To ‘Anti-Animal Language’.” HuffPost, 5 December 2018. Available at: https://www.huffpost.com/entry/peta-anti-animal-language-tweet_n_5c071400e4b0a6e4ebd97522

Wang, Amy B. “PETA Wants to Change ‘Anti-Animal’ Sayings, but the Internet Thinks They’re Feeding a Fed Horse.” The Washington Post, 6 December 2018. Available at: https://www.washingtonpost.com/science/2018/12/05/peta-wants-change-anti-animal-sayings-internet-thinks-theyre-feeding-fed-horse/

Authors:

Sophia Park, Dakota Park-Ozee, & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
April 13, 2020


Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the Center for Media Engagement. They can be used in unmodified PDF form for classroom or educational settings. For use in publications such as textbooks, readers, and other works, please contact the Center for Media Engagement at sstroud (at) austin.utexas.edu.


Images of Death in the Media

CASE STUDY: Journalism and the Ethics of Using the Dead

Case Study PDF | Additional Case Studies


The wonders of technology in the 21st century are less wonderful when looking into the eyes of a dead body on your handheld screen. Gruesome images can serve as a powerful journalistic tool to influence the emotions of viewers and inspire action, but they can also be disturbing for many viewers. Some argue that such shocking images shouldn’t be used to increase attention to a story. Others claim only shocking images can be used to illustrate the intensity of an event, a vital part of moving and educating the public. Where do we draw the line regarding what is appropriate in publishing images capturing death?

Death images have long had a controversial place in American journalism. For example, photos of horrific lynchings provide modern viewers with an accurate depiction of the normalization of brutal death for African Americans, with crowds of white townspeople surrounding the hangings with refreshments and a "picnic-like atmosphere" (Zuckerman, 2017). These events portray a lack of humanity: while African Americans were burned and dismembered, everyone else was merrymaking, as if at a neighborhood carnival. One could argue that photos speak louder than words in this case, as historical news reports describing the atrocities conveyed them as spectacles, but not to the full extent of the celebrations that photographs reveal. For example, following a headline announcing the lynching of John Hartfield at 5 o'clock that afternoon, a sub-headline read "Thousands of People Are Flocking into Ellisville to Attend the Event." Gruesome photos seem useful now, however, for understanding the environment that encouraged these killings. Though it is recognized that these images are prone to disturb, many maintain that it is necessary to confront them in order to prevent anything of their kind in the future. As Zuckerman writes, "there is no comfort in looking at this history—and little hope save the courage of those who survived it … there may be no justice… until we stop looking away" (Zuckerman, 2017). This is the same reasoning that 60 Minutes relied upon when it decided to show historical lynching photos in a segment produced by Oprah Winfrey on lynching and racist violence in America (Farmer, 2018). Of course, the challenge is that these pictures might cause or invoke psychological harm among those related to the victims of lynching or those fearful of racist violence. There is also the possibility that airing such photos might cause enjoyment or encouragement among racist viewers; critics might remind us that many of these shocking photos were taken and celebrated in their day because they captured an event prized by violent racists.

Some of the most controversial imagery in the contemporary media comes from school massacres. While the mass shooting at Columbine High School in 1999 is certainly an unforgettable moment in our nation’s history, current students continue their battle against what they see as the causes of such tragedies. As part of the #MyLastShot campaign, students are encouraged to place a sticker on the back of their driver’s license that indicates their desire to have photos of their bodies published in the event they are killed by gun violence, in a way similar to the marks indicating organ donor status (Baldacci & Andone, 2019). Campaign founder and Columbine student Kaylee Tyner said, “it’s about bringing awareness to the actual violence, and to start conversations.” She also referenced the impact the viral video of students hiding under their desks during the school shooting at Marjory Stoneman Douglas High School had on her: “I remember seeing people on Twitter saying it was so horrible… but what I was thinking was, no, that’s the reality of a mass shooting.” While such school shootings are comparatively rare, they are clearly horrible. The #MyLastShot campaign represents an attempt to get individuals to agree in advance to use shocking imagery of their death to showcase these horrors to others. If one is killed, the hope is that the image of their dead body might bring about a better understanding of the problem of school shootings, and perhaps move readers to action. Even if one is not killed in a school shooting, agreeing to the conditions of the #MyLastShot campaign still illustrates a willingness to do something that many perceive as shocking, and thereby functions as a plea for heightened attention to issues such as school shootings and gun control measures. At the same time, viewing the photos and videos of school shootings may be so traumatizing to others as to make children afraid of attending school, and for that matter, make parents afraid of sending them. Most children will not be directly affected by a school shooting. How much fear and imagery is needed to motivate the right amount of concern and action on the part of readers?

The #MyLastShot campaign secures consent before one becomes a victim of violent crime. Not all newsworthy images involve subjects who could give such consent in advance. In cases where the subjects of photographs cannot consent to the use of their images or likenesses in news stories, the lines separating the newsworthy from the private begin to blur. The National Press Photographers Association's Code of Ethics instructs photographers to "intrude on private moments of grief only when the public has an overriding and justifiable need to see" (Code of Ethics). Determining when this "need to see" applies, however, is the difficult part of death photography. This is especially true when covering wars and other violent conflicts. The photo of Alan Kurdi, a two-year-old Syrian refugee whose dead body washed up on the Turkish coast after his family attempted to flee to Europe, overwhelmed the world with despair. Many believed the photo was vital to understanding the terrors of the refugee crisis, while others were disgusted at the exhibition made of his body throughout the media. Kurdi did not (or possibly could not, given his status as a child) consent to such a use of his corpse for news purposes. Regardless, the powerful image of Kurdi caused change; the British prime minister was so moved by it that the United Kingdom began to accept more refugees (Walsh & TIME Photo, 2015). Yet Peter Bouckaert of Human Rights Watch, who was the initial distributor of the photo, said, "when I see the image now on people's social media profiles, I contact them and I ask them to take it down. I think it is important that we give Alan Kurdi back his privacy and his dignity. It's important to let this little boy rest now."

Images of death tell a certain truth, but their reception often differs based upon the intention behind their use and the meanings imputed by their viewers. It isn't uncommon for photojournalists to be accused of non-journalistic advocacy in their most shocking photographs. For many, though, the only message conveyed through a shocking image of death is the truth, one that happens to hit us harder than most written stories. But telling a shocking truth does not exhaust all that journalists owe their subjects and readers. When can images of death be used in a way that respects the deceased and contributes to a laudable social goal of the journalist conveying this information?

Discussion Questions:

  1. What are the different sorts of images of death mentioned in this case, and what differentiates each type?
  2. Under what conditions, if any, is it acceptable to photograph the dead for news purposes?
  3. What responsibility do journalists and news organizations have to the deceased? How does one respect an individual who no longer is alive?
  4. If journalism is meant to inform on important public issues, could a shocking photograph be too moving? When does powerful photojournalism shade into unethical manipulation?
  5. What ethical obligations do photojournalists owe their readers? Do these considerations change when violence or other human agents are involved?

Further Information:

Baldacci, Marlena, & Andone, Dakin. “Columbine Students Start Campaign to Share Images of Their Death If They’re Killed in Gun Violence.” CNN, 30 March 2019, Available at: http://www.cnn.com/2019/03/30/us/columbine-gun-violence-campaign/index.html

“Code of Ethics,” National Press Photographers Association. Available at: https://nppa.org/code-ethics

Farmer, Brit McCandless. “Why 60 Minutes aired Photos of Lynchings in Report by Oprah.” 60 Minutes Overtime, 27 April 2018, Available at: https://www.cbsnews.com/news/why-60-minutes-aired-photos-of-lynchings-in-report-by-oprah/

Lewis, Helen. “How Newsrooms Handle Graphic Images of Violence.” Nieman Reports, 5 January 2016, Available at: http://www.niemanreports.org/articles/how-newsrooms-handle-graphic-images-of-violence/

Waldman, Paul. “What the Parkland Students Wanted the World to See-But the Media Didn’t.” The American Prospect, 2018, Available at: http://www.prospect.org/article/what-parkland-students-wanted-world-see-news-media-didnt

Walsh, Bryan, and TIME Photo. "Drowned Syrian Boy Alan Kurdi's Story: Behind the Photo." Time, 29 December 2015, Available at: http://www.time.com/4162306/alan-kurdi-syria-drowned-boy-refugee-crisis/

Zuckerman, Michael. “Telling the Truth about American Terror.” Harvard Law Today, 4 May 2015, Available at: http://www.today.law.harvard.edu/feature/telling-the-truth-about-american-terror/

Authors:

Page Trotter, Dakota Park-Ozee, & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
April 2, 2020


This case study is supported by funding from the John S. and James L. Knight Foundation. Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the Center for Media Engagement. They can be used in unmodified PDF form for classroom or educational settings. For use in publications such as textbooks, readers, and other works, please contact the Center for Media Engagement at sstroud (at) austin.utexas.edu.


Misinformation, Social Media, and the Price of Political Persuasion

CASE STUDY: Examining Twitter’s Decision to Ban Purchased Political Advertisements

Case Study PDF | Additional Case Studies



Kon Karampelas / Unsplash / Modified

Social media platforms such as Facebook and Twitter are increasingly used to promote specific causes or political candidates, influence voting patterns, and target individuals based on their political beliefs (Conger, 2019). Social media's impact on politics can be seen through paid political advertising, where organizations and campaigns pay to spread their messages across multiple networks. In crafting finely targeted messages, campaigns sometimes take part in techniques such as the selling or buying of users' data for targeted advertising (Halpern, 2019).

These "micro-targeted" ads are often a vehicle for fake news or misinformation. Twitter and Facebook both revealed that Russian operatives used their platforms in attempts to influence the outcome of the 2016 U.S. presidential election. In 2017, Facebook turned over thousands of Russian-bought ads to the U.S. Congress. The ads showed that "operatives worked off evolving lists of racial, religious, political and economic themes. They used these to create pages, write posts and craft ads that would appear in users' news feeds — with the apparent goal of appealing to one audience and alienating another" (Entous et al., 2017). All of this was enabled by the targeting power and data collection practices of social media platforms that financially depend on selling advertising to paying individuals and companies.

In response to growing concerns over the misuse of online political ads, Twitter CEO Jack Dorsey announced that the platform would ban all political advertisements starting in November 2019 (Conger, 2019). The move is an effort to curtail paid political speech without affecting political discourse in general. "The reach of political messages should be earned, not bought," Dorsey wrote in a series of tweets. Dorsey laid out the challenges he thinks paid political advertisements present to civic discourse: over-dependence on technological optimization, the spread of misinformation, and the promotion of manipulated media such as "deep fake" videos (Conger, 2019). Another worrisome example is the "bots," or automated social media accounts, created during the 2016 presidential election to help inflate social media traffic around specific presidential campaigns and influence "the democratic process behind the scenes" (Guilbeault & Woolley, 2016). These programs magnified the reach of misinformation that seemed to run rampant over social media during the campaign.

Twitter's restriction on political advertising is not absolute, however. Candidates and users are still able to post videos of campaign ads, as long as they do not pay Twitter specifically to promote the ad and enhance its audience. For example, 2020 presidential candidate Joe Biden released a viral campaign ad in December 2019 critiquing President Donald Trump's reputation with foreign leaders (Chiu, 2019). Because the Biden campaign posted the video to Biden's account and did not pay Twitter to promote the ad through one of the site's promoted revenue products, the campaign video was permitted and could be shared by other users. The platform also announced it will still allow some paid issue-based or cause-based ads that pertain to subjects such as the economy, the environment, and other social causes. However, "candidates, elected officials and political parties will be banned from buying any ads at all, including ones about issues or causes" (Wagner, 2019).

Twitter's decision to ban all paid political advertising is not celebrated by all parties. Some worry that social media platforms may go too far in the search for accuracy and an uncertain ideal of truth in political discourse in their spaces. Facebook's approach to paid advertising is the diametrical opposite of Twitter's: "In Zuckerberg's view, Facebook, though a private company, is the public square where such ideas can be debated," said Sue Halpern, who covers politics and technology for The New Yorker (Halpern, 2019). Facebook has indicated that it will continue to allow politicians to post or purchase ads, even if they contain content that some may judge as false or misleading information. Facebook CEO Mark Zuckerberg's argument is that, as a tool for freedom of expression, Facebook shouldn't aim to censor political ads or act as an arbiter of truth. Instead of getting rid of political ads, the platform said it will focus on combating micro-targeting, where "campaigns tap Facebook's vast caches of data to reach specific audiences with pinpoint accuracy, going after voters in certain neighborhoods, jobs and age ranges, or even serving up ads only to fans of certain television shows or sports teams" (Scola, 2019). Google, owner of the video sharing platform YouTube, also made commitments to reduce targeted political ads, pledging that political advertisers "will only be able to target ads based on a user's age, gender, and postcode." However, the tech company's policy for removing ads is still murky: "In the policy update, Google focused on misleading ads related to voter suppression and election integrity, not claims targeting candidates" (O'Sullivan & Fung, 2019). But some maintain that truth in political advertising is a worthy and attainable ideal. Criticizing Facebook's stance, U.S. Representative Alexandria Ocasio-Cortez voiced her support for paid advertising bans, saying, "If a company cannot or does not wish to run basic fact-checking on paid political advertising, then they should not run paid political ads at all" (Conger, 2019).

Since Twitter and other social media platforms are companies and not government entities, they are not strictly bound by the First Amendment’s protections of speech and expression (Hudson, 2018). The debate over curtailing paid political advertising on social media raises a related normative issue: how much free expression should anyone, including social media platforms, foster in a democratic community when the messages are judged untrue or skewed by many, or when they are tied to monetary interests? The challenge remains: who will decide which ads are unacceptably false or misleading, whose monetary influence is impermissible, and what counts as allowable and effective partisan persuasion?

Discussion Questions:

  1. Should paid political advertisements be banned from social media? Should all political ads, paid or freely posted, be forbidden? Why or why not?
  2. What are possible alternatives to Twitter and other platforms banning paid political advertisements?
  3. Do you believe that restricting microtargeting is a better approach than banning paid political advertisements?
  4. What is ethically worrisome about microtargeting? Do you think these concerns would also extend to sophisticated campaigns run by traditional advertising agencies, or even to public health campaigns targeting specific behaviors?
  5. Might there be ethically worthy values in microtargeting political or commercial messages?
  6. Is curtailing certain kinds of political speech on social media good or bad for democracy? Explain your answer, and any distinctions you wish to draw.

Further Information:

Chiu, A. “‘The world is laughing at President Trump’: Joe Biden highlights viral NATO video in campaign ad.” The Washington Post, December 5, 2019. Available at: https://www.washingtonpost.com/nation/2019/12/05/joe-biden-campaign-ad-laughing-trump-trudeau-macron-nato/

Conger, K. “Twitter Will Ban All Political Ads, C.E.O. Jack Dorsey Says.” The New York Times, October 30, 2019. Available at: https://www.nytimes.com/2019/10/30/technology/twitter-political-ads-ban.html

Entous, A., Timberg, C., & Dwoskin, E. “Russian operatives used Facebook ads to exploit America’s racial and religious divisions.” The Washington Post, September 25, 2017. Available at: https://www.washingtonpost.com/business/technology/russian-operatives-used-facebook-ads-to-exploit-divisions-over-black-political-activism-and-muslims/2017/09/25/4a011242-a21b-11e7-ade1-76d061d56efa_story.html

Guilbeault, D. & Woolley, S. “How Twitter Bots Are Shaping the Election.” The Atlantic, November 1, 2016. Available at: https://www.theatlantic.com/technology/archive/2016/11/election-bots/506072/

Halpern, S. “The Problem of Political Advertising on Social Media.” The New Yorker, October 24, 2019. Available at: https://www.newyorker.com/tech/annals-of-technology/the-problem-of-political-advertising-on-social-media

Hudson, D. “In the Age of Social Media, Expand the Reach of the First Amendment.” Human Rights Magazine, October 20, 2018. Available at: https://www.americanbar.org/groups/crsj/publications/human_rights_magazine_home/the-ongoing-challenge-to-define-free-speech/in-the-age-of-socia-media-first-amendment/

O’Sullivan, D. & Fung, B. “Google’s updated ad policy will still allow politicians to run false ads.” CNN, November 20, 2019. Available at: https://www.cnn.com/2019/11/20/tech/google-political-ads-update/index.html

Roose, K. “Digital Hype Aside, Report Says Political Campaigns Are Mostly Analog Affairs.” The New York Times, March 21, 2019. Available at: https://www.nytimes.com/2019/03/21/business/political-campaigns-digital-advertising.html

Wagner, K. “Twitter’s Political Ad Ban Will Exempt Some ‘Issue’ Ads.” Bloomberg, November 15, 2019. Available at: https://www.bloomberg.com/news/articles/2019-11-15/twitter-s-political-ad-ban-will-exempt-some-issue-ads

Authors:

Allyson Waller & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
March 25, 2020


This case study is supported by funding from the John S. and James L. Knight Foundation. Cases produced by the Media Ethics Initiative remain the intellectual property of the Media Ethics Initiative and the Center for Media Engagement. They can be used in unmodified PDF form for classroom or educational settings. For use in publications such as textbooks, readers, and other works, please contact the Center for Media Engagement at sstroud (at) austin.utexas.edu.

 

Press Freedom and Alternative Facts

The Center for Media Engagement and Media Ethics Initiative Present:


Press Freedom in the Age of Alternative Facts

David McCraw
Senior VP & Deputy General Counsel
The New York Times

March 13, 2020 (Friday) | 1:30PM-2:30PM | Belo Center for New Media (BMC) 5.208


[Photo: David McCraw. Credit: Earl Wilson]

What is the role of journalism and the press in a time when the truth is malleable, fake news is on the rise, and journalistic freedom is being called into question? How can our democratic society find real and sustainable solutions to our problems of disinformation, political polarization, and new media technologies? In this talk, David McCraw will draw on his experiences as a lawyer and as Deputy General Counsel for The New York Times to explore the ethical challenges awaiting journalists and the American public at the intersection of law, ethics, and technology.

David McCraw is Deputy General Counsel of The New York Times Company and serves as the company’s principal newsroom lawyer. He is the author of the recent book “Truth in Our Times: Inside the Fight for Press Freedom in the Age of Alternative Facts,” a first-person account of the legal battles that helped shape The New York Times’s coverage of Donald Trump, Harvey Weinstein, conflicts abroad, and Washington’s divisive politics. He has been at The Times since 2002. He is a visiting lecturer at Harvard Law School and an adjunct professor at the NYU Law School. He has done pro bono work in support of free expression in Yemen, Russia, Montenegro, Bahrain, and other countries around the world.

This talk is co-sponsored by the LBJ School of Public Affairs and the UT Ethics Project. The Media Ethics Initiative is part of the Center for Media Engagement at the University of Texas at Austin. Follow Media Ethics Initiative and Center for Media Engagement on Facebook for more information on future events. This presentation will be introduced by Rebecca Taylor (UT Austin Ethics Project).

This event is free and open to the public.

Parking Information (nearest garages: SAG and TSG) can be found at: https://parking.utexas.edu/parking/ut-parking-garages


 
