
Jockeying for attention

Lessons from “horse race” coverage can improve reporting on new developments in AI technology

News reports often frame technological developments in artificial intelligence (AI) as “races” between competing companies or nations in which the speed of innovation determines the victor.

“Horse race” coverage of election campaigns has been a convention of journalism for almost as long as there have been elections—despite substantial research documenting that the voting public, electoral candidates, and the news industry itself all suffer when journalists use horse race frames.

Scholars have found that such coverage can increase public distrust in both politicians and news outlets while leading to inaccurate reporting of polling data and, ultimately, an uninformed public.

Previous studies have also determined that horse race coverage often exaggerates leading candidates’ strengths and trailing candidates’ weaknesses. The same tendency can be observed in recent coverage of rival developments in AI technology. 

For example, in an article titled “How Apple Fell Behind in the AI Arms Race,” the Wall Street Journal quoted a developer who noted that Siri, Apple’s digital assistant, was “the last thing Apple was first on.”

Five tips to improve coverage of AI tech developments

Below we adapt insights from critical studies of horse race election coverage to show how reporting on developments in AI technology can avoid some obvious pitfalls to better inform the public. These insights are one component of a larger project to promote algorithmic literacy for journalists.

New York University journalism professor Jay Rosen summarized his guiding principle for improved election coverage as “Not the odds, but the stakes.” 

The same applies to AI journalism. Emphasizing who’s “winning the race” leads to flawed assumptions in reporting on AI technology and distracts attention from other vital questions. 

1. Be cautious of equating developments with advances or progress

Racing metaphors often facilitate unjustified claims about future progress, which Sayash Kapoor and Arvind Narayanan, the authors of AI Snake Oil, have identified as one pitfall of AI journalism.

“Horse races are run on tracks, where progress happens in straight lines. This is not true for AI progress,” Kapoor explains. “Extrapolating AI advances based on recent progress is common, but could give a false sense of how quickly AI progress will continue.”

2. Investigate real-world AI applications and safety concerns, and explore labor, economic, and environmental impacts

Framing tech development primarily as a contest between rival developers marginalizes questions about how new technology is likely to impact the general public, including what one international team of researchers has identified as a host of “sociotechnical harms.” 

As AI competition escalates, “research about keeping these tools safe is taking a back seat,” according to Time. Without proper accountability and transparency, this technology could upend workers’ rights, contribute irreversibly to an existing climate crisis, and exacerbate economic disparities.

3. Cover the role of policymakers

Rapid developments in AI make it difficult for regulators to keep up: “By the time a congressional committee can hold hearings and draft legislation, AI will likely be several generations ahead.” But regulatory and policy recommendations—such as the 2023 Paris Charter on AI and Journalism—deserve more frequent and better coverage.

4. Expose bias in AI systems

Investigate how bias in AI algorithms disproportionately affects marginalized groups and provides the public with misleading or incomplete information. Look into the data sets used to train AI systems and question whether they perpetuate existing inequalities.

For instance, ChatGPT is prone to producing biased and incorrect answers. “Some A.I. researchers have accused OpenAI of recklessness. And school districts around the country, including New York City’s, have banned ChatGPT to try to prevent a flood of A.I.-generated homework,” Kevin Roose wrote in the New York Times.

For specific AI systems, ask: 

  • How will this technology shift power?
  • What guardrails are necessary to make its use safe and equitable? 
  • Will its operation be invisible or unclear to the people most affected by it?

5. Emphasize collaboration over competition

Reporting that highlights interdisciplinary efforts, ethical AI initiatives, and international collaboration will help ensure that technology serves the public good.

For example, as Alaa Abdulaal, head of digital foresight for the Riyadh-based multilateral foundation Digital Cooperation Organization, told Yahoo, “It cannot be only done by government itself … It needs to take a cooperative approach, where we have at the same table the private sector, public sector, civil society — all of them sitting together to come up with the right set of frameworks for AI.”

Growing public concern about AI — including a steep rise in the number of adults who say that increased use of AI in daily life makes them feel “more concerned than excited” — is likely a consequence of many factors.

Amid tech developers’ own claims of “revolutionary” or “groundbreaking” advances and the hype of pundits (whether they are AI “boomers” or “doomers”), journalists have important roles to play in informing the public. 

However, well-intentioned coverage that relies on narrow “horse race” or “arms race” frames to report the latest developments in AI tech may exacerbate public confusion and distrust rather than allay them.

By following the tips provided here, journalists covering developments in AI tech can shift their focus from the odds to the stakes, helping the public better understand AI’s broader societal implications, including its ethical, environmental, and economic impacts.
