Things are moving fast in the generative AI world for everyone, but making time for conversations about the opportunities and challenges should be on every newsroom's to-do list.
- Bavarian Broadcasting’s AI Ethics Guidelines
- Wired magazine’s generative AI policy
- JournalismAI tools
ChatGPT takes a swing
We asked ChatGPT: “Write a newsroom policy for responsible and ethical use of generative AI in journalism.” Here’s what it produced:
As the use of generative AI in journalism becomes increasingly prevalent, it is important to establish clear policies to ensure its responsible and ethical use. The following is a newsroom policy that outlines guidelines for the use of generative AI in journalism:
- Purpose of use: Generative AI should only be used when it is essential to the story or when it can provide unique insight or analysis that cannot be achieved through traditional reporting methods.
- Data sources: Data sources used to train the AI should be reliable and unbiased. Newsrooms should strive to ensure that any data used to train AI models is not discriminatory or harmful towards any particular group or community.
- Transparency: Any content created with the help of generative AI should be clearly labeled as such. Readers should be informed about the use of AI and how it was used in the creation of the content.
- Responsibility: Editors and reporters should be held responsible for the content generated by AI models. They should ensure that the content aligns with their newsroom’s ethical standards and should be prepared to correct any errors or inaccuracies.
- Privacy: Newsrooms should take steps to protect the privacy of individuals who are involved in any data used to train AI models. This includes obtaining consent when necessary and ensuring that any personal information is properly secured.
- Impact: Newsrooms should consider the potential impact of using generative AI on their audience and society as a whole. They should weigh the benefits of using AI against any potential negative consequences.
- Continuous evaluation: Newsrooms should regularly evaluate their use of generative AI to ensure that it remains responsible and ethical. They should be willing to adapt their policies and practices as needed to ensure that the use of AI aligns with their ethical standards.
By following these guidelines, newsrooms can ensure that they are using generative AI in a responsible and ethical manner that upholds the integrity of journalism.
OpenAI launched ChatGPT, its powerful new generative AI tool, on Nov. 30. The announcement had all the force of a tree falling in an empty forest: news organizations generally ignored it.
Reuters and the New York Times didn’t report anything about the communication-altering tool for nearly a week. CNN and the Los Angeles Times followed a few days later.
After six months, ChatGPT has more than 100 million active users. Google’s rival product, Bard, was released for testing in late March, and generative AI has been associated with everything from a fake Drake song to undermining democracy.
News organizations cannot be as passive about generative AI as they were about covering ChatGPT’s launch last fall.
“This genie’s out of the bottle and it’s already changing everything,” said Abraham Doris-Down, senior director for design and user experience at the Boston Globe. “I’ve been telling everybody I know this is going to be one of those moments at least as impactful as the cell phone. We’re going to have that conversation in a few years. Remember what it was like to write an email or a resume before ChatGPT?”
This particular genie joins the internet and social media as relatively recent, powerful technologies that promise to fundamentally shift how news is gathered and communicated, and it has challenged news organizations to consider how generative AI should be incorporated into the newsroom.
Unlike previous generations of AI, which recognized patterns or responded to predefined rules, generative AI is more advanced, particularly in its ability to construct clear and concise conclusions from expansive datasets.
“That could be a game changer for local news, such as agenda-watching tools,” said Darla Cameron, managing editor for visual journalism at The Texas Tribune. “The tech is there that you could easily transcribe, but what it can’t do yet is identify, what is the story in this meeting? How do you train a model on that without the model building in the biases that newsrooms have had in the past?”
Cameron emphasized generative AI presents a particular challenge for newsrooms that do not have massive resources.
“I do really worry that there’s going to be such a divide between newsrooms that have a lot of technical resources and ones that don’t,” she said.
Charlie Beckett, a veteran journalist who directs the JournalismAI project at the London School of Economics, emphasized every news organization should ask the same questions about how generative AI should or should not be used. The answers, however, can differ.
“It depends entirely upon your editorial mission and business model,” he said. “You can use it to create a vehicle for clickbait or fake news. You can use it to erode your journalistic ethics and capacity, or you can do the opposite. At one level, it’s a very pragmatic issue. It’s a question of tools.”
The question applies particularly well to smaller news organizations.
“You have to think about your staffing and your skilling,” he said. “But also, what business are you in? If you’re a local paper in a small town, what is your business really about? Is it about creating a sense of community or is it a question of trying to give people data about what is happening in their town?”
Beckett emphasized how organizations answer these types of questions should direct their uses and guidelines for generative AI.
AI is not new to journalism. Large-scale news organizations, such as the Associated Press and Wall Street Journal, have used AI for years. The introduction of generative AI tools last fall, however, represented a leap forward in the nature and power of AI tools.
Few news organizations have published explicit guidelines regarding how they will use generative AI. The Texas Tribune, for example, has not yet developed formal policies for using AI tools, though it is exploring collaborations and partnerships, including with organizations such as JournalismAI, to develop tools for potential news uses. Germany's Bavarian Broadcasting, by contrast, has published expansive guidelines for how its operation will use AI. The guidelines emphasize, "We ask ourselves every time before employing new technology: Does this really offer a tangible benefit to our users and employees at BR?"
The guidelines include concerns about transparency, editorial control, and a commitment to continue evolving and assessing how the tools are being used.
Similarly, Wired magazine outlined how its reporters and editors will use generative AI, emphasizing, “We do not publish stories with text generated by AI.” The policy notes inherent errors and biases AI tools create and perpetuate but indicates the magazine plans to experiment with more behind-the-scenes uses of generative AI, such as headline suggestions and story-idea generation.
CNET started experimenting with generative AI tools in November, publishing an announcement about how it had used AI in mid-January. Less than a week later, the organization stopped publishing AI-written stories and indicated its future reporting about AI will disclose that CNET uses its own AI tools. The rollout, according to reporting about the organization's use of AI, had not been transparent enough, with even members of the newsroom taken by surprise.
Beckett emphasized newsroom guidelines should include communication throughout the organization.
“The first step toward any kind of strategic approach is knowledge,” he said. “Basic knowledge and diffuse knowledge across your organization. Everyone needs to know about AI because it’s going to affect everybody within a news organization.”
Cameron and Doris-Down indicated their organizations are having conversations about how AI should and should not be used.
“Where I would want to draw the line, at least for us right now, is directly publishing anything generated by AI,” Cameron said. “If we were going to do a headline bot, we would probably incorporate it with our normal process for writing headlines. It generates three options to get the conversation started, and then they go to our editors who normally do that.”
Cameron noted the Tribune has used a Watson-powered, non-generative AI tag-suggestion tool since 2017.
Doris-Down, who emphasized he does not speak for the Globe as an organization, said, “We are in an exploratory phase to figure out what our policy should be. We’re committed to keeping on top of the trends that impact news and definitely AI.”
Beckett, whose organization provides tools to help news organizations, particularly those with smaller newsrooms, emphasized there is no one-size-fits-all set of guidelines for AI.
“One of the key things that I think people will forget, I would always ask it in terms of the audience,” he said. “So, what does the audience want in terms of transparency? What does the audience want in terms of accuracy? What does the audience value in terms of ethics?”
Crucially, he noted, organizations should keep in mind AI tools have critical limitations.
“We know it hallucinates,” he said, using an industry term for when AI tools struggle for an answer and end up generating false information. “We know it doesn’t know what truth is. So, if you use it as a kind of direct-to-publishing truth machine, then you’re making a mistake.”
Beckett continued, “It’s still predictive. It still has no innate knowledge. It’s still not sentient, anything remotely like that. It has no idea what accuracy is or what truth is. It’s like some over-eager intern that is just desperate to give you some kind of answer. It scribbles down whatever it desperately remembers from your lecture.”
Doris-Down agreed: generative AI is a powerful tool, but a clearheaded understanding of its limitations is crucial to any news organization’s discussion of how it should be used.
“It’s not doing thinking,” he said. “It’s doing statistical analysis. There’s a sanity check needed, based on stuff I’ve seen produced by ChatGPT.”
Despite these concerns, ChatGPT’s ability to quickly and clearly build answers to difficult questions can be impressive. When asked to create a policy for how news organizations should use generative AI, the tool produced a seven-point set of guidelines in fewer than 10 seconds.
The policy included common-sense concerns about transparency, reading “Any content created with the help of generative AI should be clearly labeled as such.” Regarding AI’s impact, it concluded, “Newsrooms should consider the potential impact of using generative AI on their audience and society as a whole.”
Overall, the tool echoed what experts contend: Generative AI requires human supervision.
Cameron and Doris-Down cautioned that organizations should be talking about how to use generative AI and should not rush into implementing the tools.
“Any news organization should at least be having people talk about this,” Doris-Down said. “Sooner than later, you’re going to need a policy.”
Cameron noted, “It is an interesting moment where things are changing so quickly, that you want to go into this space and be deliberative and bring our inherent skepticism about this stuff. But you could argue that journalists didn’t do a great job adapting to social media.”
Similarly, Beckett associated conversations about how generative AI should be used in journalism with the ongoing, difficult conversations newsrooms have had about technology since the late 1990s.
“The power relations between journalists and the public have been shaken and stirred,” he said. “I don’t think it’s been like a revolution, but it’s been a profound period of change. Unless you understand that, you won’t even get the basic technical bits right. If you think you can just bring in some gear and automate a few things and carry on as before then I think you are underestimating the shift.”
Jared Schroeder is an associate professor of journalism at Southern Methodist University in Dallas and the author of “The Press Clause and Digital Technologies’ Fourth Wave.” Daxton “Chip” Stewart is a professor of journalism at Texas Christian University in Fort Worth and the author of “Media Law Through Science Fiction: Do Androids Dream of Electric Free Speech?”