
For newsrooms looking to bring AI into the fold, consultants are an attractive — maybe even necessary — choice

Welcome to “AI is here,” a new series from RJI highlighting AI innovation in journalism. In this first installment, we take a look at how news organizations are introducing AI resources into their newsrooms with the help of industry consultants.

In less than two years, AI has gone from a quietly influential luxury to a necessity for the news industry. But dedicated AI teams remain extremely rare in newsrooms at a time when contraction of staff is far more likely than expansion, leaving news organizations to turn to consultants and outside training to “onboard” AI into their workflows.

This move follows a trend of increasing use of consultants to develop — for example — more sensitive, trauma-informed newsrooms, but it also bucks a longer and more robust pattern of the industry evolving slowly, cautiously, and with a trademark distrust of the tech industry.

It’s clear, however, that news organizations have little choice in the matter. For a technology so suddenly and inescapably ubiquitous, failing to hop aboard the AI train would be tantamount, in the medium to long term, to abstaining from cell phones or computers. Consultants, then, make it possible for news organizations to board the train even if they can’t afford to hire their own conductor.


Andrew Finlayson, executive vice president of digital media and AI strategies at research and consultancy firm SmithGeiger, said reliance on consultants for the onboarding process need not limit newsrooms in the future. On the contrary, the foundational elements consultants can help put in place through training make self-reliance easier going forward.

“When you set up a training system, you can train yourself to be able to train others in the organization,” Finlayson said. “Then you can start to communicate to staff where you are with AI, and frankly there is a lack of that at this time. A lot of companies have not fully developed guardrails and guidelines for using this technology.”

Riding the seesaw

At least publicly, both in the tech industry and in journalism, AI guardrails have often emerged not as fully formed rules but as a somewhat clumsy process of trial and error in which prominent failures define the boundaries. After Sports Illustrated, CNET and others posted error-ridden and mostly unlabeled stories written by AI, the industry largely retreated to using AI as an assistant, rather than a stand-in for staff writers. On the tech side, after companies quickly leapt from language models into image and video generation, that horizontal expansion has become more cautious after the software first replicated existing racial biases, then overcompensated.

In that seesaw environment, consultants say news organizations need to lay out some guiding principles and limits, even in broad strokes.


“Using AI to do content at scale is not a thing a serious media organization should be thinking about right now,” said Pete Pachal, founder of AI training resource The Media Copilot. “But there is certainly room for adopting platforms to create new experiences that they might not have used before.”

Such “experiences” seem almost limitless, and include assisting in the collection, organization and presentation of data; making it easier to archive and access old content; or even aiding in the very coding that makes those applications possible.

Pachal — who, like Finlayson, worked in news before becoming a consultant — echoed others who said one of the quickest and easiest steps a newsroom can take is to start playing with generative AI chatbots to learn how they work and discover ways that they can make time-consuming processes such as editing, summarizing and transcribing more efficient. These tools often have a free version that journalists can use to great effect before their organization commits to a paid tool.
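For a concrete sense of what such an assistant task looks like once a newsroom does commit to a paid tool, here is a minimal sketch of a summarization call, assuming the official openai Python package, an OPENAI_API_KEY set in the environment, and a hypothetical transcript file; the model name is just one example among many.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical input file; any long transcript or document works here.
transcript = open("city_council_meeting.txt").read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # one example model; use whatever your plan offers
    messages=[
        {"role": "system", "content": "You are a copy assistant for a local newsroom."},
        {"role": "user", "content": "Summarize the key decisions in this transcript:\n\n" + transcript},
    ],
)
print(response.choices[0].message.content)
```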

Yet hesitancy around how to present even the most unobtrusive uses of AI to the public continues to impede full-throated, public-facing adoption.

“There are a lot of people who recognize the utility of AI as an assistant to maybe make a passage more explanatory, but don’t want to label it as such,” Pachal said.

He pointed to a study released late last year that indicated labels identifying AI-generated content do impact audience perceptions, leading readers to choose human-created stories over AI products. (In an interesting wrinkle, however, participants did prefer AI content to human-created versions when they didn’t know which was which.)

Finlayson said he recommends that news organizations always include a description of how they use AI in an easily accessible location on their website, thereby lessening any perceived ambiguity.

Privacy in an open-source era

Another common concern is privacy. Can an investigative reporter compiling information about the wrongdoings of a civil servant, for instance, trust that sensitive information typed into a chatbot or shared with a colleague in an AI-driven messaging platform won’t be subsumed into the tool’s algorithm? For some, it seems that no amount of assurance will overcome an innate distrust of the tech industry, which has a spotty record on the data privacy front.

“There are some relatively straightforward things you can do to keep data private, but who really knows?” Pachal said. “Even if OpenAI says use the API and then it’s private, do you trust that? Some people still don’t.”

One solution for those concerned about privacy is to use an AI model that runs on a newsroom’s local computer network, an approach that can also make it easier to create a more customized suite of tools that meet an organization’s individual needs. But it can be a challenging, finicky proposition, once again highlighting the value of outside help.
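What that can look like in practice: a minimal sketch, assuming a newsroom machine is running an Ollama server (one popular way to host open-weights models locally) with a llama3 model already pulled. The request goes to localhost, so nothing is sent to an outside provider.

```python
import requests

# The request never leaves the machine: Ollama listens on localhost by default.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # an 8B open-weights model, pulled beforehand
        "prompt": "Summarize this interview in three bullet points: ...",
        "stream": False,    # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```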


“There is a ‘golden point’ where you have a fairly small model that’s good enough to do what you need it to do and is also cheaper than using a proprietary model,” said Sil Hamilton, a researcher-in-residence who helps the nonprofit Hacks/Hackers hold educational AI workshops at universities and professional conferences. “But it’s difficult to do that without a lot of fine-tuning. A lot of smaller models are not very good.”

The key to that balancing problem lies in the architecture that forms the foundation of modern AI models: the transformer, a type of neural network that is skilled at factoring context and complex meaning into the process of turning a prompt (“Give me an image of ice cream”) into an output (an image of ice cream).

Transformers scale quadratically with the length of their input, meaning that the computational burden doesn’t simply rise in proportion to the size of an image or passage: it grows with the square of that size, quickly pushing the necessary computing power into the stratosphere for larger tasks.
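In textbook terms, the standard self-attention operation behind that quadratic cost looks like the following (a sketch of the common formulation, not any particular vendor’s implementation):

```latex
% Self-attention over n tokens with hidden dimension d. The n-by-n score
% matrix QK^T is the source of the quadratic cost: double n and the matrix
% has four times as many entries.
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d}}\right)V,
\qquad Q, K, V \in \mathbb{R}^{n \times d},
\qquad \text{cost} = O(n^{2} d)
```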

“A decent small model, like Llama 3’s 8B version, can run on a MacBook with maybe 16GB of RAM,” Hamilton said. “When you’re wanting to use the large-sized Llama 3 model, you’re going to need eight Nvidia H100 cards in a cluster, which costs around $200,000. Your IT has to be equipped to handle that, which is not the easiest thing, and it’s going to cost a lot more in electricity.”
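Those numbers hold up to a rough, weights-only check (an assumption: actually serving a model also needs memory for activations and context, which is part of why a comfortable 70B deployment calls for a multi-GPU cluster). Here, the “large-sized” model is taken to be the 70B variant.

```python
def weight_gb(params_billions: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the model weights, in gigabytes."""
    return params_billions * bytes_per_param  # 1e9 params x bytes each = GB

print(weight_gb(8, 2.0))   # Llama 3 8B at 16-bit precision: ~16 GB
print(weight_gb(8, 0.5))   # the same model 4-bit quantized: ~4 GB, laptop-friendly
print(weight_gb(70, 2.0))  # Llama 3 70B at 16-bit: ~140 GB, multi-GPU territory
```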

Implement something fun

More important than these concerns, according to Hamilton, is the mindset with which news organizations approach their use of AI. Unlike the word processor or the internet, innovations to which AI is often compared as a leap forward in journalism’s relationship with technology, AI in Hamilton’s view is not a ready-made, “plug and play” product suited to the complex and often widely differing needs of journalists.

“AI is really from the realm of research — it doesn’t belong to the realm of IT yet,” Hamilton said. “The way that large language models are continuing to be released is completely at odds with how any news organization wants to approach this. A news organization wants infrastructure, technical support, service-level contracts that they’re just not getting from these more research-minded organizations.”

In other words, organizations that want to onboard AI must first be open to experimentation, a view that casts the sometimes embarrassing trial-and-error of using AI in a new light, as more than a series of very public mistakes. Dead ends and failures, after all, are inevitable branches on the path to successful experimentation.

“It doesn’t cost too much to just have somebody play around with the technology for a few days and try to implement something fun,” Hamilton added. “And what you’re getting in the process is something a little bit closer to what the AI companies imagine when they release these technologies, because that’s exactly what they’re doing, too.”


