Illustration by Jamie Chismar: a cartoon of two employees carefully examining a giant dartboard with a dart in the center.

How we made the Ecosystem Research Toolkit

We applied the basics of product development to design a user-centered, actionable guide for local funders

Earlier this month we launched “Commissioning a Local News Ecosystem Assessment: An Operational Toolkit.” We approached this toolkit with a simple premise: commissioning ecosystem research is as much an operational challenge as a research one. We wanted to be clear about what an ecosystem assessment takes, not just what it is.

Methodology matters. But across projects, we saw assessments drift more often because goals were assumed, RFPs were vague, or vendors and commissioners were solving slightly different problems.

As ecosystem assessments move from occasional practice in local journalism to standard operating procedure, these friction points are becoming apparent. We designed the toolkit to bring structure to that “messy middle” — the operational work that makes strong research possible.

Our process

We approached this toolkit as a product. 

First, we identified a clear target user: people commissioning ecosystem assessments, often for the first time. These are usually local funders or other civic leaders. We’ll call them “commissioners” throughout this post.

We stayed laser-focused on them throughout the process of developing the toolkit, continually asking questions such as: 

  • If someone is doing this for the first time, where will they get stuck? 
  • What is intimidating? 
  • What terminology is opaque? 
  • What is ‘obvious’ to researchers but not to others?

You’ll see this first-time-user approach show up in features such as a clear “This step does not…” section, frequent menus of choices, and basic explanations of the tradeoffs between different research methods. It also shows up in what we chose to leave out, such as deep methodological tutorials.

Then we conducted user research, speaking with seven Press Forward chapters at various stages of the assessment process and five research vendors to understand the experience from both sides. We also had informal conversations with dozens of other commissioners and research vendors.

The insights gained from those touchpoints helped us develop what are often called design mandates — clear principles that guide how a tool should function to serve the user’s needs. 

That’s why the toolkit is:

  • Oriented around stages and decision points, not abstract information topics
  • Modular, so communities can adapt it to their context
  • Updateable, so it can evolve as the field evolves
  • Structured with clear step distinctions so users can locate themselves in the process
  • Punctuated with “done conditions” 

Done conditions are a product concept that clarifies what “good enough” looks like at a given stage. They give teams a shared understanding of what they are trying to achieve and when they’re ready to move forward. They emerged as a simple but important solution to some of the most common pain points.

What we learned, and how that shaped the toolkit

Insight 1: Commissioners lack structured support to define purpose and decisions

As ecosystem assessments become more common, more funders and civic leaders are commissioning research for the first time. Many assessments begin with a broad desire to “learn about our ecosystem,” but to be useful they need to be tied to real decisions.

Many commissioners understood that alignment and scoping mattered, but lacked clear frameworks for translating curiosity into goals. This made it harder to articulate their needs to vendors.

Research is most powerful when it’s connected to the decisions it’s meant to inform. Without that clarity, research and decision-making drift apart. We have seen this repeatedly across communities; it is one of the most consistent risks in ecosystem work. The toolkit needed to help users explicitly connect the two.

Design mandate

  • Elevate research scope decisions to a strategic level, not just an administrative one
  • Anchor the toolkit to the decisions that must be made at each stage of the process
  • Provide structure for alignment and goal-setting conversations
  • Make tradeoffs clearly visible

Insight 2: Commissioners lack frameworks for evaluating vendors based on strategic fit

Research commissioners have a wide range of vendor options. The differences between them are less about quality and more about philosophy, tradeoffs, and alignment. 

But commissioners often lack clear frameworks for evaluating those differences or translating their priorities into vendor selection criteria. They may not know what questions to ask — or which differences matter most.

They may hesitate to challenge research experts. They may over-index on deliverables because those are easy to compare. They may default to price as the key criterion because it is the most straightforward variable. Meanwhile, the most consequential distinctions, such as depth versus comparability, local embeddedness versus national benchmarking, and community trust versus scale, are harder to assess without structured criteria.

As a result, vendor selection can become a procurement exercise rather than a strategic alignment decision.

Design mandate

  • Surface the key tradeoffs that differentiate vendor approaches
  • Equip commissioners with questions and vocabulary to evaluate strategic fit
  • Frame vendor selection as partnership design, not just price comparison

Insight 3: Commissioners lack clear benchmarks for cost and time

Because ecosystem assessments are still relatively bespoke, there aren’t widely shared reference points for:

  • What this work typically costs
  • How long it takes to do well
  • What drives variation in price
  • What changes when budgets or timelines are cut

As a result, ambition is often set without taking resource constraints into account. Vendor proposals end up being the first time true cost and timeline requirements are surfaced.

That moment can trigger sticker shock. Timelines may be compressed to meet external pressures. And when costs feel high, commissioners may default to the least expensive option — only to find that it doesn’t equip them with the information needed to make strategic decisions.

Greater transparency from vendors about cost drivers and timeline realities would help. Until that becomes the norm, bringing these constraints into the open earlier in the commissioning process is an important interim step.

Design mandate

  • Be explicit about cost ranges and what influences them
  • Articulate what a realistic timeline looks like, and what can cause it to balloon
  • Link scope directly to budget and time tradeoffs

Taken together, these three insights pointed to a consistent gap: commissioners lacked structured support for making complex decisions that were new to them.

Commissioning ecosystem research isn’t about finding the perfect method. It’s about making deliberate decisions, in sequence, under real constraints. 

