A vast translucent grid of empty decision panels with a single illuminated neon-green panel, evoking the 99% of decisions that never get research.

Decision Intelligence · 7 min read

The Decisions That Never Get Research

There's a debate happening in enterprise research right now about AI accuracy. Vendors publish parity scores. Buyers run bake-offs. Procurement teams ask whether synthetic respondents track within 8% or 12% of fielded panels. It's the wrong debate.

The wrong opening question

The accuracy question matters, but it's a second-order question. The first-order question is something the entire category — including most of the AI-native vendors and almost all of the traditional research incumbents — has quietly stopped asking.

How many enterprise decisions get made without any research input at all?

In the work we do at Acumen, the answer keeps coming back to roughly 80%. Sometimes higher. Almost never lower.

The market for AI consumer research is being framed as a battle between synthetic and traditional. The real story is that traditional and synthetic combined still don't cover the vast majority of decisions enterprise leaders make. The gap isn't between two methodologies. The gap is between some research and no research at all.

That's the gap that matters. And it's the one the entire category is failing to address.

The unspoken arithmetic of enterprise research

A typical Fortune 500 marketing organization runs maybe 12 to 20 fielded research studies per year. A pharma commercial team might run 30. A travel and hospitality brand running brand tracking and segmentation work might hit 25 to 40 if they're well-resourced. The average sophisticated enterprise buyer has a research budget that funds, optimistically, 30 to 50 fielded engagements annually.

Now count the decisions.

A single launch involves dozens of message tests, dozens of stakeholder reaction checks, dozens of pricing and packaging iterations. A loyalty program redesign involves hundreds of small decisions about tier qualification, earn rates, redemption mechanics, and benefit structure. A regulatory filing involves stakeholder modeling across legal, compliance, executive sign-off, and external regulator response. A campaign optimization cycle involves daily creative decisions across geographies, segments, and channels.

The rough math: a sophisticated enterprise team makes thousands of decisions a year that benefit from understanding how customers, executives, regulators, or internal stakeholders will respond.

Their research budget covers roughly 30 to 50 of those decisions.

That's a 99% coverage gap.
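The back-of-envelope math above can be sketched in a few lines. The figures are illustrative assumptions drawn from the ranges in this article, not measured data:

```python
# Back-of-envelope coverage-gap math (illustrative figures from the article).
fielded_studies = 50       # optimistic annual fielded engagements
annual_decisions = 5_000   # rough count of research-worthy decisions per year

coverage = fielded_studies / annual_decisions  # share of decisions with research
gap = 1 - coverage                             # share decided with no research

print(f"coverage: {coverage:.1%}, gap: {gap:.1%}")
# coverage: 1.0%, gap: 99.0%
```

Even doubling the study count or halving the decision count leaves the gap above 97%, which is why the conclusion is insensitive to the exact inputs.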

This isn't an indictment of traditional research. The fielded studies that do get commissioned are typically the highest-stakes ones — regulatory submissions, multi-year strategic positioning, capital deployment decisions. Those need statistical projection, peer-reviewed methodology, defensibility under scrutiny. Traditional research is the right tool for those decisions.

The problem is everything else. Every tactical message. Every stakeholder-reaction question. Every "should we change this benefit, this offer, this fare structure, this loyalty rule, this clinical narrative" decision that gets made every quarter.

Those decisions don't get research. They get someone's opinion in a meeting.

Sometimes the opinion is informed by past research. More often it's informed by HiPPO — the highest-paid person's opinion — and the rest of the room aligns around it because there's no time and no budget to do anything else.

This is the actual baseline that AI consumer research should be measured against. Not against fielded panels. Against nothing.

Why the AI category got distracted

The AI consumer research category — synthetic respondents, AI personas, decision intelligence platforms, all of it — got distracted by an accuracy debate because that's the debate procurement teams know how to evaluate.

It's a familiar shape. Compare a new vendor against a known benchmark. Score the parity. Make a buying decision based on whether the parity number is above some threshold.

Almost every AI research vendor has bent the entire category in this direction. Evidenza publishes an 88% accuracy claim. Synthetic Users publishes an 85–92% parity range. Newer vendors compete on whose number is higher.

The parity comparison is doing useful work — it's a real diligence question, and the vendors who answer it honestly are doing the category a favor compared to the ones who don't. But it's also functioning as a category trap. It's defining the value of synthetic research as how close it gets to traditional research. As if traditional research is the ceiling and synthetic is the cheaper, faster approximation underneath it.

That framing fundamentally misunderstands the decision economics inside an enterprise.

If synthetic research is just a faster, cheaper approximation of traditional research, then its addressable market is the existing research budget — currently funding those 30 to 50 fielded studies a year. That's the market the AI vendors are bidding against each other for. It's a contest for the same dollar.

But the actual addressable market for AI consumer research isn't the research budget. It's the non-research budget — the marketing, brand, product, strategy, and stakeholder-engagement budgets currently being spent without any consumer input at all.

That's a 10x to 50x larger market, and it's not the one anyone is positioning into.

What the no-research gap actually looks like

In the conversations we have with enterprise teams, the no-research gap shows up the same way every time.

A travel brand changes a loyalty tier qualification rule. The decision is made in a Tuesday meeting. Three executives weigh in. Someone with strong opinions about Diamond members carries the room. The change ships in six weeks. Nobody talked to a Diamond member during the decision cycle. The brand will eventually see the consequence in attrition data, six months later.

A pharma commercial team changes the language in an HCP-facing message. The change is small. There's no time and no budget to field a 200-physician study for a single message change. Three brand managers and the agency creative director debate it for an hour. The version that ships is the one the loudest person preferred. Six months later the HCP detail force reports anecdotally that the new language isn't landing.

A bank changes a fee disclosure in the mobile onboarding flow. The change is made by a product manager with input from compliance and design. No customer was consulted, because consulting customers requires a research request, which requires a budget code, which requires a quarter of lead time. The change ships. The conversion drop shows up six weeks later in the funnel.

These are not edge cases. This is the operating reality of how enterprise decisions get made between the fielded research studies.

And the response from the AI consumer research category, so far, has been to pitch synthetic research as a replacement for the fielded studies that do get done — competing for the 1% of decisions that already have research input, instead of building for the 99% that don't.

It's a category-strategic mistake. And it's an opportunity, if you frame it correctly.

What changes when you build for the gap

When you stop building for the research-replacement use case and start building for the no-research gap, three things shift.

The buyer changes. Replacement use cases sell to insights and research teams — the people who own the research budget you're trying to capture. They're a small audience and a defensive one. The no-research-gap use case sells to operators — CMOs, CCOs, brand and product leaders, Chief Strategy Officers. They're a larger audience and an offensive one. They're trying to make better decisions, not protect existing research workflows.

The procurement question changes. Replacement procurement asks "is this 88% as good as fielded research, and can I justify the cost difference?" Gap procurement asks "is this 100x better than the meeting where we decided this last quarter?" The second question is much easier to answer affirmatively, and it's the one that closes deals.

The relationship to traditional research changes. Replacement positioning makes the insights team your enemy. Gap positioning makes them your champion. Insights teams want their organizations making better decisions. They're frustrated by the 80% of decisions that get made without their input because there's no time and no budget. A platform that fills that gap makes them look more comprehensive, not less essential. They become the buyers, not the gatekeepers.

This is the strategic re-frame the AI consumer research category needs to make. Not replacing traditional research. Filling the void next to it.

The honest version of what this means for AI vendors

Building for the gap is harder than building for replacement. Replacement is a clean substitution argument — same job, faster, cheaper. The gap requires explaining a category that doesn't yet exist in the buyer's mental model.

Most AI consumer research vendors will keep competing on accuracy because it's the easier sell. The accuracy debate has clear procurement criteria, established benchmarks, and a familiar shape. Vendors will publish higher parity scores. Buyers will compare them. The category will continue to feel like it's growing while collectively addressing a fraction of the actual opportunity.

The vendors that build for the gap are taking a longer bet. The bet is that enterprise teams will gradually realize they're spending most of their decision-making budget on opinions, not insight, and that AI is the only economic way to close that gap at scale. When that realization lands, the addressable market expands by an order of magnitude — and the vendors positioned for the gap, rather than for replacement, will be the ones who capture it.

Acumen is built around this bet. Not because it's the easier story to sell, but because it's the more honest one — and we think the harder, more honest stories are usually the ones that compound.

What this means if you're an enterprise buyer

If you're inside an enterprise team trying to decide whether AI consumer research is a fit, the most useful question to ask isn't "Is the AI accurate enough?"

The question is: how many decisions are we making this year that would benefit from consumer or stakeholder input that we currently don't have?

If that number is small — if your team genuinely runs research on most decisions — then AI consumer research is competing with your existing research vendors. Evaluate on parity. Compare accuracy claims. Pick the vendor whose methodology best matches your fielded approach.

If that number is large — and for most enterprise teams it will be — then AI consumer research isn't competing with your research vendors. It's covering ground they were never going to cover anyway. Evaluate on whether the platform extends your existing research investment, fits your stakeholder coverage, and gives you decision-grade output for the decisions that don't currently have input.

That's a different evaluation. It's also the one that matches the operational reality of how decisions actually get made.

The accuracy debate is real, but it's the wrong opening question. The opening question is whether you'd rather have a slightly imperfect signal for the 99% of decisions you currently get nothing on, or a perfect signal for the 1% you can already afford to research.

That's the actual choice. Frame it that way and the category looks different.

Juan Manuel Gonzalez is the Founder & CEO of G&CO. and Acumen, a decision intelligence platform for the human layer of enterprise decisions. Acumen extends traditional research with AI personas calibrated against client data and real-world signals, built for the decisions that move too fast for fielded research to reach.