The Dark Matter Is Never Where You Think

April 13, 2026
AI · vertical AI · product

The number that stopped me: OpenEvidence, a medical AI company, recently closed a round at a $12B valuation — double what it was six months ago. They've aggregated over 50% of US physicians, who use the product an average of 14 minutes per day. Ad revenue is growing 30% month-over-month with 90% gross margins.

I've been reading Nikhil Davar and Byrne Hobart's analysis of why this happened, and it changed how I think about the product I'm building.

Their framework centers on a concept they call "dark matter": economically valuable context that centralized AI platforms can't see. The argument is that OpenEvidence's real moat isn't better models or more compute. It's that doctors share their clinical uncertainty with it. The half-formed diagnostic hypothesis ("I think it's X, but something feels off") gets spoken into OpenEvidence because doctors trust it. That thought didn't exist in any database before; it's created by the trusted interaction itself.

When I first read this, I asked myself the obvious question: does my product have dark matter? Do my users share things with it they wouldn't tell ChatGPT?

My first instinct was to look at the end consumer — the person experiencing the service. And my initial answer was no. Whatever anxieties or hesitations they might have, they'd probably share those with a general-purpose AI too. The trust barrier I was imagining didn't really exist on that side.

But I was asking the wrong question, or rather, looking at the wrong side of the transaction.

The more interesting dark matter isn't what people hide because of embarrassment or privacy concerns. It's what people reveal at the moment they're making a decision. When someone is in the process of choosing — weighing options, surfacing doubts, talking themselves into or out of something — they generate a specific kind of signal that doesn't exist anywhere else. Not in reviews, not in surveys, not in CRM data.

That's the moment my product lives in. And the anxieties people express there, the specific language they use to articulate what they want, the objections they voice right before converting — that's not information they'd generate by chatting with ChatGPT. They'd only generate it with a general-purpose AI if they happened to be actively deciding something, in context, at that moment.

Davar and Hobart's broader point is that edge routers win by being present at economically valuable moments that central routers can't access. The mechanism doesn't have to be professional credibility, like with physicians. It can be transactional context: being the thing someone talks to when they're actually about to do something.

The dark matter is never where you first look for it. For OpenEvidence it's clinical uncertainty. For my product it's decision-moment anxiety. Both are invisible to ChatGPT, not primarily because of trust barriers, but because the context that generates them doesn't exist outside the specific interaction.

That's the moat. It's not the model. It's the moment.
