The Layer Nobody Is Building
Every platform era has a piece of infrastructure that becomes obvious only after someone builds it. For agentic AI, that moment is now.
This is the third post in the series The Headless Firm. The previous posts argued that the economic logic behind the enterprise software suite has collapsed, and that most AI agents cannot operate reliably in the environments they are being deployed into. This post examines what has to exist before any of that changes. It draws on a paper we co-authored: “The Headless Firm: How AI Reshapes Enterprise Boundaries”.
If you were running an e-commerce business in 2009 and wanted to accept payments online, you had a problem that had nothing to do with your actual product. You needed to negotiate with banks, build fraud detection, handle chargebacks, and navigate compliance requirements that varied by geography and card network. Most companies either avoided it entirely or built something brittle and expensive. Then Stripe reduced all of that to a few lines of code. Not by solving payments in a fundamentally new way, but by absorbing the complexity into an infrastructure layer that every application above it could rely on without thinking about it. The value was not in what Stripe did. It was in what became unnecessary to rebuild.
The same pattern has played out repeatedly. Twilio for telephony. Segment for customer data pipelines. Cloudflare for security at the edge. In each case, a problem that had been solved badly and expensively by thousands of individual companies was solved once, properly, and made available as a service. We are at that moment again. The layer that needs to exist is a workflow priors layer for agentic AI, and nobody has built it yet.
Everyone optimized for the model
The previous posts in this series described two structural facts about enterprise AI. First, the economic logic behind the integrated software suite has collapsed, and the architecture that replaces it is an hourglass: an intent layer at the top, a competitive market of vertical agents at the bottom, and a thin orchestration layer in the middle. Second, most agents being deployed today cannot operate reliably in that architecture because they have no structural model of the environments they are supposed to act in. They know the documented process. They have never seen the real one.
The reason this has not been solved is straightforward. The industry has spent the last three years almost entirely on the model layer, where the progress is visible, measurable, and fundable. Benchmark scores improve. Context windows expand. Reasoning capabilities compound. These are real advances and they matter, but they do not address the question of how an agent acquires the operational knowledge it needs to act safely inside a specific enterprise environment. That question has been treated as someone else’s problem, and nobody has claimed it.
Incumbents cannot build it
The platforms that could theoretically own this layer have structural reasons not to. Large enterprise vendors like Salesforce and Microsoft are building agent capabilities on top of their existing products, which means their process knowledge is scoped to their own ecosystems. They can tell an agent how Salesforce works. They cannot tell it how your organization uses Salesforce, which is a different thing entirely. Process mining platforms have deep expertise in workflow analysis but derive their models from event logs in backend systems, capturing what the system recorded rather than what the user actually did. Neither approach produces the kind of ground-level process model that agents need to operate in the full complexity of a real enterprise environment.
The gap is not an oversight. It is a structural consequence of every incumbent optimizing for what already exists rather than for what needs to be built from scratch. That is the same reason Stripe did not come from a bank.
Agents need workflow priors
The missing infrastructure needs a name precise enough to reason about. We call it workflow priors: the accumulated operational knowledge of how enterprise software is actually used, encoded in a form that agents can draw on before they act. Just as a Bayesian prior shapes how an agent interprets new information before it has seen anything, workflow priors shape how an AI agent navigates an enterprise environment before it has made a single mistake. The agent that has them operates with context. The agent that lacks them is guessing.
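The Bayesian framing can be made concrete with a toy sketch. Everything in it is invented for illustration, not drawn from the paper: the scenario, the numbers, and the `posterior` helper are all assumptions. The point is only to show how a prior shifts the conclusion an agent draws from the same ambiguous evidence.

```python
# Toy illustration: an agent must decide which of two paths users actually
# take to approve an invoice. With a uniform prior (no operational knowledge),
# the agent is guessing; with a workflow prior learned from observed usage,
# it starts from the real odds before it has made a single mistake.

def posterior(prior, likelihoods):
    """Bayes' rule over a discrete set of hypotheses."""
    unnorm = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Hypothetical hypotheses about how the approval actually happens.
uniform_prior  = {"documented_path": 0.5, "workaround_path": 0.5}
workflow_prior = {"documented_path": 0.1, "workaround_path": 0.9}  # learned from usage

# One ambiguous observation that slightly favours the documented path.
likelihoods = {"documented_path": 0.6, "workaround_path": 0.4}

# Without a workflow prior, the single observation dominates (60/40 toward
# the documented path). With one, the agent still bets on the workaround.
print(posterior(uniform_prior, likelihoods))
print(posterior(workflow_prior, likelihoods))
```

The uniform-prior agent follows the documentation; the agent with a workflow prior recognizes that one ambiguous signal is not enough to overturn what the organization actually does. That asymmetry is the whole argument for priors as infrastructure.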
The value of getting this right compounds in a way that point solutions cannot match. Every agent that draws on rich workflow priors is safer and more capable than one operating without them. Every organization that deploys on top of a shared workflow priors layer avoids rebuilding the same foundation itself. More deployments produce richer priors, which make agents more reliable, which drives more deployments. The network effect is structural, not incidental.
Process knowledge is not an add-on
There is a tempting shortcut that does not work, and it is worth naming directly. The instinct of most enterprise software vendors will be to add process understanding as a feature on top of existing architectures: a context awareness module, a workflow grounding service, an agent safety layer. These will be announced, positioned, and in some cases genuinely well-engineered. They will also be fundamentally limited by the same constraint that limits every incumbent trying to absorb a structural shift from within: they are optimizing for the existing product rather than for the new layer.
Stripe worked because it was built from scratch around the abstraction of a payment, not as a feature of an existing banking platform. Segment worked because it was built around the abstraction of a customer event, not as an analytics add-on. A workflow priors layer follows the same logic: it has to be built around the abstraction of a workflow as it actually exists in production, not as it was designed or documented. That starting point is not available to any vendor whose business model depends on an architecture that predates it.
This is the competitive logic of the hourglass in practice. The waist of the hourglass does not belong to the vendors at the top or the agents at the bottom. It belongs to whoever builds the right abstraction at the right moment and makes it available as infrastructure that everything else can rely on. That position is being contested right now, mostly by companies that do not yet know they are in the contest.
How it relates to the hourglass
If a workflow priors layer gets built properly, the consequences extend well beyond making individual agents more reliable. It changes the economics of enterprise AI deployment in the same way Stripe changed the economics of online commerce. Companies that previously could not afford to build the foundation themselves can deploy on top of it. The coordination cost of going agentic drops from a major infrastructure project to a configuration problem. The market for vertical agents at the bottom of the hourglass expands because the barrier to building reliable agents falls with the availability of reliable workflow priors as shared infrastructure.
Infrastructure businesses have a different shape than application businesses: slower to build, stickier once established, and valued on completely different metrics. Workflow priors will not generate revenue the way a software subscription does. They will generate something more durable, which is structural position in an architecture that every enterprise deploying AI will eventually need. Whether that position gets claimed by a focused infrastructure company, absorbed by a platform, or fragmented across a dozen partial solutions is the question the next few years will answer. The window for building it cleanly is open now and will not stay open indefinitely.
The full paper is available here.
Tassilo Klein and Sebastian Wieczorek are co-founders of Mantix.