
Where Decisions Disappear

Enterprise software has always been built in the image of accounting. Not just financially, but philosophically. It assumes the world can be made legible if we record enough facts, timestamp enough actions, and preserve enough artifacts. Once everything is logged, the thinking goes, clarity will follow.

This works well for transactions. It works poorly for judgment.

If you want to see the problem, watch what happens when someone new joins a team and asks an innocent question: why do we do it this way?

You can usually find the what in five minutes. The ticket exists. The epic exists. The Jira status moved. The purchase order was approved. The config flag is on. The contract has a redline. The dashboard has a line that went up and then plateaued. The incident postmortem has a timeline down to the minute.

The why is different.

The why lives in a meeting that never had notes. Or in a Slack thread that got lost when a channel was archived. Or in a half-joking aside someone said in a hallway: “If we touch that right now, we’ll miss the launch.” Or in an executive’s intuition after seeing a competitor implode. Or in the knowledge that a vendor is fragile but politically protected. Or in the fact that the CFO was in a bad mood that quarter. Or in the memory that the last time you tried to do the “clean” thing, production burned for two days and Support still has scars.

Sometimes the why lives in something even harder to admit: the decision was made to end an argument, not to find truth. Or to buy time. Or to preserve optionality. Or to keep a promise that mattered socially even if it made no sense economically.

Enterprise systems are good at storing what can be audited. They are bad at storing what can only be understood.

That’s a subtle distinction until you’re inside it. If you’ve ever been through an audit, you’ve seen how much of enterprise life is optimized to be defensible rather than correct. An auditor asks, “Who approved this?” and the system can answer. The auditor asks, “Why was this exception granted?” and the system produces a paragraph that looks like a reason but reads like a spell. A system can tell you that a security exception exists. It can tell you when it was granted. It can tell you who clicked approve. It can’t tell you what fear or trade-off that click represented.

Most enterprise systems capture outcomes. They almost never capture reasoning. Over time, this becomes an architectural constraint on the organization itself. The software becomes a mirror that reflects only what can be recorded cleanly, and people slowly learn to think in ways the mirror can see.

That is why enterprise software has felt stuck for so long. Not because it lacks features. Because it’s missing a layer.

We’ve spent decades building systems of record. Then systems of workflow. Then systems of analysis. The next layer won’t be a better dashboard. It won’t be another automation. It will be systems that can preserve and reconstruct judgment.

From the outside, organizations look like data machines. They publish KPIs. They run QBRs. They schedule reviews. They talk about “alignment” like it’s a spreadsheet cell you can fill with the right value.

From the inside, organizations feel like something else. They feel like decision machines. Not in the tidy sense of “we decide and then we execute,” but in the messy sense of constant triage. Which fires matter. Which customers matter. Which risks are real. Which rules are serious and which are theater. Where to spend political capital. When to delay. When to force.

This is why the most consequential organizational decisions often have nothing to do with the numbers on the slide. The slide is an artifact. The decision is a negotiation.

Take hiring. On paper it’s a pipeline. Source, screen, interview, debrief, offer. In reality it’s an argument about what kind of company you are becoming. It’s an argument about standards when you’re tired. It’s an argument about timing when the roadmap is slipping. It’s an argument about whether you trust your manager to onboard someone with sharp edges. It’s an argument about whether you can survive another quarter without that role.

Or take budget approvals. You can pretend it’s ROI math, but anyone who has watched an annual planning cycle knows the truth: budgets are maps of power. A team that shipped something important last quarter gets the benefit of the doubt. A team that missed gets interrogated. A leader who has earned trust can “invest ahead.” A leader who hasn’t is asked to prove the future with a spreadsheet that is really a ritual of obedience.

Or take a product launch delay. The slide says “risk.” The real question is: who will own the embarrassment if it goes wrong? And who has the authority to make the whole company wait? That authority is rarely printed anywhere, but everyone can feel it.

Enterprise software is largely blind to this. It records the outcome—hire or no hire, approved or rejected, shipped or delayed—but not the structure of reasoning behind it. So when you look back later, the organization appears rational on paper while behaving idiosyncratically in practice.

That gap is not a philosophical annoyance. It's where most waste and fragility originate.

It’s also where strategy leaks.

Organizations don’t usually fail because they can’t record events. They fail because they can’t coordinate judgment. The same decision gets made three different ways in three different corners of the org. Exceptions pile up quietly. Temporary hacks become permanent policy because no one remembers what “temporary” was protecting. A company becomes inconsistent, then incoherent, then brittle. And when something finally breaks, everyone can point to a trail of artifacts that prove they followed process, but no one can explain the reasoning that built the trap.

When enterprise systems fail, the reflex is to add more instrumentation. More fields. More required notes. More “reasons.” More checkboxes. More mandatory templates.

This doesn’t fix the problem. It usually accelerates it.

The more you force people to explain themselves in a system, the faster they learn how to perform explanation without revealing thought. They write what is safe. They write what is legible. They write what will not be used against them later.

This is why so many enterprise artifacts read like they were written by a lawyer who has never met the product.

Look at an approval comment field that says “Justification.” In week one it’s honest. In week eight it becomes a phrase library. “Aligns with strategic priorities.” “Improves customer experience.” “Reduces operational risk.” The system is technically satisfied. Nothing meaningful has been preserved.

The same thing happens with postmortems. The first time a team writes one, it feels like truth. The tenth time, it becomes choreography. You learn which causes are acceptable (“lack of monitoring”) and which are career-limiting (“we shipped anyway because leadership wanted the announcement”). You learn how to put fire in a box.

You can collect an enormous amount of information this way and still have no memory.

Because institutional memory is not storage. It’s the ability to carry reasons across time.

And reasons are fragile because they’re compressed.

In the moment, a decision feels like a whole world. You remember the mood in the room. You remember who was tired. You remember the customer on the call. You remember the runway. You remember the thing that happened last time. Six months later, the decision is a status change. The world around it is gone.

This is why organizations end up with strange fossils: a policy that exists because of an incident no one remembers, a procurement rule that exists because of a vendor that no longer matters, a technical constraint that exists because someone once promised something to a big customer and the promise became infrastructure. The artifacts survive. The reasons evaporate.

Organizations have a short half-life for reasoning. People change teams. Managers change. Teams rename themselves. Slack histories disappear behind permissions. Confluence pages rot. A reorg wipes out the social topology that made a decision make sense.

This is why organizations repeat debates. Not because people are foolish. Because the reasons decayed.

You can see it in the way the same arguments reappear wearing different clothes. The security team warns about vendor risk. The product team argues for speed. Finance asks for predictability. Someone says, “We should standardize.” Someone else says, “Standardization will slow us down.” Everyone believes they are having the debate for the first time.

Sometimes you can even pinpoint the moment the memory was lost. It’s when the one person who held the story left. Or when the channel got archived. Or when the decision’s context stopped being felt in the body of the organization.

Enterprise software was supposed to solve this. That was the promise: a system of record, a system of truth, a durable memory. But it preserves artifacts, not interpretation. It’s a museum without labels.

And then we try to solve the museum problem with automation.

The recent wave of enterprise AI has focused heavily on execution: route the approvals, summarize the tickets, extract the entities, recommend the next action, flag the anomaly. The pitch is always the same. The organization is a machine. The software will run it.

Automation is seductive because it looks like competence. It produces outputs. It moves states. It reduces human effort.

But automation is also fundamentally conservative. It encodes yesterday’s assumptions.

An automated routing rule is a frozen picture of authority. A “recommended vendor” is a frozen picture of trust. A model trained on “successful launches” is trained on a world that no longer exists the moment the market shifts. When the environment changes, the system doesn’t protest. It complies, confidently, because confidence is what it was optimized to display.

This creates a dangerous asymmetry. Humans are expected to justify deviations from automated recommendations, even when the recommendation is operating on expired premises. Over time, people stop questioning the system because questioning has a cost. The system becomes the default not because it is right, but because it is convenient and defensible.

The organizations that survive turbulence don’t survive because they automate more. They survive because they can revise their reasons.

Which brings us back to the missing layer.

Most enterprise systems treat decisions as implicit. A record changes state, and the decision is inferred. Purchase order approved. Incident resolved. Candidate rejected. Feature shipped. Budget cut. The system has a history of outcomes. It does not have a history of judgment.

Treating decisions as first-class objects means something more specific than “add a notes field.” It means acknowledging that decisions have internal structure even when they are made fast.

There is always a set of alternatives, even if it’s just “do it now” versus “do it later.” There are constraints, even if they are unspoken. There are assumptions, even if they feel obvious in the moment. There are risks, even if they’re described as vibes. There is accountability, even if it’s distributed and ambiguous.

The trick is not to document everything. Over-documentation destroys signal. The trick is to capture the minimum context that allows a future reader to reconstruct the shape of the decision.

Think about what you actually want six months later. You don’t want a transcript. You want the hinge.

You want to know which alternative was tempting and why it lost. You want to know which constraint was binding. You want to know which assumption everyone implicitly shared. You want to know which risk was accepted consciously versus smuggled in through optimism. You want to know who would have owned the downside if it went wrong. That’s usually enough to make the decision legible again.
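To make the shape concrete, here is a minimal sketch, in TypeScript, of what a decision treated as a first-class object might look like. Everything in it is a hypothetical illustration rather than a reference to any existing system; the field names simply mirror the hinge described above.

```typescript
// A hypothetical decision record. The names are illustrative only;
// the point is that the "hinge" of a decision fits in a few fields.

interface Alternative {
  summary: string;       // e.g. "ship on schedule" vs. "delay two weeks"
  whyRejected?: string;  // filled in for the options that lost
}

interface DecisionRecord {
  id: string;
  outcome: string;               // what a system of record already captures
  decidedAt: Date;
  alternatives: Alternative[];   // which option was tempting, and why it lost
  bindingConstraint: string;     // the constraint that actually forced the choice
  sharedAssumptions: string[];   // what everyone in the room took for granted
  acceptedRisks: string[];       // risks taken on consciously, not smuggled in
  downsideOwner: string;         // who would have owned it if it went wrong
  revisitWhen?: string;          // the condition under which to reopen the decision
}

// An invented example: the hinge of a launch-delay decision.
const launchDelay: DecisionRecord = {
  id: "2024-q3-launch-delay",
  outcome: "Launch delayed by two weeks",
  decidedAt: new Date("2024-08-14"),
  alternatives: [
    {
      summary: "Ship on schedule",
      whyRejected: "Vendor API still rate-limited in production",
    },
  ],
  bindingConstraint: "Support could not absorb a rough launch this quarter",
  sharedAssumptions: ["Competitor will not announce before October"],
  acceptedRisks: ["Sales loses a quarter of pipeline momentum"],
  downsideOwner: "VP Product",
  revisitWhen: "Vendor lifts the rate limit",
};

console.log(`${launchDelay.outcome}. Binding constraint: ${launchDelay.bindingConstraint}`);
```

Notice how little of this a status field can hold. The record is short, but it is exactly the part that evaporates.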

If you do this consistently, patterns appear that organizations usually can’t see about themselves.

You discover where you repeatedly bet on fragile assumptions. You discover where “risk” is invoked as a rhetorical weapon rather than a probability distribution. You discover where the org is systematically overconfident. You discover where informal power overrides the formal process so reliably that the formal process becomes theater.

This is not surveillance. It’s self-knowledge.

And this is where AI, used correctly, becomes interesting.

The most powerful role for AI in enterprise software is not executor, but interpreter.

An interpreter doesn’t just read documents. It learns an organization’s dialect. It learns which phrases mean “this is real” and which mean “this is for the record.” It learns which metrics matter only when a VP is watching. It learns that approvals depend on timing more than substance. It learns that a particular team will accept risk in Q4 but not in Q2. It learns that a certain class of incident is always followed by a ritual “root cause” that never changes anything, and that the real fix is political, not technical.

You can’t hard-code this. That’s the point. An organization is not a static system. It is a living negotiation between incentives, identities, scars, and aspirations. The rules are partly written and partly embodied.

It’s also why forcing people to “capture reasoning” manually usually fails. People don’t mind writing. They mind exposure. They mind being misread. They mind leaving a permanent artifact that can be pulled out later without context. The result is either silence or performance.

So the interesting design question is: how do you preserve judgment without turning the workplace into a confessional?

The answer probably looks less like mandatory forms and more like careful inference. A system that learns from the traces decisions already leave: what alternatives were discussed in review threads, what risks were raised in incident channels, what constraints were referenced in planning docs, where the debate kept looping until one person said “we’re doing this.” The goal isn’t to record everything people think. It’s to keep enough of the decision’s shape that it doesn’t collapse into a single timestamped outcome.

And if these systems are to be trusted, they have to be designed with a clear boundary: the point is not to grade individual humans. It’s to help the organization remember. A system of judgment that becomes a surveillance tool will produce the same performative artifacts as every other enterprise system, just faster and with better grammar.

What AI can do—if we design for it—is build a model of how decisions are made, not just what decisions were made. It can surface drift: “You used to reject this risk profile. Now you accept it.” It can surface contradictions: “Your process says X, but your decisions behave like Y.” It can surface forgotten context: “The last time you attempted this migration, you paused because of a vendor limitation that still exists.” It can surface hidden dependencies: “This team’s decisions are constrained by Support load in a way you’re not measuring.”
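As a toy illustration of the drift idea, and only that: the sketch below compares which risks were accepted before and after a cutoff date, using the hypothetical record shape from earlier. A real system would have to learn risk categories rather than match strings, so treat this as a sketch of the question, not the method.

```typescript
// A toy sketch of surfacing drift: which risks does the organization
// accept now that it did not accept before a given cutoff date?
// Works on any records with these two fields (e.g. the DecisionRecord above).

function riskDrift(
  records: { decidedAt: Date; acceptedRisks: string[] }[],
  cutoff: Date
): { newlyAccepted: string[]; noLongerAccepted: string[] } {
  const before = new Set<string>();
  const after = new Set<string>();

  for (const record of records) {
    const bucket = record.decidedAt.getTime() < cutoff.getTime() ? before : after;
    for (const risk of record.acceptedRisks) bucket.add(risk);
  }

  return {
    // risks the organization now accepts but did not before the cutoff
    newlyAccepted: Array.from(after).filter((risk) => !before.has(risk)),
    // risks it used to accept and has since stopped accepting
    noLongerAccepted: Array.from(before).filter((risk) => !after.has(risk)),
  };
}
```

Exact string matching stands in here for what would really be learned categories. The interesting part is not the code; it's that the question becomes askable at all.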

This is a different kind of enterprise intelligence. Not “insight” as in more charts. Insight as in understanding your own mind.

We already have names for what enterprise software has been: systems of record. Then systems of workflow. Then systems of insight. The next layer will be systems of judgment.

These systems won’t replace decision-making. They will make decision-making harder to lie about, even to ourselves. They will make assumptions explicit. They will keep reasons from evaporating.

They will let organizations ask questions that are currently almost impossible to ask at scale.

Would we make this decision the same way today? Which assumptions no longer hold? Where are we systematically overconfident? Which decisions age poorly, and why?

The reason this shift is inevitable is simple: as organizations grow more complex, the cost of misunderstood decisions rises faster than the cost of misunderstood data.

Coordination failures compound. Local optimizations conflict. Risk accumulates quietly because no single dashboard measures “we keep making the same bet and hoping the world doesn’t change.” At the same time, the limits of data-centric software are becoming obvious. More dashboards do not produce better judgment. More automation does not produce resilience.

What organizations lack is not information. It’s the ability to remember how they think.

This is why the change will look quiet from the outside. It won’t arrive as a flashy demo where an agent clicks buttons for you. It will arrive as systems that ask better questions, that preserve reasoning, that keep institutional memory alive in a way artifacts never could.

The organizations that adopt these systems will seem unusually coherent. They will change direction without panic. They will make fewer repeated mistakes. They will argue less about what happened and more about what to do next, because the past will be less mysterious.

They won’t be smarter in the abstract.

They will simply stop losing their decisions to time.

