A New Coexistence
AI introduces a fourth layer
Every organization is, at its core, a system of accumulated knowledge.
I don’t mean the knowledge that lives in databases or employee handbooks. I mean the deeper kind: the understanding of why things work, how decisions were actually made, what the org chart doesn’t capture. This is the knowledge that allows an institution to function as more than the sum of its current headcount. It’s what remains when individuals leave, when leadership changes, when the external environment shifts.
For most of organizational history, this knowledge has been maintained in three distinct forms, each with its own logic and its own fragility. The arrival of AI introduces a fourth form, one that stores and processes institutional knowledge in a structurally different way from all three predecessors. In my view, understanding how these four layers interact is more interesting than the narrower question of what AI can or cannot automate.
Three Forms of Memory
Every organization that persists beyond its founders develops, over time, a distributed system of knowledge that no single person fully possesses. This system operates through three layers.
First, there’s knowledge held by people. The senior engineer who knows exactly which legacy system will break if you push a certain update. The sales director who understands which clients respond to which framing. This is experiential knowledge, built over years, and it walks out the door every time someone retires or moves on.
Second, there’s knowledge encoded in processes. Standard operating procedures, compliance frameworks, decision trees. These are the formal systems an organization builds to ensure consistency regardless of who’s executing. They work well for repeatable tasks, but they tend to become obsolete when conditions change faster than documentation can be updated.
Third, there’s knowledge embedded in culture. This is the hardest to see and the hardest to change. It’s the unspoken understanding of how things are actually done: what gets rewarded and what gets punished regardless of what the handbook says, which shortcuts are tolerated and which are career-ending. Culture is what makes two organizations with identical org charts and identical processes produce wildly different outcomes.
These three forms are not interchangeable. Each stores a different kind of knowledge, operates at a different speed, and fails in a different way. People leave. Processes become obsolete. Culture resists change even when change is clearly necessary.
What holds them together is that they all emerged from the same source: accumulated human experience within a specific organizational context.
The Fourth Layer
AI introduces a form of organizational memory that does not originate from human experience in the same way.
Models trained on institutional data can retrieve, pattern-match, and generate outputs based on information that was previously distributed across all three layers. They don’t forget when someone retires. They don’t require years of socialization to absorb context. They don’t lose fidelity when a process is rewritten.
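To make that concrete, here is a deliberately minimal sketch, in Python, of what this fourth layer does at its core: reduce documents to vectors and recall by similarity. The document names, their contents, and the bag-of-words scheme are all invented for illustration; real systems use learned embeddings rather than word counts, but the structural point survives the simplification.

    # Toy "machine memory": hypothetical institutional documents reduced to
    # bag-of-words vectors and recalled by cosine similarity.
    from collections import Counter
    import math

    documents = {
        "runbook-2019": "Do not deploy the billing update before the nightly batch completes.",
        "exit-memo": "The legacy scheduler breaks if the billing service restarts mid-batch.",
        "sales-notes": "Enterprise clients respond better to risk framing than to cost framing.",
    }

    def vectorize(text: str) -> Counter:
        # Everything beyond the words themselves (tone, history, judgment) is lost here.
        return Counter(text.lower().split())

    def recall(query: str) -> str:
        # Return the closest stored document: retrieval, not understanding.
        q = vectorize(query)
        def cosine(d: Counter) -> float:
            dot = sum(q[t] * d[t] for t in q)
            norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
            return dot / norm if norm else 0.0
        return max(documents, key=lambda name: cosine(vectorize(documents[name])))

    print(recall("why did the billing deploy fail?"))  # -> "runbook-2019"

This memory never retires and never forgets. But notice what the index contains: exactly the sentences someone thought to write down, and nothing else. The unwritten reason behind the rule stays out of reach.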
But they operate with a fundamentally different epistemology, and I think this point deserves more attention than it typically gets.
Human memory is contextual. It carries not just the information but also the conditions under which it was generated: the exceptions, the judgment calls, and an understanding of what the formal record does not say. Machine memory is precise but flat. It can reproduce patterns without understanding the context that produced them.
I want to be clear that this is not a limitation that will simply be resolved by better models. It’s a structural characteristic of how two different systems encode knowledge. One builds understanding from the inside out, through participation and experience. The other builds it from the outside in, through pattern recognition across data.
The numbers illustrate why this matters at scale. According to Deloitte’s 2026 State of AI in the Enterprise report, which surveyed over 3,200 senior leaders globally, two-thirds of organizations are reporting productivity and efficiency gains from AI. But only about a third are using AI to deeply transform how they actually operate: creating new products, reinventing core processes, or fundamentally rethinking business models. The rest are optimizing what already exists. That gap, I think, is partly explained by this epistemological difference. The easy wins come from deploying AI against explicit, well-documented knowledge. The hard part is everything that was never written down.

The Epistemological Layer
The philosopher Michael Polanyi observed that human knowledge contains a dimension that resists articulation. His famous formulation was straightforward: we can know more than we can tell. Organizations have always operated with this reality. The most critical institutional knowledge is often the knowledge that no one has written down, because no one could fully express it.
When AI enters this system, it interacts primarily with what is already explicit or can be made explicit: documents, communications, and recorded decisions. The tacit dimension, the layer that often determines why an institution functions the way it does, remains largely outside its reach.
A 2026 Gartner survey found that 47% of digital workers still struggle to find the information they need to do their jobs. That’s after decades of knowledge management systems, intranets, wikis, and now AI-powered search. The persistence of this problem is telling. It suggests that the issue was never primarily about search technology. It was about the fact that a huge portion of organizational knowledge was never captured in searchable form to begin with.
This reframes AI’s contribution.
Machine memory is not a replacement for the other three forms. It’s a fourth layer that operates in parallel, with its own capabilities and its own blind spots. And that means organizations now contain four epistemologically distinct systems of knowledge, each encoding reality in a way the others cannot fully access.
That coexistence itself is a new phenomenon. How these layers interact, where they reinforce each other, and where they essentially speak different languages is a question that most current analyses have not yet begun to examine seriously. The AI-driven knowledge management market is growing rapidly, with some estimates putting it at $7.7 billion in 2025 and projecting it toward $35 billion by 2029. That’s a lot of capital being deployed on the assumption that the fourth layer can eventually absorb much of what the other three contain. I think that assumption deserves more scrutiny than it’s getting.
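To put that projection in arithmetic terms: growing from $7.7 billion to $35 billion over four years implies a compound annual growth rate of roughly 46 percent, since (35 / 7.7)^(1/4) ≈ 1.46.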
The organizations that will navigate this well are the ones that understand what each layer is good at and, just as importantly, what it can’t do. Treating AI as a drop-in replacement for institutional memory built over decades of human experience is a mistake. Treating it as a powerful but structurally different fourth layer, one that needs to be managed alongside the other three rather than as a substitute for them, is a more honest and ultimately more productive framing.

