Context Lake: A System Class Defined by Decision Coherence

Correctness for Collective AI Systems

Xiaowei Jiang

Canonical Document • January 15, 2026

Executive Summary

AI agents are increasingly the primary consumers of data, and systems designed for human analysis become bottlenecks under agent workloads. Human decision-making occurs in discrete cycles where data freshness is secondary to cognition; batch processing, eventual consistency, and analytical snapshots are acceptable.

Agents operate fundamentally differently. They make continuous, irreversible decisions in milliseconds. When multiple agents operate over shared resources, their actions interact before reconciliation is possible. In this regime, correctness guarantees that apply only after the decision window fail to prevent conflicts.

This operating regime introduces a requirement that existing system classes do not enforce: interacting decisions must be evaluated against a consistent representation of reality at the moment their effects take place. We formalize this requirement as the Decision Coherence Law and show why it is necessary when decisions are continuous, concurrent, and irreversible.

Decision Coherence requires three categories of guarantees:

Semantic operations, enabling agents to form decision predicates directly from unstructured data through interpretation, similarity, and meaning-based selection. Semantic interpretations that influence decisions must be represented as first-class system state and governed by the same consistency guarantees as other decision-relevant data.

Consistency properties, ensuring transactional consistency over all decision-relevant context, including shared semantic interpretations. When decisions interact, agents must observe context that corresponds to a single consistent account of reality.

Operational envelopes bounding staleness and degradation under load. Even when observations are internally consistent and mutually compatible, unbounded temporal divergence allows agents to act on context that no longer reflects the reality in which their actions take effect, producing coordination failures despite individual correctness.

No existing system class provides this conjunction of guarantees. The Composition Impossibility Theorem proves that independently advancing systems cannot be composed to provide Decision Coherence—the requirement can only be enforced within a single system boundary.

A Context Lake is the system class defined by Decision Coherence. This document derives its necessity from first principles, proves why existing architectures fail, and specifies the architectural invariants required for correctness in collective agent systems.


How to Read This Document

This document defines Decision Coherence as a correctness requirement for concurrent AI agents and shows why no existing system class can satisfy it. From this result, the Context Lake follows as a necessary system class—not a product, pattern, or optimization.

This document is deliberately constraint-driven. Its purpose is to identify what must be true for correctness under concurrent, irreversible action, and to rule out architectures that cannot meet those constraints.

Audience

This document is for systems architects, senior engineers, and technical leaders evaluating agent-native infrastructure. It assumes familiarity with transactional consistency, distributed system failure modes, and architectural invariants.

It is not an implementation guide, API reference, product comparison, or introduction to AI agents.

Reading Paths

Quick read (≈30 minutes):

Read the Executive Summary, Chapter 1, Chapter 3, Chapter 4, and Chapter 6 (§6.2). This answers: What constraint forces a new system class?

Deep read:

Read sequentially. Chapters 2-3 establish necessity. Chapter 6 eliminates composition. Chapters 8-9 define the admissibility and enforcement conditions required for correctness. Repetition is deliberate.

Section Roles

  • Eliminative (define what cannot work): Chapters 2, 3, 6, 8; Appendix A.
  • Illustrative (show consequences): Chapters 1, 5, 7.

1. Introduction: The Foundational Shift

1.1 The Primary Consumer Has Changed

Traditional databases, data warehouses, and analytics platforms were primarily designed for human analysis cycles. While some systems have served automated processes, the dominant design center has been retrospective analysis rather than continuous decision-making under concurrency. Humans analyze data. Agents act on context. That difference fundamentally changes the requirements.

AI agents operate on millisecond decision loops. Observe. Decide. Act. Repeat. Continuously, many times per second. When they require context to make a decision, "an hour old" is ancient history and even "a minute old" is outdated. Agents require data as it exists in that instant, not as it was at the time of the last batch update.

[Figure: Decision-cycle comparison. Human decision cycle (hours to days): Dashboard (stale OK) → Meeting / Analysis → Decision → Action. Agent decision cycle (milliseconds): Observe → Decide → Act, repeating as a continuous loop.]

Scope:

This document addresses systems where AI agents—not humans—are the primary consumers of context, and where agents operate constructively, so that intelligence compounds rather than remaining isolated.

By "agents" we mean AI systems with two defining characteristics:

  • Semantic understanding: Interpreting meaning directly from unstructured content—text, images, audio, video—at decision time
  • Continuous operation: Acting in perpetual motion without batch boundaries or quiescent periods

For single-agent systems or independent agents with isolated contexts, the requirements established here do not apply.

Unless stated otherwise, "context" refers to the information an agent uses at decision time, and guarantees are discussed at the system level rather than as isolated component properties.

1.2 The Memory Bottleneck

Changing the primary consumer from humans to agents does more than accelerate decision cycles. It changes where understanding must reside.

Agents act continuously and concurrently. To act correctly, an agent must reason over far more context than can be held within a single execution: recent events, evolving state, shared commitments, historical patterns, and interpretations produced by other agents. This context is not static. It evolves as agents act.

Under these conditions, the limiting factor shifts from the ability to infer or decide to the ability to retain and carry forward context as agents continue to operate.

This raises a fundamental architectural question: should decision-relevant memory remain agent-local, or should it exist as shared, authoritative infrastructure?

One might hope that improving models alone could resolve this constraint. Models can retain information in two ways: through their parameters and through execution-local context.

Parametric memory is not concrete state. Model weights encode statistical regularities learned during training or fine-tuning. They do not store explicit facts about the current world, cannot be updated transactionally, and cannot be corrected atomically. As a result, parametric memory cannot represent rapidly changing reality or serve as an authoritative source of truth for real-time decisions.

Execution-local contextual memory is ephemeral and instance-bound. Even if an agent could retain unlimited prior context within a single execution, that state would exist only within that execution. It would be lost when execution ends and would be invisible to other concurrent agents. Such memory cannot be shared, accumulated across agents, or establish agreement about what is currently true.

This constraint is not unique to artificial systems.

For most of human history, humans acted as autonomous agents whose reasoning and memory were co-located in their own minds. What a person learned existed primarily inside their brain—private, lossy, and non-authoritative. Knowledge accumulated by one individual vanished with them unless it was externalized.

Intelligence did not compound reliably until memory moved outside the human brain. Writing, libraries, and the internet created an external memory substrate that was durable, shared, and authoritative. These systems did not replace human reasoning; they made reasoning cumulative by giving it a stable foundation to build upon.

AI systems now face an analogous inflection point. If each agent keeps its own memory, we get millions of isolated intelligences that never learn from each other. If memory becomes shared, breakthroughs propagate instantly across all agents at machine speed. To compound, memory must become infrastructure.

1.3 What Shared Memory Enables

The following example illustrates what changes when agents share a unified memory substrate.

Consider a warehouse fulfillment system with three autonomous agents operating concurrently: Inventory Agent (monitors stock levels), Shipping Agent (processes orders), and Restocking Agent (receives supplier deliveries). Each makes dozens of decisions per minute.

14:23:18.300: Restocking Agent processes a returned unit that had been counted as available, discovers it is defective, and submits a correction to inventory.
14:23:18.310: Inventory Agent applies the correction transactionally, updating available inventory: 2 units → 1 unit.
14:23:18.350: Shipping Agent, processing an order that requires 2 units, queries inventory and sees: 1 unit available.
14:23:18.400: Shipping Agent immediately recognizes insufficient inventory. Escalates to customer service for split shipment approval rather than committing an order it cannot fulfill.

This occurs in 100 milliseconds. The system avoided an invalid inventory commitment because the Inventory Agent serialized the update and the Shipping Agent evaluated its decision against the same coherent representation of inventory state.
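The guarantee at work here is ordinary transactional isolation over shared state. The following minimal sketch reproduces the interaction using SQLite purely for illustration; the table, SKU, and quantities are hypothetical, and any store offering atomic read-modify-write over shared inventory would serve.

```python
import sqlite3

# Minimal illustration: one shared, transactional inventory table.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE inventory (sku TEXT PRIMARY KEY, available INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('LAPTOP-15', 2)")

def apply_correction(sku: str, delta: int) -> None:
    """Inventory agent: apply a stock correction atomically."""
    conn.execute("BEGIN IMMEDIATE")  # serialize writers on this state
    conn.execute("UPDATE inventory SET available = available + ? WHERE sku = ?",
                 (delta, sku))
    conn.execute("COMMIT")

def try_commit_order(sku: str, qty: int) -> str:
    """Shipping agent: decide against the same committed state it mutates."""
    conn.execute("BEGIN IMMEDIATE")
    (available,) = conn.execute(
        "SELECT available FROM inventory WHERE sku = ?", (sku,)).fetchone()
    if available < qty:
        conn.execute("ROLLBACK")
        return "escalate: insufficient inventory, request split-shipment approval"
    conn.execute("UPDATE inventory SET available = available - ? WHERE sku = ?",
                 (qty, sku))
    conn.execute("COMMIT")
    return "committed"

apply_correction("LAPTOP-15", -1)        # defective returned unit: 2 -> 1
print(try_commit_order("LAPTOP-15", 2))  # reads 1 unit, escalates
```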

Without shared context, the Shipping Agent would have committed the order against stale inventory, creating a fulfillment failure discovered only after the fact.

This example illustrates the simplest form of constructive operation, where all decision-relevant state is deterministic, transactional, and already shared. It represents the limiting case in which decision coherence reduces to classical transactional correctness: when all decision-relevant context is deterministic, shared, and updated within a single system, standard transactions are sufficient.

1.4 When Knowledge Exists but Is Trapped in an Agent Silo

In real systems, the most consequential failures arise under very different conditions. They do not result from missing data, delayed processing, or incorrect computation. They occur when knowledge is formed correctly but cannot participate in a decision because it exists inside an isolated processing boundary, creating incompatible decision-time views of reality.

Consider an online commerce system at the moment a customer places an order for a laptop. The system is organized in a standard and intentional way. Checkout logic runs on an operational database that owns orders, payments, inventory holds, and authorization decisions, and must respond within a few milliseconds. User behavior—page views, navigation events, and session flows—is collected at much larger scale and processed in a lakehouse, where it is used to derive behavioral patterns, risk signals, and training data. This architectural split is deliberate and widely adopted, reflecting the fundamentally different performance, storage, and access requirements of transactional decision-making and behavioral analysis.

At decision time, the checkout agent evaluates the order using the state available within its operational database. From its perspective, every relevant signal appears normal:

  • the customer has a clean transaction history,
  • the purchase amount is within the customer’s usual range,
  • the shipping address has been used successfully before, and
  • the device and location match recent legitimate activity.

Based on this view of the world, the checkout agent authorizes the transaction.

At the same time, a separate behavior agent processes clickstream data for the same session inside the lakehouse. From this data, it derives a weak but meaningful pattern. The user arrived directly on a deep checkout URL and proceeded to purchase without browsing, comparison, or cart exploration. By itself, this pattern is common and not definitive. It does not justify blocking the transaction. However, it is a known precursor in account-takeover scenarios when it appears in combination with an otherwise normal-looking purchase. The behavior agent records this interpretation as derived knowledge inside the lakehouse, where it will later contribute to analysis and model training.

The checkout agent never observes this knowledge. This is not because it ignores behavioral data, nor because the behavior agent failed to compute the signal, nor because the system is slow or misconfigured. The knowledge exists, but it is written to a system the checkout agent cannot consult within its decision window. Each agent behaves correctly relative to the information it can see, and each writes its results to the system it owns. The failure arises because the interpretation formed by one agent is not visible to the other at the moment a decision is made.

The laptop ships. Thirty-six hours later, the charge is disputed. Investigation confirms that the account was compromised earlier that day. The attacker kept the transaction within normal bounds, relying on the fact that the only early warning signal existed as behavioral knowledge trapped outside the checkout agent’s decision context.

This failure was not caused by missing data, delayed processing, or a bad model. Knowledge was formed correctly and on time. The failure was an agent silo: decision-relevant knowledge existed but could not participate in the decision that mattered.

Crucially, no single agent possessed enough information to act alone. There was no oracle signal that could justify blocking the transaction in isolation. Correct action depended on combining weak, derived behavioral knowledge with transactional context at decision time. Existing architectures make this combination structurally difficult. Knowledge that emerges in analytical systems is treated as something to be learned from later, rather than as something that can participate immediately in decisions.

This is the failure mode that motivates the remainder of this document. It is not a coordination bug or an implementation mistake. It is the natural consequence of systems in which decision-relevant knowledge is allowed to exist outside the boundary where decisions are evaluated.

Unlike the previous example, no amount of transactional correctness within either system can make this decision coherent.


2. What AI Agents Require (and Why Existing System Classes Fail)

Agents do not merely analyze data. They act continuously, concurrently, and irreversibly in the real world. This operating regime imposes requirements on context that existing data systems were not designed to satisfy. This section describes those requirements and explains why no existing system class meets them.

2.1 Agents Require Semantic Operations

Earlier AI systems handled meaning in bounded ways. Interpretation either occurred offline during training and preprocessing, or was delegated to specialized subsystems such as search engines and recommendation models operating over fixed semantic spaces. At decision time, applications evaluated against pre-computed features, embeddings, or scores whose interpretation was stable and externally defined.

Recent advances in large language models change this.

Agents can now interpret raw inputs—text, images, audio, video—on demand, as decisions are being made. They extract intent, resolve entities, assess similarity, and classify activity based on meaning.

As a result, agent decisions often depend on semantic operations such as:

  • interpreting free-form input to determine intent or risk,
  • retrieving situations similar in meaning rather than matching identifiers,
  • classifying activity into pattern families.

For agents, semantic interpretation is not preprocessing or analysis—it is part of the decision itself, and deferring it changes which decision is made.

This represents a shift from earlier applications:

  • interpretation occurs during operation, not just beforehand,
  • meaning evolves continuously,
  • semantic meaning determines which state participates in a decision.

This section establishes a workload fact, not a consistency claim: agent decisions depend on semantic operations that participate directly in retrieval at decision time.

2.2 Agents Need Derived Context

Raw data records what happened. It includes events, records, logs, sensor readings, messages, and other direct observations of activity. By itself, raw data does not represent all of the state an agent must reason over at decision time. Decisions also depend on derived context: representations produced by interpreting and consolidating raw observations over time.

Derived context represents state computed from observations to reflect recent behavior and evolving conditions. Examples include rolling aggregates, deviations from baseline, correlated activity indicators, and similarity structures.

As agents operate continuously and decisions overlap in time, derived context accumulates and evolves alongside raw data. It reflects how recent experience has been condensed into decision-relevant signals that would otherwise need to be reconstructed repeatedly from raw observations.

Both raw and derived context evolve continuously under concurrency. Multiple agents read and write overlapping portions of context at the same time. There is no quiescent point between updates and decisions, and no interval during which context can be assumed to be fixed.

2.3 Agents Require Many Retrieval Patterns

A single agent decision typically composes multiple dependent retrievals over raw and derived context, including:

  • point lookups over current state,
  • range scans over recent history,
  • filters or aggregations over dynamically defined cohorts,
  • secondary-index access over non-key attributes,
  • similarity retrieval over high-dimensional representations,
  • semantic retrieval based on interpreted intent and conceptual relevance.

These retrievals may be causally ordered. Later steps depend on earlier results, and their latencies compose. All must complete within a bounded decision window.

Many of these retrievals operate naturally over derived context, which exposes accumulated structure—patterns, correlations, and evolving conditions—that are not present in raw observations alone. Other retrievals consult raw context directly. In practice, agent decisions synthesize results across both raw and derived context.

If any retrieval observes stale or delayed inputs relative to others, the decision is evaluated against a combination of facts that did not exist together at any single moment.
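To make that hazard concrete, the sketch below composes several dependent retrievals against one committed version of a toy multi-version store. Everything here is schematic and hypothetical; the point is only that every step in the composition reads from the same cut of state, whereas re-reading between steps could mix versions.

```python
import time
from dataclasses import dataclass, field

@dataclass
class VersionedStore:
    """Toy multi-version store: a decision reads one committed version."""
    versions: list = field(default_factory=list)  # [(timestamp, state dict)]

    def commit(self, state: dict) -> None:
        self.versions.append((time.monotonic(), dict(state)))

    def snapshot(self) -> tuple:
        return self.versions[-1]  # latest committed cut of state

store = VersionedStore()
store.commit({"balance": 100, "risk_tier": "low"})

def decide(order_amount: float) -> str:
    _, view = store.snapshot()   # one cut of reality for the whole decision
    balance = view["balance"]    # point lookup over current state
    tier = view["risk_tier"]     # secondary-attribute access
    if order_amount > balance:   # later step depends on the earlier read
        return "deny"
    return "approve" if tier == "low" else "apply friction"

# The hazard the text rules out: calling store.snapshot() again between
# steps could combine facts from versions that never coexisted.
print(decide(40.0))  # -> "approve"
```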

2.4 Existing System Classes Fragment Decision-Time Context

Current system classes divide responsibility along clean architectural boundaries:

  • OLTP systems efficiently mutate primary records, but are not designed to continuously maintain large volumes of derived context or to support complex, multi-pattern retrieval under high concurrency.
  • Analytical systems compute rich derived analytical results at scale, but operate over historical snapshots, trading freshness and concurrency for scale and query flexibility.
  • Search engines support efficient keyword retrieval, but do not continuously maintain derived context, and make updates visible according to indexing policies rather than mutation timing.
  • Vector databases optimize similarity search over vector representations, but do not own the additional context against which decisions are evaluated.

Each of these systems is effective within its intended scope. The limitation arises because agent decisions must combine raw and derived context, using multiple retrieval patterns, over continuously evolving state, under concurrency, at decision time.

This requirement is sometimes conflated with HTAP systems. However, HTAP has never been defined as a system class with formal semantics or invariants. It is a descriptive label for a workload pattern—executing analytical queries closer to transactional data—rather than an abstraction that defines how decisions are evaluated under concurrency.

The requirements discussed here address a different question altogether: what must hold for concurrent decisions over evolving context to be meaningful at all.

No existing system class was designed for this role.

2.5 The Structural Question

When systems fail to meet agent requirements, the default response is incremental: faster pipelines, better indexes, smarter caching.

But speed alone cannot close this gap. The problem is structural, not operational.

This chapter has established what agents require and why existing system classes fail to provide it. But understanding requirements is not the same as understanding purpose.

When agents act in isolation, their behavior does not compound. When multiple agents operate over shared resources or state, their decisions either reinforce one another or interfere. Reinforcement is not automatic: it occurs only when information that influences one agent's decision is visible to others whose decisions interact. Absent this condition, concurrent operation amplifies error rather than intelligence.

Under this operating regime, a fundamental question emerges:

What constraint must hold for agent action to be constructive—when decisions overlap in time, interact through shared state, and produce irreversible effects?

This is not a question about performance optimization or system composition. It is a question about the minimal requirement for agent operations to produce compounding intelligence rather than compounding errors.


3. Decision Coherence

The preceding chapter established the operating regime in which agent interactions become consequential: concurrent decisions over shared state with irreversible effects. It also posed the structural question of what must hold for such interactions to be constructive rather than interfering.

This chapter derives the constraint that answers it.

Under constructive operation, intelligence compounds: one agent's work informs others' decisions, patterns propagate across agents, and understanding accumulates in shared memory rather than remaining isolated.

But compounding is only possible when agents act on the same understanding of reality. If Agent A updates its understanding but Agent B cannot see that update—if they operate from incompatible representations—then Agent B cannot benefit from Agent A's work. Intelligence cannot compound; it fragments.

More fundamentally, when interacting decisions are evaluated against incompatible representations of reality, the system admits no single coherent execution history. Each decision may be locally justifiable relative to what it observed, yet there exists no unified history in which their joint outcomes can be explained. Once such decisions commit irreversible effects, later reconciliation cannot retroactively restore coherence.

3.1 The Decision Coherence Law

Decision Coherence Law.

To operate constructively, agents that take irreversible actions with interacting effects must evaluate their interacting decisions against a coherent representation of reality at the moment those decisions are made.

This law is foundational in scope, but its necessity follows directly from the structure of interacting decisions. We do not reduce it to more primitive laws; rather, we establish its necessity by showing that without it, the operational meaning of collective behavior is undefined rather than merely suboptimal. It therefore serves as the starting point from which other properties follow. The remainder of this chapter derives the operational requirements that any system satisfying this law must possess.

[Figure: An incoherent decision-time cut. Three subsystem timelines advance independently (a database read at t3, a search read at t2, a vector read at t1), so a single agent decision at "now" fractures across timelines.]

Understanding constructive operation:

Constructive operation refers to a mode of multi-agent behavior in which agents' decisions reinforce rather than interfere. Under constructive operation, information that influences one agent's decision is available to others whose decisions interact, allowing discoveries and patterns to propagate across agents instead of remaining isolated.

This is what distinguishes constructive operation from:

  • Isolated operation: Agents act independently, so discoveries and decisions do not benefit others and intelligence does not compound.
  • Interfering operation: Agents conflict without benefit, producing worse outcomes than working alone.

Why constructive operation requires coherent representation:

For intelligence to compound, agents must base decisions on the same understanding of what is currently true.

Consider the intended flow:

  • Agent A makes a discovery (recognizes a pattern, detects an anomaly, refines an interpretation)
  • Agent B makes a decision milliseconds later
  • Agent B's decision should benefit from Agent A's discovery

This flow only works if both agents operate from the same representation of reality. If Agent A updates its understanding but Agent B cannot see that update—if they work from different representations—then Agent B cannot benefit from Agent A's work.

What coherent representation means:

A coherent representation means that the observations used to evaluate interacting decisions are mutually compatible. Together, they must describe a reality in which all of those observations could simultaneously hold at the moment decisions are made.

Under this condition, interacting decisions can reinforce rather than undermine one another; without it, intelligence fragments as decisions rely on incompatible accounts of reality.

3.2 Operational Requirements

The Decision Coherence Law defines a structural condition for constructive collective operation. This section explains how that condition must be interpreted in practice, in systems where reality evolves continuously and agent decisions have effects that extend beyond system boundaries.

Consistency governs how actions are reflected within a data system. Its guarantees are defined over reads and writes that occur inside the database, with transactional boundaries that begin and end within the system. These guarantees determine how internal state is observed and updated, but they apply only to actions whose effects remain confined to the data system itself.

Decision coherence addresses a broader scope. Agent decisions are evaluated against observed state at a particular moment and take effect at a later moment. Those effects are not limited to internal updates: agents may show content to a user, block or allow an interaction, trigger a notification, surface a recommendation, or influence a user's behavior in real time. Once taken, such actions alter the external world and cannot be undone or reordered by database mechanisms.

Because decision evaluation and action are temporally separated, coherence cannot be ensured by strengthening storage consistency alone. Even perfectly consistent internal state does not prevent incoherence if decisions are based on observations that no longer reflect the world at the time their effects occur. In this regime, agents may act on views that were once admissible but have since diverged, producing outcomes that conflict when composed, despite no violation of database consistency.

Operationalizing the Decision Coherence Law therefore requires systems to make two aspects explicit. First, they must define what constitutes a coherent set of observations for evaluating a decision. Second, they must define the temporal and concurrency bounds within which an agent's decision remains admissible relative to a changing world. These bounds apply to decision-making itself, not to the maintenance of internal consistency.

From the law, three categories of requirements follow:

Semantic Operations (Scope):
The transactional scope of the system must extend beyond raw records to include semantic meaning and transformations that influence decisions. Similarity relations, inferred state, intent, and other derived interpretations are not external annotations; they participate in the same coherence regime as base data. If semantic meaning is computed or observed outside the transactional scope, decisions depend on interpretations that are not governed by the same coherence guarantees.

Consistency Properties (Safety):
When decisions interact, all observations involved—both raw data and semantic interpretations—must satisfy compatibility constraints. Violations do not merely degrade outcomes; they make it impossible to jointly explain interacting decisions as arising from a single, coherent account of reality.

Operational Envelopes (Liveness):
Because reality evolves continuously and agent actions take effect outside transactional control, systems must define the temporal and concurrency bounds under which observations remain admissible for decision-making. These envelopes constrain when interacting decisions may be evaluated so that their effects remain compatible despite change.

Together, they define what systems must enforce for Decision Coherence in practice. The system belongs to this class only if all three are upheld jointly.

3.3 Semantic Operations

Why necessary

Agents do not act on raw observations. Decisions depend on semantic interpretation and evaluation: how observations are understood, categorized, related to prior context, and incorporated into decision state.

If semantic operations execute outside the system—in application logic, prompts, or agent-local execution—agents observing identical data can derive incompatible interpretations. One agent classifies an interaction as a billing complaint and escalates; another interprets the same interaction as a general inquiry and closes. Both actions may be locally justifiable, yet they are based on incompatible semantic evaluations.

This divergence arises even when underlying data is consistent and timely, because the logic that determines what observations mean and how they influence action lies outside the system's transactional regime. Semantic interpretations computed externally cannot be governed, shared, or reconciled reliably, and therefore cannot participate in the system's authoritative representation of reality.

As a result, interpretations discovered by one agent—such as recognizing an intent shift or emerging pattern—remain invisible to others, even when they operate over the same underlying data. When semantic understanding is isolated in this way, agent behavior cannot reinforce across decisions.

Requirement

The system must support semantic transformation and semantic retrieval as native capabilities over live state.

Semantic transformation converts observations into semantic state that participates in decision evaluation—such as embeddings, entity resolutions, inferred concepts, scores, and other derived interpretations.

Semantic retrieval selects decision-relevant state using semantic predicates—such as concept membership, inferred relationships, intent classification, and meaning-based filters.

What this provides

Native semantic operations produce shared semantic state that participates in the system's coherent representation of reality.

Results of semantic transformation can be materialized and may be updated and observed under the same consistency constraints as other decision-relevant data. These transformations may be non-deterministic, approximate, or revised over time; transactional guarantees govern the atomic visibility and coordinated use of their resulting representations, not the computational properties of the transformation itself. When semantic understanding is represented as system context, it becomes visible to other agents and available for subsequent decisions, rather than remaining implicit or isolated.

Invariant

Semantic interpretation that participates in decision evaluation is represented within the system's state and governed by the same coherence guarantees as other decision-relevant data.

Implications
  • Unified retrieval: Decisions may compose exact, aggregate, and semantic predicates over the same decision-relevant state.
  • Semantic transformation: Raw observations may be interpreted and converted into semantic state within the system boundary.
  • Governed accuracy-latency tradeoffs: Tradeoffs affecting semantic evaluation occur within the system boundary and are visible to decision logic.
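As an illustration of the invariant above, the following sketch commits a raw observation together with its semantic interpretation in a single transaction, so the fact and its meaning become visible atomically and are queryable by other agents. SQLite stands in for the system purely for illustration; embed() and classify_intent() are hypothetical placeholders for model calls, not real APIs.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.executescript("""
CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT);
CREATE TABLE interpretations (
    message_id INTEGER REFERENCES messages(id),
    intent     TEXT,  -- inferred concept, queryable by other agents
    embedding  TEXT   -- JSON-encoded vector for similarity retrieval
);
""")

def embed(text: str) -> list:
    """Placeholder for a model call; returns a tiny fake vector."""
    return [float(len(text) % 7), float(text.count(" "))]

def classify_intent(text: str) -> str:
    """Placeholder for a model call; trivially keyword-based here."""
    return "billing_complaint" if "charge" in text else "general_inquiry"

def ingest(msg_id: int, body: str) -> None:
    conn.execute("BEGIN IMMEDIATE")
    conn.execute("INSERT INTO messages VALUES (?, ?)", (msg_id, body))
    conn.execute("INSERT INTO interpretations VALUES (?, ?, ?)",
                 (msg_id, classify_intent(body), json.dumps(embed(body))))
    conn.execute("COMMIT")  # raw fact and its meaning become visible together

ingest(1, "I was charged twice for my order")
print(conn.execute("SELECT intent FROM interpretations").fetchone())
```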

3.4 Transactional Consistency

Why necessary

Without transactional consistency, agents may observe intermediate or partially applied changes—states that never existed as a valid configuration of the system. Decisions evaluated against such observations are undefined, because they rely on representations that cannot be reconciled with any possible state of reality.

This failure mode is independent of coordination across agents. Even a single decision becomes undefined if it is evaluated against a partial, mixed, or transient view of state. In such cases, there exists no possible execution history in which the observed facts could have simultaneously held, making the decision logically ungrounded.

Transactional consistency therefore enforces a necessary safety condition implied by the Decision Coherence Law: the observations used to evaluate a decision must correspond to some coherent state of the system. Without this condition, decision evaluation is undefined regardless of freshness, latency, or coordination.

Requirement

All state transitions must satisfy standard transactional consistency guarantees (linearizability, serializability, snapshot isolation, or read-committed isolation).

What this provides

Transactional consistency ensures agreement on state transitions under concurrency. Agents observe state that corresponds to some valid configuration of the system, rather than transient or impossible combinations of updates.

Invariant

At any moment, there exists exactly one authoritative representation of state against which decisions commit.

Implications
  • Atomic visibility of mutations: State transitions become visible as indivisible units; no decision can observe a partially applied change.
  • Isolation under concurrency: Concurrent decisions observe state that corresponds to some valid configuration of the system, rather than interleavings of in-progress updates.
  • No forked or duplicated "current" views: The system admits a single present; no cache, replica, or execution context may define an independent notion of “now.”

3.5 Temporal Envelope

Why necessary

The Decision Coherence Law requires that decisions be evaluated against reality at the time their effects take place. When reality evolves through mutations, this means the state that exists at the moment an action takes effect is the state against which the decision must be evaluated.

In practice, decisions are evaluated over observations retrieved at an earlier retrieval time, and actions take effect at a later decision time. Between retrieval and action, reality may continue to evolve through additional mutations.

If mutations occur after the retrieval time but before the decision takes effect—and those mutations are not reflected in the observations used—then the decision is evaluated against past reality rather than the reality in which it acts. The observed representation is still internally coherent, but it is temporally misaligned with the moment of action.

When multiple agents make concurrent, irreversible decisions under such temporal divergence, their actions may be locally justifiable yet difficult to compose. Without temporal bounds, interacting decisions lose a well-defined temporal frame of reference.

This distinguishes agent systems from traditional regimes. In OLTP systems, a slow transaction may still be correct when it completes. In batch analytics, staleness is acceptable because analysis is retrospective. In agent systems, decisions interact and take effect immediately; unbounded staleness introduces incompatible temporal slices of reality into decision-making.

If agents decide in tens of milliseconds while decision-relevant state is stale by seconds, concurrent agents evaluate decisions against different moments of reality. This is not merely “slow.” It undermines the ability to explain interacting outcomes coherently.

Requirement

The system must define a temporal bound Δ that limits the admissible gap between retrieval time and decision time.

For every observation used in a decision: decision_time − retrieval_time < Δ.

The declared temporal bound Δ is a correctness constraint, not a performance target.

Δ must be small enough that an agent can treat the observed context as a sufficiently recent approximation of reality when evaluating a decision whose effects may interact with others.

Any temporal bound that allows agents to act on context they cannot reasonably regard as current renders the temporal envelope vacuous and does not satisfy Decision Coherence.

What this provides

The temporal envelope bounds the maximum temporal divergence between observed state and the moment of action, accounting for both propagation delay and decision evaluation.

Within this bound, decisions are evaluated against reality that is sufficiently current to remain compatible with other interacting decisions, even as reality continues to evolve.

Invariant

Any mutation that can affect a decision becomes visible within the decision’s latency budget.

Implications
  • Continuous ingestion: Decision-relevant updates become visible without batching or refresh cycles.
  • Incremental maintenance of derived state: Derived state evolves continuously alongside raw observations.
  • Bounded tail latency: Retrieval latencies are bounded relative to decision time.
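The following sketch shows one way the temporal envelope could be enforced at the decision point: each observation carries its retrieval time, and the decision is rejected rather than evaluated once the gap reaches Δ. The Δ value, names, and error type are illustrative assumptions, not prescribed mechanisms.

```python
import time

DELTA = 0.050  # declared temporal envelope Δ in seconds (illustrative value)

class StaleContextError(RuntimeError):
    """The observation aged past Δ before the decision could commit."""

def admit(observation, retrieval_time: float):
    """Enforce decision_time - retrieval_time < Δ at the decision point."""
    gap = time.monotonic() - retrieval_time
    if gap >= DELTA:
        raise StaleContextError(f"gap {gap:.3f}s exceeds declared Δ={DELTA}s")
    return observation

# Usage: record retrieval time alongside each observation, check at decision.
t_retrieved = time.monotonic()
balance = 42                            # stand-in for a retrieved observation
decision_input = admit(balance, t_retrieved)  # raises instead of acting on stale context
```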

3.6 Concurrency Envelope

Why necessary

When many agents operate continuously, high concurrent access is the steady state. Each agent makes decisions repeatedly, and those decisions overlap in time while reading and writing shared state.

Decision Coherence is meaningful only if its requirements remain enforceable under this operating regime. If guarantees hold only at low concurrency, then constructive collective operation itself pushes the system outside the regime where coherence can be maintained.

In this sense, concurrency is a liveness condition that underwrites safety: the system must be able to make progress at the concurrency implied by the workload while continuing to satisfy the transactional and temporal requirements derived from the law.

Requirement

The system must maintain transactional consistency and the declared temporal envelope Δ for all decisions under sustained concurrent access at a declared concurrency level C, at production scale.

The declared concurrency envelope C must admit the system's peak-state concurrent decision workload.

Any envelope that requires quiescence, serial operation, or load shedding to preserve guarantees is invalid.

What this provides

Decision Coherence that holds in practice, not just in principle: the system continues to provide coherent observations and temporally admissible decisions under the concurrent access patterns that constructive agent operation produces.

Invariant

Consistency and temporal guarantees must hold under sustained production concurrency.

Implications
  • No global serialization points: Correctness does not depend on quiescence or system-wide coordination barriers.
  • Hot-key and skew tolerance: Localized contention does not collapse global guarantees.
  • Workload isolation: Unrelated activity cannot degrade temporal guarantees.

3.7 Necessity and Sufficiency

Decision Coherence requires the joint satisfaction of semantic operations, transactional consistency, and operational envelopes. These constraints address different failure modes and are independent but inseparable: each is necessary, and none is sufficient on its own.

Semantic Operations (Scope)

The system must provide native semantic operations that enable interpretation, transformation, and semantic retrieval within the system boundary. Semantic understanding that influences decisions must be representable as shared state governed by the system.

Consistency Properties (Safety)

The system must enforce transactional consistency over all decision-relevant state, ensuring atomic visibility and isolation under concurrent mutation.

Operational Envelopes (Liveness)

The system must define and enforce bounds under which decisions remain admissible:

  • Temporal envelope (Δ): Bounds the maximum staleness of decision-relevant state visible to any agent.
  • Concurrency envelope (C): Ensures guarantees hold under sustained concurrent access, preventing load-induced violations.

Why all are necessary

The constraints above are conceptually independent, yet operationally inseparable.

  • Transactional Consistency without Temporal Envelope produces coherent but stale reality. Agents agree on what was true, not what is true now.
  • Transactional Consistency without Concurrency Envelope produces correct transactions that cannot complete within decision windows under load. Correctness exists in principle but not in practice.
  • Temporal + Concurrency without Transactional Consistency produces fast but incoherent views. Agents see recent state but divergent versions of it.
  • All three without Semantic Operations produces identical data with incompatible interpretations. Agents agree on observations but not on meaning.

Complete definition

A system provides Decision Coherence if and only if it:

  • provides semantic operations as native capabilities over decision-relevant state;
  • enforces transactional consistency over all decision-relevant state; and
  • maintains these guarantees within a temporal envelope Δ at a concurrency level C.
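Restated schematically, the definition is a strict conjunction. The sketch below encodes it as a checklist over declared system properties; it is not a verifier, since the properties must be demonstrated by the system rather than probed from outside, and all names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SystemProfile:
    native_semantic_operations: bool       # scope
    transactional_consistency: bool        # safety
    temporal_envelope_ms: Optional[float]  # Δ, declared and enforced
    concurrency_envelope: Optional[int]    # C, sustained decision concurrency

def provides_decision_coherence(s: SystemProfile) -> bool:
    """Strict conjunction: any missing property disqualifies the system."""
    return (s.native_semantic_operations
            and s.transactional_consistency
            and s.temporal_envelope_ms is not None
            and s.concurrency_envelope is not None)

# A vector database alone fails the conjunction: native similarity,
# but no transactional ownership of the rest of the decision-relevant state.
print(provides_decision_coherence(
    SystemProfile(True, False, None, None)))  # -> False
```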

Decision Coherence — Joint Constraint Model

[Diagram: Joint constraint model. Decision Coherence requires all four: Semantic Operations (meaning in-system), Transactional Consistency (atomic commit, isolation), Temporal Envelope (bounded staleness), and Concurrency Envelope (correctness under load).]

All constraints are required simultaneously. No dimension is sufficient on its own.


4. Context Lake: The System Class Defined by Decision Coherence

4.1 Definition: Context Lake

A Context Lake is the system class defined by Decision Coherence.

A system qualifies as a Context Lake if and only if it enforces Decision Coherence: transactional consistency over all decision-relevant state, maintained within declared temporal and concurrency envelopes, with semantic operations executed as native capabilities.

The declared temporal and concurrency envelopes are defined relative to agent decision evaluation, not individual subsystems, queries, or components.

Systems that satisfy these constraints only at human timescales, batch intervals, or low concurrency do not qualify.

Relationship to Existing System Classes:

Context Lake requirements form a strict superset of the requirements of existing system classes.

  • Transactional databases: provide consistency, but do not maintain derived or semantic context within decision-time temporal and concurrency envelopes.
  • Analytical warehouses: provide rich queries over derived state, but operate over historical snapshots without bounded freshness.
  • Search engines: provide flexible retrieval, but not with transactional consistency or bounded freshness.
  • Vector databases: provide semantic similarity, but not integrated with transactional state or other retrieval patterns under a single coherence regime.

A Context Lake satisfies the requirements of each of these system classes while additionally enforcing the constraints required for Decision Coherence. As a result, a Context Lake can serve the roles of a database, warehouse, search engine, and vector store, while also supporting correct concurrent agent decision-making.

Any system that allows agents to act on context stale beyond its declared temporal envelope Δ violates the Decision Coherence Law.

A system may initially fall outside the Context Lake class. As the system evolves to enforce the invariants required for Decision Coherence, it becomes a Context Lake by definition. This classification depends on enforced invariants rather than prior system class, lineage, or architectural resemblance.

4.2 Context Preparation and Context Retrieval

The purpose of a Context Lake is to provide the right context at the moment a decision is made.

Context cannot satisfy this requirement by default. To make the right context available when a decision is evaluated, experience must first be interpreted, consolidated, and organized. Context may need to be prepared before it can be retrieved for decision-making.

For this reason, context participates in a Context Lake in two distinct phases:

  • Context Preparation, which organizes experience into memory that may later support decisions, and
  • Context Retrieval, in which prepared or on-demand context is retrieved to evaluate and decide action.

Only the second phase participates directly in decision-making. The first exists solely to make the second possible.

Context Preparation

Context preparation incorporates observed experience into shared memory so that correct context can later be retrieved.

This phase transforms raw observations into representations that may be relevant for future decisions through interpretation, consolidation, and structuring. It organizes memory without asserting that the resulting state is sufficient to justify action.

Context preparation:

  • may proceed asynchronously or incrementally,
  • may require low-latency updates to keep decision-relevant context current,
  • may revise prior interpretations as definitions or evidence change.

The outputs of context preparation do not participate directly in decisions. They do not constrain which actions may be taken. Their correctness is judged only by whether they enable correct decisions when later retrieved.

Context Retrieval

Context participates directly in decision-making only when it is retrieved to evaluate and decide action.

In this phase, the system retrieves the specific subset of prepared or on-demand context required to evaluate a decision predicate. Retrieved context determines the outcome of the decision and constrains which actions are admissible.

Because decisions may cause irreversible effects, context retrieval must satisfy Decision Coherence:

  • retrieved observations must be mutually compatible,
  • transactional consistency must be enforced, and
  • retrieval must occur within bounded temporal and concurrency envelopes (Δ, C).

Errors at this stage are irreversible. Once a decision commits, its effects cannot be reconciled after the fact. For this reason, correctness and latency requirements apply only to Context Retrieval.

This is the defining responsibility of the Context Lake system class.

Document Focus

Up to this point in the document, we have focused primarily on context retrieval.

This focus is intentional. The Decision Coherence Law and the operational requirements derived from it apply at the moment context participates in a decision. They constrain how decisions are evaluated and made, not how experience is ingested, interpreted, or transformed.

How context is prepared is essential, but it is consequential rather than foundational. Preparation exists to make correct retrieval possible; correctness itself is determined only when context is retrieved and used to evaluate action.

4.3 Context Lake in the Decision Loop

A Context Lake sits between experience and action. It provides a shared substrate through which decision-makers obtain the context required to evaluate and make decisions under real-world constraints.

The Context Lake does not prescribe how decisions are made. Instead, it defines:

  • how experience is organized into decision-relevant context, and
  • how that context is retrieved at the boundary.

Decision-makers interact with a Context Lake in exactly two ways:

  • by contributing experience through their actions, and
  • by retrieving context to evaluate those actions.

All decision logic remains external to the system class.

Context Lake — Decision Loop Architecture

[Diagram: Decision loop with shared context. Decision-Maker A and Decision-Maker B (agents / services) each retrieve context from, and direct actions through, the Context Lake: the authoritative shared context spanning base, derived, and semantic state.]

Experience—such as events, observations, and state changes produced by applications and the world—is first admitted into the Context Lake as raw experience. Context preparation then operates over this admitted experience, interpreting and organizing it into decision-relevant representations.

When a decision must be made, a decision-maker retrieves context from the Context Lake at the system boundary. That context reflects the system's current understanding of reality as maintained by continuous preparation over shared experience. The decision-maker evaluates its options and takes an action. That action may update shared context within the Context Lake or take effect in the external world. In either case, the outcome may produce new experience that subsequently enters the Context Lake.

This interaction pattern applies uniformly regardless of how many decision-makers exist. A single decision-maker forms a closed feedback loop with the Context Lake; multiple decision-makers form overlapping loops over the same shared context.

Decision-makers retrieve context at the system boundary and contribute experience through their actions. Coordination arises because interacting decisions evaluate against a shared, continuously maintained representation of reality.

As multiple decision-makers operate concurrently:

  • Coordination does not depend on direct communication between decision-makers.
  • Interaction is mediated through shared context rather than centralized orchestration.
  • Each decision evaluates against a coherent prepared view of reality at its decision time.

Concurrency is therefore not a special case—it is the default operating condition.

System-Class Boundary

This architecture makes the system boundary explicit.

  • Context Preparation is transformation logic executed within the Context Lake.
  • Context Retrieval occurs at the boundary between the Context Lake and decision-makers.
  • Decision-making and action occur entirely outside the Context Lake.

The Context Lake provides context; it does not make decisions.

4.4 Implication and Scope

Decision Coherence defines a new system class boundary. It does not replace transaction theory; it extends it. A system enforcing Decision Coherence must provide transactional consistency over all decision-relevant state, including both structured and semantic state, maintained within temporal and concurrency envelopes sufficient for agent coordination.

This system class is required only under a specific operating regime. Decision Coherence applies when autonomous agents make concurrent decisions whose effects interact and cannot be undone. Outside this regime, the constraints imposed by Decision Coherence are unnecessary.

Decision Coherence is not required when any of the following holds:

  • Agent decisions do not interact through shared state or resources.
  • Actions are reversible or correctable after the fact.
  • The workload is analytical rather than decisional.
  • A human mediates all actions before they commit.
  • Agents perform read-only operations.
  • Incoherent outcomes are acceptable.

Decision Coherence applies if and only if all of the following are true:

  • Autonomous agents
  • Interacting decisions
  • Irreversible actions
  • No human mediation

When these conditions hold, a Context Lake is not optional. When any condition is absent, Decision Coherence is unnecessary.


5. Use Case: Anchorless Fraud Detection

The preceding chapter defines a Context Lake as a system class derived from Decision Coherence. That definition is categorical: when autonomous agents act concurrently over shared state and take irreversible actions, correctness requires a single, coherent representation of reality at decision time.

This chapter examines a case in which violations of that requirement cannot be hidden, amortized, or repaired.

Anchorless fraud detection is chosen because its operating conditions eliminate tolerance for incoherence. Decisions occur under tight latency constraints, actions are irreversible, and adversarial adaptation exploits any delay, semantic fragmentation, or inconsistency in visibility immediately. Architectural violations that might remain latent in benign settings surface here as immediate and irreparable failures.

No fraud-specific assumption is required for the arguments that follow. The case serves only to concentrate the constraints already derived.

5.1 Digital Gift Card Draining

Digital gift cards are a preferred target for payment fraud because they offer instant delivery, fixed denominations, and rapid resale. In large-scale attacks, adversaries aim to convert stolen payment credentials into liquid value as quickly as possible, before issuers or merchants can react.

To achieve this, attackers deliberately burn every stable identifier:

  • Each transaction uses a fresh account, card, device fingerprint, and IP address.
  • Residential proxy networks rotate IPs aggressively.
  • Digital delivery eliminates shipping address reuse.

As a result, no individual transaction is suspicious in isolation. Each appears indistinguishable from a legitimate first-time purchase.

5.2 The Attack Constraint

Despite aggressive identifier rotation, attackers operate under hard constraints they cannot eliminate:

  • Category constraint: gift cards must be purchased from digital-goods merchants.
  • Denomination constraint: values are discrete (e.g., $50, $100).
  • Network constraint: traffic concentrates in a small number of proxy ASNs.
  • Time constraint: stolen cards must be burned quickly.

These constraints force the attack to manifest as bursty, coordinated activity over short time windows. The fraud signal exists only at the collective level.

5.3 What the Decision Must Determine

For each incoming transaction, the system must decide—within milliseconds—whether to approve, apply friction, or deny. That decision depends on evaluating recent collective behavior, including:

  • how many similar transactions have occurred recently,
  • whether activity is accelerating,
  • how traffic is distributed across cards, issuers, and networks,
  • whether the transaction resembles known fraud patterns,
  • whether mitigation has already been triggered,
  • whether current risk budgets or thresholds are exhausted,
  • whether merchant- or issuer-specific overrides apply.

Critically, "similar" here is not exact match. Fraud patterns mutate deliberately.

5.4 The Role of Similarity Search (and Why It Is Necessary)

Attackers actively perturb attributes to evade exact predicates:

  • amounts vary slightly around fixed denominations,
  • device fingerprints drift within a family,
  • proxy infrastructure rotates across subnets and providers,
  • checkout flows differ across channels and merchants.

As a result, the system must evaluate:

  • exact aggregations (e.g., count of $100 gift-card purchases), and
  • similarity-based evidence (e.g., resemblance to known historical fraud patterns).

Similarity search is therefore required to detect pattern families, not just exact replicas.

This capability naturally forces the use of search or vector-indexed systems, which can retrieve nearest neighbors or approximate matches efficiently at scale.
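A toy example of why similarity retrieval catches what exact predicates miss: an incoming transaction whose identifiers are all fresh can still sit close to a known fraud-burst signature in embedding space. The vectors and features below are invented for illustration.

```python
import numpy as np

# Known fraud-burst signatures as (invented) behavioral embeddings, e.g.
# amount regularity, purchase tempo, network features.
known_fraud_patterns = np.array([
    [0.98, 0.10, 0.91],   # burst of fixed-denomination gift cards via proxies
    [0.45, 0.80, 0.12],   # unrelated historical pattern
])

def max_cosine_similarity(candidate: np.ndarray, patterns: np.ndarray) -> float:
    patterns_n = patterns / np.linalg.norm(patterns, axis=1, keepdims=True)
    candidate_n = candidate / np.linalg.norm(candidate)
    return float((patterns_n @ candidate_n).max())

# Fresh account, card, device, and IP: no exact predicate fires, yet the
# behavioral embedding sits close to a known burst signature.
incoming = np.array([0.95, 0.12, 0.88])
score = max_cosine_similarity(incoming, known_fraud_patterns)
print(f"similarity evidence: {score:.3f}")
```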

5.5 Why Conservative Blocking Is Not Viable

A naïve defense is to block whenever any evidence—exact or similarity-based—appears suspicious. In practice, this is untenable.

Attackers can cheaply generate partial “bad-looking” signals:

  • shallow similarity to past fraud,
  • transient spikes in popular denominations,
  • bursts that resemble promotions.

If the system blocks conservatively on partial or approximate evidence, attackers can weaponize this behavior into a denial-of-service attack, causing large volumes of legitimate transactions to be blocked.

Because denial-of-service risk is economically and operationally unacceptable, the system cannot block on approximate or partial evidence alone. It must rely on confirmed patterns, combining exact aggregates, similarity signals, and authoritative policy state.

5.6 Why Existing Systems Force a Split Architecture

Confirming anchorless fraud patterns requires three distinct retrieval capabilities, each with incompatible semantics:

Exact aggregation over dynamically defined predicates:

Fraud decisions require computing exact aggregates over recent data using predicates defined at decision time by the incoming transaction. This includes:

  • counts and distinct counts over transaction-relative cohorts,
  • entropy and dispersion over dynamically selected groups,
  • burst detection and acceleration over short windows.

These predicates are transaction-relative, continuously changing, and not safely enumerable or pre-materializable.

This requirement pushes implementations toward search or analytics systems, which are optimized for evaluating ad-hoc, dynamically filtered aggregations over recent data.
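
The following Python sketch illustrates what "transaction-relative" means: the predicate and grouping are constructed at decision time from the incoming transaction, so the cohort cannot be pre-materialized. Field names and the window are hypothetical:

    import math
    from collections import Counter

    def window_aggregates(events, now, window_s, predicate, group_key):
        # Exact aggregates over a cohort defined at decision time.
        cohort = [e for e in events if now - e["ts"] <= window_s and predicate(e)]
        counts = Counter(group_key(e) for e in cohort)
        total = sum(counts.values())
        entropy = -sum((c / total) * math.log2(c / total)
                       for c in counts.values()) if total else 0.0
        return {"count": total, "distinct": len(counts), "entropy": entropy}

    # Cohort: near-$100 gift-card purchases in the last 10 minutes, by ASN.
    events = [{"ts": 995, "amount": 100, "category": "gift_card", "asn": 64501},
              {"ts": 998, "amount": 99,  "category": "gift_card", "asn": 64501},
              {"ts": 999, "amount": 100, "category": "gift_card", "asn": 64502}]
    print(window_aggregates(
        events, now=1000, window_s=600,
        predicate=lambda e: e["category"] == "gift_card" and abs(e["amount"] - 100) <= 5,
        group_key=lambda e: e["asn"]))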

Similarity search over recent and historical behavior:

To detect families of fraud patterns that deliberately evade exact predicates, systems must perform approximate retrieval over evolving behavior. This includes:

  • nearest-neighbor search over embeddings,
  • similarity over device, network, or checkout signatures,
  • retrieval of patterns resembling known historical fraud bursts.

Authoritative decision context and system memory:

Beyond aggregates and similarity, fraud decisions require querying the system's authoritative state memory: active mitigations, adaptive thresholds, risk budgets, and prior decisions—all maintained under transactional consistency.

This state defines the system's decision context and institutional memory. It requires database semantics.

5.7 Why the Split Is Structural

No existing system class simultaneously provides:

  • exact aggregation over dynamically defined predicates,
  • high-dimensional approximate similarity search, and
  • transactionally consistent decision context and system memory

within a single decision-time reality.

As a result, real deployments are forced into a split architecture where:

  • evidence (exact aggregates and similarity signals) lives in search and vector systems,
  • decision context and memory (policies, state, prior decisions, budgets) lives in databases.

This split is not an implementation accident. It follows from incompatible semantic guarantees across system classes, making consistency unattainable under adversarial pressure.

5.8 The Failure Mode: Cross-System Inconsistency

A fraud decision must evaluate:

exact aggregates + similarity evidence + authoritative decision state

at the same logical moment in reality.

In a split architecture, these inputs evolve on different timelines:

  • exact aggregates update independently,
  • similarity indexes update independently,
  • policy, enforcement, and system memory update independently.

There is no single snapshot that spans all three.
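
A toy Python model makes the divergence concrete. Each store applies a committed write on its own visibility timeline (indexing, replication, refresh); the latencies below are illustrative. A read taken between those timelines observes a combination of states that never coexisted:

    VISIBILITY_LAG = {"aggregates": 2.0, "similarity_index": 30.0, "policy_db": 0.1}

    class Store:
        def __init__(self, name):
            self.name, self.log = name, []   # entries: (visible_at, key, value)
        def write(self, t, key, value):
            # The effect is durable at t but visible only after the lag.
            self.log.append((t + VISIBILITY_LAG[self.name], key, value))
        def read(self, t, key):
            visible = [v for at, k, v in self.log if k == key and at <= t]
            return visible[-1] if visible else None

    agg, sim, pol = Store("aggregates"), Store("similarity_index"), Store("policy_db")

    # t=100: a burst is confirmed and all three writes commit together.
    agg.write(100, "burst:gift_card", True)
    sim.write(100, "pattern:gift_card", "fraud_family_7")
    pol.write(100, "mitigation:gift_card", "active")

    # t=105: a decision reads all three and observes a world that never
    # existed -- mitigation is visible, the similarity evidence is not.
    t = 105
    print(agg.read(t, "burst:gift_card"),       # True     (visible at 102.0)
          sim.read(t, "pattern:gift_card"),     # None     (visible at 130.0)
          pol.read(t, "mitigation:gift_card"))  # 'active' (visible at 100.1)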

5.9 Consequences and Structural Necessity

Under adversarial pressure, a split architecture produces the following failures:

  • False negatives: Fraudulent transactions are approved after risk has been inferred, but before enforcement or suppression becomes visible across systems.
  • False positives: Legitimate transactions are blocked because partial or approximate evidence is observed without corroborating aggregates or policy context.
  • Exploitability: Attackers deliberately exploit timing gaps between evidence generation and enforcement visibility to extract value or induce denial-of-service conditions.

Because these decisions commit irreversible actions, errors cannot be repaired after the fact.

These failures are not incidental, operational, or tunable. They arise necessarily from the architectural separation of evidence from authoritative decision state.

Why the Split Is Necessary:

Anchorless fraud detection requires all of the following simultaneously:

  1. Exact aggregation over live data
  2. Similarity search over evolving behavior
  3. Transactionally consistent policy and enforcement state
  4. Evaluation of all three against a single decision-time reality
  5. Immediate visibility of committed decisions to subsequent ones

No composition of existing system classes can satisfy these requirements at once.

The failure is therefore structural, not operational. The split architecture is forced by existing system class boundaries, and those same boundaries make consistency unattainable.

5.10 Beyond Adversarial Pressure

Fraud detection concentrates all decision coherence requirements within a single use case under maximum pressure. This makes the failure mode visible and catastrophic.

But adversarial pressure is not what creates the requirement—it merely accelerates and amplifies failures that occur naturally under benign concurrent operation.

The same divergence emerges in any system where:

  • Multiple agents act continuously on shared context
  • Decisions interact through common resources or state
  • Actions are irreversible or consequential

Examples include: collaborative document editing (agents resolving concurrent edits), inventory allocation (agents committing overlapping reservations), dynamic resource scheduling (agents claiming capacity), real-time personalization (agents updating user state while serving requests).

In these systems, the failure is simply harder to notice. Divergent decisions manifest as subtle inconsistencies, phantom inventory, conflicting commitments, or degraded user experience rather than financial loss. But the structural cause is identical: evaluation against different decision-time realities.

Decision Coherence is not a fraud-specific requirement. It is the fundamental constraint that emerges when autonomous agents operate constructively over shared state at machine speed.


6. The Necessity of the Context Lake

Decision coherence requires that agents act against a single coherent representation of reality at decision time—one that is authoritative, current, and semantically governed. Existing system classes do not satisfy these requirements simultaneously.

Teams therefore attempt to approximate them through composition. These efforts are rational, but they introduce additional independently advancing surfaces.

Much of the fragmentation observed in modern architectures is thus not a starting condition, but the cumulative result of attempts to recover decision coherence through composition.

This section shows why composition cannot succeed in this role. Under continuous mutation, the limits on freshness and coherence are structural, not accidental.

This section evaluates system classes rather than specific implementations. System classes are defined by architectural invariants, not by incremental optimizations; a system may change class over time only by adopting different invariants. The claim here is categorical: no existing system class enforces decision coherence under continuous mutation as an architectural invariant.

6.1 Lemma: Visibility Gating Necessity

Lemma (Visibility Gating Necessity).

Consider multiple systems, each determining when state becomes visible according to internal system policy (indexing, caching, replication, batching). An external coordinator cannot guarantee cross-system atomic visibility of decision-relevant effects while keeping the participating systems within their native system classes unless every participating system exposes a visibility-gating capability to:

  • durably accept decision-relevant effects while keeping them invisible, and
  • later make those effects visible or abort them based on an explicit external decision.

Proof (sketch): Assume an external coordinator claims to guarantee cross-system atomic visibility for a decision whose effects span more than one such system, while at least one participating system does not expose the stated visibility-gating capability.

Atomic visibility must be enforced either by controlling when effects become visible (write-time), or by controlling what a read is allowed to observe (read-time); any intermediate construction reduces to one of these two.

Write-time enforcement:

Any protocol that commits across multiple systems must apply decision-relevant effects to them. In a system lacking visibility-gating, once an effect is durably applied it becomes visible according to the system's local rules. In any execution without instantaneous, globally synchronized visibility, there exists an interval in which an effect has become visible in one system while the corresponding effect has not yet become visible in another, violating atomic visibility.

Read-time enforcement:

Another approach to enforcing atomicity is to read each system “as of” a shared cut at query time. However, each system determines when state becomes visible according to its own internal policy, and these visibility boundaries are not shared or comparable across systems. Any shared cut must therefore be defined in an external coordinate system (e.g., event time).

Once coherence is defined relative to an event-time cut, mutation operations are no longer native—all changes must be represented as immutable event-time facts. The systems degenerate into append-only logs with state semantics implemented externally.

Therefore, cross-system atomic visibility cannot be guaranteed while preserving native system behavior unless every participating system exposes the stated visibility-gating capability. ∎
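
The gating capability the lemma requires can be stated as an interface. The Python sketch below is schematic; method names are hypothetical, and the coordinator's abort path is omitted for brevity:

    from abc import ABC, abstractmethod

    class VisibilityGated(ABC):
        """Durably accept effects while keeping them invisible, then expose
        or abort them on an explicit external decision. Independently
        advancing systems expose no such interface."""

        @abstractmethod
        def prepare(self, txn_id: str, effects: dict) -> None:
            """Durably stage effects; they must not yet be readable."""

        @abstractmethod
        def expose(self, txn_id: str) -> None:
            """Make staged effects visible, atomically for this system."""

        @abstractmethod
        def abort(self, txn_id: str) -> None:
            """Discard staged effects without them ever becoming visible."""

    def commit_across(systems, txn_id, effects_by_system):
        # Cross-system atomic visibility is achievable only because every
        # participant gates visibility on the coordinator's decision.
        for s, effects in zip(systems, effects_by_system):
            s.prepare(txn_id, effects)
        for s in systems:
            s.expose(txn_id)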

6.2 Composition Impossibility Theorem

Definition: Independently Advancing System

A system is independently advancing if it determines when durable state becomes visible according to internal system policy and does not expose any mechanism by which visibility of those effects can be made contingent on an external decision boundary.

This describes the default behavior of modern infrastructure: search engines advance via indexing cycles; caches via TTL and invalidation; analytical systems via batch refresh; replicas via lag; vector databases via asynchronous embedding and index construction; feature stores via scheduled materialization. In each case, visibility advances independently.

Theorem (Composition Impossibility):

Decision coherence under continuous mutation cannot be achieved by composing independently advancing systems while preserving the independently advancing system class, unless a single system already enforces decision coherence as a non-bypassable property.

Proof (sketch)

Under continuous mutation, decision coherence requires that all decision-relevant effects become visible atomically across the systems that participate in the decision.

By the Visibility Gating Necessity lemma, cross-system atomic visibility of such effects cannot be guaranteed unless every participating system exposes a visibility-gating capability. By Definition, an independently advancing system exposes no such capability.

It follows that composing independently advancing systems while preserving the independently advancing system class cannot satisfy the necessary condition for decision coherence under continuous mutation.

The only exception is when a single system already enforces decision coherence as a non-bypassable property, so that decisions do not require cross-system atomic visibility guarantees from a composition. ∎

Corollary (Authority Localization).

Under continuous mutation, decision coherence can be enforced only within a single system boundary. It cannot be synthesized by composing multiple systems without degeneration of their system classes.

6.3 Survey of Existing System Classes

This section surveys existing system classes and why each individually fails to satisfy Decision Coherence.

Relational Databases (single-node and distributed)

Provide transactional consistency, but not decision-time temporal or concurrency envelopes over derived state or ad-hoc retrieval patterns. They lack native support for semantic operations.

NoSQL / Document Stores

Provide scale and flexibility, but offer weaker consistency guarantees, restricted retrieval patterns, and no native semantic operations.

Search Engines

Provide flexible full-text retrieval over large corpora, but lack transactional consistency. Index refresh introduces unavoidable staleness.

Data Warehouses

Provide shared analytical snapshots, but lack live ingestion, live transformation, and low-latency retrieval under concurrency.

Data Lakes and Lakehouses

Provide shared storage at rest, but lack live ingestion, live transformation, and low-latency retrieval under concurrency.

Stream Processing Systems

Provide live ingestion and transformation, but are not designed to serve context—raw or derived—for low-latency retrieval.

Vector Databases

Provide semantic similarity retrieval, but support only this single access pattern and lack general-purpose retrieval capabilities.

[Table: requirement coverage by system class, rated ✓ (natively satisfies the requirement), ◐ (partially satisfies), or ✗ (does not satisfy). Columns: Transactional Consistency, Temporal Envelope, Concurrency Envelope, Semantic Operations, Decision Coherence. Rows: Single-Node Relational DB, Distributed Relational DB, NoSQL Databases, Search Engines, Data Warehouses, Data Lakes / Lakehouses, Streaming Systems, Vector Databases. Consistent with the survey above, no row satisfies Decision Coherence.]

Each system class above fails to satisfy Decision Coherence on its own. And yet, a tempting pattern emerges: every individual requirement of Decision Coherence exists somewhere today.

Because each requirement can be satisfied by some system class, a compelling intuition forms: Database + Stream Processor + Search Engine + Vector Database + Data Lake/Lakehouse + LLM APIs would seemingly provide all required capabilities.

That intuition is rational. It is also incorrect. A composition of systems does not produce a composition of guarantees; guarantees are enforced only within system boundaries. When systems are composed, their guarantees may fragment rather than combine.

By Composition Impossibility (§6.2), no composition of existing system classes satisfies Decision Coherence.

6.4 Conclusion: Context Lake as a New System Class

Under continuous mutation, decision coherence requires that all decision-relevant context be both fresh and mutually coherent at the moment a decision is made.

No existing system class, as defined by its architectural invariants and native visibility semantics, satisfies this requirement independently. By the Composition Impossibility Theorem, no composition of such systems can satisfy it either—unless one component already enforces decision coherence as the authoritative boundary.

This establishes a categorical gap. Decision coherence under concurrent action is neither an emergent property of existing system classes nor a consequence of their composition.

A new system class is therefore required — one whose defining responsibility is to enforce decision coherence under continuous mutation and concurrency.

That system class is a Context Lake.


7. Context Engineering: Making Shared Memory Coherent

A Context Lake provides infrastructure. Context Engineering concerns context preparation: the organization of memory prior to context retrieval.

7.1 From Data Engineering to Context Engineering

Data engineering organizes data for human analysis. Pipelines extract, transform, and load information optimized for retrospective understanding. Correctness is evaluated analytically. Latency is measured in minutes or hours.

Context engineering organizes memory for agent action. It builds on structured data transformations from data engineering, while extending them to unstructured experience under real-time constraints. It often relies on semantic operations and, when required, incremental maintenance to keep relevant context usable within the system's decision-time envelope.

Memory refers to persisted information that influences how agents reason and act. It includes what happened, what it means, and what is currently true.

Context engineering operates continuously under concurrency. Experience is ingested, interpreted, and incorporated into shared context incrementally. Interpretation and state evolve while new experience arrives and agents query context.

7.2 Memory Layers: The Canonical Structure

Context engineering organizes memory into three distinct layers. Each has distinct mutability contracts, write authorities, and roles in supporting decision-making.

Memory Layers

State Memory (mutable)

Decision-time truth (current, authoritative state)

↑ apply / consolidate     ↓ read

Semantic Memory (governed)

Interpretations, patterns, rules (semantic meaning)

↑ derive / transform     ↓ read

Episodic Memory (append-only)

Raw events and observations (immutable history)

7.2.1 Episodic Memory

Invariant: Episodic memory preserves observed experience as it occurred. Past experience is never revised to change what was observed.

Governed By: Written only by ingestion. Source systems, applications, and agents record experience as it occurs.

Mutability: Immutable. Episodes are not revised in place. Corrections are recorded as new episodes. Physical storage may be compacted or retired once meaning is preserved.

Contents: Events, logs, messages, traces, and other raw observations.

Role: Foundation of memory. The authoritative record of observation from which meaning and state are derived.

Example: A customer support message is recorded with raw text, timestamp, sender, and metadata. Later classification does not alter the episode; it produces new semantic state.

7.2.2 Semantic Memory

Invariant: Semantic memory represents shared interpretation of experience. Interpretations may evolve, but they are maintained as shared context.

Governed By: Written only by Context Engineering through explicit, versioned semantic transformations.

Mutability: Mutable by design. Interpretations may be revised as evidence, models, or definitions improve, without altering underlying episodic memory.

Contents: Interpreted signals. Entity resolutions, sentiment classifications, extracted concepts, causal links, and relationships.

Role: Interprets experience. Bridges raw observation and interpretation by answering "What does this mean?" without prescribing action.

Example: A support message is analyzed. Semantic memory records intent="billing dispute", sentiment="frustrated", escalation_risk=0.82. When the sentiment model improves, these values update without modifying the original message.

7.2.3 State Memory

Invariant: State memory represents the current set of operative conditions used to evaluate decisions.

Governed By: Governed by Context Engineering. Agents, applications, and policies may update state by executing permitted, well-defined transitions under shared consistency and validation rules.

Mutability: Mutable by design. State evolves as conditions change. Updates are applied with transactional guarantees under concurrency.

Contents: Authoritative present-tense facts such as current status flags, thresholds, quotas, counters, and active constraints.

Role: Authoritative for action evaluation. Agents query state memory when evaluating and executing decisions, applying updates transactionally against it.

Example: From a support message and its interpretation, state memory updates: customer_47291.support_status = "active_escalation", churn_risk = "high", next_action = "priority_response_required". An agent deciding on a discount queries this state directly.
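
The three layers and their mutability contracts can be rendered schematically. The Python dataclasses below are illustrative; field names are hypothetical, and the frozen episodic record enforces immutability mechanically:

    from dataclasses import dataclass

    @dataclass(frozen=True)            # episodic memory: immutable
    class Episode:
        episode_id: str
        ts: float
        payload: dict                  # the raw observation, never revised

    @dataclass                         # semantic memory: governed, versioned
    class SemanticRecord:
        episode_id: str                # which observation it interprets
        transform_version: str         # which transformation produced it
        interpretation: dict           # e.g. {"intent": "billing dispute"}

    @dataclass                         # state memory: mutable, transactional
    class StateRecord:
        key: str                       # e.g. "customer_47291.support_status"
        value: object
        version: int = 0               # advanced only by permitted transitions

    e = Episode("ep-1", 1700000000.0, {"text": "I was double charged"})
    # e.payload = {}   # would raise dataclasses.FrozenInstanceError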

7.2.4 Why These Layers Are Distinct

The three memory layers represent fundamentally different kinds of information with different lifecycle requirements.

  • Episodic memory records what was observed. Recorded observations are immutable: changing it would falsify history.
  • Semantic memory records what observations mean. It must be mutable, because understanding evolves as models, definitions, and evidence improve.
  • State memory records what is operative now—it must be mutable because conditions change, and authoritative because agents require a single source of conditions when evaluating actions.

Collapsing observation, interpretation, and state into a single undifferentiated layer produces the following failure modes:

  • rewriting or versioning history as interpretations evolve;
  • blurring intermediate analysis with decision-ready truth.

These failures are not design flaws or operational bugs. They are structural consequences of conflating concerns. The three-layer model prevents them by making these distinctions architectural.

7.3 Semantic Transformations

Semantic transformations are the mechanism by which Context Engineering moves information between memory layers.

They use the system's native semantic operations to derive interpreted signals from episodic memory—such as entity resolutions, classifications, extracted concepts, and relationships—which populate semantic memory.

Some semantic transformations also operate over semantic memory to produce updates to state memory, translating interpretation into authoritative present-tense state through defined transitions.
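
A minimal end-to-end sketch in Python, with the classifier stubbed and all names hypothetical: a versioned transformation derives semantic memory from an episode, and a defined transition translates that interpretation into state memory:

    def classify_intent_v3(episode: dict) -> dict:
        # Episodic -> semantic: a versioned semantic transformation. A real
        # system would invoke a governed semantic operation here.
        text = episode["payload"].get("text", "").lower()
        intent = "billing dispute" if "charged" in text else "general"
        return {"episode_id": episode["id"], "transform": "intent/v3",
                "interpretation": {"intent": intent}}

    def apply_to_state(semantic: dict, state: dict) -> dict:
        # Semantic -> state: a permitted transition into authoritative,
        # present-tense truth (applied transactionally in a real system).
        if semantic["interpretation"]["intent"] == "billing dispute":
            state["support_status"] = "active_escalation"
        return state

    episode = {"id": "ep-1", "payload": {"text": "I was double charged"}}
    print(apply_to_state(classify_intent_v3(episode), {"support_status": "normal"}))
    # {'support_status': 'active_escalation'}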

7.4 Context Engineering Is Not Optional

Context engineering is required for correctness. Without enforced memory-layer structure and controlled semantic transformation, a Context Lake cannot preserve correctness under concurrent operation.

Without context engineering, memory is not organized proactively. As a result, the context available for decision-making is low quality, outdated, or missing.

Context engineering structures past experience so that patterns, interpretations, and conclusions can be discovered over time and carried forward. Decisions can then build on prior understanding rather than starting from scratch.

When context engineering is enforced, decisions no longer assemble inputs ad-hoc or infer meaning privately. Agents evaluate actions directly against shared, interpreted state that is current at the moment of the decision.


8. Agent Decision Admissibility Conditions

A Context Lake provides transactional consistency and bounded temporal and concurrency envelopes over decision-relevant context. These guarantees are necessary but not sufficient on their own for Decision Coherence: they define the conditions under which coherent decisions are possible. Whether coherence is realized in practice further depends on how agents evaluate and act on that context.

This chapter defines closure conditions that constrain agent behavior and decision logic. These conditions are not enforced automatically by Context Lake. Rather, they specify the admissibility requirements that agent decisions must satisfy in order to be considered coherent when operating over shared context.

Violating any of these conditions allows an agent to produce decisions that are locally defensible yet globally incoherent, even when the underlying Context Lake is correct.

8.1 Elimination of Private Decision Premises

Closure condition: No decision with shared effects may be justified by decision-relevant context that is private to a single agent.

Agents may maintain private state, perform local reasoning, and form private interpretations. This is unavoidable. What is not admissible is for a decision with shared effects to rely on decision-relevant context that is visible only to the acting agent.

When decision-relevant context is agent-local, there is no guarantee that interacting decisions are evaluated against the same premises. Even when each decision is locally reasonable, their interaction reflects parallel interpretations of reality rather than a single coherent one.

For a decision with shared effects to be meaningful, all context that influences its admissibility must be part of the shared decision context and subject to the same visibility and consistency conditions as other decision-relevant information.

8.2 Elimination of Deferred Action

Closure condition: An agent may not take an action with shared effects based on context retrieved outside its admissible decision window.

Context retrieved from a Context Lake is authoritative at the moment it is observed. At that instant, it represents a coherent and complete account of decision-relevant reality, evaluated within the system's declared consistency, visibility, and temporal guarantees. That authority, however, is bounded: it holds only within the temporal envelope under which the context remains admissible for decision-making.

If an agent acts on expired context, the decision is evaluated against a reality that no longer exists. Other agents may observe the action and incorporate it into their own decision premises, causing effects to propagate through shared context and interpretation. The system irreversibly moves onto a different causal trajectory, and no later compensation can reconstruct a history in which the action was taken within its admissible window.
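
Operationally, this condition reduces to a guard at the action boundary. A minimal Python sketch with an illustrative envelope; in practice the admissible window would be declared by the system rather than hard-coded:

    import time

    MAX_CONTEXT_AGE_S = 0.250   # illustrative temporal envelope

    def act_if_admissible(retrieved_at: float, act) -> bool:
        # Refuse to act on context retrieved outside the admissible decision
        # window; the only remedy is to retrieve fresh context and re-evaluate.
        if time.monotonic() - retrieved_at > MAX_CONTEXT_AGE_S:
            return False        # expired: the observed reality no longer exists
        act()
        return True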

8.3 Elimination of Mixed Causal Cuts

Closure condition: A single decision may not be evaluated against multiple causal cuts of context.

If a decision is evaluated using context drawn from multiple causal cuts, the decision is evaluated against a world that never existed. Even when each observation is individually valid, their combination produces a fractured present in which the premises of the decision cannot be jointly satisfied. Such a decision has no coherent model under which it could be justified.

For a decision to be meaningful, all decision-relevant context must correspond to a single, coherent causal cut of shared reality.
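
This condition is mechanically checkable if every retrieved item carries the identifier of the causal cut it was read from. A minimal Python sketch, with hypothetical field names:

    def assert_single_cut(context_items: list) -> None:
        # Every piece of decision-relevant context must carry the same
        # causal-cut identifier (e.g., a snapshot or commit timestamp).
        cuts = {item["cut_id"] for item in context_items}
        if len(cuts) > 1:
            raise ValueError(f"decision spans multiple causal cuts: {cuts}")

    assert_single_cut([{"cut_id": "cut-42", "key": "inventory"},
                       {"cut_id": "cut-42", "key": "policy"}])   # coherent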

8.4 Elimination of Implicit Semantics

Closure condition: Meaning that influences a decision must be explicit and shared.

If interpretation exists only within application logic, model prompts, or agent-local behavior, identical observations may yield incompatible interpretations across agents or executions. Decisions may then diverge despite operating over the same underlying data, because the meaning used to justify them is not part of the shared decision context.

When interpretation is implicit, there is no guarantee that different decisions are being evaluated against the same semantic premises. Even when each interpretation is locally reasonable, their divergence produces decisions that cannot be jointly explained as arising from a single, shared understanding of the situation.

8.5 Completeness of Agent Decision Admissibility

Taken together, these closure conditions are complete with respect to agent-side decision admissibility. They span the dimensions along which agents can introduce incoherence despite operating over a correct Context Lake:

  • Spatial — use of agent-local decision context (§8.1)
  • Temporal — action taken against expired decision context (§8.2)
  • Causal — mixing of causal cuts within a single decision (§8.3)
  • Semantic — reliance on implicit or non-shared interpretation (§8.4)

Any agent-level failure of Decision Coherence reduces to a violation along one or more of these axes. A decision that satisfies all closure conditions is evaluated against a single, shared, coherent set of premises and therefore admits a well-defined justification.


9. Enforcement Boundaries

Decision Coherence constrains what must be true at the moment a decision takes effect. From that constraint, enforcement boundaries follow.

Some violations of coherence can be prevented only before context is observed. Others can be prevented only at the moment a decision is justified. Still others can be prevented only when meaning is produced and maintained over time. Once a decision produces irreversible effects, violations introduced earlier cannot be detected or repaired.

As a result, enforcement cannot be centralized or deferred. It must occur at the points in the system where violations become irreversible. This chapter derives those points.

Authority Boundaries for Decision-Making

[Diagram: the Decision-Maker (agent / service) retrieves context from, and commits actions against, the Context Lake, the authoritative shared context (base / derived / semantic); Context Preparation (context engineering) feeds the Context Lake through transformation.]

9.1 Enforcement at the Context Lake Boundary

Once shared context is observed, any incoherence within it is indistinguishable from correctness. At that point, the system has already committed to a particular account of reality, and downstream consumers can only proceed relative to what they see.

Properties that must hold of shared context itself therefore belong at the boundary where context becomes shared and authoritative.

This boundary does not determine meaning, interpretation, or decision logic. It defines only what counts as shared context. How that context is prepared is addressed by context engineering; how it is used is addressed by agents. This section concerns only the point at which context becomes authoritative and irreversible from the perspective of its consumers.

9.2 Enforcement at Decision Time

Decisions take effect at the moment they are justified. Once a decision produces consequences, the system proceeds from that outcome.

Constraints on how context is used to justify decisions therefore belong at decision time, at the point where the decision is made. There is no later point at which applying those constraints can influence the resulting trajectory.

This enforcement concerns only the justification of a decision at the moment it takes effect. It does not address how context is prepared or how shared context is maintained.

Violations introduced at justification time cannot be undone. The only place they can be prevented is at the point of decision.

9.3 Enforcement at Context Preparation

Context preparation refers to the phase in which Context Engineering produces candidate context prior to its admission into the shared substrate. Context preparation determines what shared context exists before it is retrieved for decisions. Once prepared context becomes part of the shared substrate, it is indistinguishable from any other context served by the system.

If incoherence is introduced during preparation—through how observations are transformed, interpreted, revised, or maintained—that incoherence is carried forward into decision time. At that point, it cannot be isolated from correct context or selectively ignored.

Constraints on how shared context is produced and maintained therefore belong during context preparation, before that context becomes authoritative. There is no later point at which violations introduced during preparation can be reliably corrected without affecting decisions that already depend on it.

This enforcement does not concern how context is used in decisions, nor how shared context is served. It concerns only the conditions under which prepared context is allowed to enter the shared substrate.

9.4 Closure

Decision Coherence depends on enforcing constraints before they become irrelevant to outcomes. Because different violations become irreversible at different points in the decision loop, enforcement cannot be centralized or deferred.

The Context Lake establishes what counts as shared, authoritative context. Decisions establish outcomes. Context preparation establishes what context exists to be relied upon. Each boundary exists because, beyond it, correction is no longer possible without altering outcomes that have already occurred.

Correctness therefore does not emerge from coordination between these surfaces, but from respecting their separation. When enforcement is misplaced or blurred, incoherence is introduced without a clear point of failure.

The boundaries defined in this chapter do not prescribe implementation. They specify where enforcement must occur for Decision Coherence to hold under concurrency.


10. The Unavoidability of a New System Class

Constructive intelligence from concurrent agents requires Decision Coherence. When agents act in parallel and their actions interact irreversibly, decisions must be evaluated against a shared representation of reality at the moment those decisions take effect. Without this condition, agent behavior may be locally reasonable yet collectively incoherent, and intelligence cannot compound.

This condition cannot be deferred or reconstructed. Once a decision has taken effect, the premises under which it was evaluated become part of the system's causal history and shape subsequent behavior. Later reconciliation, compensation, or reinterpretation may influence future decisions, but it cannot establish coherence for a decision that has already acted on the world. In this regime, correctness is determined at decision time or it is undefined.

Transactional consistency alone is not sufficient. A transaction may ensure that reads observe a single, well-defined cut of system state, but the evaluation of a decision and the actions that follow may not themselves be part of that transaction. As a result, transactional guarantees do not ensure that the state against which a decision is evaluated corresponds to the reality in which its effects occur. Even perfect transactional isolation leaves decision coherence unconstrained.

No existing system class enforces decision coherence under continuous mutation and concurrency. Each provides subsets of the required properties, but none enforce their conjunction at decision time. Composing systems does not compose their guarantees. As a result, combining existing system classes cannot enforce decision coherence; the limitation is structural, not operational.

When autonomous agents operate without decision coherence, intelligence remains isolated. Agents may act correctly relative to their own observations, but their decisions cannot reliably reinforce one another. When agents operate constructively—when intelligence compounds across decisions—decision coherence is required.

When constructive multi-agent operation is required, this leaves no alternative. A system class defined by decision coherence is not an optimization, an integration pattern, or a matter of architectural preference. It is the minimal condition under which correctness remains definable for interacting decisions. Any system that does not enforce this condition may function in isolation, but it cannot support constructive intelligence from concurrent agents whose decisions interact.


Appendix A

Decision Coherence: Law and Operational Interpretation

This appendix formalizes Decision Coherence as a structural requirement for correctness in systems where autonomous agents act concurrently over shared context and take irreversible actions.

The purpose of this appendix is not to provide a complete mathematical model, but to clearly separate:

  • the law itself, motivated by how agents operate and why collective intelligence is possible at all, and
  • the operational interpretation of the law, specifying what real systems must enforce for the law to hold in practice.

All statements concern Decision Coherence as the sole law-level invariant.

A.1 Scope and Intent

This appendix establishes boundaries.

It defines:

  • the minimal vocabulary required to discuss decision coherence,
  • the Decision Coherence Law and its scope,
  • the operational constraints implied by the law,
  • the limits of composition,
  • and the enforcement structure required to make coherence non-bypassable.

The results here are structural and operational, not model-theoretic. They describe when correctness is possible, not how to optimize it.

A.2 Definitions

Agent

An agent is an autonomous system that:

  • observes shared context,
  • evaluates a decision predicate,
  • takes an action whose effects may interact with the actions of other agents.

Agents operate continuously and concurrently.

Decision

A decision is the evaluation of a predicate over context within a bounded decision window, followed by an irreversible action that mutates shared state or external reality.

Interacting Decisions

Two decisions are interacting if the effect of one can alter the validity, outcome, or admissibility of the other.

Coherent Representation

A database state that corresponds to some point in a serializable execution history. All observations made from this state are mutually consistent.

Correctness

A system execution is correct if there exists a single coherent history of reality against which all interacting decisions can be jointly explained.

A.3 The Decision Coherence Law

Autonomous agents differ from human operators in three decisive ways:

  • They act continuously, not in discrete analytical cycles.
  • Their actions are irreversible and take effect immediately.
  • Their actions interact through shared resources and shared state.

When agents operate constructively, the goal is not merely automation but compounding intelligence: one agent's work should inform and improve others' decisions rather than conflict with them.

This compounding is only possible if interacting decisions are grounded in the same understanding of what is currently true. If agents act on incompatible representations of reality—even briefly—their actions cannot be reconciled after the fact, and correctness collapses.

This motivates the law.

The Decision Coherence Law: When decisions interact under concurrency, correctness requires that all such decisions be evaluated against a single coherent representation of reality at decision time.

A.4 Operational Interpretation of Decision Coherence

The Decision Coherence Law is conceptual. This section interprets it operationally, specifying the system-level constraints required for the law to hold in practice.

These are not proofs, but necessary operational implications of the law.

A.4.1 Transactional Consistency

All decision-relevant state transitions must be observed atomically.

Partial, intermediate, or internally inconsistent state must not be visible to decisions, as decisions evaluated against such states cannot be embedded in a coherent history of reality.

A.4.2 Temporal Envelope

Decision-relevant mutations must become visible within the bounded decision window.

Correctness that arrives too late is indistinguishable from incorrectness once actions have taken effect. Systems that rely on eventual reconciliation violate Decision Coherence by construction.

A.4.3 Concurrency Envelope

Consistency and visibility guarantees must hold under sustained production concurrency.

Concurrency is the steady-state condition of agent systems. Guarantees that degrade under load fail precisely when constructive coordination is required and therefore do not preserve Decision Coherence.

A.4.4 Shared Semantic Authority

Meaning that influences decisions must be explicit, shared, and system-governed.

If semantic interpretation is embedded privately in agent logic, identical observations may yield incompatible predicates. Interacting decisions then act on different meanings of the same state, violating coherence even when raw data is consistent and fresh.

A.5 Composition Limits

Independently Advancing Systems

A system is independently advancing if it determines when state becomes visible according to local ingest-time policy and cannot gate visibility on an external decision boundary.

Most modern system classes fall into this category.

Composition Impossibility (Structural)

Decision Coherence under continuous mutation cannot be achieved by composing independently advancing systems while preserving their system classes.

Without a shared visibility boundary, no composition can guarantee that interacting decisions evaluate against the same coherent reality.

As a result, Decision Coherence can only be enforced within a single authoritative system boundary.

A.6 Authority Model

Authority

An authority exists if and only if there is no alternate execution path by which its invariant can be violated.

Exhaustive Failure Modes

All violations of Decision Coherence arise from one or more of the following divergence classes:

  • Temporal divergence — decisions observe different presents.
  • Interpretive divergence — decisions apply different meanings to the same observations.
  • Action divergence — decisions commit through non-uniform enforcement paths.

There are no additional failure modes.

Minimal Authority Set

Exactly three non-bypassable authorities are necessary and sufficient to preserve Decision Coherence:

  • Availability Authority — defines the authoritative present,
  • Semantic Authority — governs meaning that influences action,
  • Action Authority — enforces a single commit boundary for decisions.

A.7 Definition of Context Lake

A Context Lake is a system that enforces Decision Coherence by:

  • maintaining a single authoritative representation of reality for decisions,
  • ensuring timely visibility of all decision-relevant mutations,
  • preserving guarantees under sustained concurrency,
  • executing semantic interpretation and retrieval natively and authoritatively.

A Context Lake is a system class, not an optimization layer.

A.8 Closure

In systems with concurrent, irreversible decisions, Decision Coherence is a structural requirement, not an architectural preference.

When the law is violated, correctness is undefined.

When the law is enforced, collective intelligence becomes possible.

There is no stable intermediate configuration.
