Beyond Aggregates: Correlation ≠ Shared State
What Years of DDD Taught Me About Letting Go

Greg Young recently asked me a fair question:
“Have you noticed that the events you depend on tend to correlate? For example, both check-in and check-out operations for inventory care about the same set of past events – check-ins, check-outs, audit changes, and metadata like max quantity. So why model them separately?”
The observation is right. The usual conclusion is not.
When we see correlation, the reflex is to centralize: wrap logic into one model, version it, and call it an aggregate. That move hides decisions behind abstractions, couples unrelated rules, and turns change into a negotiation with shared state.
Yes, events correlate. No, that does not mean the logic belongs together.
What correlation actually tells us is which facts are relevant to a decision, not that the decision should share a permanent model.
The Old Assumption: Correlation Implies Centralization
In traditional domain modeling, especially under the influence of Domain-Driven Design (DDD), event correlation is treated as a modeling constraint. If multiple commands depend on the same set of events, they’re assumed to operate on the same aggregate. And since aggregates are meant to guard invariants and encapsulate consistency boundaries, the logic must be grouped and versioned together.
This seems rational until you look closer.
Take Greg’s example: both CheckInInventory and CheckOutInventory need access to prior check-ins, check-outs, audits, and item metadata. The conventional move is to define an InventoryItem aggregate. It’s responsible for applying these commands, validating business rules, and emitting the resulting events. Everything flows through that one object.
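A rough sketch of that conventional shape, in TypeScript – the class, event, and method names here are illustrative, not lifted from any real codebase:

```typescript
// Conventional event-sourced aggregate: one object owns all state and all rules.
type InventoryEvent =
  | { type: "ItemCheckedIn"; quantity: number }
  | { type: "ItemCheckedOut"; quantity: number }
  | { type: "MaxQuantityChanged"; maxQuantity: number };

class InventoryItem {
  private quantity = 0;
  private maxQuantity = Number.MAX_SAFE_INTEGER;

  // Every past event is replayed through a single state-mutation method.
  apply(event: InventoryEvent): void {
    switch (event.type) {
      case "ItemCheckedIn": this.quantity += event.quantity; break;
      case "ItemCheckedOut": this.quantity -= event.quantity; break;
      case "MaxQuantityChanged": this.maxQuantity = event.maxQuantity; break;
    }
  }

  // Both decisions live on the same object, even though their rules are unrelated.
  checkIn(quantity: number): InventoryEvent {
    if (this.quantity + quantity > this.maxQuantity) throw new Error("Exceeds max quantity");
    return { type: "ItemCheckedIn", quantity };
  }

  checkOut(quantity: number): InventoryEvent {
    if (this.quantity - quantity < 0) throw new Error("Not enough stock");
    return { type: "ItemCheckedOut", quantity };
  }
}
```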
But in my experience, that aggregate is now doing too much. It contains the rules for multiple decisions. It’s responsible for maintaining state that might only be relevant in some contexts. And worst of all, it introduces false coupling between commands that could have been entirely independent – just because they read from overlapping history.
The result isn’t a clear boundary; it’s a choke point. Every change becomes harder. Every new behavior needs to be threaded through the same object, whether it fits there or not.
The root issue is this assumption: if events correlate, the logic must too. But that’s not a law; it’s an inherited habit from mainstream object-oriented programming, where behavior is assumed to ‘belong’ to the state of an object cluster.
Context Over Structure
Instead of building a shared object, let each command define its own context.
- CheckInInventory queries the events it needs (recent check-outs, current quantity, audits), evaluates its rules, and emits its events.
- CheckOutInventory does the same: its own read, its own rules, its own result.
The overlap is a fact of the domain, not a reason to merge behavior. We protect causal relevance, not a shared state object. I call this Command Context Consistency: decide from the facts that matter to this command, and verify that those facts haven’t changed when you append.
What you don’t need: an aggregate, object lifecycles, or a central “apply” method.
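As a minimal sketch (same illustrative names as above), the check-out decision becomes a small pure function over the events it queried:

```typescript
// One command, one context, one decision – no shared object, no lifecycle.
type InventoryEvent =
  | { type: "ItemCheckedIn"; quantity: number }
  | { type: "ItemCheckedOut"; quantity: number }
  | { type: "MaxQuantityChanged"; maxQuantity: number };

type CheckOutInventory = { itemId: string; quantity: number };

// The context is exactly the slice of history this command cares about.
function decideCheckOut(command: CheckOutInventory, context: InventoryEvent[]): InventoryEvent[] {
  const available = context.reduce((sum, event) => {
    if (event.type === "ItemCheckedIn") return sum + event.quantity;
    if (event.type === "ItemCheckedOut") return sum - event.quantity;
    return sum;
  }, 0);

  if (available < command.quantity) throw new Error("Not enough stock");
  return [{ type: "ItemCheckedOut", quantity: command.quantity }];
}
```

The input is the event context, the output is the resulting events; there is nothing to rehydrate and no object version to carry around.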
Why This Changes Everything
When you stop modeling correlation as shared structure, three important things happen.
Behavior becomes explicit
You no longer guess why a command produced an error or made a decision. The full input is right there: the event context it queried, and the logic it applied. Debugging becomes inspection, not archaeology.
You also stop smuggling logic into abstractions like “apply” or “validate.” You write actual decisions, in terms of actual events.
Change becomes localized
Want to add a new validation to CheckOutInventory? Do it right there. You don’t risk breaking CheckInInventory, even if it reads similar events. That’s because nothing is centralized. You’re not editing a shared object. You’re editing the logic of one command.
The context is narrow. The impact is isolated. And you don’t need to refactor a core abstraction to support a corner case.
Race conditions disappear, or become manageable
In an aggregate-based model, all commands contend for the same version. But here, CheckInInventory only fails if the specific events it read have changed. That’s a finer-grained consistency boundary. It’s based on causal relevance, not shared state.
This doesn’t just improve throughput. It aligns consistency with meaning: commands fail when it makes sense for them to fail, not just because they bumped into another write.
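One way to express that boundary is a conditional append: the command hands the store the same query it decided from, plus the position it read up to, and the append fails only if new events matching that query have arrived since. The API below (query, appendIf) is a hypothetical sketch, not the interface of any specific event store:

```typescript
// Hypothetical store API: the consistency boundary is the command's own query,
// not an aggregate version. appendIf rejects the write only if events matching
// that query were appended after the position the command read up to.
type EventQuery = { itemId: string; types: string[] };

interface EventStore<E> {
  query(q: EventQuery): Promise<{ events: E[]; position: number }>;
  appendIf(q: EventQuery, readUpToPosition: number, newEvents: E[]): Promise<void>;
}
```

Two commands that read disjoint slices of history can append concurrently; a conflict is raised only when the facts a command actually depended on have changed.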
How We Got Here: From Event Sourcing 2010 to 2025
Event sourcing didn’t begin incorrectly; it simply began in a different world.
In the early 2010s, most event-sourced systems were built by people coming from object-oriented backgrounds. The aggregate was the default unit of modeling, and Event Sourcing became a way to persist those aggregates by replaying and applying events instead of loading from a database row.
This is how I’ve worked on many projects over many years.
The structure looked like this:

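Roughly, in the same illustrative TypeScript as before: the command is routed to an aggregate rebuilt from its stream, and the append is guarded by the object’s version.

```typescript
// Classic loop around the InventoryItem aggregate sketched earlier: load the whole
// stream, rebuild the object, route the command, append with an expected version.
interface AggregateEventStore {
  loadStream(streamId: string): Promise<{ events: InventoryEvent[]; version: number }>;
  appendToStream(streamId: string, expectedVersion: number, events: InventoryEvent[]): Promise<void>;
}

async function handleCheckOut(itemId: string, quantity: number, store: AggregateEventStore): Promise<void> {
  const { events, version } = await store.loadStream(`inventory-${itemId}`);

  const item = new InventoryItem();               // aggregate class from the sketch above
  for (const event of events) item.apply(event);  // rehydrate state from history

  const newEvent = item.checkOut(quantity);       // decide against in-memory object state

  // Optimistic concurrency on the object: this fails if ANY other command touched
  // the stream since we loaded it, whether or not its rule was related to ours.
  await store.appendToStream(`inventory-${itemId}`, version, [newEvent]);
}
```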
Everything revolved around the aggregate. Even though the system was event-sourced, the mindset was still state-based. Commands were routed to objects. Consistency was tied to object versioning. Events were side effects of applying logic to an in-memory state object.
What we’ve learned since then – especially in recent years – is that this model carries more friction than benefit:
- Aggregates obscure why a decision was made. They collapse context into opaque state.
- Versioning leads to unnecessary conflicts – commands fail just because they touched the same object, not because they violated a rule.
- Behavior changes become harder over time, because the logic is scattered across lifecycle methods and shared abstractions.
My model today looks different:

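Sketched with the same illustrative pieces as before – the decideCheckOut function and the hypothetical query/appendIf store:

```typescript
// New shape: query the context, decide from it, append under the condition that
// the context is still valid. No aggregate, no rehydration, no object version.
// Reuses EventStore, EventQuery, InventoryEvent, CheckOutInventory and decideCheckOut
// from the earlier sketches.
async function checkOutInventory(
  command: CheckOutInventory,
  store: EventStore<InventoryEvent>,
): Promise<void> {
  const query: EventQuery = {
    itemId: command.itemId,
    types: ["ItemCheckedIn", "ItemCheckedOut", "MaxQuantityChanged"],
  };

  const { events, position } = await store.query(query); // 1. event context (input)
  const newEvents = decideCheckOut(command, events);      // 2. pure decision (rules)
  await store.appendIf(query, position, newEvents);       // 3. append only if context unchanged
}
```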
There is no aggregate, no central model, and no object lifecycle to manage. Each command lives in its own world, with a clear input (event context) and a clear output (resulting events). Consistency is enforced not via a version number, but by verifying that the context is still valid at append time.
This shift from object state to event context is what enables true agility. You’re no longer building around long-lived structures. You’re designing short-lived, focused decisions that reflect the current rules of the business. And when those rules change, the change is local, explicit, and safe.
Conclusion: Let Correlation Be a Clue, Not a Constraint
Yes, events correlate. Commands often look at overlapping slices of history. But this doesn’t mean the logic belongs together. It doesn’t justify a shared model. It doesn’t require an aggregate.
What it tells us is that certain facts in the system are causally relevant across multiple decisions. That’s not a signal to centralize – it’s a sign that the domain has stable backbones of meaning. You don’t need to wrap that into an abstraction. You just need to expose the facts and let each command evaluate what matters.
For many years, I followed the typical object-oriented path. I applied DDD’s tactical patterns, encapsulated state transitions, all of it. I tried to do it well, and in many ways, it worked.
Eventually, the model started pushing back. I had to bend it to support new rules. Logic leaked. Everything slowed down. That’s when I began a process I can only describe as deprogramming myself from object-oriented thinking. Questioning the assumptions I never thought to challenge. Peeling away abstractions I once defended. And finding simpler models underneath – more direct, more explicit, and more aligned with how decisions actually happen.
That’s where this aggregateless approach (AES) came from – not from theory, but from years of friction. From watching things break in production. From debugging systems I had designed myself.
So when someone points out that “events correlate”, I agree. But I don’t follow it with “therefore, we need an aggregate.” I follow it with: “Good! Now we know which events matter.”
Cheers!
This article was originally published on Medium.