Performance
Reconstituting an aggregate’s state from events will negatively affect the system’s performance, and it will only degrade as more events are added. How can this even work?
Projecting events into a state representation indeed requires compute power, and that need will grow as more events are added to an aggregate’s list.
It’s important to benchmark a projection’s impact on performance: the effect of working with hundreds or thousands of events. The results should be compared with the expected lifespan of an aggregate—the number of events expected to be recorded during an average lifespan.
In most systems, the performance hit will be noticeable only after 10,000+ events per aggregate. That said, in the vast majority of systems, an aggregate’s average lifespan won’t go over 100 events.
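To make the projection mechanics concrete, the following sketch shows a state projection as a left fold over an aggregate’s events. The event and state types (`TicketOpened`, `TicketEscalated`, `TicketState`) are hypothetical, chosen only to illustrate the technique:

```python
from dataclasses import dataclass

# Hypothetical domain events, for illustration only.
@dataclass
class TicketOpened:
    ticket_id: str

@dataclass
class TicketEscalated:
    reason: str

# Hypothetical in-memory state representation of the aggregate.
@dataclass
class TicketState:
    ticket_id: str = ""
    is_escalated: bool = False
    version: int = 0

def apply(state: TicketState, event) -> TicketState:
    # Advance the projection by one event.
    if isinstance(event, TicketOpened):
        state.ticket_id = event.ticket_id
    elif isinstance(event, TicketEscalated):
        state.is_escalated = True
    state.version += 1
    return state

def project(events) -> TicketState:
    # Reconstitute the current state by folding over all recorded events.
    state = TicketState()
    for event in events:
        state = apply(state, event)
    return state
```

The cost of `project` grows linearly with the number of events, which is exactly why the projection’s impact should be benchmarked against the aggregate’s expected lifespan.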
In the rare cases when projecting states does become a performance issue, another pattern can be implemented: snapshot. This pattern, shown in Figure 7-2, implements the following steps:
• A process continuously iterates new events in the event store, generates corresponding projections, and stores them in a cache.
• When an in-memory projection is needed to execute an action on the aggregate:
— The process fetches the current state projection from the cache.
— The process fetches the events that came after the snapshot version from the event store.
— The additional events are applied in-memory to the snapshot.
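The steps above can be sketched as follows. The event store and snapshot cache are stood in for by plain Python containers, and the `TicketState`/`apply` names are hypothetical, assuming a projection function like the ones discussed earlier:

```python
from dataclasses import dataclass

# Hypothetical aggregate state, for illustration only.
@dataclass
class TicketState:
    is_escalated: bool = False
    version: int = 0

def apply(state: TicketState, event: str) -> TicketState:
    # Advance the projection by one event; event names are made up.
    if event == "escalated":
        state.is_escalated = True
    state.version += 1
    return state

def load_aggregate(aggregate_id: str, snapshot_cache: dict,
                   event_store: dict) -> TicketState:
    """Rehydrate an aggregate from a snapshot plus the events recorded after it."""
    # 1. Fetch the current state projection (and its version) from the cache.
    state, version = snapshot_cache.get(aggregate_id, (TicketState(), 0))
    # 2. Fetch only the events that came after the snapshot version.
    newer_events = event_store[aggregate_id][version:]
    # 3. Apply the additional events in memory to the snapshot.
    for event in newer_events:
        state = apply(state, event)
    return state
```

With a fresh snapshot, step 3 touches only a handful of recent events, so the projection cost no longer grows with the aggregate’s full history.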
Figure 7-2. Snapshotting an aggregate’s events
It’s worth reiterating that the snapshot pattern is an optimization that has to be justified. If the aggregates in your system won’t persist 10,000+ events, implementing the snapshot pattern is just an accidental complexity. But before you go ahead and implement the snapshot pattern, I recommend that you take a step back and double-check the aggregate’s boundaries.
This model generates enormous amounts of data. Can it scale?
The event-sourced model is easy to scale. Since all aggregate-related operations are done in the context of a single aggregate, the event store can be sharded by aggregate IDs: all events belonging to an instance of an aggregate should reside in a single shard (see Figure 7-3).
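A minimal sketch of such a sharding scheme, assuming a fixed number of shards and using a hash of the aggregate ID as the routing key (the function and shard count are illustrative, not a prescribed implementation):

```python
import hashlib

def shard_for(aggregate_id: str, shard_count: int) -> int:
    # Hash the aggregate ID deterministically so that every event belonging
    # to the same aggregate instance is routed to the same shard.
    digest = hashlib.sha256(aggregate_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % shard_count
```

Because all of an aggregate’s operations need only its own events, a single shard always holds everything required to reconstitute that instance, and shards can be added as the overall event volume grows.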