
Distributed Architecture 03: CQRS and Data Distribution

In our journey so far we started with a business requirement, modelled it with commands and events, ingested the input data, processed it, implemented the processing components using the Event Sourcing pattern, and stored our Domain Events in small streams. All of this defines the way we build and feed our write model.

The benefit discussed in this post is data distribution. Once your data is safely stored in the write model, your core domain, you will need to read it and project it into different reports or other interested components. This is the essence of CQRS: you can have as many read models, with as many clients reading them, as you need, and none of this will affect your write model in any way. This architecture offers the possibility to cover any new or different requirement from different people or departments in your organisation.

Examples of read models:

  • An Elastic Search dashboard showing what is happening in near real-time in your domain
  • Integrating with a legacy relational database
  • Sending data to a cache object for another component
  • Sending data to an AWS Kinesis stream for some Machine Learning algorithm
An example of CQRS architecture
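The fan-out described above can be sketched in a few lines. This is a minimal, illustrative sketch (the `DomainEvent` and `Projection` shapes are assumptions, not a real library): the same ordered events are replayed into two independent projections, and adding a new read model never touches the write side.

```typescript
// Illustrative types: one event stream, several independent read models.
interface DomainEvent {
  type: string;
  data: Record<string, unknown>;
}

// A read model is just a starting state plus a fold over the events.
type Projection<State> = {
  state: State;
  apply: (state: State, event: DomainEvent) => State;
};

function project<State>(events: DomainEvent[], projection: Projection<State>): State {
  // Replaying the same events into different projections leaves the write model untouched.
  return events.reduce(projection.apply, projection.state);
}

// Two read models built from the same stream:
const events: DomainEvent[] = [
  { type: "OrderPlaced", data: { amount: 40 } },
  { type: "OrderPlaced", data: { amount: 60 } },
];

const totalRevenue = project(events, {
  state: 0,
  apply: (sum, e) => sum + (e.data.amount as number),
});

const orderCount = project(events, {
  state: 0,
  apply: (n, _e) => n + 1,
});
// totalRevenue is 100, orderCount is 2
```

A dashboard, a legacy-database feed, and a cache each get their own `Projection`, all fed from the same event stream.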

Technically you can implement one Synchroniser process for each read model. Each Synchroniser can subscribe to all events or to a category of events. This subscription does not use the competing-consumers pattern we used in the processing phase; here we use a catch-up subscription that behaves like a sink for every relevant event. There is no need to scale horizontally: it is better to receive the events in order than to scale out.
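A catch-up subscription can be reduced to a checkpoint plus an ordered read. The sketch below assumes a hypothetical `EventStore` interface (real stores expose something similar); the key property is that a single consumer drains the stream in order and resumes from its checkpoint, with no competing consumers.

```typescript
// Hypothetical store interface: returns events after a checkpoint, in order.
interface StoredEvent { position: number; type: string; data: unknown; }
interface EventStore {
  readFrom(checkpoint: number): StoredEvent[];
}

class CatchUpSubscription {
  private checkpoint = -1;

  constructor(
    private store: EventStore,
    private handle: (e: StoredEvent) => void,
  ) {}

  // One process, events in order: the read model sees exactly the sequence
  // the write model emitted. In production, persist the checkpoint so the
  // Synchroniser resumes where it left off after a restart.
  poll(): void {
    for (const event of this.store.readFrom(this.checkpoint)) {
      this.handle(event);
      this.checkpoint = event.position;
    }
  }
}

// In-memory demo:
const log: StoredEvent[] = [
  { position: 0, type: "OrderPlaced", data: {} },
  { position: 1, type: "OrderShipped", data: {} },
];
const seen: string[] = [];
const sub = new CatchUpSubscription(
  { readFrom: (cp) => log.filter((e) => e.position > cp) },
  (e) => seen.push(e.type),
);
sub.poll(); // handles both events, in order
sub.poll(); // checkpoint has advanced, so nothing is replayed
```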

A common challenge is a write model with millions of events and a single process struggling to synchronise a read model. This can be solved. The mistake is making a round trip to the target read model for every single message. Instead, buffer the messages in memory and flush them down to the read model in batches with a bulk operation. You can then tune the batch size and flush interval, balancing the need for speed against memory consumption and the eventual-consistency window acceptable to the business.
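The buffer-and-flush idea can be sketched as follows. The `bulkWrite` callback and the `maxBatch` threshold are illustrative assumptions: the batch size caps memory use, while the timer you would attach to `flush` is the eventual-consistency window.

```typescript
// Accumulate messages in memory; write them to the read model in one bulk
// call when the buffer is full (and, in production, also on a timer).
class BufferedWriter<T> {
  private buffer: T[] = [];

  constructor(
    private bulkWrite: (batch: T[]) => void, // one round trip per batch, not per message
    private maxBatch = 500,                  // bounds memory consumption
  ) {}

  push(item: T): void {
    this.buffer.push(item);
    if (this.buffer.length >= this.maxBatch) this.flush();
  }

  // Call this on an interval too: the interval is the eventual-consistency
  // window the business accepts.
  flush(): void {
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    this.bulkWrite(batch);
  }
}

// Demo: with maxBatch = 2, three messages produce two bulk writes.
const batches: number[][] = [];
const writer = new BufferedWriter<number>((b) => batches.push(b), 2);
writer.push(1);
writer.push(2); // buffer full: bulk write of [1, 2]
writer.push(3);
writer.flush(); // e.g. a timer tick: bulk write of [3]
```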

This is an example in C# of a configurable generic indexer for Elastic Search. You can adapt it to other languages; I usually do this step with a node.js JavaScript process. The subscription to events is managed in a separate SynchroniserService, which then uses the indexer to synchronise an Elastic Search index.
