Domain Driven Design, Event Sourcing and Micro-Services explained for developers

Event Sourcing

And when I speak of the other division of the intelligible, you will understand me to speak of that other sort of knowledge which reason herself attains by the power of dialectic, using the hypotheses not as first principles, but only as hypotheses — that is to say, as steps and points of departure into a world which is above hypotheses, in order that she may soar beyond them to the first principle of the whole

Plato’s Republic

Here are my notes about Domain Driven Design, Event Sourcing and Micro-Service architecture and the relations between these concepts.

I’d like to write these notes down in order to offer the people I work with a way to get on board with my approach to Distributed Systems and how to shape them out of the darkness. This is not a technical “how to” guide for building a Micro-Service; it is my explanation of Domain Driven Design concepts and Event Sourcing. Other concepts that are essential for a Micro-Service architecture, such as horizontal scalability, are not discussed here.
Code example

Table of contents

What are Micro-Services, Bounded Contexts and Aggregates?
What are the relations between them?
What is a Process Manager and do I need it?
Micro-Services interactions with the external world
Event Sourcing in a Micro-Service architecture
Difference between internal and external events
Ubiquitous language
Behaviours vs Data structures
Keep external dependencies out of the Domain
Stay away from Shared Libraries across components

What are Micro-Services, Bounded Contexts and Aggregates?

  • A Micro-Service is a process that does some work around a relevant business feature
  • A Component is a process that does some work around a relevant business feature
  • A Bounded Context is a logical view of a relevant business feature
  • An Aggregate is where the behaviours (functions) related to a relevant business feature are located.
  • A relevant Business Feature, for the sake of this article, is composed of one or more related tasks that execute some logic to obtain some results. From a more technical point of view, a task is a function. From a less technical point of view it could be a requirement.

What are the relations between them?

A Bounded Context is a relevant business feature, but it doesn’t represent a running process. A running process belongs to the “physical” world and is better represented by the terms Micro-Service or Component. The meaning of Bounded Context belongs to the “logical” world. These three terms are easily confused, but they represent different points of view of the same “relevant” business feature, such as “As a user I want to fill a basket with Products so I can check out the bill and own them”.
What does relevant mean then? This term is used to differentiate infrastructure services from services that the business domain experts expect and can understand. Is a proxy service that calls a third party service a relevant business feature? Probably not. It is better to define a more understandable Basket Bounded Context contained in a Micro-Service interacting with the other components.
In the terminology that I use day to day, I often mix the physical with the logical using the term “Domain Component”.

To summarize

  • I use both the Micro-Service and Component terms to indicate a physical running process
  • I use the Bounded Context term to indicate the boundary of a relevant business feature
  • I use the term Domain Component to indicate a relevant business feature within a running process when there is no need to differentiate between physical and logical views

The size of a component or a Micro-Service can be narrowed down to a single task or function, but there can be a wealth of behaviour if the requirement is complex. Within the same Micro-Service there is an Application Service that behaves as an orchestrator, deciding which tasks or functions need to be called depending on the received messages.

There is a good metaphor describing this layering as an Onion Architecture.
If we want to describe the Onion on a whiteboard, we can draw the outer layer as an Application Service Endpoint that subscribes and listens for events and converts them to commands. The middle layer, the Handler, handles the commands and calls the behaviours (functions) exposed by the Aggregate. The Aggregate contains the logic related to the relevant business feature.
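To make the layering concrete, here is a minimal C# sketch of the three layers; all the type names (ProductSelected, AddProduct, ProductAdded, Basket) are hypothetical and not taken from a real codebase.

using System;
using System.Collections.Generic;

// Hypothetical messages used only for illustration
public class ProductSelected { public string BasketId; public string Name; public decimal Cost; }
public class AddProduct      { public string BasketId; public string Name; public decimal Cost; }
public class ProductAdded    { public string BasketId; public string Name; public decimal Cost; }

// Inner layer: the Aggregate exposes the behaviour of the relevant business feature
public static class Basket
{
    public static IEnumerable<ProductAdded> Buy(AddProduct cmd)
    {
        if (cmd.Cost <= 0) throw new Exception("A product must have a cost");
        return new[] { new ProductAdded { BasketId = cmd.BasketId, Name = cmd.Name, Cost = cmd.Cost } };
    }
}

// Middle layer: the Handler receives commands and calls the Aggregate behaviour
public class AddProductHandler
{
    public IEnumerable<ProductAdded> Handle(AddProduct cmd) => Basket.Buy(cmd);
}

// Outer layer: the Application Service Endpoint subscribes to events
// and converts them into commands for this Bounded Context
public class BasketEndpoint
{
    private readonly AddProductHandler _handler = new AddProductHandler();

    public void On(ProductSelected evt) =>
        _handler.Handle(new AddProduct { BasketId = evt.BasketId, Name = evt.Name, Cost = evt.Cost });
}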

Event Sourcing, Domain Driven Design, CQRS

Generally speaking, it’s better to keep the size as small as we can. When I start shaping the Domain Model (Domain Model = group of components) I probably start by grouping more than one Aggregate in the same component. After a little while, from the daily discussions with other devs and business stakeholders, I can see whether they are using the component names easily or not. This can drive me toward a more granular size, and it’s easier to split a fat Bounded Context into small parts.

What is a Process Manager and do I need it?

Another important aspect of a distributed architecture is the need for some sort of ‘Process Manager’. If you are building a simple ingestion system that goes from a set of files down to the processing component and out to the report synchronizers (CQRS), then maybe you can avoid building a Process Manager.
It can be useful when there are multiple components or legacy subsystems involved and the coordination between them depends on unpredictable conditions. In that case…

  • Domain Components are responsible for processing commands and raising events
  • A Process Manager listens for events and sends commands.

Consider a Process Manager as a sort of Application Service on top of the other components. Depending on requirements, it can contain one or more stateful aggregates that keep the state of a long running process.
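A minimal sketch of what such a Process Manager could look like in C#; the event, command and bus abstraction are hypothetical names, not a specific framework API.

using System;

// Hypothetical messages and bus abstraction, for illustration only
public class PaymentAccepted { public string OrderId; }
public class ShipOrder       { public string OrderId; }
public interface ISendCommands { void Send(object command); }

// The Process Manager listens for events and decides which command to send next.
// It can keep state for a long running process (here just a flag).
public class OrderFulfilmentProcessManager
{
    private readonly ISendCommands _bus;
    private bool _shippingRequested;

    public OrderFulfilmentProcessManager(ISendCommands bus) { _bus = bus; }

    public void On(PaymentAccepted evt)
    {
        if (_shippingRequested) return;   // already handled: keep the process idempotent
        _shippingRequested = true;        // state of the long running process
        _bus.Send(new ShipOrder { OrderId = evt.OrderId });
    }
}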

Micro-Services interactions with the external world

Through messages, of course. The pattern can be a simple Event Driven Pub Sub. Why not “Request Response”? We can use the Request Response pattern by defining an HTTP Api, but this is more suitable when we need to expose service endpoints outside our domain, not within it. Within our domain, the components can better use a Publish Subscribe mechanism where a central message broker is responsible for dispatching commands and events around the distributed system. The asynchronous nature of Pub Sub is also a good way to achieve resiliency in case our apps are not always ready to process or send messages to others.
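Here is a sketch of this Pub Sub interaction in C#; the IMessageBroker abstraction and the component names are hypothetical, and in a real system the abstraction would wrap a broker such as RabbitMQ.

using System;

// Hypothetical broker abstraction standing in for a real message broker client
public interface IMessageBroker
{
    void Publish<T>(T message);
    void Subscribe<T>(Action<T> handler);
}

public class BasketCheckedOut { public string BasketId; }

// A component publishes events without knowing who is interested...
public class BasketComponent
{
    private readonly IMessageBroker _broker;
    public BasketComponent(IMessageBroker broker) { _broker = broker; }

    public void CheckOut(string basketId) =>
        _broker.Publish(new BasketCheckedOut { BasketId = basketId });
}

// ...and any interested component subscribes and reacts asynchronously
public class BillingComponent
{
    public BillingComponent(IMessageBroker broker) =>
        broker.Subscribe<BasketCheckedOut>(evt => Console.WriteLine("Billing basket " + evt.BasketId));
}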

Event Sourcing in a Micro-Service architecture

With this Event Driven Pub Sub architecture we are able to describe the interaction between components. Event-Sourcing is a pattern to describe how one component processes and stores its results.

In other words, Event-Sourcing describes something that happened within a single component, whereas Pub Sub is a pattern used to handle interactions between components.

Moving our focus down inside a single Event Sourced Bounded Context, we can see how it stores time-ordered internal domain events into streams. The business results are represented by a stream of events. The stream becomes the boundary of the events correlated to a specific instance of the relevant business feature.
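A sketch of what storing into a per-instance stream could look like in C#; IEventStore is a hypothetical abstraction that a real event store client would implement, and the stream naming convention is just an example.

using System;
using System.Collections.Generic;

// Hypothetical abstraction over a real event store client
public interface IEventStore
{
    void AppendToStream(string streamName, IEnumerable<object> events);
    IEnumerable<object> ReadStream(string streamName);
}

public class BasketRepository
{
    private readonly IEventStore _store;
    public BasketRepository(IEventStore store) { _store = store; }

    // The stream is the boundary for all the events of one Basket instance,
    // stored in the order they happened
    public void Save(Guid basketId, IEnumerable<object> newEvents) =>
        _store.AppendToStream("Basket-" + basketId, newEvents);

    public IEnumerable<object> Load(Guid basketId) =>
        _store.ReadStream("Basket-" + basketId);
}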

Event Sourcing example

In a Distributed Architecture not all components are Event Sourced. There can be components used as proxy services to call external services. There can be components that are not required to keep track of changes over time. In that case they can store their results by updating the current state in a store like MongoDb, Neo4J or even Sql Server, and publish external events to communicate with the rest of the interested components.

Difference between internal and external events

In a Distributed Architecture any component can potentially publish External Events when something has happened. Any interested component can subscribe to those events and handle them if required. In that scenario, when I change the content of an external event I must carefully consider any breaking change that can prevent other components from handling those events correctly.

In a Distributed Architecture any Event Sourced component stores its internal Domain Events in its own streams. Considering one Event Sourced component, if I change the schema of its internal events I can only break that component itself. In that scenario I’m freer to change the Internal Domain Events.

It can be beneficial to have a versioning strategy in place for changing External Events, in order to reduce the friction with other components.

But not all changes are breaking changes. In case of a breaking change, such as removing or changing an existing field, it’s easier to create a new Event.
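A small C# sketch of this idea, with hypothetical event contracts: rather than changing the ProductAddedToBasket contract that other components already consume, a new versioned event is introduced and the old one is kept until subscribers have moved over.

// Original external event, already consumed by other components
public class ProductAddedToBasket
{
    public string BasketId;
    public string ProductName;
    public decimal Cost;
}

// Breaking change (here: Cost split into Amount + Currency): instead of modifying
// the existing contract, publish a new versioned event alongside the old one
public class ProductAddedToBasketV2
{
    public string BasketId;
    public string ProductName;
    public decimal Amount;
    public string Currency;
}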

Ubiquitous language

There is the need to define a ubiquitous language when you start shaping your domain components. In my experience as a developer, you can get the right terminology by discussing use cases day after day with business stakeholders and other expert devs. You can also introduce a more structured type of discussion using Event Storming techniques.

Slowly you can see the picture coming out, and then you just start building the components. If I dedicate too much time to drawing components and interactions on paper I don’t get it right, so it’s better to balance theory with early proofs of concept.

Behaviours vs Data structures

Another difference doing DDD compared with an old school layered architecture is that I need to stop thinking about the data. The data is not important. What is important are the behaviours. Think about what you have to do with the data and what business result is expected, instead of which tables and relations you need to create to store the data. The behaviours that you identify will be exposed from one or more aggregates as functions.

Keep external dependencies out of the Domain

My domain project contains one or more aggregates. It is a good approach to keep this project clean from any external dependencies. Expand the list of references: can you see a data storage library or some proxy client for external services in there? Remove them and instead define an abstraction interface in the Domain project. Your external host process will be responsible for injecting concrete implementations of these dependencies.

In other words everything points in to the Domain. The Domain doesn’t point out.
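A minimal sketch of this inversion in C#, with hypothetical names: the interface lives in the Domain project, while the concrete implementation lives in the host.

// Defined inside the Domain project: an abstraction over an external dependency
public interface IExchangeRateProvider
{
    decimal GetRate(string fromCurrency, string toCurrency);
}

// Domain code depends only on the abstraction
public class PriceCalculator
{
    private readonly IExchangeRateProvider _rates;
    public PriceCalculator(IExchangeRateProvider rates) { _rates = rates; }

    public decimal TotalIn(string currency, decimal totalInGbp) =>
        totalInGbp * _rates.GetRate("GBP", currency);
}

// Defined in the host process, outside the Domain: the concrete implementation
public class HttpExchangeRateProvider : IExchangeRateProvider
{
    public decimal GetRate(string fromCurrency, string toCurrency)
    {
        // call the third party service here
        return 1.0m;
    }
}

// The host wires the dependency in: new PriceCalculator(new HttpExchangeRateProvider());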

Stay away from Shared Libraries across components

The DRY coding principle is valid within one domain component but not between different components. This principle, Don’t Repeat Yourself, keeps you on the right track during development of related classes with a high level of cohesion. This is a good approach. In a distributed architecture made of multiple separate Domain Components, it’s much better if you don’t apply the DRY principle across them in a way that ties all the components to a shared library (language, version, features).

I have personally broken this approach and I always regret having done so. As an example, I created a shareable infrastructure library containing a few classes to offer an easy and fast way to build up well integrated aggregates and message handlers. This library doesn’t contain any external dependency. It’s just a bunch of classes and functions.

Once it is stable I could probably accept attaching it to all the C# components and follow the same approach in Java. In F#, Scala or Clojure, I don’t really need it, or I could provide a slightly different set of features.

Instead of sharing it, it’s better to include the code in each single Domain Component (copy and paste). Keep it simple and let each single Domain Component evolve independently.

Enjoy!

External References

Related code from my Github repository

Thank you

A big thank you to Eric Evans, Greg Young, Udi Dahan for all their articles and talks. Whatever is contained in this post is my filtered version of their ideas.

Run EventStore OSS on an EC2 AWS instance

Event Sourcing

This post is a quick step by step guide on running a single EventStore https://geteventstore.com/ node on a free EC2 micro AWS instance. You can spin this up quite easily in order to start using the EventStore Web console and play with it for testing. In a real scenario, you must use a larger EC2 instance type as a single node or as a cluster of nodes behind a proxy and load-balancer.

There is this document in the EventStore documentation that you can follow. This micro-article is just a variation of it.

Let’s start.

Launch an AWS instance

  1. Open AWS Web console and select the “Launch instance” action
  2. Select “Ubuntu Server …” from the Quick Start list
  3. Select t2.micro free tier
  4. Click next until the “Configure Security Group”
  5. Add the following rules
    22 (SSH) 0.0.0.0/0
    1113 0.0.0.0/0
    2113 0.0.0.0/0
  6. Select “Review and Launch” and “Launch”
  7. Choose an existing Key Pair or create a new one. You will use this later to connect with the instance. Select “Launch” to finally launch the instance.

Install and Run EventStore

In order to connect to the AWS Ubuntu instance you can select the instance from the list in your EC2 Web console and select the “Connect” action. This will open a window with the link that can be used from a command line. I’m using Windows and I have Git installed on my machine, so I can use the Git Bash command line and ssh.

  1. Open Git Bash in the folder where your Key Pair key is stored
  2. To connect to your instance, select your running instance in the AWS web console, open the Connection window and copy and paste the SSH Example connection link into your bash command line
  3. Get the latest OSS EventStore binaries. You can find the link to the latest version on the GetEventStore website Downloads page. At the time of this writing the Ubuntu link is EventStore-OSS-Ubuntu-14.04-v3.9.3.tar.gz. Copy this link address and, in your connected bash command line, run the following command to download the tar.gz with the EventStore binaries
> wget http://download.geteventstore.com/binaries/EventStore-OSS-Ubuntu-14.04-v3.9.3.tar.gz

Run the following command to unpack the tar.gz

> tar xvzf *.tar.gz

cd into the EventStore folder and start the single EventStore node with a parameter to bind the service to all IPs

> ./run-node.sh --ext-ip 0.0.0.0

Now you can connect to your instance’s public DNS address on port 2113 and see the EventStore Web Console

Build an Angular2 app and run it on AWS using a Docker container in 30 minutes

Software Development
Angular Aws Docker talk

In this article I will try to transcribe the content of a tech meetup where I conducted a demo about “Building an Angular2 app and running it in a Docker Container on AWS in 30 minutes”.

The main points are:

  • Build an Angular2 project with Angular-CLI and customize the home page with a simple Customers service
  • Dockerise the app and push the image to the public Docker Hub
  • On AWS, launch a new ECS instance and deploy and run the app on it

Angular2 is a framework to build Web Applications. It provides a complete set of components all working together, so it can be considered a full, heavily loaded framework. It differs from other web solutions like React because in Angular all the most common decisions, about which http or routing module (or anything else) to use, have already been made by the Angular team. With React you have more flexibility, given that it is a ‘pure’ javascript library and therefore javascript centric, as opposed to Angular, which is still a composition of modules around html.

Anyway, if you like to buy your car already made and just focus on writing your UI and business logic, Angular2 is a good choice.

Build Angular2 project with Angular-CLI

Install Angular2 CLI:
> npm install -g angular-cli
Build an Angular2 seed project using the following command
> ng new test-proj
Build the simple project as a static website
> ng build --prod

The previous command generates the ‘dist’ folder

You can download the Angular-CLI from
https://cli.angular.io/

Customize the default page

Let’s generate a model that represents a simple Customer item.
Write the following Angular-CLI command in your terminal

> ng generate class shared/customers.model

Paste the following code snippet into the customers.model file

export class Customers {
  name: string;
  city: string;
}

Now generate a service module that will use the model to provide a sample list of Customers.
Write the following Angular-CLI command in your terminal

> ng generate service shared/customers

Paste into the newly created service the code from the following Github Gist
https://gist.github.com/riccardone/dfc2d125258c9146e0891cfc9595c5db#file-customers-service-ts

Modify ‘app.module.ts’ with the following GitHub Gist https://gist.github.com/riccardone/dfc2d125258c9146e0891cfc9595c5db#file-app-module-ts

Modify ‘app.component.ts’ with the following GitHub Gist https://gist.github.com/riccardone/dfc2d125258c9146e0891cfc9595c5db#file-app-component-ts

Modify app.component.html with the following GitHub Gist https://gist.github.com/riccardone/dfc2d125258c9146e0891cfc9595c5db#file-app-component-html

Push the image to the public Docker Hub

Create a Dockerfile in the app folder using the following code

FROM ubuntu:16.04
RUN apt update
RUN apt install -y apache2
COPY dist /var/www/html
CMD /usr/sbin/apache2ctl -D FOREGROUND
EXPOSE 80

This Dockerfile pulls a version of Ubuntu, installs Apache2 and copies the content of the ‘dist’ folder.

Build the Docker image
> docker build -t test-proj .
Tag the image
> docker tag imageid yourrepo/test-proj:latest
Login in your Docker Hub account
> docker login
Push the image in your public repo
> docker push yourrepo/test-proj:latest

Run locally for test
> docker run -dit -p 8080:80 yourrepo/test-proj:latest

Launch a new ECS instance on AWS

  • Login in AWS -> EC2 and select “Launch Instance”
  • In the marketplace, search for ‘ecs’ and select ‘Amazon ECS-Optimized Amazon Linux AMI’
  • Select the ‘t2 micro’ free tier option -> next
  • Step 3 Configure Instance Details: Create new IAM Role ‘ecs-role’
  • Role Type: Amazon EC2 Role for EC2 Container Service
  • Attach the available policy
  • Back in the Step 3, select the new role -> next
  • Step 6 Configure Security Group
  • SSH select ‘my ip’
  • Add HTTP rule port 80, anywhere -> review and launch

Create a Task Definition

  • Select the created instance and verify that it is running
  • Select the ‘EC2 Container Service’ action
  • On Task Definition select “Create new Task Definition”
  • Define a name
  • Select “Add Container”
  • Define a container name
  • In the Image box add the publicly available Docker Image from Docker Hub or an ECS repo. Example: riccardone/ciccio:latest
  • Set the memory hard limit to 256mb (free tier)
  • Set ports host: 80 container: 80
  • Select “Add” and then “Create”

Create a Service to run the task

  • Select Cluster from the main menu on the left and then “default”
  • Select “Create” action on the Services tab
  • Set a service name and set 1 as number of tasks
  • Select “Create Service” action

To verify that the Service is up and running, select the service, select the task, expand the task and click on the “External link” to open the running app in a browser

I hope you enjoyed!
Cheers

An Ingestion system of unstructured data-sources

Software Development

Nowadays I’m focused on the semantic web and graph relations. The stack of tech that I’m using is made up of lightweight services, such as an input adapter and a domain processor, and some infrastructure software like EventStore (geteventstore) and Neo4J.
I’m using EventStore as an input pipeline and Neo4J as a Domain Model store for a simple Domain Component.
In order to give a well defined structure to the Neo4J nodes I’m using some of the OWL elements.

Following is a basic diagram of the ingestion system.

Disparate data-sources ingestion system

Neo4J graph store using OWL elements

Functional Domain Driven Design and Event Sourcing example in C#

Event Sourcing

This is an example of an aggregate written in C# following a functional approach similar to what is possible to achieve with languages like F#, Scala or Clojure.

What it IS NOT in the Domain code example:

  • An AggregateBase class
  • An Aggregate Stateful class

What it IS in the Domain code example:

  • An Aggregate with some functions that take a command as an input and return correlated events

You can see the codebase here https://github.com/riccardone/EventSourcingBasket

Current State is a Left Fold of previous behaviours

Using a ‘pure’ functional language you would probably follow the simplest possible way, which is

f(history, command) -> events
(where history is the series of past events for a single Aggregate instance)

C# example from the Basket Aggregate Root

public static List<Event> Buy(List<Event> history, AddProduct cmd)
{
    // Some validation and some logic
    history.Add(new ProductAdded(Guid.NewGuid().ToString(), cmd.Id, cmd.CorrelationId, cmd.Name, cmd.Cost));
    return history;
}

Outside the domain, you can then save the events using a Repository composing the left fold of previous behaviours

yourRepository.Save(Basket.CheckOut(Basket.Buy(Basket.Create(cmd1), cmd2), cmd3));
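To show the “left fold” itself, here is a sketch of rebuilding the current state from the history; the BasketState class is not part of the linked codebase and it assumes the ProductAdded event exposes a Cost property.

using System.Collections.Generic;
using System.Linq;

public class BasketState
{
    public decimal Total { get; private set; }

    // Current state = left fold of the past events
    public static BasketState From(IEnumerable<object> history) =>
        history.Aggregate(new BasketState(), (state, evt) => state.Apply(evt));

    private BasketState Apply(object evt)
    {
        if (evt is ProductAdded added)
            Total += added.Cost;   // each event moves the state a little further
        return this;
    }
}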

CausationId and CorrelationId and the current state of an Aggregate Root

The CorrelationId is used to relate events together in the same conversation. The CausationId is the way to know which command caused that event. The same CorrelationId is shared across all the correlated events. In a conversation, events can share a command Id as CausationId, or different events can have different command Ids depending on the execution flow. CorrelationId and CausationId can become an integral part of a Domain Event beside the unique EventId.
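A sketch of how those ids can travel from a command to the event it causes; the message shapes are hypothetical and simplified.

using System;

public class AddProduct
{
    public string Id;            // unique id of this command
    public string CorrelationId; // shared by every message in the same conversation
}

public class ProductAdded
{
    public string EventId;       // unique id of this event
    public string CorrelationId; // copied from the command: same conversation
    public string CausationId;   // the id of the command that caused this event
}

public static class MessageIds
{
    public static ProductAdded From(AddProduct cmd) => new ProductAdded
    {
        EventId = Guid.NewGuid().ToString(),
        CorrelationId = cmd.CorrelationId, // propagate the conversation id
        CausationId = cmd.Id               // this command caused the event
    };
}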

CausationId And CorrelationId example

Apply Agile SCRUM to a new team

Agile

A number of porcupines huddled together for warmth on a cold day in winter; but, as they began to prick one another with their quills, they were obliged to disperse. However the cold drove them together again, when just the same thing happened. At last, after many turns of huddling and dispersing, they discovered that they would be best off by remaining at a little distance from one another

Arthur Schopenhauer

Starting to implement Agile SCRUM is good fun. Depending on the team, you can implement almost everything from the start and succeed, or you can slowly introduce changes one by one and fail.
The way the team absorbs and applies new habits and constraints is really important, and as Scrum Master you can make a key contribution by building up the confidence and trust within the team. There is no common rule and sometimes it is a matter of luck, but I found it easier to implement Agile SCRUM with a well established team, where people have known each other for quite a long time, than with a brand new team.
As we know, another important factor is the clear definition of team roles. Try to maintain one person as Product Owner, as the SCRUM rules dictate, and avoid confusion of roles between project manager and scrum master or stakeholders and Product Owner.

For my future reference, and in case you find yourself working as SCRUM Master in the same situation, with an “open minded team” and with the business side backing you, here is the list of actions required to introduce the Agile SCRUM framework.

You can apply them gradually, depending on the time schedule and on how difficult it is to change the daily habits, but you have to apply them all if you want to get the benefits out of the SCRUM process, which I can summarize as follows: good communication within and outside the team, and good visibility of the progress, of what is going on, and of the release plan.

It’s not simple and there can be pitfalls or problems depending on how open the team members and the stakeholders are to change. The team motivation is a key factor and you have to play your part trying to explain and build up the process.
If you just wait for the team to “democratically” adopt these rules it will not happen, or the rules will be adopted in a way that is not the right way. If you expect developers to keep the whiteboard in sync with the issue tracker software, or to do the pair review before completing tasks, or to split requirements into small stories… well, good luck with that. This is why, when you play as SCRUM Master, you can leave all the implementation details to them, but you have to apply and own the process and the set of rules, and check that they are followed by the team. Stop being a SCRUM Nun and start behaving as a SCRUM Master 🙂

Agile Scrum sprint flow

The Scrum Team is formed by one Product Owner (PO), Developers (Devs) and optionally QA testers. In the absence of specialized QA resources, Devs can test and verify the code and the PO can help with tests.

Short sprints of two weeks
This allows the team to be more flexible on sprint goals. The PO can have better control over the capacity of the team and ensure it is in line with the planned release. There is no necessary correlation between sprint length and release date. The PO can decide at any time when there are enough features to ask for a new release. This moment can be predetermined up front but is not related to the length of the sprint.

Start each sprint with the Planning
During the Sprint Planning session the PO will outline the main goals for the incoming sprint. He describes the stories that are at the top of the Product Backlog and the team can technically discuss each story and give an estimation based on complexity (not man days). Once the PO knows how complex the stories are he can move the chosen stories from the Product Backlog to the Sprint Backlog. The total amount of points can be aligned with the average velocity of the team.

Do daily stand-ups
Every morning the Devs stand up for a scrum where each participant says what he or she did the day before and what is going to happen during the day. This stand-up is time boxed to 10 minutes maximum, so it is important to avoid technical details. The PO and other stakeholders can join the meeting but they are not required to. More detailed technical discussions can of course be taken offline.

Set up a whiteboard
The Kanban whiteboard displays what is going on with the sprint. It is important that each participant in the process keeps the whiteboard updated. Devs can move cards from “TODO” to “In Dev”, and once the card is complete they can move it to “In Test”. The person involved in QA or pair review can move the card from “In Test” to “Done”, or back to “In Dev” if there are issues. There is no need to create bugs for stories that are in the current sprint. We can create bugs that are related to old stories or legacy code.

Requirements, Stories and Bugs
The PO can write Requirements in free form. He can add any documentation or description. During sprint grooming sessions, the PO describes a requirement and, with the help of the team, the Requirement is translated into one or more Stories.

A story has to follow this template:
As a <type of user>, I want <some goal> so that <some reason>

E.g.
“As an App administrator, I want to view an import exceptions screen so that I can understand what data has not been processed and why.”

Once a story is created the Devs can estimate its relative complexity. The estimation is expressed in points and is based on one or more acceptance criteria. An acceptance criterion is one expected behaviour. A story can have one or more acceptance criteria.
Acceptance criteria can be written using the following template
(Given) some context
(When) some action is carried out
(Then) a particular set of observable consequences should obtain

Grooming sessions
Grooming sessions are time boxed one hour meetings where the PO can discuss with the Devs what is going on in the Product Backlog. He can discuss each new requirement or story and any relevant bug. During those sessions the team can translate requirements into stories and give them an estimation using planning poker cards. Grooming sessions play an important role in being ready for the next sprint, so that everyone understands the work involved.

Team Retrospective
At the end of the sprint and before the sprint planning there will be a Retrospective meeting where each member of the Scrum Team can discuss what went well and what could have been improved in the previous sprint.

Sprint review
This is a session that happens at the end of the sprint and is used to show the Product Owner and any other interested stakeholder the results of the sprint. Devs can bring to the table all the relevant implemented stories and fixed bug cards and briefly show the results on a projector. This meeting is not technical, so cards that cannot be understood by the Product Owner don’t need to be shown here.

Definition of Done
The DoD is a check list that states how a story can be considered Done. Only when a story passes all of the checks defined in the check-list can it be moved to the Done column. The DoD is defined by each team.

Example of a DoD

  1. Code produced (all ‘to do’ items in code completed) and meeting development standards
  2. Peer reviewed (or produced with pair programming)
  3. Unit tests written and passing (not for UI items)
  4. UI items: Approved by Product Owner or QA tester and signed off as meeting all acceptance criteria
  5. Code checked in
  6. Update ticket documentation describing:
    Tech Solution
    Test steps
    Release notes

References:
http://www.scrumguides.org/docs/scrumguide/v1/scrum-guide-us.pdf

Domain Driven Design: Commands and Events Handlers different responsibilities

Event Sourcing

“Commands have an intent of asking the system to perform an operation where as events are a recording of the action that occurred…”
Greg Young, CQRS Documents Pag. 26

In this post I’d like to state something that I’ve just recalled from a Greg Young video. When I start creating an aggregate root I often ask myself… where do I put this logic? Where do I have to set these properties? Is the Command or the Domain Event Handler responsible?

As we know, an aggregate root can be considered a boundary for a relevant business case. The public interface of an aggregate exposes behaviours and raises events.

Commands and Events responsibilities

With this in mind, when we design an aggregate root, we start adding behaviours/commands to it and then apply events. The following is a list of what a Command handler and a Domain-Event handler can and cannot do.

A Command handler is allowed to:

  • Throw an exception if the caller is not allowed to do that thing or passed parameters are not valid
  • Make calculations
  • Make a call to a third party system to get a state or any piece of information

A Command handler is not allowed to:

  • Mutate state (state transition in commands can’t be replayed later applying events)

It behaves like a guard against wrong changes.

A Domain Event Handler is allowed to:

  • Mutate state

A Domain Event Handler is not allowed to:

  • Throw an exception
  • Charge your credit card
  • Make a call to a third party system to get a piece of information
    (logic here can be replayed at any time… for example your card could be charged every time you replay events)

They apply the required changes to the state. No validation, no exceptions, no logic.

For example, if we have a command handler method called Deposit that receives an amount of money and updates the balance, we can implement a simple algorithm in the method and raise a MoneyDeposited event with the result of the calculation and any other information that we want to keep stored in our event store.

Example:

public void Deposit(decimal quantity, DateTime timeStamp, Guid transactionId, bool fromATM = false)
{
  if (quantity <= 0)
    throw new Exception("You have to deposit something");
  var newBalance = balance + quantity;
  RaiseEvent(new MoneyDeposited(newBalance, timeStamp, ID, transactionId, fromATM));
}
private void Apply(MoneyDeposited obj) 
{
  balance = obj.NewBalance;
}

Hope this helps

CQRS WITH EVENT SOURCING USING EasynetQ, EVENT STORE, ELASTIC SEARCH, ANGULARJS AND ASP.NET MVC

Event Sourcing

“The magic for a developer is doing something cool that no one else can understand” (myself, a few minutes ago).

The first step into this maze of cool messaging terms and tools left me with the feeling that most of them were not really relevant. After years of studying and practising, all these messaging and service oriented concepts, blended with DDD and CQRS, are making a lot of sense.
In this post I’d like to take out of my personal repository something that helped me in that journey.

Some time ago I read a very interesting blog post by Pablo Castilla https://pablocastilla.wordpress.com/2014/09/22/cqrs-with-event-sourcing-using-nservicebus-event-store-elastic-search-angularjs-and-asp-net-mvc/
I started studying it and found it simple yet complex enough to be taken as a base template for my DDD + CQRS + Bus projects. Unfortunately, after a while the trial version of NServiceBus expired on my home PC, so I decided to change the bus infrastructure from NServiceBus/MSMQ to EasynetQ/RabbitMq. Everything else is the same as in the original blog by Pablo, except that I have updated all projects and libraries to the latest versions.

The solution’s work flow…

Bus and Event Sourcing

When you switch from a generic bus framework like NServiceBus to a broker specific tool like EasynetQ, a lot of configuration settings disappear.

For example, you no longer need EndPoint config classes implementing those curious interfaces like IWantCustomInitialization.
If you decide to adopt a broker based solution you don’t have to waste hours setting every aspect of all the nice little features that NServiceBus provides using xml config files or its fluent configuration. Instead you can immediately start to be productive: without paying much attention to configuration, you can start coding your app features and publishing and subscribing messages. Pay attention to some of the bad side effects of messaging patterns and take care of problematic scenarios if you have to. RabbitMq is a mature and solid piece of infrastructure software and it offers many solutions. EasynetQ is a small, easy to use tool that offers a quick and clean Publish Subscribe mechanism.
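As an illustration, a minimal publish/subscribe round trip with EasyNetQ could look like the sketch below, assuming the classic RabbitHutch/IBus API of that EasyNetQ generation; the OrderPlaced message is hypothetical.

using System;
using EasyNetQ;

public class OrderPlaced { public string OrderId { get; set; } }

public class Program
{
    public static void Main()
    {
        // one connection string instead of endpoint configuration classes
        using (var bus = RabbitHutch.CreateBus("host=localhost"))
        {
            // the subscription id identifies this consumer's queue
            bus.Subscribe<OrderPlaced>("billing", msg =>
                Console.WriteLine("Billing order " + msg.OrderId));

            // any subscriber interested in OrderPlaced will receive it
            bus.Publish(new OrderPlaced { OrderId = "42" });

            Console.ReadLine();
        }
    }
}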

Another consideration on NServiceBus here is that, using Event Store, you can rely on a built-in feature that allows you to manage long running processes. In this way you don’t need one of the nice features that NServiceBus provides called SAGA http://docs.particular.net/nservicebus/sagas/
Instead of using a Saga with NSB (which in the end is a class that stores everything happening around a process in a single store like SqlServer), with Event Store you can define an Aggregate that contains a stream of events and define projections running continuously that allow you to react when something happens.

That said, NServiceBus is an elegant library which is full of messaging features that can solve a lot of distributed architecture problems.

This is the code: https://github.com/riccardone/CQRS-NServiceBus-EventStore-ElasticSearch

References:
EasyNetQ
http://easynetq.com/

RabbitMQ
https://www.rabbitmq.com/

Configure NServiceBus with Sql Server transport

Software Development

In order to use Sql Server as the middleware transport infrastructure you have to install the following nuget package in your projects (where your publisher and subscribers are located):
NServiceBus.SqlServer

If you use a web-api project to send messages onto the bus, you have to configure the SqlServer transport to be used.
You can add this configuration in the global.asax Application_Start method as in the following example


public static IBus Bus;

protected void Application_Start()
{
  var configuration = new BusConfiguration();
  configuration.UseTransport<SqlServerTransport>();
  // other configurations .....
  Bus = NServiceBus.Bus.Create(configuration).Start();
}

You also have to set the connection string element used by NServiceBus to connect with the transport, in our case a normal Sql Server connection string (by convention the SqlServer transport looks for a connection string named “NServiceBus/Transport” in the connectionStrings section of the config file).

Now focus on the project where your handling logic is. Install the NServiceBus.SqlServer nuget package.
In the EndpointConfig class you have to set SqlServerTransport instead of the default MsmqTransport

public class EndpointConfig : IConfigureThisEndpoint, AsA_Server, UsingTransport<SqlServerTransport>

and modify the configuration file with the proper connection string (again the “NServiceBus/Transport” connection string in the connectionStrings section).

Job done. Now when you start the solution, NServiceBus will automatically initialize the Sql Server database, using it as the middleware infrastructure with all the required queues in the form of data tables.
NServiceBus uses the Sql Server database as an infrastructure to store messages; the queues here take the form of data tables. NServiceBus is not relying on MSDTC or Service Broker. All the magic is done by the C# transport component.

If you want to reprocess messages with errors, you can write a procedure that picks up the message from the dbo.errors table and inserts the message into the original destination queue/table in an atomic operation. Here is a good example
https://github.com/jerrosenberg/NServiceBus/blob/develop/src/SqlServer/Scripts/ReturnToSourceQueue.sql

One of the problems that I found using Sql Server as the transport with NServiceBus is the fact that when the message is handled in your handler it is within a transaction. If you connect to a database with an Entity Framework DataContext class or an Ado.Net connection to do something, and you are using a different connection string compared to the one used for the SqlTransport, then NSB automatically promotes the transaction to a distributed transaction, and if you haven’t configured MSDTC you’ll receive an exception.
To fix this problem, you can suppress the transaction in your code by wrapping your data access logic in a using block, or you can disable MSDTC by configuration using the following setting in your endpoint config class:

configuration.Transactions().DisableDistributedTransactions();

Using this setting, NSB considers all your transactions as normal atomic Ado.Net transactions without escalating them to distributed transactions.
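For the other option mentioned above (suppressing the ambient transaction around your own data access), a minimal sketch could look like this, assuming an NServiceBus v5 style handler and a hypothetical MyMessage type:

using System.Transactions;
using NServiceBus;

public class MyMessage : ICommand { }

public class MyMessageHandler : IHandleMessages<MyMessage>
{
    public void Handle(MyMessage message)
    {
        // Suppress the ambient transport transaction for this data access
        // so it is not escalated to a distributed (MSDTC) transaction
        using (var scope = new TransactionScope(TransactionScopeOption.Suppress))
        {
            // open your own SqlConnection / Entity Framework context and save data here
            scope.Complete();
        }
    }
}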

As a general consideration, when you rely on Sql Server as the transport you are in fact using it as a broker. A broker is a central dispatcher for all your messages. The side effect here is that relying on a central broker (the SqlServer instance that hosts your NSB messaging infrastructure) while keeping your business Sql Server databases somewhere else forces you to configure MSDTC to avoid potential data loss in a disaster scenario. Unfortunately it is not possible to use MSDTC with Always On availability groups, as documented in this article https://msdn.microsoft.com/en-us/library/ms366279.aspx

An interesting side effect of using the Sql transport is that you have the possibility to set up the Sql Server Always On feature. Using this feature, you can potentially keep two geographically separated Sql Server instances synchronized, with all the messaging queuing infrastructure, providing in this way a nice highly available solution. In case of disaster on the primary server you can have an automatic failover. The applications can use a logical listener instead of pointing to the physical database server.

More info about SqlServerTransport
http://docs.particular.net/nservicebus/sqlserver/design
http://docs.particular.net/nservicebus/sqlserver/configuration

Develop and publish a mobile app in 30 minutes

Software Development

This is the handout for the presentation I gave at an International Developer Tech Talk in London.

You have an idea. You want to build a piece of software around your idea. You start from the end, seeing yourself somewhere in California in a big villa organizing meetings and parties with your new friends.

Let’s rewind this film to the start and imagine reducing the idea to something that you can build in 30 minutes. Like in Groundhog Day, when Bill Murray was able to create an ice sculpture in 24 hours, I want to create a mobile app in 30 minutes. The basic idea is to log your weight.
Let’s begin from the start

5 minutes: Decide the infrastructure
Currently there are three main mobile platforms owned by the following companies: Apple – iOS, Google – Android and Microsoft – Windows Phone. Each of them has created a proprietary store where you can publish your app. Each store works in a similar way from a user point of view, but behind the scenes each is different from a developer point of view.

Each store uses a different versioning system, a different set of icons, different metadata and so on. Being able to cover all the different requirements of these stores is a time consuming step.

The easiest option to start with is Microsoft. After some practice, the most developer friendly appears to be Google Android. Apple and Google are similar but, for some unclear reason, Apple is a waste of time. Every time you publish your app, all these platforms take some time. After the first submission, Microsoft and Google are quick and straightforward and you can publish updates easily and quickly. Apple takes the same long amount of time, at the moment seven to ten days, even if you are just changing the rating or a small feature.

So you want to develop your idea and publish the app on all these platforms. Do you want your app to be native, or do you want to develop it using a single code base? I don’t know all the different required languages and techniques and I don’t have enough money to pay other developers. So my first step is to develop a single code base that allows me to learn one language and technology stack. Now it is time to decide which framework to use to develop.

There are several frameworks available.
One of the most popular is called Xamarin. It allows the use of C# + Xamarin/Xaml UI. I’m not a Xaml developer, and I think that the learning curve to become a Xaml/Xamarin developer is long. This framework is not cheap and requires a special licence for each platform. It requires you to have and configure an Apple Mac device in order to develop, debug and publish for iPhone. I tried this framework for 90 days and I remember wasting most of that time configuring and studying, never focusing on my idea and implementing features.

Another option is Phonegap/Cordova. This is an open source and free thin layer of javascript on top of native features. Using this solution you can use javascript to implement your application logic and Html + Css to define the User Interface. I started using this solution but again it requires a combination of tools and components to be configured, and this task takes quite a long time. It also requires an Apple Mac to develop and debug for iOS.
Finally I found Telerik AppBuilder + Everlive services. Telerik is a well known company that develops tools and the Kendo UI framework for developers to build responsive apps. They created an integrated stack of tools on top of Apache Cordova. This is a commercial product like Xamarin, but it doesn’t require you to learn Xaml/Xamarin: you can use your Javascript / Html / Css skills. It doesn’t require you to have an Apple Mac to develop, debug and publish a mobile app for iPhone. One of the tools is called AppBuilder and it can be used as an extension for Visual Studio, as a stand alone development environment or online as a web application inside the Telerik Platform. One of its many benefits is that it simplifies the set up of your mobile app for each different App Store, allowing you to easily define icons, permissions, versions and other metadata and publish the package in a friendly way. Telerik also provides Kendo UI, a framework to build the User Interface. AppBuilder is not tied to Kendo UI, so you can use Ionic and Angular to build the app, as an example.

If you choose a hybrid app, one of the most common concerns about it is the lack of performance compared to native. This is due to the following two problems, in order of importance:

  1. A hybrid app is hosted inside a webview. For some mysterious reason, there is a 300ms delay every time the user of your app taps on the screen
  2. Using a WebView combined with html and Javascript, your performance will be dragged down by the inefficient DOM traversing problem that affects normal websites

Both these two problems are real and they have a negative impact on the user experience.
Both these two problems can be solved.

To solve problem number one, you can use a special Javascript library like fastclick.js or any other tap library. Another way is to use one of the most promising UI frameworks available for mobile, called Ionic. This framework takes care of the delay and removes it silently without needing any other configuration.
To solve the second problem, bear in mind that it depends mostly on how you code your UI logic, and it can be relevant if you are squeezing performance to create graphic apps. In that case, you can stop designing your UI using html and javascript and start using a special library like Famo.us, an engine that renders the UI using only javascript and can be used in conjunction with Angular js.

15 minutes: Develop the code

Open Visual Studio and be sure to have the Telerik AppBuilder extension installed.
Create a new empty project using the Blank Javascript template from the Telerik AppBuilder category.
Edit the index.html file under the project root and replace all the existing code by copying the content of index.html from the following git repository

https://github.com/riccardone/bodyapp.git

Remember to change the Telerik Everlive service key to your real key. You can find the key in the Telerik Platform by selecting your project -> BackendServices and the Api Keys option

5 minutes: Test

AppBuilder allows you to use several development tools. My favourite is Visual Studio. In order to quickly test your mobile app on a real device, you can:

  • Select AppBuilder menu
  • Select Build and publish to the cloud
  • Select one of the device mobile platform and press “Build”

On the device

  • Install Telerik Appbuilder companion app
  • Optional: If you use IPhone or Windows Phone, install a QR reader app. If you use Android, you can select the integrated QR reader option
  • Only the first time: Use a QR reader to scan the QR code on the screen
  • After the first time: Hold three fingers on the screen – Yoda gesture  – and this will trigger a LiveSync feature that automatically will download the latest package from the cloud

You don’t need a Mac to test on a real device. When you install the Telerik Companion App, you can then use this app as a host for your package. On an iPhone, the companion app is certified and comes from the app store, but your package comes from the Telerik cloud and it doesn’t need to pass through the long and tedious certification, provisioning and painful Apple way to deploy apps.
This plays a major role, allowing you to start coding your app features and test on an iPhone in a few clicks.

5 minutes: Publish

The develop and test phase is nearly finished. You are quite happy with the first version of your app. My suggestion is to start the publishing process as soon as possible. If this is your first app submission, you can’t imagine how tricky, long and full of little details this final task can be.

The sooner you start, the sooner you’ll figure out this complexity. Each of the three current major platforms is different.

The easiest option is the Microsoft Store https://dev.windows.com . It appears similar to a wizard. They are currently using Silverlight as a plugin, so you have to activate this old fashioned “thing”. Pay attention to the section where you upload your xap file: it also contains some of the related metadata, and if you decide to delete a Xap file, you will delete the metadata too! Another UI problem is the Save button. It is at the bottom of the page and it doesn’t appear to allow you to save your form partially, so you have to fill in almost everything before clicking Save. The Microsoft team behind the submission takes less than one week to review your app and they provide you with useful suggestions on how to improve it, like how to manage the back button, how to show the user some settings suggestions, or the privacy document. After your first submission is accepted, any other update passes through the same procedure (replace the Xap package instead of deleting it) but it requires only a few hours to be accepted.

My favourite is the Google Play Store https://play.google.com/apps/publish . It is a tailor made web application designed around developer needs. When you publish for the first time it requires a few days, but I don’t think they really review the app. I submitted the wrong APK package, with problems with the back button, and they published it without any concerns. Once your app is published, when you submit an update it requires only a few hours. They also provide a paid service to translate your app content into any language. You have to provide your localizable strings using an xml format.

The Apple App Store is similar to Google Play. In order to publish the app you have to pass through their painful provisioning certificate process https://developer.apple.com/account/ios/profile
If you use Telerik AppBuilder, it simplifies this process, but you still need to take some steps to get a certificate, sign your package and be able to publish. Here is a link where Telerik tries to simplify this process http://docs.telerik.com/platform/appbuilder/publishing-your-app/distribute-production/publish-ios
Once you have published your .ipa package, the review process takes a long time compared to the other stores. I made a first attempt that was rejected after 10 days. The reason was a wrong rating. So I fixed the rating and submitted the package again, and it was accepted after a further 9 days.

Summary
In this article I talked about my experience developing a mobile app from the idea to the app store. As you can read, I selected the solution that allowed me to transform the idea into something usable in a short amount of time.

You can find the example app on GitHub in the following repository

https://github.com/riccardone/bodyapp.git