GDPR, Event Sourcing and immutable events

Event Sourcing, Software Development

Data privacy was already being addressed in the early days, even before the internet era, and there has never been a lack of legislation. The problem is that the legislation has not been very effective at preventing data breaches or at enforcing the rights of individuals against companies or governments.

Looking at a bit of history, we can see how the US and Europe have diverged in their approach to data privacy.

Gdpr history

In the US, legislation pays greater attention to not blocking companies from doing their job. This is probably the reason why US legislation is made up of several different pieces of law: data privacy is maintained under a sort of patchwork that often allows companies and government agencies to work around limits and take advantage of exceptions.

In Europe, there is more attention to the rights of individuals, despite the data needs of businesses. The main law used to be the Data Protection Directive, which defined a set of principles. Now the GDPR is a directly applicable law that, from the 25th of May 2018, is the main umbrella protecting individuals and their data.

The GDPR incorporates and better defines the existing protections, and it strengthens the ability of individuals to claim respect of these rights in a legal battle. It is not a technical document, and it doesn’t innovate much in terms of techniques to avoid data breaches. In fact, it considers data breaches as something possible: data breaches will happen. What will make the difference is whether or not your company is compliant with the law when the data breach happens.

GDPR Entities

How can a company be compliant with GDPR?

Review your Privacy Policy document and explicitly ask your customers for consent. Under this law, you can only hold personal data that are strictly required, and only for the time they are required. During the review of the policy, try to define a map of all the personal data, and eventually remove or anonymize data that are not strictly required. This work will also help you comply with the right to data portability, as an individual can ask you for a document that aggregates all of his or her data.

Appoint a Data Protection Officer who will be responsible for enforcing respect of the law.

Define a strategy to handle the cases where individuals exercise their rights.

The Right to Erasure (Art. 17)

It entitles the data subject to have the data controller erase his/her personal data, cease further dissemination of the data, and potentially have third parties cease processing of the data.

GDPR Right to be forgotten

The Right to Data Portability (Art. 20)

You have the right to receive your personal data from an organization in a commonly used form, so that you can easily share it with another.

GDPR Data Portability

The Right not to be Subject to Automated Decision-Making (Art. 22)

Unless it is necessary for a law or a contract, decisions affecting you cannot be made on the sole basis of automated processing.

In my words, use this right if you want to avoid being tagged on Facebook 😉

GDPR restrict data processing

Your organizational GDPR strategies

  • Create or review your data privacy Policy and ask for Consent
  • If your company has 250 or more employees, or works with sensitive data, appoint a Data Protection Officer who will be responsible for ensuring that the obligations under the GDPR are met
  • Set a plan for each of the new rights that the GDPR enforces

Software Strategies to comply with GDPR

…in case you are working with immutable events

  • Encryption
  • One stream per entity
  • Set retention period when possible
  • Provide a single view of the data

Always try to use:

  • $correlationId’s
  • a map of where the personal data are stored

Encrypt the data related to a particular person with a symmetric key.
Delete the key when the person claims the Right to be Forgotten.
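As a minimal sketch of this crypto-shredding idea, using openssl from the command line (the file names, the key-per-person naming scheme and the sample data are all illustrative):

```shell
# Personal data for one person (illustrative)
echo "name=Alice,email=alice@example.com" > person-42.txt

# One symmetric key per person
openssl rand -hex 32 > person-42.key

# Encrypt the person's data with that key, then drop the plaintext
openssl enc -aes-256-cbc -pbkdf2 -in person-42.txt -out person-42.enc -pass file:./person-42.key
rm person-42.txt

# Normal operation: decrypt on read
openssl enc -d -aes-256-cbc -pbkdf2 -in person-42.enc -pass file:./person-42.key

# Right to Erasure: delete the key and the ciphertext becomes unreadable
rm person-42.key
```

In an event store the same idea applies per stream: the events stay immutable, but once the key is gone the personal data inside them cannot be recovered.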

GDPR streams encryption

One stream per entity

Keep the data in well defined, separate streams. Delete the related streams when a person claims the Right to be Forgotten.
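With Event Store’s HTTP API, deleting the per-person stream could look like the following sketch (the port and stream name are illustrative; the ES-HardDelete header requests a hard delete, so the stream can never be recreated and its events are removed during a scavenge):

```shell
curl -i -X DELETE http://localhost:2113/streams/person-42 -H "ES-HardDelete: true"
```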

GDPR one stream per entity

Set a retention period on incoming streams

Items older than the $maxAge will automatically be removed from the data collection streams.
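In Event Store this is expressed as stream metadata. A sketch, assuming a 30-day retention (the value is in seconds) set on a hypothetical inbound stream via the /streams/&lt;stream&gt;/metadata endpoint:

```json
{
  "$maxAge": 2592000
}
```

Events older than that become unreadable and are physically removed when the stream is scavenged.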

GDPR retention period

Provide a single view of the data
GDPR single view of data

Data ingestion with Logstash and EventStore

Event Sourcing, Software Development

Event Store is a database that stores data as events in data streams.

It allows you to implement CQRS and Event Sourcing, and it also allows components to exchange messages using a simple Pub/Sub pattern.

It’s open source: there is a community edition free to use, as well as a commercial licence with support.

This article is the written version of a Webinar and it is about data ingestion.

The challenge

The challenge is to be able to ingest data from any csv file: load the content and send it to the Event Store Http Api.

To be precise, the challenge is to convert this csv line

Europe,Italy,Clothes,Online,M,12/17/2013,278155219,1/10/2014,1165,109.28,35.84,127311.20,41753.60,85557.60

to that (this is an example of an Http POST to the Event Store Http Api)

Method POST: http://localhost:2113/streams/inbound-data
Content-Type: application/

{
  "eventId": "fbf4b1a1-b4a3-4dfe-a01f-ec52c34e16e4",
  "eventType": "InboundDataReceived",
  "data": {
    "message": "Europe,Italy,Clothes,Online,M,12/17/2013,278155219,1/10/2014,1165,109.28,35.84,127311.20,41753.60,85557.60",
    "metadata": { "host": "box-1", "path": "/usr/data/sales.csv" }
  }
}

As you can see, we need to set several different parts of the http POST: the unique eventId, plus some extra metadata to enrich each message, like the host and the file path.

These data will be precious later, to find the specific mapping for that particular data-source and to keep track of the origin.
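To make the target format concrete, here is a sketch of building and sending such a message by hand. It assumes a local Event Store on port 2113 and the vnd.eventstore media type for batch posts; the csv line is the one from the example above:

```shell
# A unique id per event (falls back to the kernel generator if uuidgen is absent)
EVENT_ID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)

# The body the Event Store Http Api expects: an array of events
PAYLOAD=$(cat <<EOF
[
  {
    "eventId": "$EVENT_ID",
    "eventType": "InboundDataReceived",
    "data": {
      "message": "Europe,Italy,Clothes,Online,M,12/17/2013,278155219,1/10/2014,1165,109.28,35.84,127311.20,41753.60,85557.60",
      "metadata": { "host": "box-1", "path": "/usr/data/sales.csv" }
    }
  }
]
EOF
)
echo "$PAYLOAD"

# With a running Event Store:
# curl -i -d "$PAYLOAD" "http://localhost:2113/streams/inbound-data" \
#   -H "Content-Type: application/vnd.eventstore.events+json"
```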

Data Ingestion from CSV to Microservices

The Api Gateway is our internal data collection endpoint. In my opinion it must be a simple Http endpoint; there is no need for a heavyweight framework that can potentially add complexity and little value.

In this case we are using the existing Event Store Http Api as the Api Gateway, and Logstash to monitor folders and convert the csv lines into json messages.

Logstash is a powerful tool that keeps track of the last ingested position, in case something goes wrong, and monitors file changes.

There are other tools available for this task, like Telegraf, but I have more experience with Logstash.

Once the messages are flowing through the Api Gateway, we can implement Domain Components or Microservices that subscribe to these messages and provide validation, mapping and Domain business logic. They will then save their results in their own data streams as Domain Events. To avoid persisting the data in the inbound stream, you can also optionally set a retention period after which the messages are automatically removed from the system.

  1. Download Logstash and Event Store
  2. Create a folder “inbound” and copy and paste my gist into a new logstash.yml file
  3. If you are on Windows, create a logstash.bat file to easily run logstash. The content depends on your paths, but in my case it is:
    C:\services\logstash-6.2.4\bin\logstash -f C:\inbound\logstash.yml
  4. From a command line, run logstash
  5. Create a subdirectory inbound\<yourclientname>. Example: c:\inbound\RdnSoftware
  6. Copy a csv file into the folder and let the ingestion start

With this Logstash input configuration, when you have a new client you can just add another folder and start ingesting the data. For production environments, you can combine Logstash with Filebeat to distribute part of the ingestion workload onto separate boxes.
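I cannot inline the gist here, but a minimal sketch of such a pipeline configuration might look like the following. The paths, the uuid filter (from the logstash-filter-uuid plugin) and the http output options are assumptions; the real gist certainly adds headers and field mappings for the Event Store events media type:

```
input {
  file {
    # one subfolder per client under the inbound root
    path => "C:/inbound/*/*.csv"
    start_position => "beginning"
  }
}

filter {
  # give every line a unique event id
  uuid { target => "eventId" }
}

output {
  http {
    url => "http://localhost:2113/streams/inbound-data"
    http_method => "post"
    format => "json"
  }
}
```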

In some scenarios, especially when the data are not yet well understood and you don’t yet have a mapper, it can be easier to direct the data straight into a semi-structured data lake for further analysis.

Here is the webinar

I hope you enjoyed my little data ingestion example.

Build an Angular2 app and run it on AWS using a Docker container in 30 minutes

Software Development
Angular Aws Docker talk

In this article I will try to transpile the content of a tech meetup where I gave a demo about “Building an Angular2 app and running it in a Docker Container on AWS in 30 minutes”.

The main points are:

  • Build an Angular2 project with Angular-CLI and customize the home page with a simple Customers service
  • Dockerise the app and push the image to the public Docker Hub
  • On AWS, launch a new ECS instance, then deploy and run the app on it

Angular2 is a framework to build web applications. It provides a complete set of components all working together, so it can be considered a full, heavily loaded framework. It differs from other web solutions like React because in Angular the most common decisions, such as which http or routing module to use, are already made by the Angular team. With React you have more flexibility, given that it is a ‘pure’ javascript library and therefore javascript-centric, as opposed to Angular, which is still a composition of modules around html.

Anyway, if you like to buy your car already made and just focus on writing your UI and business logic, Angular2 is a good choice.

Build Angular2 project with Angular-CLI

Install the Angular2 CLI:
> npm install -g angular-cli
Build an Angular2 seed project using the following command
> ng new test-proj
Build the simple project as a static website
> ng build --prod

The previous command generates the ‘dist’ folder

You can download the Angular-CLI from

Customize the default page

Let’s generate a model that represents a simple Customer item.
Write the following Angular-CLI command in your terminal

> ng generate class shared/customers.model

Paste the following code snippet into the customers.model file

export class Customers {
  // the properties from the original gist are omitted here
}

Now generate a service module that will use the model to provide a sample list of Customers.
Write the following Angular-CLI command in your terminal

> ng generate service shared/customers

Paste into the newly created service the code from the following Github Gist

Modify ‘app.module.ts’ with the following GitHub Gist

Modify ‘app.component.ts’ with the following GitHub Gist

Modify app.component.html with the following GitHub Gist

Dockerise the app and push the image to the public Docker Hub

Create a Dockerfile in the app folder using the following code

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y apache2
COPY dist /var/www/html
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]

This Dockerfile pulls a version of Ubuntu, installs Apache2 and copies the content of the ‘dist’ folder.

Build the Docker image
> docker build -t test-proj .
Tag the image
> docker tag imageid yourrepo/test-proj:latest
Log in to your Docker Hub account
> docker login
Push the image to your public repo
> docker push yourrepo/test-proj:latest

Run locally for test
> docker run -dit -p 8080:80 yourrepo/test-proj:latest

On AWS, launch a new ECS instance

  • Log in to AWS -> EC2 and select “Launch Instance”
  • In the marketplace, search for ‘ecs’ and select ‘Amazon ECS-Optimized Amazon Linux AMI’
  • Select the ‘t2 micro’ free tier option -> next
  • Step 3 Configure Instance Details: Create new IAM Role ‘ecs-role’
  • Role Type: Amazon EC2 Role for EC2 Container Service
  • Attach the available policy
  • Back in the Step 3, select the new role -> next
  • Step 6 Configure Security Group
  • SSH select ‘my ip’
  • Add HTTP rule port 80, anywhere -> review and launch

Create a Task Definition

  • Select the created instance and verify that it is running
  • Select the ‘EC2 Container Service’ action
  • On Task Definition select “Create new Task Definition”
  • Define a name
  • Select “Add Container”
  • Define a container name
  • In the Image box, add the publicly available Docker image from Docker Hub or ECS Repo. Example: riccardone/ciccio:latest
  • Set the memory hard limit to 256mb (free tier)
  • Set ports host: 80 container: 80
  • Select “Add” and then “Create”

Create a Service to run the task

  • Select Cluster from the main menu on the left and then “default”
  • Select “Create” action on the Services tab
  • Set a service name and set 1 as number of tasks
  • Select “Create Service” action

To verify that the Service is up and running, select the service, select the task, expand the task and click on the “External link” to open the running app in a browser.

I hope you enjoyed!

An Ingestion system of unstructured data-sources

Software Development

Nowadays I’m focused on the semantic web and graph relations. The stack of techs that I’m using is formed by lightweight services, like an input adapter and a domain processor, and some infrastructure software, like EventStore and Neo4J.
I’m using EventStore as an input pipeline and Neo4J as a Domain Model store for a simple Domain Component.
In order to give a well defined structure to the Neo4J nodes, I’m using some of the OWL elements.

Following is a basic diagram of the ingestion system.

Ingestion System

Disparate data-sources ingestion system

Neo4J - OWL

Neo4J graph store using OWL elements

Configure NServiceBus with Sql Server transport

Software Development

In order to use Sql Server as the middleware transport infrastructure, you have to install the NServiceBus.SqlServer nuget package in the projects where your publisher and subscribers are located.

If you use a web-api project to send messages onto the bus, you have to configure the SqlServer transport.
You can add this configuration in the global.asax Application_Start method, as in the following example

public static IBus Bus;

protected void Application_Start()
{
    var configuration = new BusConfiguration();
    configuration.UseTransport<SqlServerTransport>();
    // other configurations .....
    Bus = NServiceBus.Bus.Create(configuration).Start();
}

You also have to set the connection string element used by NServiceBus to connect with the transport, in our case a normal Sql Server connection string
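The configuration element itself is missing from the original post. By convention the SqlServer transport looks for a connection string named NServiceBus/Transport, so a sketch for web.config would be (server and catalog names are illustrative):

```xml
<connectionStrings>
  <add name="NServiceBus/Transport"
       connectionString="Data Source=.;Initial Catalog=nservicebus;Integrated Security=True" />
</connectionStrings>
```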


Now focus on the project where your handling logic is. Install the NServiceBus.SqlServer nuget package.
In the EndpointConfig class you have to set SqlServerTransport instead of the default MsmqTransport

public class EndpointConfig : IConfigureThisEndpoint, AsA_Server, UsingTransport<SqlServerTransport>

and modify the configuration file with the proper connection string


Job done. Now when you start the solution, NServiceBus will automatically initialize the Sql Server database, using it as the middleware infrastructure with all the required queues in the form of tables.
NServiceBus uses the Sql Server database as an infrastructure to store messages; the queues here are in the form of tables. NServiceBus does not rely on MSDTC or Service Broker: all the magic is done by the C# transport component.

If you want to reprocess messages with errors, you can write a procedure that picks up the message from the dbo.errors table and inserts it into the original destination queue/table in an atomic operation. Here is a good example
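A sketch of such a retry procedure is below. dbo.MyEndpoint stands for the original destination queue table and is hypothetical; since the queue tables created by the SqlServer transport share the same schema, a straight copy works. Verify the column layout against your own database before using anything like this:

```sql
-- Move a failed message back to its original queue table in one atomic operation
BEGIN TRANSACTION;

INSERT INTO dbo.[MyEndpoint]
SELECT * FROM dbo.errors
WHERE Id = @MessageId;

DELETE FROM dbo.errors
WHERE Id = @MessageId;

COMMIT TRANSACTION;
```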

One of the problems that I found using Sql Server as transport with NServiceBus is that when a message is handled in your handler, it is within a transaction. If you connect to a database with an Entity Framework DataContext class or an Ado.Net connection to do something, and you are using a different connection string from the one used for the SqlTransport, then NSB automatically promotes the transaction to a distributed transaction, and if you have not configured MSDTC you’ll receive an exception.
To fix this problem, you can suppress the transaction in your code, wrapping your data access logic in a using block, or you can disable MSDTC by configuration, using the following setting in your endpoint config class:
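The snippet is missing from the original post; with the NServiceBus v5 API the setting looks roughly like this (verify the exact call against the version you are using):

```csharp
var configuration = new BusConfiguration();
configuration.UseTransport<SqlServerTransport>();

// Stop NServiceBus from escalating handler transactions to MSDTC
configuration.Transactions().DisableDistributedTransactions();
```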


Using this setting, NSB considers all your transactions as normal atomic transactions, without escalating them to distributed transactions.

As a general consideration, when you rely on Sql Server as transport, you are in fact using it as a broker: a central dispatcher for all your messages. The side effect here is that relying on a central broker (the SqlServer instance that hosts your NSB messaging infrastructure) while keeping your business Sql Server databases somewhere else forces you to configure MSDTC to avoid potential data loss in a disaster scenario. But unfortunately it is not possible to use MSDTC with an Always On availability group, as documented in that article.

An interesting side effect of using the Sql transport is that you have the possibility to set up the Sql Server Always On feature. Using this feature, you can potentially keep two geographically separated Sql Server instances synchronized, with all the message-queuing infrastructure, providing in this way a nice highly available solution. In case of disaster on the primary server you can have an automatic failover, and the applications can use a logical listener instead of pointing to the physical database server.

More info about SqlServerTransport

Develop and publish a mobile app in 30 minutes

Software Development

This is the handout for the presentation I gave at an International Developer Tech Talk in London.

You have an idea. You want to build a piece of software around it. You start from the end, seeing yourself somewhere in California in a big villa, organizing meetings and parties with your new friends.

Let’s rewind this film to the start and imagine reducing the idea to something that you can build in 30 minutes. Like in Groundhog Day, when Bill Murray was able to create an ice sculpture in a day, I want to create a mobile app in 30 minutes. The basic idea is to log your weight.
Let’s begin from the start

5 minutes: Decide the infrastructure
Currently there are three main mobile platforms, owned by the following companies: Apple (iOS), Google (Android) and Microsoft (Windows Phone). Each of them runs a proprietary store where you can publish your app. Each store works in a similar way from a user’s point of view, but behind the scenes each is different from a developer’s point of view.

Each store uses a different versioning system, a different set of icons, different metadata and so on. Being able to cover all the different requirements of these stores is a time-consuming step.

The easiest option to start with is Microsoft. After some practice, the most developer-friendly appears to be Google Android. Apple and Google are similar, but for some unclear reason Apple is a waste of time. Every time you publish your app, all these platforms take some time. After the first submission, Microsoft and Google are quick and straightforward, and you can publish updates easily and fast. Apple takes the same long amount of time (at the moment it is seven to ten days!) even if you are just changing the rating or a small feature.

So you want to develop your idea and publish the app on all these platforms. Do you want your app to be native, or do you want to develop it using a single code base? I don’t know all the different required languages and techniques, and I don’t have enough money to pay other developers. So my first step is to develop a single code base that allows me to learn one language and technology stack. Now it is time to decide which framework to use.

There are several frameworks available.
One of the most popular is called Xamarin. It allows the use of C# + Xamarin/Xaml UI. I’m not a Xaml developer, and I think that the learning curve to become a Xaml/Xamarin developer is long. This framework is not cheap and requires a separate licence for each platform. It also requires you to have and configure an Apple Mac in order to develop, debug and publish for iPhone. I tried this framework for 90 days, and I remember wasting most of the time configuring and studying, never focusing on my idea and implementing features.

Another option is Phonegap/Cordova. This is an open-source, free, thin layer of javascript on top of native features. Using this solution you can use javascript to implement your application logic and Html + Css to define the user interface. I started using this solution, but again it requires a combination of tools and components to be configured, and this task takes quite a long time. It also requires an Apple Mac to develop and debug for iOS.
Finally I found Telerik AppBuilder + Everlive services. Telerik is a well known company that develops tools and the Kendo UI framework for developers to build responsive apps. They created an integrated stack of tools on top of Apache Cordova. This is a commercial product, like Xamarin, but it doesn’t require you to learn Xaml/Xamarin: you can use your Javascript / Html / Css skills. It doesn’t require you to have an Apple Mac to develop, debug and publish a mobile app for iPhone. One of the tools is called AppBuilder, and it can be used as an extension for Visual Studio, as a stand-alone development environment, or online as a web application inside the Telerik Platform. One of its many benefits is that it simplifies the set-up of your mobile app for each different app store, allowing you to easily define icons, permissions, versions and other metadata and publish the package in a friendly way. Telerik also provides Kendo UI, a framework to build the user interface, but AppBuilder is not tied to Kendo UI, so you can use Ionic and Angular to build the app, as an example.

If you choose a hybrid app, one of the most common concerns is the lack of performance compared to native. This is due to the following two problems, in order of importance:

  1. A hybrid app is hosted inside a webview. For some mysterious reason, there is a 300ms delay every time the user of your app taps on the screen
  2. Using a WebView combined with html and javascript, your performance will be dragged down by the inefficient Dom-traversal problem that affects normal websites

Both these problems are real, and they have a negative impact on the user experience.
Both of them can be solved.

To solve problem number one, you can use a special javascript library like fastclick.js or any other tap library. Another way is to use one of the most promising UI frameworks available for mobile, called Ionic. This framework takes care of the delay and removes it silently, without needing any other configuration.
To solve the second problem, bear in mind that it depends mostly on how you code your UI logic, and it becomes relevant if you are squeezing the performance to create graphic apps. In that case, you can stop designing your UI using html and javascript and start using a special library (an engine that renders the UI using only javascript) that can be used in conjunction with Angular js.

15 minutes: Develop the code

Open Visual Studio and be sure to have installed the Telerik AppBuilder extension.
Create a new empty project using the Blank Javascript template from the Telerik Appbuilder category.
Edit the index.html file under the project root and replace all the existing code by copying the content of index.html from the following git repository

Remember to change the Telerik Everlive service key with your real key. You can find the key from the Telerik Platform selecting your project->BackendServices and the option Api Keys

5 minutes: Test

AppBuilder allows you to use several development tools. My favourite is Visual Studio. In order to quickly test your mobile app on a real device, you can:

  • Select AppBuilder menu
  • Select Build and publish to the cloud
  • Select one of the device mobile platform and press “Build”

On the device

  • Install Telerik Appbuilder companion app
  • Optional: If you use IPhone or Windows Phone, install a QR reader app. If you use Android, you can select the integrated QR reader option
  • Only the first time: Use a QR reader to scan the QR code on the screen
  • After the first time: Hold three fingers on the screen – Yoda gesture  – and this will trigger a LiveSync feature that automatically will download the latest package from the cloud

You don’t need a Mac to test on a real device. When you install the Telerik Companion App, you can use it as a host for your package. On an iPhone, the companion app is certified and comes from the app store, but your package comes from the Telerik cloud, so it doesn’t need to pass through the long and tedious certification, provisioning and painful Apple way of deploying apps.
This plays a major role, allowing you to start coding your app features and testing on an iPhone in a few clicks.

5 minutes: Publish

The develop and test phase is nearly finished. You are quite happy with the first version of your app. My suggestion is to start the publishing process as soon as possible. If this is your first app submission, you can’t imagine how tricky, long and full of little details this final task can be.

The sooner you start, the sooner you will figure out this complexity. Each of the three current major platforms is different.

The easiest option is the Microsoft Store. It appears similar to a wizard. They are currently using Silverlight as a plugin, so you have to activate this old-fashioned “thing”. Pay attention to the section where you upload your xap file: it also contains some of the related metadata, and if you decide to delete a xap file, you will delete the metadata too! Another UI problem is the Save button. It is at the bottom of the page and it doesn’t appear to allow you to save your form partially, so you have to fill in almost everything before clicking on Save. The Microsoft team behind the submission takes less than one week to review your app, and they provide you with useful suggestions on how to improve it, like how to manage the back button, how to show the user some settings suggestions, or the privacy document. After your first submission is accepted, any other update passes through the same procedure (replace the xap package instead of deleting it), but it requires only a few hours to be accepted.

My favourite is the Google Play Store. It is a tailor-made web application designed around developer needs. When you publish for the first time it requires a few days, but I don’t think that they care much about the app: I submitted the wrong APK package, with problems with the back button, and they published it without any concerns. Once your app is published, when you submit an update it requires only a few hours. They also provide a payable service to translate your app content into any language; you have to provide your localizable strings using an xml format.

The Apple App Store is similar to Google Play. In order to publish the app you have to pass through their painful provisioning certificate process.
If you use Telerik AppBuilder, it simplifies this process, but you still need to take some steps to get a certificate, sign your package and be able to publish. Here is a link where Telerik tries to simplify this process.
Once you have published your .ipa package, the review process takes a long time compared to the other stores. My first attempt was rejected after 10 days; the reason was a wrong rating. So I fixed the rating, submitted the package again, and it was accepted after a further 9 days.

In this article I talked about my experience developing a mobile app, from the idea to the app stores. As you can read, I selected the solution that allowed me to transform the idea into something usable in a short amount of time.

You can find the example app on GitHub in the following repository