
Deploy EventStore to Kubernetes with Helm on Azure Cloud AKS

The “official” EventStore Helm Chart has since been archived by the EventStore team. The reasoning behind that decision is debatable, to say the least, but this is still the chart I use.

This guide shows how to use the official Event Store Helm Chart to interactively deploy an Event Store cluster on Azure Kubernetes Service (AKS).

Configuration steps
Deploy Event Store Cluster with Helm
Upgrade the cluster with a newer version
Rollback to a previous version
Delete resources


Before you start, install the following utilities on your dev machine:
The Azure CLI
kubectl
Helm

Configuration steps

Log in to your Azure account using the az CLI. The command launches your default browser so you can complete authentication with your account credentials.

az login

Create a new resource group

az group create -n <resourcegroupname> -l <location-compatible-with-aks>

Example using centralus:

az group create -n mygroup -l centralus

Create the Kubernetes cluster with 3 nodes. This command accepts various parameters, such as the version of Kubernetes to install. For this tutorial we use the default options.

az aks create -n <clustername> -g <resourcegroupname> -c 3

The command returns a JSON object with all the details of the new Kubernetes cluster. You can now list all the Kubernetes clusters in your Azure account:

az aks list -o table

We are going to use kubectl to manage resources in our Kubernetes cluster. The following command sets the current context for the kubectl CLI and merges the credentials into your existing kubeconfig file.

az aks get-credentials -n <clustername> -g <groupname>

Get the list of nodes using kubectl

kubectl get nodes

It’s time now to access the web-based Kubernetes Dashboard. To browse the dashboard we first need to sort out Role-Based Access Control (RBAC), which is enabled by default on Azure AKS.

Create a file rbac-config.yaml containing the following YAML:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

Apply the manifest to create the ClusterRoleBinding object:

kubectl create -f ./rbac-config.yaml

Alternatively, you can create the same binding between the service account and the role imperatively with a single command:

kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard

To access the dashboard you can now use the ‘browse’ command of the az CLI. This command is a wrapper around kubectl’s ‘proxy’ command: it starts a local web server that tunnels traffic to the dashboard running in the AKS cluster.

az aks browse -n <clustername> -g <groupname>

Deploy Event Store Cluster with Helm

Helm is the package manager for Kubernetes. After you’ve created a new Kubernetes cluster you usually need a one-off Helm setup so that your local helm CLI can talk to a configured service account on the server side. The server-side component used by Helm (v2) is called Tiller. Run the command:

helm init --service-account tiller

You can then check if the ’tiller-deploy-xxxx’ pod is running

kubectl -n kube-system get pod

Note: if the Tiller pod is not running, you can create the service account and binding manually with the following commands. You will probably need to start over from the previous steps, or delete the existing Tiller service account and binding first.

# alternative to running kubectl create -f ./rbac-config.yaml
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

It’s time now to deploy the Event Store cluster using the official Helm Chart with the following commands:

helm repo add eventstore https://eventstore.github.io/EventStore.Charts
helm repo update
helm install -n eventstore eventstore/eventstore --set persistence.enabled=true

The Event Store cluster is deployed and becomes available within a couple of minutes. The default cluster size in the Helm Chart is 3, so you end up with a 3-node Event Store cluster spread across the 3-node Kubernetes cluster. The setting ‘persistence.enabled=true’ makes the chart create a ‘PersistentVolumeClaim’ on your Kubernetes cluster to dynamically claim persistent storage volumes. This can be reconfigured to use statically defined volumes if required.
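To sanity-check the deployment, you can inspect the pods and volume claims and port-forward the admin UI. The pod name eventstore-0, the label selector, and the admin UI port 2113 below are assumptions based on typical chart defaults; adjust them to whatever kubectl get pods reports:

```shell
# List the Event Store pods and their persistent volume claims
# (the app=eventstore label is an assumption; plain 'kubectl get pods' also works)
kubectl get pods -l app=eventstore
kubectl get pvc

# Forward the admin UI (Event Store's default HTTP port 2113) of the first
# node to localhost, then open http://localhost:2113 in a browser
kubectl port-forward eventstore-0 2113:2113
```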

Upgrade the Event Store cluster with a newer version

Verify your current Event Store cluster

helm status eventstore

Fork the official Event Store Helm Chart repository and change the version of the image in the chart’s values.yaml.
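As a sketch, the change in values.yaml is a one-line image tag bump. The exact key names below are assumptions about the chart’s layout; check the values.yaml in your fork:

```yaml
# values.yaml (fragment) — hypothetical key names, verify against the chart
image: eventstore/eventstore
imageTag: release-5.0.8   # the newer version to roll out
```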
Then run the upgrade command from the directory that contains the chart:

helm upgrade eventstore . --set persistence.enabled=true

The upgrade command performs a rolling upgrade of the pods one by one, without downtime. Helm takes care of attaching the existing volumes to the new pods during the upgrade.
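You can watch the rolling upgrade as it happens. The StatefulSet name eventstore is an assumption based on the release name; confirm it with kubectl get statefulsets:

```shell
# Watch pods being replaced one at a time during the upgrade
kubectl get pods -w

# Or, assuming the chart deploys a StatefulSet named 'eventstore',
# block until the rollout completes
kubectl rollout status statefulset/eventstore
```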

Rollback to a previous version

To roll back the upgrade, first display the release history:

helm history eventstore

Then roll back to a specific revision:

helm rollback eventstore 1 

Delete resources

az aks delete -n <clustername> -g <groupname>
az group delete -n <groupname>
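If you only want to remove the Event Store deployment while keeping the AKS cluster, you can delete the Helm release instead (Helm 2 syntax). Persistent volume claims created by the chart may survive the release deletion; the label selector below is an assumption, so check the actual labels first:

```shell
# Remove the release and its history from Tiller
helm delete eventstore --purge

# PVCs may be left behind; inspect their labels, then delete them explicitly
kubectl get pvc --show-labels
kubectl delete pvc -l app=eventstore   # label is an assumption; verify above
```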