This guide shows how to use the official Event Store Helm chart to interactively deploy an Event Store cluster on Google Cloud's Kubernetes Engine (GKE) service.
Install the following utilities on your dev machine:
Google Cloud SDK with beta commands: https://cloud.google.com/sdk/install
kubectl and the Helm client, both of which are used throughout this guide.
Before proceeding, create a project in your Google Cloud account and decide on the region and zone where you want to set up the Kubernetes cluster. You can use the Web UI or the gcloud CLI. This information is used in the following steps. When referring to your project in CLI commands, remember to use the PROJECT_ID and not the friendly name.
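If you prefer the CLI to the Web UI, project creation can be sketched as follows; `<projectid>` is a placeholder for your chosen, globally unique project ID, and the display name is just an example:

```shell
# Create a new project from the CLI; the project ID must be globally
# unique, the --name is a friendly display name of your choosing
gcloud projects create <projectid> --name="Event Store demo"

# List projects to confirm it exists and to look up its PROJECT_ID
gcloud projects list
```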
Log in to your Google Cloud account from the CLI. This is a two-step flow: open the URL printed by the command in any browser, authenticate, and copy/paste the verification code back into the CLI.
gcloud auth login --no-launch-browser
gcloud config set compute/region <regionname>
gcloud config set project <projectid>
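Before creating any resources, it can be worth double-checking that the account, project, and region are set as intended:

```shell
# Show the active account, project, and compute region/zone
gcloud config list
```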
Create a Kubernetes Cluster in your Google Cloud Account.
The following command does not specify the number of nodes, so the default of 3 nodes is used.
gcloud container clusters create <clustername> --zone <zonename>
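If you want the node count to be explicit rather than relying on the default, the same command accepts a `--num-nodes` flag, and you can list clusters to confirm creation:

```shell
# Create the cluster with an explicit node count (3 is also the default)
gcloud container clusters create <clustername> --zone <zonename> --num-nodes 3

# Confirm the cluster exists and its status is RUNNING
gcloud container clusters list
```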
We are going to use kubectl for managing resources in our Kubernetes cluster. Set the current context for the kubectl CLI:
gcloud beta container clusters get-credentials <clustername> --zone <zonename> --project <projectid>
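You can verify that kubectl now points at the new cluster:

```shell
# Show the context set by get-credentials
kubectl config current-context

# The cluster's nodes should be listed with STATUS Ready
kubectl get nodes
```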
On the server side, Helm relies on a service account for its Tiller component. This account needs to be configured for Role-Based Access Control (RBAC), which is enabled by default on Google Cloud GKE. To configure RBAC you can follow these instructions. Essentially, you need to create the Tiller service account and its role binding before running the 'helm init' command.
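The referenced instructions boil down to something like the following sketch; the service-account and binding names are conventional rather than mandated, and binding to `cluster-admin` is broad but acceptable for a test cluster:

```shell
# Create the Tiller service account in the kube-system namespace
kubectl -n kube-system create serviceaccount tiller

# Grant it cluster-admin (broad; fine for a test environment)
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller

# Initialise Helm (v2), deploying Tiller under that service account
helm init --service-account tiller
```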
You can then check that the 'tiller-deploy-xxxx' pod is running:
kubectl -n kube-system get pod
Deploy Event Store Cluster with Helm
Note that many options are available to customise the Event Store deployment. The one used in this guide enables persistence, which also deploys a Persistent Volume Claim. A Claim is an abstraction that asks Kubernetes to set up one persistent volume per Event Store node and assign an id to it. These volumes can then be reused if, for example, we upgrade the cluster version and want to retain the data. If we don't specify existing volumes, the volumes are created dynamically.
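On GKE, dynamic provisioning is backed by a default StorageClass, which you can inspect before installing:

```shell
# GKE ships a default StorageClass; dynamically provisioned
# persistent volumes are created from it
kubectl get storageclass
```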
helm repo add eventstore https://eventstore.github.io/EventStore.Charts
helm repo update
helm install -n eventstore eventstore/eventstore --set persistence.enabled=true
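After the install you can check what was provisioned; the pod and claim names you see depend on the release name:

```shell
# Watch the Event Store pods come up
kubectl get pods

# List the Persistent Volume Claims and the volumes bound to them
kubectl get pvc
kubectl get pv
```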
On Google Cloud GKE, authentication uses RBAC by default. For that reason, in order to reach your Event Store cluster you have to set up access for the anonymous user. Only do this for a test environment, by running the following command:
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
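One way to reach the Event Store admin UI from your machine is a port-forward. The service name below is an assumption based on the release name used earlier, so list the services first; 2113 is Event Store's default HTTP/admin port:

```shell
# Find the service created by the release (its name depends on
# the release name)
kubectl get svc

# Forward the admin UI port to localhost:2113; 'eventstore' here
# is an assumed service name -- substitute the one listed above
kubectl port-forward svc/eventstore 2113:2113
```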
Upgrade the cluster with a newer version
Verify your current Event Store cluster
helm status eventstore
Fork the official Event Store Helm chart repository and change the version of the image in the chart's values.yaml.
For example, run the command from the directory containing the chart:
helm upgrade eventstore . --set persistence.enabled=true
The upgrade command performs a rolling upgrade of the pods one by one, without downtime. Helm takes care of attaching the existing volumes to the new pods during the upgrade.
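You can follow the rolling upgrade as it happens; this assumes the chart deploys a StatefulSet named after the release, so check the actual name first:

```shell
# Check the StatefulSet name the chart created
kubectl get statefulsets

# Block until the rollout completes; 'eventstore' is the assumed
# StatefulSet name -- substitute the one listed above
kubectl rollout status statefulset/eventstore
```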
Rollback to a previous version
To roll back the upgrade, first use the following command to display the release history:
helm history eventstore
Then use the following command to roll back to a specific revision:
helm rollback eventstore 1
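You can confirm the rollback took effect; a rollback is recorded as a new revision in the history:

```shell
# The history should show a new revision whose description
# references the rollback
helm history eventstore

# And status should show the currently deployed release
helm status eventstore
```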
Delete the Kubernetes cluster
gcloud container clusters delete <clustername> --zone <zonename>
Alternatively, log in to the Google Cloud Web UI and, in the Kubernetes Engine view, delete the Kubernetes cluster using the bin icon beside it.