Installation - Standalone cluster

Using Helm

To install the chart, you first have to prepare some required Kubernetes resources; then the Helm chart itself can be installed.

1. Create the namespace to contain all cluster resources

kubectl create namespace my-microstream-cluster
  • If the output reports that the namespace was created, all is good. Otherwise there may be a permission, I/O, or other problem on your side.

2. Switch into the namespace

kubectl config set-context --namespace='my-microstream-cluster' --current
  • This makes subsequent kubectl and helm commands target the specified namespace (so kubectl create effectively runs kubectl create --namespace my-microstream-cluster).
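The switch can be verified by printing the namespace stored in the active context (a small helper; the jsonpath query just pulls the namespace field out of the minified kubectl config):

```shell
# Print the namespace of the currently active kubectl context.
current_namespace() {
  kubectl config view --minify --output 'jsonpath={..namespace}'
}
```

After the step above, current_namespace should print my-microstream-cluster.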

3. Create the docker registry credential secret

kubectl create secret docker-registry microstream-ocir-credentials --docker-server='https://ocir.microstream.one/onprem' --docker-username='DOCKER_USER' --docker-password='DOCKER_PASSWORD'
  • If this fails, our container registry might be down, or the credentials may be invalid or lack access.

  • Replace "DOCKER_USER" and "DOCKER_PASSWORD" with your own credentials.

If you don’t have the credentials yet and you want to try out the on-prem cluster, you can write us an email at hello@microstream.one or contact us via any other method listed on our website.

4. Save the latest version in a variable

VERSION=$(curl --silent https://api.github.com/repos/microstream-one/microstream-cluster-kubernetes-files/releases/latest | grep tag_name | awk -F'"' '{print $4}')-helm
  • There might be problems reaching our server (for example, if it does not exist or does not respond). This matters for all the following steps, as all required resources are pulled from that server. On success the command prints nothing.
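The extraction pipeline can be sanity-checked offline: given the release-payload line containing tag_name, the grep/awk combination prints the value between the second pair of quotes. A minimal simulation (the sample JSON line and tag value are illustrative):

```shell
# Simulate one line of the GitHub API response and run the same extraction.
SAMPLE='  "tag_name": "v1.2.3",'
TAG=$(printf '%s\n' "$SAMPLE" | grep tag_name | awk -F'"' '{print $4}')
echo "${TAG}-helm"   # prints: v1.2.3-helm (this step appends "-helm" to the tag)
```

After running the real command, echo "$VERSION" should print a non-empty tag ending in -helm; an empty value means the API call failed.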

5. Install the helm chart

helm install my-microstream-cluster "https://github.com/microstream-one/microstream-cluster-kubernetes-files/archive/refs/tags/$VERSION.tar.gz"
  • The helm output will explain what went wrong, if anything. If everything worked, we can move on to the next steps.

Installing with custom configuration (optional)

You can customize the configuration with --set. To print all available values, execute

helm show values "https://github.com/microstream-one/microstream-cluster-kubernetes-files/archive/refs/tags/$VERSION.tar.gz"
  • For example, if you want to start 3 storage nodes instead of 2, append --set storageNode.replicas=3 to the helm install command.

Strimzi will be deployed as a dependency. If you have your own cluster-wide strimzi-operator installed, you can disable the dependency with --set strimzi-kafka-operator.enabled=false.
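When several options change, the --set flags can also be collected in a values file (a sketch; the two keys shown are the ones mentioned above, and my-values.yaml is an arbitrary file name):

```shell
# Collect the overrides in a values file instead of repeating --set flags.
cat > my-values.yaml <<'EOF'
storageNode:
  replicas: 3        # start 3 storage nodes instead of 2
strimzi-kafka-operator:
  enabled: false     # skip the bundled Strimzi if you run your own operator
EOF
```

Then pass -f my-values.yaml to the helm install command in place of the individual --set flags.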

6. Upload your application

  • All that’s left is to import your application. Import the jar file with the following command, substituting my-microstream-cluster-es-cluster-master-node-d69f5c79c-lvxxj with the name of your masternode pod. You can find the masternode pod name by executing kubectl get pod -l app.kubernetes.io/component=master-node:

kubectl cp -c prepare-masternode /path/to/app my-microstream-cluster-es-cluster-master-node-d69f5c79c-lvxxj:/app/application.jar
  • Import the libs folder (if you have one; fat jars don’t have a libs folder) with:

kubectl cp -c prepare-masternode /path/to/libs my-microstream-cluster-es-cluster-master-node-d69f5c79c-lvxxj:/app
  • Tell the masternode that your project is good to go with:

kubectl exec -ti -c prepare-masternode pod/my-microstream-cluster-es-cluster-master-node-d69f5c79c-lvxxj -- touch /app/ready
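Because the generated pod name changes on every rollout, the commands above can be wrapped in small helpers that resolve the name through the label selector instead of hard-coding it (a sketch; master_pod and upload_app are our own helper names):

```shell
# Resolve the masternode pod name from its label instead of copying it by hand.
master_pod() {
  kubectl get pod -l app.kubernetes.io/component=master-node \
    -o jsonpath='{.items[0].metadata.name}'
}

# Copy the application jar into the pod and mark the project as ready.
upload_app() {   # usage: upload_app /path/to/app.jar
  pod=$(master_pod) || return 1
  kubectl cp -c prepare-masternode "$1" "$pod:/app/application.jar"
  kubectl exec -c prepare-masternode "pod/$pod" -- touch /app/ready
}
```

If your build has a libs folder, add a matching kubectl cp of the directory to /app before the touch.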

Check that the created resource exists with kubectl get pod. It is fully up and running when it shows Ready 1/1. If something goes wrong, check the event log (at the bottom of the output) with kubectl describe pod/PODNAME.
Check the stdout/stderr logs of the init container with kubectl logs -c prepare-masternode pod/PODNAME.
Check the normal logs with kubectl logs -c masternode pod/PODNAME.

Using kubectl

1. Create the namespace to contain all cluster resources

kubectl create namespace my-microstream-cluster
  • If the output reports that the namespace was created, all is good. Otherwise there may be a permission, I/O, or other problem on your side.

2. Switch into the namespace

kubectl config set-context --namespace='my-microstream-cluster' --current
  • This makes subsequent kubectl commands target the specified namespace (so kubectl create effectively runs kubectl create --namespace my-microstream-cluster).

3. Save the latest version in a variable

VERSION=$(curl --silent https://api.github.com/repos/microstream-one/microstream-cluster-kubernetes-files/releases/latest | grep tag_name | awk -F'"' '{print $4}')
  • There might be problems reaching our server (for example, if it does not exist or does not respond). This matters for all the following steps, as all required resources are pulled from that server. On success the command prints nothing.

4. Create the docker registry credential secret

kubectl create secret docker-registry microstream-ocir-credentials --docker-server='https://ocir.microstream.one/onprem' --docker-username='DOCKER_USER' --docker-password='DOCKER_PASSWORD'
  • If this fails, our container registry might be down, or the credentials may be invalid or lack access.

  • Replace "DOCKER_USER" and "DOCKER_PASSWORD" with your own credentials.

If you don’t have the credentials yet and you want to try out the on-prem cluster, you can write us an email at hello@microstream.one or contact us via any other method listed on our website.

5. Apply the Kubernetes Resources

For the Kafka cluster you can either use your own Kafka deployment or use Strimzi. We provide the required Strimzi resources with the kafka.yaml file and the client configurations in kafka-config.yaml.

5.1 Install the strimzi-kafka-operator

  • Install the Strimzi CRDs by downloading this file and running

kubectl apply -f strimzi-crds-0.45.2.yaml
  • Install the strimzi-kafka-operator by downloading this file and running

kubectl apply -f strimzi-cluster-operator-0.45.2.yaml

5.2 Install the cluster

  • Install the Kafka cluster for the message distribution by running

kubectl apply -f "https://raw.githubusercontent.com/microstream-one/microstream-cluster-kubernetes-files/refs/tags/$VERSION/kafka.yaml"
  • Install the Kafka client configuration by running

kubectl apply -f "https://raw.githubusercontent.com/microstream-one/microstream-cluster-kubernetes-files/refs/tags/$VERSION/kafka-config.yaml"
  • Then install the actual eclipse-store cluster by running

kubectl apply -f "https://raw.githubusercontent.com/microstream-one/microstream-cluster-kubernetes-files/refs/tags/$VERSION/cluster.yaml"

Applying these resources deploys all the necessary cluster components (storage-node deployments, pods, services, persistent volume claims, etc.), which run the programs, configure the routing between nodes, and persist the storage data.

For the Kafka Deployment:

Check that the created resource exists with kubectl get pod. It’s fully up and running when it says Ready 1/1.
If something goes wrong, check the event log with kubectl describe pod/PODNAME (at the bottom).
Check the stdout/stderr logs with kubectl logs pod/PODNAME.

6. Upload your application

  • All that’s left is to import your application. Import the jar file with the following command, substituting masternode-7964c6f844-zntt8 with the name of your masternode pod. You can find the masternode pod name by executing kubectl get pod -l microstream.one/cluster-component=masternode:

kubectl cp -c prepare-masternode /path/to/app masternode-7964c6f844-zntt8:/app/application.jar
  • Import the libs folder (if you have one; fat jars don’t have a libs folder) with:

kubectl cp -c prepare-masternode /path/to/libs masternode-7964c6f844-zntt8:/app
  • Tell the masternode that your project is good to go with:

kubectl exec -ti -c prepare-masternode pod/masternode-7964c6f844-zntt8 -- touch /app/ready

Check that the created resource exists with kubectl get pod. It is fully up and running when it shows Ready 1/1. If something goes wrong, check the event log (at the bottom of the output) with kubectl describe pod/masternode-7964c6f844-zntt8.
Check the stdout/stderr logs of the init container with kubectl logs -c prepare-masternode pod/masternode-7964c6f844-zntt8.
Check the normal logs with kubectl logs -c masternode pod/masternode-7964c6f844-zntt8.

Accessing the Cluster

After the installation is complete, wait for all the nodes to be ready. Check the current status with kubectl get pod, or watch for changes with kubectl get pod -w.
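The polling can also be replaced by a single blocking command (a sketch; the 300-second timeout is an arbitrary choice):

```shell
# Block until every pod in the current namespace reports Ready,
# or give up after 5 minutes. Wrapped in a function for easy reuse.
wait_ready() {
  kubectl wait --for=condition=Ready pod --all --timeout=300s
}
```

Calling wait_ready returns once all pods are Ready, or with a non-zero exit code on timeout.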

All traffic needs to go through the proxy, so preferably an Ingress should be created for it. To test the cluster on the local machine, the proxy service can be forwarded with kubectl port-forward. The proxy service name can be found with kubectl get service; in the case of the Helm installation it would be kubectl port-forward svc/my-microstream-cluster-es-cluster-proxy 8080:80.

Now any REST requests sent to http://localhost:8080/ will reach the cluster.
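With the port-forward running, requests can be issued with curl; for convenience, a tiny helper (cluster_get is our own name, and the request path depends entirely on the REST endpoints your application exposes):

```shell
# GET a path on the forwarded cluster proxy. No fixed path is assumed here;
# it is whatever REST endpoint your own application serves.
cluster_get() {
  curl --silent "http://localhost:8080/$1"
}
# example: cluster_get my/endpoint
```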