Installation - Stand alone cluster
1. Create the namespace to contain all cluster resources
kubectl create namespace my-microstream-cluster
-
If the output reports the namespace as created, all is good. If not, there might be a permission, I/O, or other problem on the user side.
2. Switch into the namespace
kubectl config set-context --namespace='my-microstream-cluster' --current
-
This just makes it so that when the user types kubectl commands, they are issued to the specified namespace (so
kubectl create
actually does
kubectl create --namespace my-microstream-cluster
)
3. Save the latest version in a variable
VERSION=$(curl --silent https://api.github.com/repos/microstream-one/microstream-cluster-kubernetes-files/releases/latest | grep tag_name | awk -F'"' '{print $4}')
-
There might be problems reaching our server (if it is down, for example). This step matters for all the following ones, since all required resources are pulled from that release. The command prints nothing when it succeeds.
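To see what that pipeline extracts, you can run it on a canned line from the API response; a minimal sketch (the tag value here is made up):

```shell
# Simulate one line of the GitHub API response and run the same
# grep/awk pipeline. awk splits the line on double quotes, so the
# fourth field is the tag value itself.
response='  "tag_name": "v1.0.0",'
tag=$(echo "$response" | grep tag_name | awk -F'"' '{print $4}')
echo "$tag"   # prints v1.0.0
```

With the real command, echo "$VERSION" should print a tag of this shape; an empty result means the download or the parsing failed.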
4. Create the docker registry credential secret
kubectl create secret docker-registry microstream-ocir-credentials --docker-server='https://ocir.microstream.one/onprem' --docker-username='DOCKER_USER' --docker-password='DOCKER_PASSWORD'
-
If this fails, our ocir might be down, the user might not have access, or the username/password is wrong.
-
Replace "DOCKER_USER" and "DOCKER_PASSWORD" with your own credentials.
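To verify the secret actually stores the credentials you expect, you can decode it back out; a sketch using the standard jsonpath trick for docker-registry secrets (skips when no cluster is reachable):

```shell
# Decode the docker config back out of the secret to verify that the
# server and username landed correctly. Needs a reachable cluster.
if kubectl get secret microstream-ocir-credentials >/dev/null 2>&1; then
  kubectl get secret microstream-ocir-credentials \
    --output 'jsonpath={.data.\.dockerconfigjson}' | base64 --decode
  echo
  checked=decoded
else
  echo "no cluster reachable; skipping"
  checked=skipped
fi
```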
5. Create the kafka cluster
Distributes data from the writer storage node to other storage nodes and the master node
-
Create the kafka service. This allows the storage and master pods to communicate with kafka
kubectl apply -f "https://raw.githubusercontent.com/microstream-one/microstream-cluster-kubernetes-files/refs/tags/${VERSION}/kafka_service.yaml"
Check that the created resource exists with kubectl get service
-
Create the kafka statefulset. This uses our wrapped kafka image that converts envars to config values and interprets some envar template things
kubectl apply -f "https://raw.githubusercontent.com/microstream-one/microstream-cluster-kubernetes-files/refs/tags/${VERSION}/kafka_statefulset.yaml"
Check that the created resource exists with kubectl get pod. It's fully up and running when it says Ready 1/1.
If something goes wrong, check the event log with kubectl describe statefulset/kafka (at the bottom).
Check the stdout/stderr logs with kubectl logs statefulset/kafka.
Check that the pvc is created and ok with kubectl get pvc and the event log with kubectl describe pvc/logs-kafka-0.
-
Now we need to create the topic in kafka. The default topic name is "storage-data".
kubectl exec pod/kafka-0 -ti -- /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic storage-data --create
This command will spit out an error if something goes wrong. If it succeeds it’s all good!
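If you want to double-check the topic before moving on, the same script can list and describe it; a sketch, assuming the kafka-0 pod from above is up (skips when no cluster is reachable):

```shell
# List all topics, then show partition/replica details for storage-data.
if kubectl get pod/kafka-0 >/dev/null 2>&1; then
  kubectl exec pod/kafka-0 -- /opt/kafka/bin/kafka-topics.sh \
    --bootstrap-server localhost:9092 --list
  kubectl exec pod/kafka-0 -- /opt/kafka/bin/kafka-topics.sh \
    --bootstrap-server localhost:9092 --topic storage-data --describe
  checked=described
else
  echo "no cluster reachable; skipping"
  checked=skipped
fi
```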
6. Create the master node
-
This holds the storage in a persistent volume and provides it to newly started storage nodes.
-
Create the masternode storage persistent volume claim. This creates a dynamic persistent volume handle with the given storage capacity in the pre-configured storage backend. Note: It must support ReadWriteMany.
kubectl apply -f "https://raw.githubusercontent.com/microstream-one/microstream-cluster-kubernetes-files/refs/tags/${VERSION}/masternode_pvclaim.yaml"
Check that the created resource exists with kubectl get pvc. Check there are no errors in the event log with kubectl describe pvc/masternode-storage.
-
Create the masternode pod. It keeps the storage up to date by pulling updates from kafka.
kubectl apply -f "https://raw.githubusercontent.com/microstream-one/microstream-cluster-kubernetes-files/refs/tags/${VERSION}/masternode_pod.yaml"
-
Import your REST service application. Import the jar file with:
kubectl cp -c prepare-masternode /path/to/jar masternode:/storage/project/project.jar
-
Import the libs folder (if you have one; fat jars don't have a libs folder) with:
kubectl cp -c prepare-masternode /path/to/libs masternode:/storage/project
-
Tell the masternode that the user project is good to go with:
kubectl exec -ti -c prepare-masternode pod/masternode -- touch /storage/project/ready
Check that the created resource exists with kubectl get pod. It's fully up and running when it says Ready 1/1.
If something goes wrong, check the event log with kubectl describe pod/masternode (at the bottom).
Check the stdout/stderr logs of the init container with kubectl logs -c prepare-masternode pod/masternode.
Check the normal logs with kubectl logs -c masternode pod/masternode.
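Instead of polling kubectl get pod by hand, you can block until the masternode reports Ready; a sketch (the 10-minute timeout is an arbitrary choice, and the check skips when no cluster is reachable):

```shell
# Block until the masternode pod passes its readiness checks.
if kubectl get pod/masternode >/dev/null 2>&1; then
  kubectl wait --for=condition=Ready pod/masternode --timeout=600s
  waited=done
else
  echo "no cluster reachable; skipping"
  waited=skipped
fi
```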
7. Create the storage nodes
-
These hold copies of the storage data and process the read and write requests. One special node is a writer node that processes all write requests.
-
Create the storagenode and storagenode-headless service. These are accessed by the proxy pods.
kubectl apply -f "https://raw.githubusercontent.com/microstream-one/microstream-cluster-kubernetes-files/refs/tags/${VERSION}/storagenode_service.yaml"
Check that the created resources exist with kubectl get service. There should be a storagenode-headless and a storagenode service.
-
Create the storagenode statefulset. This defines the replica count for the storagenode pods.
kubectl apply -f "https://raw.githubusercontent.com/microstream-one/microstream-cluster-kubernetes-files/refs/tags/${VERSION}/storagenode_statefulset.yaml"
Check that the created resource exists with kubectl get pod. It's fully up and running when it says Ready 1/1.
If something goes wrong, check the event log with kubectl describe statefulset/storagenode (at the bottom).
Check the stdout/stderr logs of the init container with kubectl logs -c prepare-storagenode statefulset/storagenode.
Check the normal logs with kubectl logs -c storagenode statefulset/storagenode.
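Likewise, you can wait for the whole statefulset instead of watching individual pods; a sketch using kubectl rollout status (the timeout is an arbitrary choice, and the check skips when no cluster is reachable):

```shell
# Wait until every storagenode replica is rolled out and Ready.
if kubectl get statefulset/storagenode >/dev/null 2>&1; then
  kubectl rollout status statefulset/storagenode --timeout=600s
  waited=done
else
  echo "no cluster reachable; skipping"
  waited=skipped
fi
```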
8. Create the proxy
Use this service to access your cluster.
-
Create the nginx config as a configmap. Keep in mind that the namespace name appears several times in the config map; if your namespace is named differently, you have to change it there.
kubectl apply -f "https://raw.githubusercontent.com/microstream-one/microstream-cluster-kubernetes-files/refs/tags/${VERSION}/proxy_configmap.yaml"
Check that the created resource exists with kubectl get configmap. Check the contents with kubectl describe configmap/proxy.
-
Create the proxy service. This allows other pods and services to reach the proxy pods
kubectl apply -f "https://raw.githubusercontent.com/microstream-one/microstream-cluster-kubernetes-files/refs/tags/${VERSION}/proxy_service.yaml"
Check that the created resource exists with kubectl get service
-
Create the proxy deployment. Automatically manages the scaling of proxy pods
kubectl apply -f "https://raw.githubusercontent.com/microstream-one/microstream-cluster-kubernetes-files/refs/tags/${VERSION}/proxy_deployment.yaml"
Check that the created resource exists with kubectl get pod. It's fully up and running when it says Ready 1/1.
If something goes wrong, check the event log with kubectl describe deploy/proxy (at the bottom).
Check the stdout/stderr logs with kubectl logs deploy/proxy.
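Once the proxy is Ready, you can smoke-test it from your workstation with a port-forward; a sketch, assuming the service is named proxy and listens on port 80 (check kubectl get service if yours differs; skips when no cluster is reachable):

```shell
# Forward a local port to the proxy service and issue a test request.
if kubectl get service/proxy >/dev/null 2>&1; then
  kubectl port-forward service/proxy 8080:80 >/dev/null 2>&1 &
  pf_pid=$!
  sleep 2                                      # give the tunnel a moment
  curl --silent --include http://localhost:8080/  # any HTTP response means the proxy is up
  kill "$pf_pid"
  checked=requested
else
  echo "no cluster reachable; skipping"
  checked=skipped
fi
```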
9. Create the writerproxy
Stores the current writer and elects a new one when the old one goes down. Proxies requests from the proxy to the storage nodes.
-
Create the writerproxy service. This allows the proxy pods to reach the writerproxy pod
kubectl apply -f "https://raw.githubusercontent.com/microstream-one/microstream-cluster-kubernetes-files/refs/tags/${VERSION}/writerproxy_service.yaml"
-
Create the writerproxy rbac resources. These give the writerproxy access to list all pods in the namespace. This is needed so the writerproxy knows what storagenodes are running.
kubectl apply -f "https://raw.githubusercontent.com/microstream-one/microstream-cluster-kubernetes-files/refs/tags/${VERSION}/writerproxy_rbac.yaml"
Check the resources exist with kubectl get sa,role,rolebinding
-
Create the writerproxy pod
kubectl apply -f "https://raw.githubusercontent.com/microstream-one/microstream-cluster-kubernetes-files/refs/tags/${VERSION}/writerproxy_pod.yaml"
Check that the created resource exists with kubectl get pod. It's fully up and running when it says Ready 1/1.
If something goes wrong, check the event log with kubectl describe pod/writerproxy (at the bottom).
Check the stdout/stderr logs with kubectl logs pod/writerproxy.
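As a final sanity pass over the whole installation, the per-step checks can be rolled into one script; a sketch using the resource names from the steps above (short timeouts are arbitrary, and the script skips when no cluster is reachable):

```shell
# Report readiness of every workload created in this guide.
if kubectl get pod >/dev/null 2>&1; then
  for resource in pod/kafka-0 pod/masternode pod/writerproxy; do
    kubectl wait --for=condition=Ready "$resource" --timeout=60s \
      || echo "NOT READY: $resource"
  done
  kubectl rollout status statefulset/storagenode --timeout=60s \
    || echo "NOT READY: statefulset/storagenode"
  kubectl rollout status deployment/proxy --timeout=60s \
    || echo "NOT READY: deployment/proxy"
  result=checked
else
  echo "no cluster reachable; skipping"
  result=skipped
fi
```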