Deployment
The clustered cache requires Apache Kafka to be available for inter-node communication. Each application node connects to the same Kafka cluster and automatically discovers other cache nodes.
There are many ways to deploy the application, but here we show two common approaches: locally with Docker Compose and in a Kubernetes cluster.
Building the Docker Image
First, build the Maven project:
mvn clean package
Then create the Docker image:
docker build --tag cluster-storage-demo:1.0.0 .
You can skip this step by using the prebuilt Docker image mstoer/cluster-storage-demo:1.1.0.
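To use the prebuilt image, pull it from Docker Hub instead of building locally:

```shell
# Fetch the prebuilt demo image
docker pull mstoer/cluster-storage-demo:1.1.0
```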
Deploying to Local Docker
Deploying to a local Docker environment is a quick and easy way to test the clustered cache without needing any online services.
Docker Compose File
services:
  postgres:
    image: postgres:latest
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: "mysecretpassword"
  kafka:
    image: apache/kafka:latest
    ports:
      - "9092:9092"
    environment:
      KAFKA_NODE_ID: "1"
      KAFKA_PROCESS_ROLES: "broker,controller"
      KAFKA_LISTENERS: "PLAINTEXT://:9092,CONTROLLER://:9093"
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092"
      KAFKA_CONTROLLER_LISTENER_NAMES: "CONTROLLER"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT"
      KAFKA_CONTROLLER_QUORUM_VOTERS: "1@kafka:9093"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: "1"
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: "1"
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: "1"
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: "0"
      KAFKA_NUM_PARTITIONS: "1"
  app:
    deploy:
      mode: replicated
      replicas: 3
    image: <your-application-image>
    ports:
      - ":8080"
    environment:
      DATASOURCES_DEFAULT_PASSWORD: "mysecretpassword"
      DATASOURCES_DEFAULT_URL: "jdbc:postgresql://postgres:5432/mydb"
      KAFKA_BOOTSTRAP_SERVERS: "kafka:9092"
To use a different image or add environment variables, edit the app service in the docker-compose.yaml file.
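With the compose file in place, start the stack from the directory containing docker-compose.yaml:

```shell
# Start PostgreSQL, Kafka, and the three app replicas in the background
docker-compose up -d
```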
Create the Database
The PostgreSQL database needs to be created manually. Open a shell inside the PostgreSQL container:
docker exec -ti <postgres-container-name> sh
su - postgres
createdb <your-database-name>
Press CTRL+D twice to exit the session.
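The database can also be created in a single, non-interactive command. This sketch assumes the default postgres superuser and the database name mydb from the JDBC URL in the compose file:

```shell
# Run createdb as the postgres user inside the container
docker exec -u postgres <postgres-container-name> createdb mydb
```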
Restart the Application Containers
Because the database did not yet exist when the stack first started, the application nodes may have crashed. Restart them by running:
docker-compose up -d
Find the Application Ports
Because the compose file maps the container port without pinning a host port, each application node is published on a random host port. To find out which ports they are running on:
docker container ls
Look for the app containers in the output.
The PORTS column shows the host port on the left side and the container port (always 8080) on the right side.
Send REST requests to the host port to reach the application.
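As a quick smoke test, send a request to one of the mapped ports. The REST paths are application-specific, so a request to the root path is shown here; replace 32768 with a host port from the previous step:

```shell
# Replace 32768 with the host port shown by `docker container ls`
curl -i http://localhost:32768/
```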
Deploying to Kubernetes
For Kubernetes deployments, a StatefulSet is used for the application nodes to ensure stable network identities.
If you built your own Docker image, upload it to an image registry and update the image reference in the deployment manifest at spec.template.spec.containers[].image.
Apply the Deployment Manifest
kubectl apply -f deployment.yaml
This deploys all necessary Kubernetes resources including PostgreSQL, Kafka, and the application nodes.
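To verify that everything came up, watch the pods until PostgreSQL, Kafka, and the application nodes are running:

```shell
# Watch pod status; press CTRL+C once all pods are Running
kubectl get pods -w
```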
Create the Database
Once the PostgreSQL pod is running, create the database:
kubectl exec -ti deploy/postgres -- sh
su - postgres
createdb <your-database-name>
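As with Docker, this can be done non-interactively in one command. The sketch assumes the postgres superuser and the database name mydb used in the Docker Compose setup:

```shell
# Create the database without opening an interactive shell
kubectl exec deploy/postgres -- su - postgres -c "createdb mydb"
```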
Restart Crashed Application Pods
The application pods may have crashed while trying to connect to the non-existent database. Delete them to trigger a restart:
kubectl delete pod node-0 node-1 node-2
Access the Application Nodes
Use port-forwarding to connect to individual cache nodes:
# Connect to node-0
kubectl port-forward pod/node-0 8080:8080
# Connect to node-1
kubectl port-forward pod/node-1 8081:8080
# Connect to node-2
kubectl port-forward pod/node-2 8082:8080
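With the port-forwards running in separate terminals, each node is reachable on its own local port. The REST paths are application-specific, so requests to the root path are shown:

```shell
curl -i http://localhost:8080/   # node-0
curl -i http://localhost:8081/   # node-1
curl -i http://localhost:8082/   # node-2
```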
Ensure that the Kafka bootstrap.servers address is reachable from all application pods.
In Kubernetes, use the full service DNS name (e.g., kafka-0.kafka.default.svc.cluster.local:9092).