6.3.25 By Anton Kozin
Deploying a production-ready database in a cloud environment requires more than basic functionality: it demands scalable architecture, redundancy, and automated recovery mechanisms to ensure high availability. This article demonstrates how to deploy Bitnami's MongoDB Helm chart on Kubernetes, leveraging a StatefulSet with strict pod anti-affinity rules, persistent storage, and replica set configuration to achieve fault tolerance. By coordinating primary and secondary nodes with asynchronous oplog replication, the setup ensures data durability even during node failures. The guide bridges theory and practice, explaining how Kubernetes-native features like podAntiAffinityPreset: hard and automated failover safeguard against outages. Follow this walkthrough to implement a resilient, enterprise-grade MongoDB cluster optimized for real-world demands.
Deploying a redundant, scalable MongoDB cluster on Kubernetes is critical for modern applications but often involves navigating complex configurations and subtle pitfalls. This guide simplifies the process by combining practical, step-by-step instructions with foundational concepts like replica sets and anti-affinity rules.
First, install the Helm CLI (shown here via Homebrew on macOS) and add the Bitnami chart repository:

brew install helm
helm version
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
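To confirm the repository was added correctly, you can search it for the MongoDB chart (the versions listed will depend on when you run this):

helm search repo bitnami/mongodb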
Next, create a values.yaml that enables the replica set architecture, authentication, persistent storage, resource limits, and hard pod anti-affinity:

architecture: replicaset
replicaCount: 2
auth:
  rootPassword: "rootP@ss"
  username: "admin"
  password: "p@ssword"
  database: "my-db"
persistence:
  enabled: true
  size: 8Gi
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 256Mi
podAntiAffinityPreset: hard
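Before installing anything, you can render the manifests locally to double-check what these values produce; helm template prints the generated YAML to stdout (the release name my-mongodb here simply mirrors the install command used later):

helm template my-mongodb -f values.yaml bitnami/mongodb --namespace mongo-ns

Searching that output for podAntiAffinity reveals the scheduling rule discussed next.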
The podAntiAffinityPreset setting
The podAntiAffinityPreset: hard configuration in the Helm chart ensures that no two MongoDB pods from the same deployment run on the same Kubernetes node, provided enough nodes are available. This is critical for high availability: if a node fails, only one replica is affected, preserving redundancy. For this setup with replicaCount: 2, the preset translates to requiredDuringSchedulingIgnoredDuringExecution anti-affinity, meaning the scheduler must place the pods on separate nodes; if too few nodes exist, the excess pods remain unscheduled (two replicas require at least two nodes). This does not guarantee "one pod per node" (nodes can still host other applications), but it does ensure the MongoDB replicas are distributed for maximum fault tolerance.
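For reference, the hard preset expands to roughly the following fragment in the StatefulSet's pod template. This is a simplified sketch: the exact matchLabels depend on the chart version and release name, so treat the values below as illustrative rather than exact:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      # forbid co-scheduling with any pod carrying the same chart labels
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: mongodb
            app.kubernetes.io/instance: my-mongodb
        # "one node" is defined by hostname, i.e. at most one replica per node
        topologyKey: kubernetes.io/hostname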
Finally, create a dedicated namespace and install the chart with the custom values:

kubectl create namespace mongo-ns
helm install my-mongodb -f values.yaml bitnami/mongodb --namespace mongo-ns
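Once the release is up, it is worth verifying both scheduling and replication. A minimal check, assuming the default Bitnami pod naming (my-mongodb-0, my-mongodb-1) and the root credentials from values.yaml:

kubectl get pods --namespace mongo-ns -o wide
kubectl exec -it my-mongodb-0 --namespace mongo-ns -- \
  mongosh -u root -p rootP@ss --authenticationDatabase admin \
  --eval "rs.status().members.forEach(m => print(m.name, m.stateStr))"

The -o wide output includes a NODE column, which should show a different node for each MongoDB pod, and rs.status() should report one PRIMARY with the remaining members as SECONDARY.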