6.3.25 By Anton Kozin

Abstract:

Deploying a production-ready database in a cloud environment requires more than basic functionality—it demands scalable architecture, redundancy, and automated recovery mechanisms to ensure high availability. This article demonstrates how to deploy Bitnami’s MongoDB Helm chart on Kubernetes, leveraging a StatefulSet with strict pod anti-affinity rules, persistent storage, and replica set configuration to achieve fault tolerance. By coordinating primary and secondary nodes with asynchronous oplog replication, the setup ensures data durability even during node failures. The guide bridges theory and practice, explaining how Kubernetes-native features like podAntiAffinityPreset: hard and automated failover safeguard against outages. Follow this walkthrough to implement a resilient, enterprise-grade MongoDB cluster optimized for real-world demands.

Why should you care:

Deploying a redundant, scalable MongoDB cluster on Kubernetes is critical for modern applications but often involves navigating complex configurations and subtle pitfalls. This guide simplifies the process by combining practical, step-by-step instructions with foundational concepts like replica sets and anti-affinity rules.

Install Helm:

brew install helm
helm version
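
The brew command assumes macOS. On Linux, the Helm project's official installer script (maintained upstream, not part of this guide) is one alternative:

# download and run Helm's official install script
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash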

Get the chart:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
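
To confirm the chart is now visible locally before installing anything, you can search the repo:

helm search repo bitnami/mongodb

This should list the mongodb chart (and its sharded variant) along with the chart and app versions currently available.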

Create a values.yaml configuration file:

architecture: replicaset
replicaCount: 2
auth:
  rootPassword: "rootP@ss"
  username: "admin"
  password: "p@ssword"
  database: "my-db"
persistence:
  enabled: true
  size: 8Gi
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 256Mi
podAntiAffinityPreset: hard

On the podAntiAffinityPreset setting:

The podAntiAffinityPreset: hard setting in values.yaml ensures that no two MongoDB pods from the same release run on the same Kubernetes node, provided enough nodes are available. This is critical for high availability: if a node fails, only one replica is affected, preserving redundancy. For this setup with replicaCount: 2, this means:

  1. Strict Placement: Kubernetes uses requiredDuringSchedulingIgnoredDuringExecution anti-affinity, meaning the scheduler must place pods on separate nodes. If insufficient nodes exist, excess pods will remain unscheduled (e.g., 2 replicas require at least 2 nodes).
  2. Node-Level Isolation: Prevents "double failure" scenarios where a single node outage could take down multiple MongoDB instances.

This does not guarantee "one pod per node" (nodes can host other applications), but it ensures MongoDB replicas are distributed for maximum fault tolerance.
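
For reference, the hard preset renders into a required pod anti-affinity rule on the generated StatefulSet that looks roughly like the sketch below. The exact labels depend on the chart version and release name; this assumes the release is called my-mongodb, as in the install command further down.

# approximate anti-affinity produced by podAntiAffinityPreset: hard
affinity:
  podAntiAffinity:
    # "hard" means required (not merely preferred) during scheduling
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: mongodb
            app.kubernetes.io/instance: my-mongodb
        # per-node spreading: matching pods may not share a hostname
        topologyKey: kubernetes.io/hostname

The topologyKey of kubernetes.io/hostname is what makes the rule operate per node: two pods matching the selector can never be scheduled onto the same node.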

Deploy (install) the chart to the cluster:

kubectl create namespace mongo-ns
helm install my-mongodb -f values.yaml bitnami/mongodb --namespace mongo-ns
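
To follow the rollout, and to confirm that the anti-affinity rule actually spread the replicas across nodes, plain kubectl is enough; the -o wide output shows which node each pod landed on:

kubectl get pods -n mongo-ns --watch
kubectl get pods -n mongo-ns -o wide

Note that with the replicaset architecture the Bitnami chart may also schedule an arbiter pod alongside the two data-bearing replicas, depending on the chart version and its defaults.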

Wait a few minutes until the pods start.

When the pods are ready, you should see something like this: