Node Pools

The NodePool custom resource (CR) lets you manage groups of Redpanda brokers as independent units within a single cluster. Each NodePool creates its own StatefulSet, giving you fine-grained control over broker placement, resources, and lifecycle. This is especially useful for blue/green migrations, vertical scaling, and running brokers on different hardware configurations.

After reading this page, you will be able to:

  • Describe what NodePool CRs are and when to use them

  • Enable the NodePool CRD on the Redpanda Operator

  • Create a NodePool to manage a group of brokers

The NodePool CRD is a beta feature. You must enable experimental CRDs and the --enable-v2-nodepools operator flag to use it.

How NodePools work

A NodePool is a Kubernetes custom resource (cluster.redpanda.com/v1alpha2, kind: NodePool) that defines a group of Redpanda brokers with shared configuration. Each NodePool:

  • Creates and manages its own StatefulSet.

  • Names Pods after the pool. For example, a NodePool named pool1 with three replicas creates Pods redpanda-pool1-0, redpanda-pool1-1, and redpanda-pool1-2.

  • Can target specific Kubernetes nodes using nodeSelector and tolerations.

  • Specifies its own replica count independent of other pools.

When you use NodePools, the Redpanda CR delegates broker management to the NodePool CRs. Set statefulset.replicas to 0 in the Redpanda CR so that NodePool CRs control the broker count.
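The Pod-naming convention described above is mechanical, so it can be sketched with a short shell loop. The pool name and replica count here are illustrative:

```shell
# Sketch: expected Pod names for a NodePool, following the
# redpanda-<pool>-<ordinal> convention described above.
pool=pool1      # illustrative NodePool name
replicas=3      # illustrative replica count
for i in $(seq 0 $((replicas - 1))); do
  echo "redpanda-${pool}-${i}"
done
# → redpanda-pool1-0, redpanda-pool1-1, redpanda-pool1-2
```

Knowing this pattern is useful when writing label selectors or scripts that need to address a specific pool's brokers.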

When to use NodePools

NodePools are useful when you need to:

  • Migrate brokers to new nodes: Perform blue/green migrations by creating a new NodePool on the target node pool, then scaling down the old NodePool. The Redpanda Operator handles decommissioning automatically. See Migrate Node Pools.

  • Scale vertically: Move brokers to nodes with more CPU, memory, or storage without manual decommission and re-provision steps.

  • Upgrade Kubernetes: Migrate to a new Kubernetes node pool running a newer version while maintaining cluster availability.

Always scale up before scaling down

Never create and delete NodePools simultaneously. Because Kubernetes is eventually consistent, creating and deleting NodePools at the same time could result in the cluster being deleted and recreated. Always ensure the new NodePool is fully stable before removing the old one.

When migrating between NodePools, always scale up the new NodePool and verify that all new brokers are healthy before scaling down or deleting the old one. There are two supported approaches:

  • Big bang: Create all replicas in the new NodePool, wait for the cluster to stabilize with 2x the usual broker count, then delete the old NodePool. This is the approach described in Migrate Node Pools.

  • Incremental: Scale the new NodePool up by one broker, wait for it to join and stabilize, then scale the old NodePool down by one broker. Repeat until all brokers have migrated. This approach uses fewer temporary resources but takes longer.

With either approach, always confirm stability before removing any broker. Run rpk cluster health and verify zero leaderless and zero under-replicated partitions before each scale-down step.
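As a sketch, a pre-scale-down gate might grep the health report for those conditions. The report below is a sample in the format that `rpk cluster health` prints (shown later on this page), not live output; in practice, pipe the live command instead:

```shell
# Sketch: gate a scale-down step on cluster health.
# health_report holds a sample report; in practice use:
#   health_report=$(rpk cluster health)
health_report=$(cat <<'EOF'
CLUSTER HEALTH OVERVIEW
=======================
Healthy:                          true
Leaderless partitions (0):        []
Under-replicated partitions (0):  []
EOF
)

if echo "$health_report" | grep -q 'Healthy:.*true' \
   && echo "$health_report" | grep -q 'Leaderless partitions (0)' \
   && echo "$health_report" | grep -q 'Under-replicated partitions (0)'; then
  echo "safe to scale down"
else
  echo "cluster not stable; do not scale down" >&2
  exit 1
fi
```

This is a minimal gate; a production script would also check `Nodes down` and retry with a timeout rather than failing immediately.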

Prerequisites

  • Redpanda Operator v26.1.2 or later.

  • The Operator must be deployed with experimental CRDs and the NodePool flag enabled. See Enable NodePools.

Enable NodePools

To enable the NodePool CRD, configure the Redpanda Operator Helm values with experimental CRDs and the --enable-v2-nodepools flag:

operator-values.yaml
image:
  tag: v26.1.2
crds:
  enabled: true
  experimental: true
additionalCmdFlags:
  - "--enable-v2-nodepools"

Deploy or upgrade the Operator with these values:

helm upgrade --install redpanda-controller redpanda/operator \
  --namespace <namespace> --create-namespace \
  --values operator-values.yaml

Next, configure the Redpanda CR to delegate broker management to NodePools by setting statefulset.replicas to 0:

redpanda-cr.yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: Redpanda
metadata:
  name: redpanda
  namespace: <namespace>
spec:
  image: redpandadata/redpanda:v26.1.2
  clusterSpec:
    statefulset:
      replicas: 0 (1)
      budget:
        maxUnavailable: 1
    resources:
      cpu:
        cores: 1
      memory:
        container:
          max: 2Gi
    storage:
      persistentVolume:
        enabled: true
        size: 5Gi
        storageClass: premium-rwo
1 Setting replicas to 0 delegates broker management to NodePool CRs.

Create a NodePool

Define a NodePool CR that specifies the number of replicas and the target Kubernetes nodes:

nodepool-pool1.yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: NodePool
metadata:
  name: pool1
  namespace: <namespace>
spec:
  clusterRef:
    name: redpanda
  replicas: 3
  nodeSelector:
    nodetype: redpanda-pool1 (1)
  tolerations:
    - key: redpanda-pool1 (2)
      effect: NoSchedule
1 Targets Kubernetes nodes labeled nodetype: redpanda-pool1.
2 Allows scheduling on nodes tainted with redpanda-pool1:NoSchedule.

Apply the NodePool:

kubectl apply -f nodepool-pool1.yaml

Verify that the brokers are running:

kubectl get pods -n <namespace> -l app.kubernetes.io/name=redpanda
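For a NodePool named pool1 with three replicas, the output should show three Running Pods named after the pool. The exact READY count depends on how many containers run in each Pod, and the AGE values here are illustrative:

```
NAME               READY   STATUS    RESTARTS   AGE
redpanda-pool1-0   1/1     Running   0          2m
redpanda-pool1-1   1/1     Running   0          2m
redpanda-pool1-2   1/1     Running   0          2m
```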

Control node placement with nodeSelector and tolerations

Use nodeSelector and tolerations together to ensure that each NodePool’s brokers run only on specific Kubernetes nodes. This guarantees uniform hardware for all brokers in a NodePool and prevents other workloads from consuming resources on those nodes.

How it works

nodeSelector

Constrains Pods to Kubernetes nodes with a matching label. The NodePool’s brokers are scheduled only on nodes that have the specified label.

tolerations

Allow Pods to be scheduled on nodes that have a matching taint. When you taint nodes with NoSchedule, only Pods with a matching toleration can run there.

Together, these fields provide two-way isolation:

  • NodePool brokers can only run on the designated nodes (enforced by nodeSelector).

  • No other workloads can run on those nodes (enforced by the NoSchedule taint).

This is especially important when you need to ensure that all brokers in a NodePool run on identical hardware, such as the same machine type in a cloud provider’s Kubernetes node pool.

Label and taint your Kubernetes nodes

Before creating a NodePool, label and taint the target Kubernetes nodes. Each cloud provider handles this differently when you create a node pool:

GKE:

gcloud container node-pools create redpanda-pool1 \
  --cluster <cluster-name> \
  --machine-type e2-standard-8 \
  --node-labels nodetype=redpanda-pool1 \
  --node-taints redpanda-pool1=true:NoSchedule

EKS:

eksctl create nodegroup \
  --cluster <cluster-name> \
  --name redpanda-pool1 \
  --node-type m5.2xlarge \
  --node-labels nodetype=redpanda-pool1 \
  --node-taints redpanda-pool1=true:NoSchedule

AKS:

az aks nodepool add \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --name rppool1 \
  --node-vm-size Standard_D8s_v3 \
  --labels nodetype=redpanda-pool1 \
  --node-taints redpanda-pool1=true:NoSchedule

For existing nodes, you can apply labels and taints manually:

kubectl label nodes <node-name> nodetype=redpanda-pool1
kubectl taint nodes <node-name> redpanda-pool1=true:NoSchedule

Configure the NodePool CR

Reference the labels and taints in your NodePool CR:

apiVersion: cluster.redpanda.com/v1alpha2
kind: NodePool
metadata:
  name: pool1
  namespace: <namespace>
spec:
  clusterRef:
    name: redpanda
  replicas: 3
  nodeSelector:
    nodetype: redpanda-pool1 (1)
  tolerations:
    - key: redpanda-pool1 (2)
      operator: Equal
      value: "true"
      effect: NoSchedule
1 Must match the label applied to the target Kubernetes nodes.
2 Must match the taint key and effect on the target Kubernetes nodes.

Verify node placement

After applying the NodePool, confirm that all Pods are running on the expected nodes:

kubectl get pods -n <namespace> -o wide -l app.kubernetes.io/name=redpanda

The NODE column should show only nodes from the target Kubernetes node pool.
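A sketch of that check against captured NAME/NODE pairs. The node names below are hypothetical; in practice, derive the pairs from the `kubectl get pods -o wide` output:

```shell
# Sketch: flag any broker Pod scheduled outside the target node pool.
# pods_and_nodes holds hypothetical "POD NODE" pairs; in practice,
# extract the NAME and NODE columns from `kubectl get pods -o wide`.
pods_and_nodes=$(cat <<'EOF'
redpanda-pool1-0 node-redpanda-pool1-a
redpanda-pool1-1 node-redpanda-pool1-b
redpanda-pool1-2 node-redpanda-pool1-c
EOF
)

# Any Pod whose node name does not contain the pool name is misplaced.
misplaced=$(echo "$pods_and_nodes" | awk '$2 !~ /redpanda-pool1/ {print $1}')
if [ -z "$misplaced" ]; then
  echo "all Pods are on target nodes"
else
  echo "misplaced Pods: $misplaced" >&2
fi
```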

Migrate an existing cluster to NodePools

If you have an existing Redpanda cluster managed by the Redpanda Operator without NodePools, you must migrate it to use a NodePool CR before you can perform NodePool-based migrations. This process transitions your cluster from operator-managed StatefulSet replicas to NodePool-managed brokers with no downtime.

This migration is a prerequisite for using the NodePool CRD to upgrade or migrate node pools. Complete this process before attempting a blue/green NodePool migration.

Verify cluster health

Confirm the existing cluster is healthy before making changes:

rpk cluster health
rpk cluster info

Note the number of brokers in your cluster. You will use this value for the NodePool replicas field.

Enable NodePool support on the Operator

If you have not already done so, upgrade the Redpanda Operator with experimental CRDs and the NodePool flag enabled. See Enable NodePools.

Update the Redpanda CR

Patch the Redpanda CR to set statefulset.replicas to 0. This delegates broker management to NodePool CRs:

cat <<EOF > patch-replicas.yaml
spec:
  clusterSpec:
    statefulset:
      replicas: 0
EOF
kubectl patch redpanda redpanda -n <namespace> --type merge --patch-file patch-replicas.yaml

Setting replicas to 0 does not shut down your existing brokers. It tells the operator to stop managing the StatefulSet replica count directly, allowing NodePool CRs to take over.

Create a NodePool for the existing brokers

Create a NodePool CR that matches your current cluster configuration. Set replicas to the number of brokers currently running, and use nodeSelector and tolerations that match the nodes your brokers are already running on:

nodepool-pool1.yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: NodePool
metadata:
  name: pool1
  namespace: <namespace>
spec:
  clusterRef:
    name: redpanda
  replicas: 3 (1)
  nodeSelector:
    nodetype: redpanda-pool1 (2)
  tolerations:
    - key: redpanda-pool1 (3)
      effect: NoSchedule
1 Set to the number of brokers in your existing cluster.
2 Replace with the label on your current Kubernetes nodes.
3 Replace with the taint key on your current Kubernetes nodes.

Apply the NodePool:

kubectl apply -f nodepool-pool1.yaml

Verify the migration

Confirm the NodePool has adopted the existing brokers:

kubectl get nodepool -n <namespace>

Verify the brokers are healthy:

rpk cluster health
rpk cluster info

You should see the same number of healthy brokers as before. The cluster is now managed by the NodePool CR and ready for NodePool-based migrations.

Remove NodePools and return to operator-managed replicas

If you no longer need NodePools, you can migrate brokers back to the default StatefulSet managed by the Redpanda CR. This process restores the original replica count in the Redpanda CR, waits for the new brokers to stabilize, then removes the NodePool CR. The operator decommissions the NodePool brokers one by one.

Restore replicas in the Redpanda CR

Set statefulset.replicas in the Redpanda CR back to the desired broker count. For example, if your NodePool has 3 replicas, set the Redpanda CR to 3:

cat <<EOF > patch-replicas.yaml
spec:
  clusterSpec:
    statefulset:
      replicas: 3
EOF
kubectl patch redpanda redpanda -n <namespace> --type merge --patch-file patch-replicas.yaml

The operator creates new brokers in the default StatefulSet. The cluster temporarily runs 2x the usual number of brokers (for example, 6 brokers: 3 from the NodePool and 3 from the Redpanda CR).

Wait for all brokers to stabilize

Wait until all brokers are healthy and data is fully replicated across the cluster:

rpk cluster health
rpk cluster info

Expected output (6 brokers):
CLUSTER HEALTH OVERVIEW
=======================
Healthy:                          true
Unhealthy reasons:                []
Controller ID:                    0
All nodes:                        [0 1 2 3 4 5]
Nodes down:                       []
Leaderless partitions (0):        []
Under-replicated partitions (0):  []

Do not proceed until the cluster reports as healthy with all brokers online.

Delete the NodePool CR

Delete the NodePool CR. The operator decommissions the NodePool brokers one by one, draining partitions to the Redpanda CR-managed brokers:

kubectl delete nodepool pool1 -n <namespace>

Monitor the decommission progress:

rpk redpanda admin brokers list

Wait until only the Redpanda CR-managed brokers remain.
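As a sketch, that wait condition can be expressed as a check for pool-named brokers in the listing. The hostnames below are a hypothetical capture; in practice, pipe the live `rpk redpanda admin brokers list` output:

```shell
# Sketch: detect whether any NodePool brokers are still present.
# listing holds hypothetical captured broker hostnames; here one
# NodePool broker (redpanda-pool1-2) has not yet been decommissioned.
listing=$(cat <<'EOF'
redpanda-0
redpanda-1
redpanda-2
redpanda-pool1-2
EOF
)

if echo "$listing" | grep -q 'redpanda-pool1-'; then
  echo "NodePool brokers still decommissioning; keep waiting"
else
  echo "only Redpanda CR-managed brokers remain"
fi
```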

Verify the cluster

Confirm that the cluster is healthy and running only the expected brokers:

rpk cluster health
rpk cluster info

The cluster is now fully managed by the Redpanda CR without NodePools.

Cluster maintenance

For ongoing management of brokers in your cluster, see:

Suggested reading