Manage Pod Resources in Kubernetes

Managing Pod resources, such as CPU, memory, and storage, is critical for ensuring that your Redpanda cluster runs well in Kubernetes. In this guide, you’ll learn how to define and configure resource requests and limits using the Redpanda Helm chart and Redpanda CRD. Setting these values before deployment helps guarantee predictable scheduling, enforces the Guaranteed QoS classification, and minimizes the risk of performance issues such as resource contention or unexpected evictions.

Prerequisites

  • See Kubernetes Cluster Requirements and Recommendations for the minimum worker node, memory, CPU, and storage requirements.

  • Make sure that your physical or virtual machines have enough resources available for Redpanda. To see the available resources on the worker nodes that you have provisioned, run:

    kubectl describe nodes
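
    For a quick summary of the allocatable CPU and memory on each node, you can also run a custom-columns query (a minimal example; adjust the columns as needed):

    kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory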

Production considerations

Configure memory resources

When deploying Redpanda, you must reserve sufficient memory for both Redpanda and other system processes. Redpanda uses the Seastar framework to manage memory through two important flags. In Kubernetes, the values of these flags are usually set for you, depending on how you configure the Redpanda CRD or the Helm chart.

  • --memory: When set, explicitly defines the Redpanda heap size.

  • --reserve-memory: When set, reserves a specific amount of memory for system overhead. If not set, a reserve is automatically calculated.

Learn more about these Seastar flags

How Redpanda calculates its heap size depends on which of these flags are set:

  • --reserve-memory not set:

    • --memory not set: Heap size = available memory - calculated reserve.

    • --memory set (M): Heap size = exactly M, if M + calculated reserve ≤ available memory. Otherwise, startup fails.

  • --reserve-memory set (R):

    • --memory not set: Heap size = available memory - R.

    • --memory set (M): Heap size = exactly M, if M + R ≤ available memory. Otherwise, startup fails.

Definitions and behavior:

  • Available memory: The memory remaining after subtracting system requirements, such as /proc/sys/vm/min_free_kbytes, from the total or cgroup-limited memory.

  • Calculated reserve: The greater of 1.5 GiB or 7% of available memory. This reserve is used when --reserve-memory is not explicitly set.
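
For illustration, with hypothetical numbers (10 GiB of container memory and neither flag set):

  • Calculated reserve = max(1.5 GiB, 7% × 10 GiB) = 1.5 GiB

  • Heap size ≈ 10 GiB - 1.5 GiB = 8.5 GiB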

These flags are set for you by default when using the Redpanda Helm chart or Operator. However, you can manually set these flags using the statefulset.additionalRedpandaCmdFlags configuration if needed.

Avoid manually setting the --memory and --reserve-memory flags unless absolutely necessary. Incorrect settings can lead to performance issues, instability, or data loss.
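
If you do need to override them despite this warning, they are passed as extra command-line flags in the same list used elsewhere in this guide. For example, with hypothetical values:

statefulset:
  additionalRedpandaCmdFlags:
    - '--memory=8G'          # Hypothetical value: sets the Redpanda heap explicitly
    - '--reserve-memory=0M'  # Hypothetical value: reserves no additional memory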

Legacy behavior (default)

resources.memory.container.max, which allocates:

  • 80% of container memory to --memory

  • 20% to --reserve-memory

This option is deprecated and maintained only for backward compatibility.

Production recommendation

Use resources.requests.memory with a matching resources.limits.memory to provide a predictable, dedicated heap for Redpanda while allowing Kubernetes to schedule Pods and enforce resource limits effectively.

These options ensure:

  • Predictable scheduling: Kubernetes uses memory requests to accurately place Pods on nodes with sufficient resources.

  • Guaranteed QoS: Matching requests and limits ensure the Pod receives the Guaranteed QoS class, reducing eviction risk.

  • Dedicated allocation:

    • 90% of the requested memory is allocated to the Redpanda heap using the --memory flag.

    • The remaining 10% is reserved for other processes such as exec probes, emptyDirs, and kubectl exec, helping prevent transient spikes in memory from causing Redpanda to be killed (OOMKilled).

  • The --reserve-memory flag is fixed at 0 because the kubelet already manages system-level memory reservations.

Helm + Operator

redpanda-cluster.yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: Redpanda
metadata:
  name: redpanda
spec:
  chartRef: {}
  clusterSpec:
    statefulset:
      additionalRedpandaCmdFlags:
        - '--lock-memory' (1)
    resources:
      requests:
        memory: <number><unit> (2)
      limits:
        memory: <number><unit> (3)
kubectl apply -f redpanda-cluster.yaml --namespace <namespace>

Helm

--values

memory.yaml
statefulset:
  additionalRedpandaCmdFlags:
    - '--lock-memory' (1)
resources:
  requests:
    memory: <number><unit> (2)
  limits:
    memory: <number><unit> (3)
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --values memory.yaml --reuse-values

--set

helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set statefulset.additionalRedpandaCmdFlags="{--lock-memory}" \ (1)
  --set resources.requests.memory=<number><unit> \ (2)
  --set resources.limits.memory=<number><unit> (3)
1 Enable memory locking to prevent the operating system from paging out Redpanda’s memory to disk. This can significantly improve performance by ensuring Redpanda has uninterrupted access to its allocated memory.
2 Request at least 2.22 Gi of memory per core to ensure Redpanda has the 2 Gi per core it requires after accounting for the 90% allocation to the --memory flag.

Redpanda supports the following memory resource units: B, K, M, G, Ki, Mi, and Gi.

Memory values are truncated to the nearest whole MiB. For example, a memory request of 1536 KiB (1.5 MiB) results in 1 MiB being allocated. For a description of memory resource units, see the Kubernetes documentation.

3 Set the memory limit to match the memory request. This ensures Kubernetes enforces a strict upper bound on memory usage and helps maintain the Guaranteed QoS classification.
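
For example (hypothetical sizing), a broker with 4 cores would request 4 × 2.22 Gi ≈ 8.88 Gi. 90% of that (about 8 Gi) is passed to --memory, leaving the required 2 Gi per core for the Redpanda heap and roughly 0.88 Gi for other processes in the container.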

When the StatefulSet is deployed, make sure that the memory request and limit are set:

kubectl --namespace=<namespace> get pod <pod-name> -o jsonpath='{.spec.containers[?(@.name=="redpanda")].resources}{"\n"}'

Configure CPU resources

Redpanda uses the Seastar framework to manage CPU usage through the --smp flag, which sets the number of CPU cores available to Redpanda. This is configured using resources.cpu.cores, which automatically applies the same value to both resources.requests.cpu and resources.limits.cpu.

Default configuration

resources.cpu.cores: 1, which is equivalent to setting both resources.requests.cpu and resources.limits.cpu to 1.

Production recommendation

Set resources.cpu.cores to 4 or greater. Set the number of cores to an even integer to leverage the static CPU manager policy, granting the Pod exclusive access to the requested cores for optimal performance.
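
Note that the static CPU manager policy is a kubelet setting rather than something the Redpanda chart can enable. As a rough sketch (your cluster provisioning tooling may expose this differently), the kubelet configuration on the worker nodes would include something like:

# Illustrative KubeletConfiguration snippet: enables the static CPU manager policy.
# The static policy also requires some CPU to be reserved for system daemons.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static
systemReserved:
  cpu: 500m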

Helm + Operator

redpanda-cluster.yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: Redpanda
metadata:
  name: redpanda
spec:
  chartRef: {}
  clusterSpec:
    resources:
      cpu:
        cores: <number-of-cpu-cores> (1)
kubectl apply -f redpanda-cluster.yaml --namespace <namespace>

Helm

--values

cpu-cores.yaml
resources:
  cpu:
    cores: <number-of-cpu-cores> (1)
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --values cpu-cores.yaml --reuse-values

--set

helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set resources.cpu.cores=<number-of-cpu-cores> (1)
1 Set resources.cpu.cores to the desired number of CPU cores for Redpanda. This value is applied as both the CPU request and limit, ensuring that the Pod maintains the Guaranteed QoS classification by enforcing a strict upper bound on CPU usage.

When the StatefulSet is deployed, make sure that the CPU request and limit are set:

kubectl --namespace <namespace> get pod <pod-name> -o jsonpath='{.spec.containers[?(@.name=="redpanda")].resources}{"\n"}'

Quality of service and resource guarantees

To ensure that Redpanda receives stable and consistent resources, set the quality of service (QoS) class to Guaranteed by matching resource requests and limits on all containers in the Pods that run Redpanda.

Kubernetes uses the QoS class to decide which Pods to evict when a worker node runs out of resources; Pods with the Guaranteed QoS class are evicted last. This stability is crucial for Redpanda because it requires consistent compute and memory resources to maintain high performance.

Kubernetes gives a Pod a Guaranteed QoS class when every container inside it has identical resource requests and limits set for both CPU and memory. This strict configuration signals to Kubernetes that these resources are critical and should not be throttled or reclaimed under normal operating conditions. To configure the Pods that run Redpanda with Guaranteed QoS, specify both resource requests and limits for all enabled containers in the Pods.

For example:

Helm + Operator

redpanda-cluster.yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: Redpanda
metadata:
  name: redpanda
spec:
  chartRef: {}
  clusterSpec:
    resources:
      cpu:
        cores: <number-of-cpu-cores>
      requests:
        memory: <redpanda-container-memory>
      limits:
        memory: <redpanda-container-memory> # Matches the request
    statefulset:
      sideCars:
        configWatcher:
          resources:
            requests:
              cpu: <redpanda-sidecar-container-cpu>
              memory: <redpanda-sidecar-container-memory>
            limits:
              cpu: <redpanda-sidecar-container-cpu> # Matches the request
              memory: <redpanda-sidecar-container-memory> # Matches the request
      initContainers:
        tuning:
          resources:
            requests:
              cpu: <redpanda-tuning-container-cpu>
              memory: <redpanda-tuning-container-memory>
            limits:
              cpu: <redpanda-tuning-container-cpu> # Matches the request
              memory: <redpanda-tuning-container-memory> # Matches the request
        setTieredStorageCacheDirOwnership:
          resources:
            requests:
              cpu: <redpanda-ts-cache-ownership-container-cpu>
              memory: <redpanda-ts-cache-ownership-container-memory>
            limits:
              cpu: <redpanda-ts-cache-ownership-container-cpu> # Matches the request
              memory: <redpanda-ts-cache-ownership-container-memory> # Matches the request
        configurator:
          resources:
            requests:
              cpu: <redpanda-configurator-container-cpu>
              memory: <redpanda-configurator-container-memory>
            limits:
              cpu: <redpanda-configurator-container-cpu> # Matches the request
              memory: <redpanda-configurator-container-memory> # Matches the request
kubectl apply -f redpanda-cluster.yaml --namespace <namespace>

Helm

--values

memory.yaml
resources:
  cpu:
    cores: <number-of-cpu-cores>
  requests:
    memory: <redpanda-container-memory>
  limits:
    memory: <redpanda-container-memory> # Matches the request
statefulset:
  sideCars:
    configWatcher:
      resources:
        requests:
          cpu: <redpanda-sidecar-container-cpu>
          memory: <redpanda-sidecar-container-memory>
        limits:
          cpu: <redpanda-sidecar-container-cpu> # Matches the request
          memory: <redpanda-sidecar-container-memory> # Matches the request
  initContainers:
    tuning:
      resources:
        requests:
          cpu: <redpanda-tuning-container-cpu>
          memory: <redpanda-tuning-container-memory>
        limits:
          cpu: <redpanda-tuning-container-cpu> # Matches the request
          memory: <redpanda-tuning-container-memory> # Matches the request
    setTieredStorageCacheDirOwnership:
      resources:
        requests:
          cpu: <redpanda-ts-cache-ownership-container-cpu>
          memory: <redpanda-ts-cache-ownership-container-memory>
        limits:
          cpu: <redpanda-ts-cache-ownership-container-cpu> # Matches the request
          memory: <redpanda-ts-cache-ownership-container-memory> # Matches the request
    configurator:
      resources:
        requests:
          cpu: <redpanda-configurator-container-cpu>
          memory: <redpanda-configurator-container-memory>
        limits:
          cpu: <redpanda-configurator-container-cpu> # Matches the request
          memory: <redpanda-configurator-container-memory> # Matches the request
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --values memory.yaml --reuse-values

--set

helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set resources.cpu.cores=<number-of-cpu-cores> \
  --set resources.requests.memory=<redpanda-container-memory> \
  --set resources.limits.memory=<redpanda-container-memory> \
  --set statefulset.sideCars.configWatcher.resources.requests.cpu=<redpanda-sidecar-container-cpu> \
  --set statefulset.sideCars.configWatcher.resources.requests.memory=<redpanda-sidecar-container-memory> \
  --set statefulset.sideCars.configWatcher.resources.limits.cpu=<redpanda-sidecar-container-cpu> \
  --set statefulset.sideCars.configWatcher.resources.limits.memory=<redpanda-sidecar-container-memory> \
  --set statefulset.initContainers.tuning.resources.requests.cpu=<redpanda-tuning-container-cpu> \
  --set statefulset.initContainers.tuning.resources.requests.memory=<redpanda-tuning-container-memory> \
  --set statefulset.initContainers.tuning.resources.limits.cpu=<redpanda-tuning-container-cpu> \
  --set statefulset.initContainers.tuning.resources.limits.memory=<redpanda-tuning-container-memory> \
  --set statefulset.initContainers.setTieredStorageCacheDirOwnership.resources.requests.cpu=<redpanda-ts-cache-ownership-container-cpu> \
  --set statefulset.initContainers.setTieredStorageCacheDirOwnership.resources.requests.memory=<redpanda-ts-cache-ownership-container-memory> \
  --set statefulset.initContainers.setTieredStorageCacheDirOwnership.resources.limits.cpu=<redpanda-ts-cache-ownership-container-cpu> \
  --set statefulset.initContainers.setTieredStorageCacheDirOwnership.resources.limits.memory=<redpanda-ts-cache-ownership-container-memory> \
  --set statefulset.initContainers.configurator.resources.requests.cpu=<redpanda-configurator-container-cpu> \
  --set statefulset.initContainers.configurator.resources.requests.memory=<redpanda-configurator-container-memory> \
  --set statefulset.initContainers.configurator.resources.limits.cpu=<redpanda-configurator-container-cpu> \
  --set statefulset.initContainers.configurator.resources.limits.memory=<redpanda-configurator-container-memory>

When the StatefulSet is deployed, make sure that the QoS for the Pods is set to Guaranteed:

kubectl --namespace=<namespace> get pod <pod-name> -o jsonpath='{.status.qosClass}{"\n"}'
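
If requests and limits match on every container in the Pod, the command prints:

Guaranteed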

Configure storage capacity

Make sure to provision enough disk storage for your streaming workloads.

If you use PersistentVolumes, you can set the storage capacity for each volume. For instructions, see Configure Storage for the Redpanda data directory in Kubernetes.
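
For reference, a minimal values sketch (assuming the chart's storage.persistentVolume settings described in that guide) looks like this:

storage:
  persistentVolume:
    enabled: true
    size: <number><unit> # For example, 500Gi
    storageClass: <storage-class-name>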

Run Redpanda in shared environments

If Redpanda runs in a shared environment, where multiple applications run on the same worker node, you can make Redpanda less aggressive in CPU usage by enabling overprovisioning. This adjustment ensures a fairer distribution of CPU time among all processes, improving overall system efficiency at the cost of Redpanda’s performance.

You can enable overprovisioning by either setting the CPU request to a fractional value less than 1 or enabling the --overprovisioned flag.

You cannot enable overprovisioning using resources.cpu.overprovisioned when both resources.requests and resources.limits are set. When both of these configurations are set, the resources.cpu parameter (including cores) is ignored. Instead, use the statefulset.additionalRedpandaCmdFlags configuration to enable overprovisioning.

Helm + Operator

redpanda-cluster.yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: Redpanda
metadata:
  name: redpanda
spec:
  chartRef: {}
  clusterSpec:
    statefulset:
      additionalRedpandaCmdFlags:
        - '--overprovisioned'
    resources:
      cpu:
        cores: <number-of-cpu-cores>
kubectl apply -f redpanda-cluster.yaml --namespace <namespace>

Helm

--values

cpu-cores-overprovisioned.yaml
statefulset:
  additionalRedpandaCmdFlags:
    - '--overprovisioned'
resources:
  cpu:
    cores: <number-of-cpu-cores>
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --values cpu-cores-overprovisioned.yaml --reuse-values

--set

helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set resources.cpu.cores=<number-of-cpu-cores> \
  --set statefulset.additionalRedpandaCmdFlags="{--overprovisioned}"

If you’re experimenting with Redpanda in Kubernetes, you can also set the number of CPU cores in millicores (a fraction of a full core), which enables overprovisioning automatically.

cpu-cores.yaml
resources:
  cpu:
    cores: 200m
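
The equivalent --set flag is:

helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set resources.cpu.cores=200m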