Manage Pod Resources in Kubernetes

You can define requirements for Pod resources such as CPU, memory, and storage. Redpanda Data recommends that you determine and set these values before deploying the cluster, but you can also update the values on a running cluster.

Prerequisites

  • See Kubernetes Cluster Requirements and Recommendations for the minimum worker node, memory, CPU, and storage requirements.

  • Make sure that your physical or virtual machines have enough resources available for Redpanda. To see the available resources on your provisioned worker nodes, run:

    kubectl describe nodes
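
You can also summarize the allocatable CPU and memory on each node using kubectl's custom-columns output. The column names in this query are arbitrary:

kubectl get nodes -o custom-columns='NODE:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory'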

Production considerations

Limitations

In versions earlier than 24.3, decreasing the number of CPU cores in a production cluster is not supported, although you can enable intra-broker partition balancing to reduce CPU cores in a development cluster.

Starting in version 24.3, decreasing the number of CPU cores in a production cluster is supported. Upgrade to version 24.3 or later to use this feature.
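
To check which version your brokers are running before you change core counts, you can run rpk inside a broker Pod. This example assumes the default StatefulSet naming, in which the first broker Pod is named redpanda-0:

kubectl exec redpanda-0 --namespace <namespace> -- rpk version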

Configure memory resources

On each worker node, Kubernetes processes and Redpanda processes run at the same time, including the Seastar subsystem that is built into the Redpanda binary. Each of these processes consumes memory, and you can configure the memory resources that are allocated to them.

By default, the Helm chart allocates 80% of the memory configured in resources.memory.container to Redpanda and reserves the remainder for overhead such as the Seastar subsystem and other container processes. Redpanda Data recommends keeping this default.
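
For example, here is how the default split works out for a 10Gi container limit (values chosen only for illustration):

resources:
  memory:
    container:
      # With the default 80% split, Redpanda receives 8Gi of this 10Gi limit.
      # The remaining 2Gi covers the Seastar subsystem and other container processes.
      max: 10Gi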

Although you can also allocate exact amounts of memory to Redpanda and the Seastar subsystem manually, Redpanda Data does not recommend this approach, because setting the wrong values can lead to performance issues, instability, or data loss. As a result, this approach is not documented here.
  • Helm + Operator

  • Helm

redpanda-cluster.yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: Redpanda
metadata:
  name: redpanda
spec:
  chartRef: {}
  clusterSpec:
    resources:
      memory:
        enable_memory_locking: true (1)
        container:
          # If omitted, `min` defaults to the `max` value (resource requests default to limits)
          # min:
          max: <number><unit> (2)
kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
  • --values

  • --set

memory.yaml
resources:
  memory:
    enable_memory_locking: true (1)
    container:
      # If omitted, `min` defaults to the `max` value (resource requests default to limits)
      # min:
      max: <number><unit> (2)
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --values memory.yaml --reuse-values
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set resources.memory.enable_memory_locking=true \ (1)
  --set resources.memory.container.max=<number><unit> (2)
1 For production, enable memory locking to prevent the operating system from paging out Redpanda’s memory to disk, which can significantly impact performance.
2 The amount of memory to give Redpanda, Seastar, and the other container processes. You should give Redpanda at least 2 Gi of memory per core. Given that the Helm chart allocates 80% of the container’s memory to Redpanda, leaving the rest for the Seastar subsystem and other processes, set this value to at least 2.5 Gi per core to ensure Redpanda has a full 2 Gi. Redpanda supports the following memory resource units: B, K, M, G, Ki, Mi, and Gi. Memory units are converted to the nearest whole MiB. For a description of memory resource units, see the Kubernetes documentation.
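
For example, to run a broker with four cores at the recommended 2 Gi per core (values chosen only for illustration):

resources:
  cpu:
    cores: 4
  memory:
    container:
      max: 10Gi # 4 cores x 2.5Gi. Redpanda receives 80%, which is 8Gi, or 2Gi per core.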

Quality of service and resource guarantees

To ensure that Redpanda receives stable and consistent resources, set the quality of service (QoS) class to Guaranteed by matching resource requests and limits on all containers in the Pods that run Redpanda.

Kubernetes uses the QoS class to decide which Pods to evict when a worker node runs out of resources; Pods with a Guaranteed QoS class are evicted last. This stability is crucial for Redpanda because it requires consistent computational and memory resources to maintain high performance.

Kubernetes gives a Pod a Guaranteed QoS class when every container inside it has identical resource requests and limits set for both CPU and memory. This strict configuration signals to Kubernetes that these resources are critical and should not be throttled or reclaimed under normal operating conditions. To configure the Pods that run Redpanda with Guaranteed QoS, specify both resource requests and limits for all enabled containers in the Pods.

For example:

  • Helm + Operator

  • Helm

redpanda-cluster.yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: Redpanda
metadata:
  name: redpanda
spec:
  chartRef: {}
  clusterSpec:
    resources:
      cpu:
        cores: <number-of-cpu-cores>
      memory:
        container:
          min: <redpanda-container-memory>
          max: <redpanda-container-memory>
    statefulset:
      sideCars:
        configWatcher:
          resources:
            requests:
              cpu: <redpanda-sidecar-container-cpu>
              memory: <redpanda-sidecar-container-memory>
            limits:
              cpu: <redpanda-sidecar-container-cpu> # Matches the request
              memory: <redpanda-sidecar-container-memory> # Matches the request
      initContainers:
        tuning:
          resources:
            requests:
              cpu: <redpanda-tuning-container-cpu>
              memory: <redpanda-tuning-container-memory>
            limits:
              cpu: <redpanda-tuning-container-cpu> # Matches the request
              memory: <redpanda-tuning-container-memory> # Matches the request
        setTieredStorageCacheDirOwnership:
          resources:
            requests:
              cpu: <redpanda-ts-cache-ownership-container-cpu>
              memory: <redpanda-ts-cache-ownership-container-memory>
            limits:
              cpu: <redpanda-ts-cache-ownership-container-cpu> # Matches the request
              memory: <redpanda-ts-cache-ownership-container-memory> # Matches the request
        configurator:
          resources:
            requests:
              cpu: <redpanda-configurator-container-cpu>
              memory: <redpanda-configurator-container-memory>
            limits:
              cpu: <redpanda-configurator-container-cpu> # Matches the request
              memory: <redpanda-configurator-container-memory> # Matches the request
kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
  • --values

  • --set

guaranteed-qos.yaml
resources:
  cpu:
    cores: <number-of-cpu-cores>
  memory:
    container:
      min: <redpanda-container-memory>
      max: <redpanda-container-memory>
statefulset:
  sideCars:
    configWatcher:
      resources:
        requests:
          cpu: <redpanda-sidecar-container-cpu>
          memory: <redpanda-sidecar-container-memory>
        limits:
          cpu: <redpanda-sidecar-container-cpu> # Matches the request
          memory: <redpanda-sidecar-container-memory> # Matches the request
  initContainers:
    tuning:
      resources:
        requests:
          cpu: <redpanda-tuning-container-cpu>
          memory: <redpanda-tuning-container-memory>
        limits:
          cpu: <redpanda-tuning-container-cpu> # Matches the request
          memory: <redpanda-tuning-container-memory> # Matches the request
    setTieredStorageCacheDirOwnership:
      resources:
        requests:
          cpu: <redpanda-ts-cache-ownership-container-cpu>
          memory: <redpanda-ts-cache-ownership-container-memory>
        limits:
          cpu: <redpanda-ts-cache-ownership-container-cpu> # Matches the request
          memory: <redpanda-ts-cache-ownership-container-memory> # Matches the request
    configurator:
      resources:
        requests:
          cpu: <redpanda-configurator-container-cpu>
          memory: <redpanda-configurator-container-memory>
        limits:
          cpu: <redpanda-configurator-container-cpu> # Matches the request
          memory: <redpanda-configurator-container-memory> # Matches the request
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --values guaranteed-qos.yaml --reuse-values
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set resources.cpu.cores=<number-of-cpu-cores> \
  --set resources.memory.container.min=<redpanda-container-memory> \
  --set resources.memory.container.max=<redpanda-container-memory> \
  --set statefulset.sideCars.configWatcher.resources.requests.cpu=<redpanda-sidecar-container-cpu> \
  --set statefulset.sideCars.configWatcher.resources.requests.memory=<redpanda-sidecar-container-memory> \
  --set statefulset.sideCars.configWatcher.resources.limits.cpu=<redpanda-sidecar-container-cpu> \
  --set statefulset.sideCars.configWatcher.resources.limits.memory=<redpanda-sidecar-container-memory> \
  --set statefulset.initContainers.tuning.resources.requests.cpu=<redpanda-tuning-container-cpu> \
  --set statefulset.initContainers.tuning.resources.requests.memory=<redpanda-tuning-container-memory> \
  --set statefulset.initContainers.tuning.resources.limits.cpu=<redpanda-tuning-container-cpu> \
  --set statefulset.initContainers.tuning.resources.limits.memory=<redpanda-tuning-container-memory> \
  --set statefulset.initContainers.setTieredStorageCacheDirOwnership.resources.requests.cpu=<redpanda-ts-cache-ownership-container-cpu> \
  --set statefulset.initContainers.setTieredStorageCacheDirOwnership.resources.requests.memory=<redpanda-ts-cache-ownership-container-memory> \
  --set statefulset.initContainers.setTieredStorageCacheDirOwnership.resources.limits.cpu=<redpanda-ts-cache-ownership-container-cpu> \
  --set statefulset.initContainers.setTieredStorageCacheDirOwnership.resources.limits.memory=<redpanda-ts-cache-ownership-container-memory> \
  --set statefulset.initContainers.configurator.resources.requests.cpu=<redpanda-configurator-container-cpu> \
  --set statefulset.initContainers.configurator.resources.requests.memory=<redpanda-configurator-container-memory> \
  --set statefulset.initContainers.configurator.resources.limits.cpu=<redpanda-configurator-container-cpu> \
  --set statefulset.initContainers.configurator.resources.limits.memory=<redpanda-configurator-container-memory>

After the StatefulSet is deployed, verify that the QoS class of the Pods is Guaranteed:

kubectl --namespace=<namespace> get pod <pod-name> -o jsonpath='{.status.qosClass}{"\n"}'
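
If every container in the Pod has matching requests and limits, the command prints:

Guaranteed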

Configure storage capacity

Make sure to provision enough disk storage for your streaming workloads.

If you use PersistentVolumes, you can set the storage capacity for each volume. For instructions, see Configure Storage for the Redpanda data directory in Kubernetes.
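
For example, here is a minimal sketch of setting the volume size through the Helm chart, assuming the chart's storage.persistentVolume.size setting (see the linked storage documentation for the full set of options):

storage:
  persistentVolume:
    size: 100Gi # Capacity requested for each broker's PersistentVolumeClaim. Illustrative value only.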

Run Redpanda in shared environments

If Redpanda runs in a shared environment, where multiple applications run on the same worker node, you can make Redpanda less aggressive in CPU usage by enabling overprovisioning. This adjustment ensures a fairer distribution of CPU time among all processes, improving overall system efficiency at the cost of Redpanda’s performance.

You can enable overprovisioning by either setting the CPU request to a fractional value or setting overprovisioned to true.

  • Helm + Operator

  • Helm

redpanda-cluster.yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: Redpanda
metadata:
  name: redpanda
spec:
  chartRef: {}
  clusterSpec:
    resources:
      cpu:
        cores: <number-of-cpu-cores>
        overprovisioned: true
kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
  • --values

  • --set

cpu-cores-overprovisioned.yaml
resources:
  cpu:
    cores: <number-of-cpu-cores>
    overprovisioned: true
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --values cpu-cores-overprovisioned.yaml --reuse-values
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set resources.cpu.cores=<number-of-cpu-cores> \
  --set resources.cpu.overprovisioned=true

If you’re experimenting with Redpanda in Kubernetes, you can also specify the number of CPU cores as a fractional value in millicores, which automatically enables overprovisioning.

cpu-cores.yaml
resources:
  cpu:
    cores: 200m
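
To apply this example, pass the file to Helm as in the earlier examples:

helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --values cpu-cores.yaml --reuse-values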