# Manage Pod Resources in Kubernetes

This is documentation for Self-Managed v23.3, which is no longer supported. To view the latest available version of the docs, see v24.3.

You can define requirements for Pod resources such as CPU, memory, and storage. Redpanda Data recommends that you determine and set these values before deploying the cluster, but you can also update the values on a running cluster.

## Prerequisites

See Kubernetes Cluster Requirements and Recommendations for the minimum worker node, memory, CPU, and storage requirements.

Make sure that your physical or virtual machines have enough resources to give to Redpanda. To see the available resources on the worker nodes that you have provisioned:

```bash
kubectl describe nodes
```

## Production considerations

- Enable CPU pinning. This configuration allows Redpanda to maintain exclusive access to designated CPU cores, improving throughput and reducing latency caused by CPU context switches.
- Enable the Guaranteed quality of service (QoS) class for Pods that run Redpanda. This setup ensures that the CPU and memory allocated to Redpanda are not subject to throttling or other contention issues, providing a stable and predictable performance environment.
- Enable memory locking. This configuration prevents the operating system from paging out Redpanda's memory to disk, which can significantly impact performance.

## Limitations

Redpanda does not support decreasing the CPU cores for brokers in an existing cluster.

## CPU pinning

Redpanda is designed to maximize throughput by pinning its threads to specific CPU cores (CPU pinning). However, CPU pinning is unavailable by default in Kubernetes because Kubernetes uses the Completely Fair Scheduler (CFS) to dynamically manage CPU allocation. The CFS distributes CPU time among all processes based on their priority and resource requests, which can lead to frequent context switches and migration of processes across different cores. This behavior can degrade the performance of Redpanda due to:

- Loss of CPU cache affinity: As processes move across cores, they lose cache locality, leading to increased cache misses and higher latency.
- Scheduling latency: Dynamic CPU allocation can introduce delays as the scheduler determines which core a process should run on, based on current load and availability.

To enable CPU pinning in Kubernetes, you must enable the static policy for the CPU Manager. With this policy enabled, the CPU Manager controls a shared pool of CPUs. Initially, this shared pool contains all the CPUs in the worker node. However, when a container is created with an integral CPU request in a Guaranteed Pod, CPUs for that container are removed from the shared pool and assigned exclusively to that container for its lifetime. Other containers are migrated off these exclusively allocated CPUs.

### Enable CPU pinning

To enable CPU pinning for Redpanda in Kubernetes:

1. Ensure that your Kubernetes cluster is using the static CPU Manager policy. With this policy enabled, a request for an integral CPU amount in a Guaranteed Pod results in that container being allocated exclusive access to that number of CPUs through the cgroup cpuset. One way to enable the policy on self-managed nodes is shown in the sketch after this step.
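   How you enable the static CPU Manager policy depends on how your worker nodes are provisioned: most managed Kubernetes services expose it through node pool or kubelet settings. As a minimal sketch, assuming self-managed nodes where you can edit the kubelet configuration file directly, the relevant settings look like this (the static policy also requires a non-zero CPU reservation for system daemons; the `500m` value here is only an illustration):

   ```yaml
   # Excerpt from a kubelet configuration file.
   apiVersion: kubelet.config.k8s.io/v1beta1
   kind: KubeletConfiguration
   # Pin containers with integral CPU requests in Guaranteed Pods to dedicated cores.
   cpuManagerPolicy: static
   # The static policy requires reserving some CPU for system and Kubernetes daemons.
   kubeReserved:
     cpu: 500m
   ```

   If you change the policy on an existing node, the kubelet's CPU Manager state file must be reset and the kubelet restarted; see the Kubernetes CPU Management Policies documentation for the exact procedure for your version.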
2. Configure the Redpanda container to request an integral CPU amount.

**Helm + Operator**

`redpanda-cluster.yaml`:

```yaml
apiVersion: cluster.redpanda.com/v1alpha1
kind: Redpanda
metadata:
  name: redpanda
spec:
  chartRef: {}
  clusterSpec:
    resources:
      cpu:
        cores: <number-of-cpu-cores> # (1)
        overprovisioned: false # (2)
```

```bash
kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
```

**Helm (--values)**

`cpu-cores.yaml`:

```yaml
resources:
  cpu:
    cores: <number-of-cpu-cores> # (1)
    overprovisioned: false # (2)
```

```bash
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --values cpu-cores.yaml --reuse-values
```

**Helm (--set)**

```bash
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set resources.cpu.cores=<number-of-cpu-cores> \
  --set resources.cpu.overprovisioned=false
```

1. Set the number of cores to an integer, such as 4. The more cores that are allocated to Redpanda, the more throughput it can support. For a description of CPU resource units, see the Kubernetes documentation. If you use fractional values, CPU pinning is disabled.
2. This default setting allows each Redpanda thread to run on a dedicated core for maximum performance.

3. Enable the Guaranteed QoS class for your Pods that run Redpanda. Only containers that are both part of a Guaranteed Pod and have integer CPU requests are assigned exclusive CPUs.

### Disable pinning in shared environments

If you don't enable CPU pinning and Redpanda runs in a shared environment, you can make Redpanda less aggressive in CPU usage by enabling overprovisioning. This adjustment ensures a fairer distribution of CPU time among all processes, improving overall system efficiency at the cost of Redpanda's performance.

You can enable overprovisioning by either setting the CPU request to a fractional value or setting `overprovisioned` to `true`. When `resources.cpu.cores` is set to a fractional value, such as 1100m or 1.1, the Pod is not eligible for exclusive access to the cores, which leads to performance bottlenecks.

**Helm + Operator**

`redpanda-cluster.yaml`:

```yaml
apiVersion: cluster.redpanda.com/v1alpha1
kind: Redpanda
metadata:
  name: redpanda
spec:
  chartRef: {}
  clusterSpec:
    resources:
      cpu:
        cores: <number-of-cpu-cores>
        overprovisioned: true
```

```bash
kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
```

**Helm (--values)**

`cpu-cores-overprovisioned.yaml`:

```yaml
resources:
  cpu:
    cores: <number-of-cpu-cores>
    overprovisioned: true
```

```bash
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --values cpu-cores-overprovisioned.yaml --reuse-values
```

**Helm (--set)**

```bash
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set resources.cpu.cores=<number-of-cpu-cores> \
  --set resources.cpu.overprovisioned=true
```

If you're experimenting with Redpanda in Kubernetes, you can also set the number of CPU cores in millicores to automatically enable overprovisioning.

`cpu-cores.yaml`:

```yaml
resources:
  cpu:
    cores: 200m
```

## Configure memory resources

On a worker node, Kubernetes and Redpanda processes run at the same time, including the Seastar subsystem that is built into the Redpanda binary. Each of these processes consumes memory. You can configure the memory resources that are allocated to these processes.

By default, the Helm chart allocates 80% of the memory configured in `resources.memory.container` to Redpanda, with the remainder reserved for overhead such as the Seastar subsystem and other container processes. Redpanda Data recommends this default setting.
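For example, assuming the default split and a hypothetical container limit of 10Gi (an illustrative value, not a recommendation), roughly 8Gi goes to the Redpanda process and about 2Gi is left for the Seastar subsystem and other container processes:

```yaml
# memory.yaml (hypothetical values, relying on the default 80/20 split)
resources:
  memory:
    container:
      max: 10Gi   # ~8Gi for Redpanda, ~2Gi for Seastar and other container overhead
```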
Although you can also allocate the exact amount of memory for Redpanda and the Seastar subsystem manually, Redpanda Data does not recommend this approach because setting the wrong values can lead to performance issues, instability, or data loss. As a result, this approach is not documented here.

**Helm + Operator**

`redpanda-cluster.yaml`:

```yaml
apiVersion: cluster.redpanda.com/v1alpha1
kind: Redpanda
metadata:
  name: redpanda
spec:
  chartRef: {}
  clusterSpec:
    resources:
      memory:
        enable_memory_locking: true # (1)
        container:
          # If omitted, the `min` value is equal to the `max` value
          # (requested resources default to limits).
          # min:
          max: <number><unit> # (2)
```

```bash
kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
```

**Helm (--values)**

`memory.yaml`:

```yaml
resources:
  memory:
    enable_memory_locking: true # (1)
    container:
      # If omitted, the `min` value is equal to the `max` value
      # (requested resources default to limits).
      # min:
      max: <number><unit> # (2)
```

```bash
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --values memory.yaml --reuse-values
```

**Helm (--set)**

```bash
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set resources.memory.enable_memory_locking=true \
  --set resources.memory.container.max=<number><unit>
```

1. For production, enable memory locking to prevent the operating system from paging out Redpanda's memory to disk, which can significantly impact performance.
2. The amount of memory to give Redpanda, Seastar, and the other container processes. Give Redpanda at least 2Gi of memory per core. Because the Helm chart allocates 80% of the container's memory to Redpanda, leaving the rest for the Seastar subsystem and other processes, set this value to at least 2.5Gi per core to ensure Redpanda has a full 2Gi.

Redpanda supports the following memory resource units: B, K, M, G, Ki, Mi, and Gi. Memory units are converted to the nearest whole MiB. For a description of memory resource units, see the Kubernetes documentation.

## Quality of service and resource guarantees

To ensure that Redpanda receives stable and consistent resources, set the quality of service (QoS) class to Guaranteed by matching resource requests and limits on all containers in the Pods that run Redpanda.

Kubernetes uses QoS classes to decide which Pods to evict from a worker node that runs out of resources. When a worker node runs out of resources, Kubernetes evicts Pods with a Guaranteed QoS class last. This stability is crucial for Redpanda because it requires consistent compute and memory resources to maintain high performance.

Kubernetes gives a Pod a Guaranteed QoS class when every container inside it has identical resource requests and limits set for both CPU and memory. This strict configuration signals to Kubernetes that these resources are critical and should not be throttled or reclaimed under normal operating conditions.

To configure the Pods that run Redpanda with Guaranteed QoS, specify both resource requests and limits for all enabled containers in the Pods, including the sidecars and init containers. For default values and documentation for configuration options, see the values.yaml file.
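The following generic Pod spec (a standalone illustration with hypothetical names and values, not part of the Redpanda chart) shows what identical requests and limits mean in practice:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-qos-demo   # hypothetical example Pod
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        cpu: "1"
        memory: 2Gi
      limits:
        cpu: "1"      # equal to the CPU request
        memory: 2Gi   # equal to the memory request
```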
For example, the following configuration sets matching requests and limits for the Redpanda container, the sidecars, and the init containers:

**Helm + Operator**

`redpanda-cluster.yaml`:

```yaml
apiVersion: cluster.redpanda.com/v1alpha1
kind: Redpanda
metadata:
  name: redpanda
spec:
  chartRef: {}
  clusterSpec:
    resources:
      memory:
        container:
          min: <redpanda-container-memory>
          max: <redpanda-container-memory>
    statefulset:
      sideCars:
        configWatcher:
          resources:
            requests:
              cpu: <redpanda-sidecar-container-cpu>
              memory: <redpanda-sidecar-container-memory>
            limits:
              cpu: <redpanda-sidecar-container-cpu> # Matches the request
              memory: <redpanda-sidecar-container-memory> # Matches the request
      initContainers:
        tuning:
          resources:
            requests:
              cpu: <redpanda-tuning-container-cpu>
              memory: <redpanda-tuning-container-memory>
            limits:
              cpu: <redpanda-tuning-container-cpu> # Matches the request
              memory: <redpanda-tuning-container-memory> # Matches the request
        setTieredStorageCacheDirOwnership:
          resources:
            requests:
              cpu: <redpanda-ts-cache-ownership-container-cpu>
              memory: <redpanda-ts-cache-ownership-container-memory>
            limits:
              cpu: <redpanda-ts-cache-ownership-container-cpu> # Matches the request
              memory: <redpanda-ts-cache-ownership-container-memory> # Matches the request
        configurator:
          resources:
            requests:
              cpu: <redpanda-configurator-container-cpu>
              memory: <redpanda-configurator-container-memory>
            limits:
              cpu: <redpanda-configurator-container-cpu> # Matches the request
              memory: <redpanda-configurator-container-memory> # Matches the request
```

```bash
kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
```

**Helm (--values)**

`memory.yaml`:

```yaml
resources:
  memory:
    container:
      min: <redpanda-container-memory>
      max: <redpanda-container-memory>
statefulset:
  sideCars:
    configWatcher:
      resources:
        requests:
          cpu: <redpanda-sidecar-container-cpu>
          memory: <redpanda-sidecar-container-memory>
        limits:
          cpu: <redpanda-sidecar-container-cpu> # Matches the request
          memory: <redpanda-sidecar-container-memory> # Matches the request
  initContainers:
    tuning:
      resources:
        requests:
          cpu: <redpanda-tuning-container-cpu>
          memory: <redpanda-tuning-container-memory>
        limits:
          cpu: <redpanda-tuning-container-cpu> # Matches the request
          memory: <redpanda-tuning-container-memory> # Matches the request
    setTieredStorageCacheDirOwnership:
      resources:
        requests:
          cpu: <redpanda-ts-cache-ownership-container-cpu>
          memory: <redpanda-ts-cache-ownership-container-memory>
        limits:
          cpu: <redpanda-ts-cache-ownership-container-cpu> # Matches the request
          memory: <redpanda-ts-cache-ownership-container-memory> # Matches the request
    configurator:
      resources:
        requests:
          cpu: <redpanda-configurator-container-cpu>
          memory: <redpanda-configurator-container-memory>
        limits:
          cpu: <redpanda-configurator-container-cpu> # Matches the request
          memory: <redpanda-configurator-container-memory> # Matches the request
```

```bash
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --values memory.yaml --reuse-values
```

**Helm (--set)**

```bash
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set resources.memory.container.min=<redpanda-container-memory> \
  --set resources.memory.container.max=<redpanda-container-memory> \
  --set statefulset.sideCars.configWatcher.resources.requests.cpu=<redpanda-sidecar-container-cpu> \
  --set statefulset.sideCars.configWatcher.resources.requests.memory=<redpanda-sidecar-container-memory> \
  --set statefulset.sideCars.configWatcher.resources.limits.cpu=<redpanda-sidecar-container-cpu> \
  --set statefulset.sideCars.configWatcher.resources.limits.memory=<redpanda-sidecar-container-memory> \
  --set statefulset.initContainers.tuning.resources.requests.cpu=<redpanda-tuning-container-cpu> \
  --set statefulset.initContainers.tuning.resources.requests.memory=<redpanda-tuning-container-memory> \
  --set statefulset.initContainers.tuning.resources.limits.cpu=<redpanda-tuning-container-cpu> \
  --set statefulset.initContainers.tuning.resources.limits.memory=<redpanda-tuning-container-memory> \
  --set statefulset.initContainers.setTieredStorageCacheDirOwnership.resources.requests.cpu=<redpanda-ts-cache-ownership-container-cpu> \
  --set statefulset.initContainers.setTieredStorageCacheDirOwnership.resources.requests.memory=<redpanda-ts-cache-ownership-container-memory> \
  --set statefulset.initContainers.setTieredStorageCacheDirOwnership.resources.limits.cpu=<redpanda-ts-cache-ownership-container-cpu> \
  --set statefulset.initContainers.setTieredStorageCacheDirOwnership.resources.limits.memory=<redpanda-ts-cache-ownership-container-memory> \
  --set statefulset.initContainers.configurator.resources.requests.cpu=<redpanda-configurator-container-cpu> \
  --set statefulset.initContainers.configurator.resources.requests.memory=<redpanda-configurator-container-memory> \
  --set statefulset.initContainers.configurator.resources.limits.cpu=<redpanda-configurator-container-cpu> \
  --set statefulset.initContainers.configurator.resources.limits.memory=<redpanda-configurator-container-memory>
```

When the StatefulSet is deployed, make sure that the QoS class for the Pods is set to Guaranteed:

```bash
kubectl --namespace <namespace> get pod <pod-name> -o jsonpath='{.status.qosClass}{"\n"}'
```

## Configure storage capacity

Make sure to provision enough disk storage for your streaming workloads. If you use PersistentVolumes, you can set the storage capacity for each volume. For instructions, see Configure Storage for the Redpanda data directory in Kubernetes.

## Suggested reading

- Redpanda Helm Specification
- Redpanda CRD Reference