Manage Pod Resources in Kubernetes
You can define requirements for Pod resources such as CPU, memory, and storage. Redpanda Data recommends that you determine and set these values before deploying the cluster, but you can also update the values on a running cluster.
In a running Redpanda cluster, you can increase the number of CPU cores, but you cannot decrease it.
To see the available resources on the worker nodes that you have provisioned:
kubectl describe nodes
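In the output, the Allocatable section shows how much CPU and memory each node can offer to Pods. For example, to filter the output down to just those sections (the grep invocation here is illustrative):
kubectl describe nodes | grep -A 8 Allocatable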
Prerequisites
See Kubernetes Requirements and Recommendations for memory, CPU, and storage.
Make sure that your physical or virtual machines have enough resources to give to Redpanda. You assign resources to each Redpanda broker using settings in the Helm chart's values.yaml file. If you need to replace a worker node with one that has more resources, decommission the brokers on that node before you replace it.
Configure CPU resources
By default, Redpanda pins its threads to all cores that you allocate to it. The more cores you allocate to Redpanda, the more throughput it can support. You can configure the CPU resources allocated to each Redpanda broker by overriding the default values. For a description of CPU resource units, see the Kubernetes documentation.
For production clusters, never set resources.cpu.cores to less than a full core; fractional (millicore) values are only for experimenting.
Helm + Operator
redpanda-cluster.yaml
apiVersion: cluster.redpanda.com/v1alpha1
kind: Redpanda
metadata:
  name: redpanda
spec:
  chartRef: {}
  clusterSpec:
    resources:
      cpu:
        cores: <number-of-cpu-cores>
kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
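To verify that the operator has reconciled the change, you can watch the status of the Redpanda resource. kubectl get works for any custom resource, so this check carries no chart-specific assumptions:
kubectl get redpanda --namespace <namespace> --watch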
Helm
Use either the --values flag or the --set flag.
--values
cpu-cores.yaml
resources:
  cpu:
    cores: <number-of-cpu-cores>
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
--values cpu-cores.yaml --reuse-values
--set
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
--set resources.cpu.cores=<number-of-cpu-cores>
You cannot decrease the number of CPU cores in a running cluster.
If you’re experimenting with Redpanda in Kubernetes, you can set the number of CPU cores in millicores, which represent fractions of a CPU core.
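For example, the following cpu-cores.yaml requests a fifth of a core. This is a minimal sketch: the 200m value is illustrative, and overprovisioned: true follows the guidance in the next section for environments where cores are shared.
cpu-cores.yaml
resources:
  cpu:
    # 200m = 0.2 CPU cores. Millicore values are for experimenting only.
    cores: 200m
    # Disable thread pinning; see "Disable pinning in shared environments".
    overprovisioned: true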
Disable pinning in shared environments
By default, Redpanda pins its threads to all cores that you allocate to it. On a dedicated worker node, CPU pinning allows maximum throughput, but in a development environment, where you may run other Pods on the same worker node as Redpanda, CPU pinning can cause your processes to run slower than usual. In these shared environments, you can tell Redpanda not to pin its threads or memory, and reduce the amount of polling it does to a minimum.
In production environments, you must run Redpanda brokers on dedicated worker nodes. See Kubernetes Cluster Requirements.
Helm + Operator
redpanda-cluster.yaml
apiVersion: cluster.redpanda.com/v1alpha1
kind: Redpanda
metadata:
  name: redpanda
spec:
  chartRef: {}
  clusterSpec:
    resources:
      cpu:
        cores: <number-of-cpu-cores>
        overprovisioned: true
kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
Helm
Use either the --values flag or the --set flag.
--values
cpu-cores-overprovisioned.yaml
resources:
  cpu:
    cores: <number-of-cpu-cores>
    overprovisioned: true
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
--values cpu-cores-overprovisioned.yaml --reuse-values
--set
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
--set resources.cpu.cores=<number-of-cpu-cores> \
--set resources.cpu.overprovisioned=true
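To confirm the settings that are applied to the release, you can list Helm's user-supplied values. This is a standard Helm command, shown here only as a sanity check:
helm get values redpanda --namespace <namespace>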
Configure memory resources
On each worker node, Kubernetes processes and Redpanda processes run at the same time, including the Seastar subsystem that is built into the Redpanda binary. Each of these processes consumes memory. You can configure the memory resources that are allocated to these processes.
By default, the Helm chart allocates 80% of the configured memory to Redpanda, leaving the rest for the Seastar subsystem and other container processes. Redpanda recommends this default setting.
Although you can also allocate the exact amount of memory for Redpanda and the Seastar subsystem manually, Redpanda does not recommend this approach because setting the wrong values can lead to performance issues, instability, or data loss. As a result, this approach is not documented here.
Redpanda supports the following memory resource units: B, K, M, G, Ki, Mi, and Gi. Memory units are converted to the nearest whole MiB. For a description of memory resource units, see the Kubernetes documentation.
Helm + Operator
redpanda-cluster.yaml
apiVersion: cluster.redpanda.com/v1alpha1
kind: Redpanda
metadata:
  name: redpanda
spec:
  chartRef: {}
  clusterSpec:
    resources:
      memory:
        container:
          # Omit min to set it to the same value as max.
          # min:
          max: <number><unit>
kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
Helm
Use either the --values flag or the --set flag.
--values
memory.yaml
resources:
  memory:
    container:
      # Omit min to set it to the same value as max.
      # min:
      max: <number><unit>
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
--values memory.yaml --reuse-values
--set
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
--set resources.memory.container.max=<number><unit>
Setting the min and max configurations to the same values ensures that Kubernetes assigns a Guaranteed Quality of Service (QoS) class to your Pods. Kubernetes uses QoS classes to decide which Pods to evict when a node runs out of resources, and it evicts Pods with a Guaranteed QoS class last. For more details about QoS, see the Kubernetes documentation.
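To check which QoS class Kubernetes assigned, you can read the Pod's status.qosClass field. This example targets the first broker Pod, assuming the default StatefulSet name of redpanda:
kubectl get pod redpanda-0 --namespace <namespace> -o jsonpath='{.status.qosClass}'
The command prints Guaranteed when the memory requests and limits match.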
Configure storage capacity
Make sure to provision enough disk storage for your streaming workloads.
If you use PersistentVolumes, you can set the storage capacity for each volume. For instructions, see Configure Storage for the Redpanda data directory in Kubernetes.
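For example, assuming the chart exposes the volume size through storage.persistentVolume.size (see the storage documentation linked above for the authoritative setting), a Helm override might look like this:
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
--set storage.persistentVolume.size=<number><unit>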