Get Started with Redpanda in Kubernetes
In this tutorial, you learn how to deploy Redpanda on Kubernetes using the Helm chart. You'll deploy a Redpanda cluster and an instance of Redpanda Console. You'll explore the Kubernetes components that the Helm chart deploys. Then, you'll use rpk both as an internal client and an external client to interact with your Redpanda cluster from the command line.
Looking for the Redpanda Operator?
If you're an existing user of the Redpanda Operator, see the Redpanda Operator documentation.
Redpanda does not recommend the Redpanda Operator for new deployments. The Redpanda Operator was built for Redpanda Cloud and has unique features and workflows for that specific use case. If you're not already using the Redpanda Operator, continue with this tutorial to get started with the Redpanda Helm chart.
Prerequisites
Before you begin, make sure that you have the correct software for your chosen Kubernetes platform:
- All platforms
- kind
- minikube
- Amazon EKS
- Google GKE
Amazon EKS: A user with IAM permissions for creating clusters, and jq for working with JSON results and setting environment variables.
Google GKE: Complete the 'Before you begin' steps and the 'Launch Cloud Shell' steps of the GKE quickstart. Cloud Shell comes preinstalled with the Google Cloud CLI, the kubectl command-line tool, and the Helm package manager.
Create a Kubernetes cluster
Redpanda brokers are designed to have access to all resources, such as CPU and memory, on their host machine. As a result, in a Kubernetes environment, it's recommended to run each Redpanda broker on its own Kubernetes node.
Redpanda recommends at least three Redpanda brokers to use as seed servers. Seed servers are used to bootstrap the gossip process for new brokers joining a cluster. When a new broker joins, it connects to the seed servers to find out the topology of the Redpanda cluster. A larger number of seed servers makes consensus more robust and minimizes the chance of unwanted clusters forming when brokers are restarted without any data.
In this step, you create the Kubernetes nodes to host the Pods for the Redpanda brokers.
- kind
- minikube
- Amazon EKS
- Google GKE
Define a cluster in the kind.yaml configuration file:
cat <<EOF >kind.yaml
---
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
EOF
Create the Kubernetes cluster from the configuration file:
kind create cluster --config kind.yaml
Create the Kubernetes cluster:
minikube start --namespace redpanda --nodes 4
Prevent applications from being scheduled on the Kubernetes control plane node:
kubectl taint node \
-l node-role.kubernetes.io/control-plane="" \
node-role.kubernetes.io/control-plane=:NoSchedule
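As an optional check before continuing, confirm that all four minikube nodes are ready (this verification step isn't part of the original instructions):
kubectl get nodes
# Expect four nodes in the Ready state, one of which is the control plane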
Create an EKS cluster in your default region:
eksctl create cluster --name redpanda \
--external-dns-access \
--nodegroup-name standard-workers \
--node-type m5.xlarge \
--nodes 3 \
--nodes-min 3 \
--nodes-max 4
Note: If your account is configured for OIDC, add the --with-oidc flag to the create cluster command.
Your local kubeconfig file should now point to the new cluster. Test the connection:
kubectl get service
If the kubectl command cannot connect to your cluster, update your local kubeconfig to point to your new cluster. Replace the region with your default region, which is located in the ~/.aws/config file.
aws eks update-kubeconfig --region us-east-2 --name redpanda
Create the IAM role needed for the Amazon Elastic Block Store (EBS) Container Storage Interface (CSI) driver:
eksctl create iamserviceaccount \
--name ebs-csi-controller-sa \
--namespace kube-system \
--cluster redpanda \
--attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
--approve \
--role-only \
--role-name AmazonEKS_EBS_CSI_DriverRole
Get your AWS account ID:
AWS_ACCOUNT_ID=$(aws sts get-caller-identity | jq -r '.Account')
Add the EBS CSI add-on:
eksctl create addon \
--name aws-ebs-csi-driver \
--cluster redpanda \
--service-account-role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/AmazonEKS_EBS_CSI_DriverRole \
--force
Create a GKE cluster. You can replace the zone with your own.
gcloud container clusters create redpanda --machine-type n1-standard-4 --num-nodes=3 --zone=us-east1-b
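If kubectl doesn't already point at the new GKE cluster, you can fetch credentials for it. This is a hedged sketch that assumes the cluster name and zone used in the command above:
# Configure kubectl to talk to the new cluster, then list its nodes
gcloud container clusters get-credentials redpanda --zone=us-east1-b
kubectl get nodes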
Deploy Redpanda
Install Redpanda using Helm:
helm repo add redpanda https://charts.redpanda.com/
helm repo update
helm install redpanda redpanda/redpanda \
--namespace redpanda \
--create-namespace
The installation displays some tips for getting started.
Wait for the Redpanda cluster to be ready:
kubectl -n redpanda rollout status statefulset redpanda --watch
When the Redpanda cluster is ready, the output should look similar to the following:
statefulset rolling update complete 3 pods at revision redpanda-8654f645b4...
If the Pods remain in a Pending state, see Troubleshooting.
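As an alternative to watching the rollout, you can wait for the broker Pods to report Ready. This is a minimal sketch, assuming the chart labels its Pods with app.kubernetes.io/name=redpanda:
# Block until every Redpanda Pod passes its readiness probe (or the timeout expires)
kubectl -n redpanda wait --for=condition=Ready pod \
  --selector=app.kubernetes.io/name=redpanda \
  --timeout=600s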
Verify that each Redpanda broker is scheduled on only one Kubernetes node:
kubectl get pod -n redpanda \
-o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name \
| grep 'redpanda-[0-9]'
Example output
example-worker3 redpanda-0
example-worker2 redpanda-1
example-worker redpanda-2
Explore the Kubernetes components
Your deployment includes the following components by default:
- A StatefulSet with three Pods.
- One PersistentVolumeClaim for each Pod, each with a capacity of 20Gi.
- A headless ClusterIP Service and a NodePort Service for each Kubernetes node that runs a Redpanda broker.
StatefulSet
Redpanda is a stateful application. Each Redpanda broker needs to store its own state (topic partitions) in its own storage volume. As a result, the Helm chart deploys a StatefulSet to manage the Pods in which the Redpanda brokers are running.
kubectl get statefulset -n redpanda
Example output
NAME READY AGE
redpanda 3/3 3m11s
StatefulSets ensure that the state associated with a particular Pod replica is always the same, no matter how often the Pod is recreated. Each Pod is also given a unique ordinal number in its name, such as redpanda-0. A Pod with a particular ordinal number is always associated with a PersistentVolumeClaim with the same number. When a Pod in the StatefulSet is deleted and recreated, it is given the same ordinal number and so it mounts the same storage volume as the deleted Pod that it replaced.
kubectl get pod -n redpanda
Example output
NAME READY STATUS RESTARTS AGE
redpanda-0 1/1 Running 0 6m9s
redpanda-1 1/1 Running 0 6m9s
redpanda-2 1/1 Running 0 6m9s
redpanda-post-install-9t5qr 0/1 Completed 0 6m9s
The post-install Job updates the Redpanda runtime configuration.
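To see the StatefulSet's identity guarantee in action, you can delete one broker Pod in this test cluster and watch it come back with the same name and the same PersistentVolumeClaim. This is an optional experiment, not part of the original steps:
# Delete one broker Pod; the StatefulSet recreates it with the same ordinal
kubectl -n redpanda delete pod redpanda-0
# The replacement may briefly show ContainerCreating, then reattaches datadir-redpanda-0
kubectl -n redpanda get pod redpanda-0
kubectl -n redpanda get persistentvolumeclaim datadir-redpanda-0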
PersistentVolumeClaim
Redpanda brokers must be able to store their data on disk. By default, the Helm chart uses the default StorageClass in the Kubernetes cluster to create a PersistentVolumeClaim for each Pod. The default StorageClass in your Kubernetes cluster depends on the Kubernetes platform that you are using.
kubectl get persistentvolumeclaims -n redpanda
Example output
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
datadir-redpanda-0 Bound pvc-3311ade3-de84-4027-80c6-3d8347302962 20Gi RWO standard 75s
datadir-redpanda-1 Bound pvc-4ea8bc03-89a6-41e4-b985-99f074995f08 20Gi RWO standard 75s
datadir-redpanda-2 Bound pvc-45c3555f-43bc-48c2-b209-c284c8091c45 20Gi RWO standard 75s
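To see which StorageClass your platform treats as the default, and therefore which provisioner backs these PersistentVolumeClaims, you can run:
kubectl get storageclass
# The default StorageClass is marked with (default) next to its name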
Service
The clients writing to or reading from a given partition have to connect directly to the leader broker that hosts the partition. As a result, clients need to be able to connect directly to each Pod. To allow internal and external clients to connect to each Pod that hosts a Redpanda broker, the Helm chart configures two Services:
- An internal Service (headless ClusterIP)
- An external Service (NodePort)
kubectl get service -n redpanda
Example output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redpanda ClusterIP None <none> <none> 5m37s
redpanda-external NodePort 10.96.137.220 <none> 9644:31644/TCP,9094:31092/TCP,8083:30082/TCP,8080:30081/TCP 5m37s
Headless ClusterIP Service
The headless Service associated with a StatefulSet gives the Pods their network identity in the form of a fully qualified domain name (FQDN). Both Redpanda brokers in the same Redpanda cluster and clients within the same Kubernetes cluster use this FQDN to communicate with each other.
When you create your first topic later in this tutorial, the address in the --brokers parameter uses the internal FQDN to connect rpk to the Redpanda brokers: redpanda-0.redpanda.redpanda.svc.cluster.local.
An important requirement of distributed applications such as Redpanda is peer discovery: The ability for each broker to find other brokers in the same cluster. When each Pod is rolled out, its seed_servers
field is updated with the FQDN of each Pod in the cluster so that they can discover each other.
kubectl -n redpanda exec redpanda-0 -c redpanda -- cat /etc/redpanda/redpanda.yaml
redpanda:
data_directory: /var/lib/redpanda/data
empty_seed_starts_cluster: false
seed_servers:
- host:
address: redpanda-0.redpanda.redpanda.svc.cluster.local.
port: 33145
- host:
address: redpanda-1.redpanda.redpanda.svc.cluster.local.
port: 33145
- host:
address: redpanda-2.redpanda.redpanda.svc.cluster.local.
port: 33145
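You can observe this peer discovery from inside the cluster: a DNS lookup against the headless Service returns one record per broker Pod. This is a hedged sketch that starts a throwaway Pod (the name dns-test and the busybox image are illustrative choices, not part of the chart):
kubectl -n redpanda run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup redpanda.redpanda.svc.cluster.local
# The answer section lists the IP address of each redpanda-N Pod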
NodePort Service
External access is made available by a NodePort Service that opens the following ports by default:
| Node port | Pod port | Purpose |
|---|---|---|
| 30081 | 8080 | Schema Registry |
| 30082 | 8083 | HTTP Proxy |
| 31092 | 9094 | Kafka API |
| 31644 | 9644 | Admin API |
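Once a node's IP address is reachable from your machine (on some platforms this requires the external access steps later in this tutorial), you can sanity-check a node port with a plain HTTP request. This is a hedged example against the Schema Registry node port, where <node-ip> is a placeholder for one of your worker node addresses:
curl http://<node-ip>:30081/subjects
# An empty JSON array ([]) means the Schema Registry answered on the node port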
Deploy Redpanda Console
Redpanda Console is a developer-friendly web UI for managing and debugging your Redpanda cluster and your applications.
Install Redpanda Console using Helm:
helm repo add redpanda 'https://charts.redpanda.com/'
helm repo update
helm install redpanda-console redpanda/console \
--namespace redpanda \
--create-namespace \
--set console.config.kafka.brokers=redpanda-0.redpanda.redpanda.svc.cluster.local.:9093 \
--set service.type=NodePort \
--set service.targetPort=8080
--namespace redpanda: Deploy Redpanda Console in the same namespace as your Redpanda cluster so that it can access the brokers using their internal FQDNs.
--set console.config.kafka.brokers=redpanda-0.redpanda.redpanda.svc.cluster.local.:9093: The Redpanda broker to which Redpanda Console connects to get information from the Redpanda cluster.
--set service.type=NodePort: The type of Service that exposes Redpanda Console. A NodePort Service allows you to access Redpanda Console from outside Kubernetes.
--set service.targetPort=8080: The port on which the Redpanda Console container expects traffic.
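If you'd rather not go through a node port while testing locally, a port-forward is a quick alternative. This sketch assumes the Service created by the release is named redpanda-console and listens on port 8080:
kubectl -n redpanda port-forward svc/redpanda-console 8080:8080
# While this command runs, Redpanda Console is available at http://localhost:8080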
Start streaming
Each Redpanda broker comes with rpk, which is a CLI tool for connecting to and interacting with Redpanda brokers. You can use rpk inside one of the Redpanda brokers' containers to create a topic, produce messages to it, and consume messages from it.
Create an alias to simplify the rpk commands:
alias rpk="kubectl -n redpanda exec -ti redpanda-0 -c redpanda -- rpk --brokers redpanda-0.redpanda.redpanda.svc.cluster.local.:9093,redpanda-1.redpanda.redpanda.svc.cluster.local.:9093,redpanda-2.redpanda.redpanda.svc.cluster.local.:9093"
Create a topic called twitch_chat:
rpk topic create twitch_chat
Example output
TOPIC STATUS
twitch_chat OK
Describe the topic:
rpk topic describe twitch_chat
Example output
SUMMARY
=======
NAME twitch_chat
PARTITIONS 1
REPLICAS 1
CONFIGS
=======
KEY VALUE SOURCE
cleanup.policy delete DYNAMIC_TOPIC_CONFIG
compression.type producer DEFAULT_CONFIG
message.timestamp.type CreateTime DEFAULT_CONFIG
partition_count 1 DYNAMIC_TOPIC_CONFIG
redpanda.datapolicy function_name: script_name: DEFAULT_CONFIG
redpanda.remote.read false DEFAULT_CONFIG
redpanda.remote.write false DEFAULT_CONFIG
replication_factor 1 DYNAMIC_TOPIC_CONFIG
retention.bytes -1 DEFAULT_CONFIG
retention.ms 604800000 DEFAULT_CONFIG
segment.bytes 1073741824 DEFAULT_CONFIG
Produce a message to the topic:
rpk topic produce twitch_chat
Type a message, then press Enter:
Pandas are fabulous!
Example output
Produced to partition 0 at offset 0 with timestamp 1663282629789.
Press Ctrl+C to finish producing messages to the topic.
Consume one message from the topic:
rpk topic consume twitch_chat --num 1
Example output
Your message is displayed along with its metadata:
{
"topic": "twitch_chat",
"value": "Pandas are fabulous!",
"timestamp": 1663282629789,
"partition": 0,
"offset": 0
}
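You can also produce messages non-interactively by piping them into rpk. Because the alias above requests a TTY (-t), piping through it may fail, so this hedged sketch calls kubectl exec directly with -i only:
echo 'Pandas are fabulous!' | kubectl -n redpanda exec -i redpanda-0 -c redpanda -- \
  rpk topic produce twitch_chat --brokers redpanda-0.redpanda.redpanda.svc.cluster.local.:9093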
Explore your topic in Redpanda Console
Find the node port of the Redpanda Console's NodePort Service:
kubectl get svc redpanda-console -n redpanda
Example output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redpanda-console NodePort 10.111.229.17 <none> 8080:30858/TCP 2d19h
Here, the node port is 30858.
Find out on which worker node Redpanda Console is running:
kubectl get pod -n redpanda -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name | grep 'redpanda-console'
Example output
worker3 redpanda-console-7bcb56f5f7-56w2h
Open Redpanda Console in a web browser. Replace <node-ip> with the IP address of the worker node that's running Redpanda Console, and replace <node-port> with the node port you found in the previous step:
http://<node-ip>:<node-port>/brokers
All your Redpanda brokers are listed along with their IP addresses and IDs.
Go to Topics > twitch_chat.
The message that you produced to the topic is displayed along with some other details about the topic.
Configure external access to the Redpanda brokers
Since external clients are not in the Kubernetes cluster where the Redpanda brokers are running, they cannot resolve the internal address of the headless ClusterIP Service. Instead, external clients must connect to the external addresses that the Redpanda brokers advertise. By default, each Redpanda broker advertises the redpanda-<ordinal-number>.local
hostname, where .local
is set in the redpanda.external.domain
field of the values.yaml
file. For example, redpanda-0.local
. To connect an external client to the Redpanda cluster, make sure that the external addresses advertised by the Redpanda brokers are resolvable on your system.
The simplest option for development is to configure the Redpanda brokers to advertise the IP addresses of the Kubernetes nodes on which they are running, instead of the redpanda-<ordinal-number>.local
hostname.
IP addresses can change. If the IP addresses of your Kubernetes nodes change, you must reconfigure the Redpanda brokers and all your external clients with the new IP addresses.
In production environments, it's best to customize the domain in the redpanda.external.domain
field of the values.yaml
file to point to something resolvable through DNS.
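For example, here is a minimal sketch of such an override. It assumes the chart reads the domain from the top-level external.domain value (matching the external.addresses structure used later in this tutorial), and it uses a hypothetical file name and placeholder domain that you would replace with a DNS name you control:
cat <<EOF > domain-values.yaml
external:
  domain: custom.example.com
EOF
helm upgrade redpanda redpanda/redpanda -n redpanda -f domain-values.yaml
With this override, the brokers advertise addresses such as redpanda-0.custom.example.com, so your DNS must resolve those names to the nodes that run the corresponding brokers.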
- kind
- minikube
- Amazon EKS
- Google GKE
- Linux
- macOS
- Windows
Find the internal IP addresses of all Kubernetes nodes in the cluster:
kubectl get nodes -o wide
Example output
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kind-control-plane Ready control-plane 4m52s v1.25.3 172.18.0.4 <none> Ubuntu 22.04.1 LTS 5.15.0-47-generic containerd://1.6.9
kind-worker Ready <none> 4m28s v1.25.3 172.18.0.2 <none> Ubuntu 22.04.1 LTS 5.15.0-47-generic containerd://1.6.9
kind-worker2 Ready <none> 4m28s v1.25.3 172.18.0.3 <none> Ubuntu 22.04.1 LTS 5.15.0-47-generic containerd://1.6.9
kind-worker3 Ready <none> 4m29s v1.25.3 172.18.0.5 <none> Ubuntu 22.04.1 LTS 5.15.0-47-generic containerd://1.6.9
This example shows three Kubernetes nodes with the internal IP addresses 172.18.0.2, 172.18.0.3, and 172.18.0.5.
Find out on which Kubernetes node each Pod is running:
kubectl get pods -o wide -n redpanda
Example output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
redpanda-0 1/1 Running 0 2m39s 10.244.2.3 kind-worker2 <none> <none>
redpanda-1 1/1 Running 0 2m39s 10.244.3.3 kind-worker <none> <none>
redpanda-2 1/1 Running 0 2m39s 10.244.1.4 kind-worker3 <none> <none>
This example shows that the redpanda-0 Pod is running on the kind-worker2 Kubernetes node.
Use the node names to find the internal IP address of the Kubernetes node that runs each Pod. For these examples, the result is the following:
| Redpanda broker | Internal IP |
|---|---|
| redpanda-0 | 172.18.0.3 |
| redpanda-1 | 172.18.0.2 |
| redpanda-2 | 172.18.0.5 |
Note: Make sure that the internal IP address of the Kubernetes node matches the correct Redpanda broker. The Kubernetes nodes are not always displayed in the same order as the Pods.
Add entries to the /etc/hosts file on your local machine for each Kubernetes node that's running a Redpanda broker. For example:
172.18.0.3 redpanda-0.local # kind-worker2
172.18.0.2 redpanda-1.local # kind-worker
172.18.0.5 redpanda-2.local # kind-worker3
Install rpk on your local machine, not on a Pod.
Set the REDPANDA_BROKERS environment variable to the IP addresses of your Kubernetes nodes:
export REDPANDA_BROKERS=172.18.0.3:31092,172.18.0.2:31092,172.18.0.5:31092
Note: 31092 is the Kafka API port that's exposed by the NodePort Service.
Get the cluster info:
rpk cluster info
Example output
BROKERS
=======
ID HOST PORT
0* redpanda-0.local 31092
1 redpanda-1.local 31092
2 redpanda-2.local 31092
On macOS, the kind tool is meant for local testing. To test external access, use GKE or EKS.
On Windows, the kind tool is meant for local testing. To test external access, use GKE or EKS.
- Linux
- macOS
- Windows
Find the internal IP addresses of all Kubernetes nodes in the cluster:
kubectl get nodes -o wide
Example output
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready control-plane 25h v1.24.1 192.168.49.2 <none> Ubuntu 20.04.4 LTS 5.15.0-41-generic docker://20.10.17
minikube-m02 Ready <none> 25h v1.24.1 192.168.49.3 <none> Ubuntu 20.04.4 LTS 5.15.0-41-generic docker://20.10.17
minikube-m03 Ready <none> 25h v1.24.1 192.168.49.4 <none> Ubuntu 20.04.4 LTS 5.15.0-41-generic docker://20.10.17
minikube-m04 Ready <none> 25h v1.24.1 192.168.49.5 <none> Ubuntu 20.04.4 LTS 5.15.0-41-generic docker://20.10.17
This example shows three Kubernetes nodes with the internal IP addresses 192.168.49.3, 192.168.49.4, and 192.168.49.5.
Find out on which Kubernetes node each Pod is running:
kubectl get pods -o wide -n redpanda
Example output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
redpanda-0 1/1 Running 0 14m 10.244.1.11 minikube-m02 <none> <none>
redpanda-1 1/1 Running 0 14m 10.244.2.11 minikube-m03 <none> <none>
redpanda-2 1/1 Running 0 14m 10.244.3.11 minikube-m04 <none> <none>
This example shows that the redpanda-0 Pod is running on the minikube-m02 Kubernetes node.
Use the node names to find the internal IP address of the Kubernetes node that runs each Pod. For these examples, the result is the following:
| Redpanda broker | Internal IP |
|---|---|
| redpanda-0 | 192.168.49.3 |
| redpanda-1 | 192.168.49.4 |
| redpanda-2 | 192.168.49.5 |
Note: Make sure that the internal IP address of the Kubernetes node matches the correct Redpanda broker. The Kubernetes nodes are not always displayed in the same order as the Pods.
Add entries to the /etc/hosts file on your local machine for each Kubernetes node that's running a Redpanda broker. For example:
192.168.49.3 redpanda-0.local # minikube-m02
192.168.49.4 redpanda-1.local # minikube-m03
192.168.49.5 redpanda-2.local # minikube-m04
Install rpk on your local machine, not on a Pod.
Set the REDPANDA_BROKERS environment variable to the IP addresses of your Kubernetes nodes:
export REDPANDA_BROKERS=192.168.49.3:31092,192.168.49.4:31092,192.168.49.5:31092
Note: 31092 is the Kafka API port that's exposed by the NodePort Service.
Get the cluster info:
rpk cluster info
Example output
BROKERS
=======
ID HOST PORT
0* redpanda-0.local 31092
1 redpanda-1.local 31092
2 redpanda-2.local 31092
On macOS, the minikube tool is meant for local testing. To test external access, use GKE or EKS.
On Windows, the minikube tool is meant for local testing. To test external access, use GKE or EKS.
Add inbound firewall rules to your instances so that external traffic can reach each node port on all Kubernetes worker nodes in the cluster. See the Amazon EC2 documentation for details.
Find the internal and external IP addresses of all Kubernetes nodes in the cluster:
kubectl get nodes -o wide
Example output
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-192-168-31-253.us-east-2.compute.internal Ready <none> 24m v1.22.15-eks-fb459a0 192.168.31.253 203.0.113.3 Amazon Linux 2 5.4.219-126.411.amzn2.x86_64 docker://20.10.17
ip-192-168-41-207.us-east-2.compute.internal Ready <none> 24m v1.22.15-eks-fb459a0 192.168.41.207 203.0.113.5 Amazon Linux 2 5.4.219-126.411.amzn2.x86_64 docker://20.10.17
ip-192-168-90-157.us-east-2.compute.internal Ready <none> 24m v1.22.15-eks-fb459a0 192.168.90.157 203.0.113.7 Amazon Linux 2 5.4.219-126.411.amzn2.x86_64 docker://20.10.17
This example shows three Kubernetes nodes with the external IP addresses 203.0.113.3, 203.0.113.5, and 203.0.113.7.
Find out on which Kubernetes node each Pod is running:
kubectl get pods -o wide -n redpanda
Example output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
redpanda-0 1/1 Running 0 109m 192.168.30.134 ip-192-168-31-253.us-east-2.compute.internal <none> <none>
redpanda-1 1/1 Running 0 109m 192.168.60.112 ip-192-168-41-207.us-east-2.compute.internal <none> <none>
redpanda-2 1/1 Running 0 110m 192.168.68.55 ip-192-168-90-157.us-east-2.compute.internal <none> <none>
This example shows that the redpanda-0 Pod is running on a Kubernetes node with the internal IP address 192.168.31.253.
Combine the internal IP addresses with the output from the previous command to find the external IP address of the Kubernetes node that runs each Pod. For these examples, the result is the following:
| Redpanda broker | External IP |
|---|---|
| redpanda-0 | 203.0.113.3 |
| redpanda-1 | 203.0.113.5 |
| redpanda-2 | 203.0.113.7 |
Note: Make sure that the external IP address of the Kubernetes node matches the correct Redpanda broker. The Kubernetes nodes are not always displayed in the same order as the Pods.
Create a file called my-values.yaml and add the external IP addresses of your Kubernetes nodes to the external.addresses field, making sure to match the order of the Pod names.
my-values.yaml
external:
  addresses:
    - 203.0.113.3
    - 203.0.113.5
    - 203.0.113.7
Note: The order of the IP addresses is important. Each IP address is assigned to a Redpanda broker to advertise, starting from redpanda-0. This is how clients learn how to address individual Redpanda brokers.
Apply these updates to the Helm deployment:
helm upgrade redpanda redpanda/redpanda -n redpanda -f my-values.yaml
Install rpk on your local machine, not on a Pod.
Set the REDPANDA_BROKERS environment variable to the external IP addresses of your Kubernetes nodes:
export REDPANDA_BROKERS=203.0.113.3:31092,203.0.113.5:31092,203.0.113.7:31092
Note: 31092 is the Kafka API port that's exposed by the NodePort Service.
Get the cluster info:
rpk cluster info
Example output
CLUSTER
=======
redpanda.ad4a2f55-f34e-4a7f-babd-01d392cf4e80
BROKERS
=======
ID HOST PORT
0 203.0.113.3 31092
1* 203.0.113.5 31092
2 203.0.113.7 31092
Add inbound firewall rules to your instances so that external traffic can reach each node port on all Kubernetes worker nodes in the cluster. See the Google VPC documentation for details.
Find the external IP addresses of all Kubernetes nodes in the cluster:
kubectl get nodes -o wide
Example output
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-redpanda-default-pool-cba560e9-59kj Ready <none> 7m22s v1.24.5-gke.600 10.142.0.4 203.0.113.3 Container-Optimized OS from Google 5.10.133+ containerd://1.6.6
gke-redpanda-default-pool-cba560e9-5lcd Ready <none> 7m22s v1.24.5-gke.600 10.142.0.5 203.0.113.7 Container-Optimized OS from Google 5.10.133+ containerd://1.6.6
gke-redpanda-default-pool-cba560e9-bjxk Ready <none> 7m22s v1.24.5-gke.600 10.142.0.3 203.0.113.5 Container-Optimized OS from Google 5.10.133+ containerd://1.6.6
This example shows three Kubernetes nodes with the external IP addresses 203.0.113.3, 203.0.113.5, and 203.0.113.7.
Find out on which Kubernetes node each Pod is running:
kubectl get pods -o wide -n redpanda
Example output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
redpanda-0 1/1 Running 0 2m40s 10.52.1.9 gke-redpanda-default-pool-cba560e9-59kj <none> <none>
redpanda-1 1/1 Running 0 2m40s 10.52.0.5 gke-redpanda-default-pool-cba560e9-bjxk <none> <none>
redpanda-2 1/1 Running 0 2m40s 10.52.2.4 gke-redpanda-default-pool-cba560e9-5lcd <none> <none>
This example shows that the redpanda-0 Pod is running on a Kubernetes node with gke-redpanda-default-pool-cba560e9-59kj in its name.
Use the node names to find the external IP address of the Kubernetes node that runs each Pod. For these examples, the result is the following:
| Redpanda broker | External IP |
|---|---|
| redpanda-0 | 203.0.113.3 |
| redpanda-1 | 203.0.113.5 |
| redpanda-2 | 203.0.113.7 |
Note: Make sure that the external IP address of the Kubernetes node matches the correct Redpanda broker. The Kubernetes nodes are not always displayed in the same order as the Pods.
Create a file called my-values.yaml and add the external IP addresses of your Kubernetes nodes to the external.addresses field, making sure to match the order of the Pod names.
my-values.yaml
external:
  addresses:
    - 203.0.113.3
    - 203.0.113.5
    - 203.0.113.7
Note: The order of the IP addresses is important. Each IP address is assigned to a Redpanda broker to advertise, starting from redpanda-0. This is how clients learn how to address individual Redpanda brokers.
Apply these updates to the Helm deployment:
helm upgrade redpanda redpanda/redpanda -n redpanda -f my-values.yaml
Install rpk on your local machine, not on a Pod.
Set the REDPANDA_BROKERS environment variable to the external IP addresses of your Kubernetes nodes:
export REDPANDA_BROKERS=203.0.113.3:31092,203.0.113.5:31092,203.0.113.7:31092
Note: 31092 is the Kafka API port that's exposed by the NodePort Service.
Get the cluster info:
rpk cluster info
Example output
CLUSTER
=======
redpanda.ad4a2f55-f34e-4a7f-babd-01d392cf4e80
BROKERS
=======
ID HOST PORT
0 203.0.113.3 31092
1* 203.0.113.5 31092
2 203.0.113.7 31092
Troubleshooting
StatefulSet never rolls out
If the StatefulSet Pods remain in a pending state, they are waiting for resources to become available:
Make sure that you have a default StorageClass available in your Kubernetes cluster and that the underlying volumes have at least 20Gi of storage capacity.
If you're running on EKS, make sure that you installed the Amazon EBS CSI driver. This driver is required to allow EKS to create PersistentVolumes.
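To find out what a Pending Pod is actually waiting for, inspect its events, for example:
kubectl -n redpanda describe pod redpanda-0
# Scheduling and volume-binding problems are listed under Events at the end of the output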
dig not defined
Error: parse error at (redpanda/templates/statefulset.yaml:203): function "dig" not defined
If you see this error, make sure that you are using the correct version of Helm. See the prerequisites.
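You can check which Helm version is installed with:
helm version --short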
Next steps
Suggested reading
Explore the Redpanda Helm chart's values.yaml
file to learn what else you can configure.
Explore the Redpanda Console Helm chart's values.yaml
file to learn what else you can configure.
Learn more about advertised listeners.