Kubernetes Quick Start Guide
With Redpanda you can get up and running with streaming quickly and be fully compatible with the Kafka ecosystem.
This quick start guide can help you get started with Redpanda for development and testing purposes. To get up and running you need to create a cluster and deploy the Redpanda operator on the cluster.
- For production or benchmarking, set up a production deployment.
- You can also set up a Kubernetes cluster with external access.
Note - Run a container inside the Kubernetes cluster to communicate with the Redpanda cluster. A load balancer is not automatically created during deployment.
Note - In the steps below, the .yaml file that you use to install Redpanda sets developerMode: true. If you want to set developerMode: false, for an optimal configuration it is recommended that you run rpk redpanda tune all directly on the host before you create a Redpanda cluster. You can find more information about the command, as well as tuning recommendations, in the Set Redpanda production mode documentation. If rpk is not available, verify that fs.aio-max-nr is set to 1048576 or greater. You can set fs.aio-max-nr by running sysctl -w fs.aio-max-nr=1048576.
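If rpk is unavailable, you can check the current value before changing it. A minimal shell sketch, assuming a Linux host where the value is readable from /proc:

```shell
# Read the current fs.aio-max-nr (Linux exposes it under /proc);
# fall back to 0 if the file is missing so the comparison still works.
current=$(cat /proc/sys/fs/aio-max-nr 2>/dev/null || echo 0)
if [ "$current" -ge 1048576 ]; then
  echo "fs.aio-max-nr=$current (ok)"
else
  echo "fs.aio-max-nr=$current (raise it: sudo sysctl -w fs.aio-max-nr=1048576)"
fi
```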
Before you start installing Redpanda, you need to set up your Kubernetes environment.
Install Kubernetes, Helm, and cert-manager
You'll need to install:
- Kubernetes v1.19 or above
- kubectl v1.19 or above
- helm v3.0.0 or above
- cert-manager v1.5.0 or above
Follow the instructions to verify that cert-manager is ready to create certificates.
Make sure you also have these common tools installed:
To run locally:
- Kind v0.12 or above
Make sure that you have kind configured in your PATH. This reference in the Go documentation can help you configure the path.
Create a Kubernetes cluster
You can either create a Kubernetes cluster on your local machine or on a cloud provider.
- Kind (local)
- AWS EKS
- Google GKE
- Digital Ocean
Kind is a tool that lets you create local Kubernetes clusters using Docker. After you install Kind, verify that port 30001 is free on your machine before you set up a cluster, or cluster creation will fail. Set up a cluster with:
kind create cluster --config docs/kind-external.yaml
- role: control-plane
- containerPort: 30001
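The two config lines above come from that file. A fuller sketch of what docs/kind-external.yaml likely contains, assuming the Kind v1alpha4 config API (verify against the actual file in the Redpanda repository), mapping node port 30001 to the host:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30001
    hostPort: 30001
```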
Use the EKS Getting Started guide to set up EKS.
When you finish, you'll have
eksctl installed so that you can create and delete clusters in EKS.
Then, create an EKS cluster with:
eksctl create cluster \
--name redpanda \
--nodegroup-name standard-workers \
--node-type m5.xlarge \
--nodes 1 \
--nodes-min 1
It will take about 10-15 minutes for the process to finish.
First complete the "Before You Begin" steps described in the Google Kubernetes Engine Quickstart. Then, create a cluster with:
gcloud container clusters create redpanda --machine-type n1-standard-4 --num-nodes=1
You may need to add a --zone flag to this command.
First, set up your Digital Ocean account and install doctl, the Digital Ocean command-line tool. Remember to set up your personal access token. For additional information, check out the Digital Ocean setup docs.
Then you can create a cluster for your Redpanda deployment:
doctl kubernetes cluster create redpanda --wait --size s-4vcpu-8gb
Most cloud utility tools will automatically change your
kubectl config file.
To check if you're in the correct context, run the command:
kubectl config current-context
For Digital Ocean, for example, the output will be the context name of the cluster you created, which starts with do-.
If you're running multiple clusters or if the config file wasn't set up automatically, look for more information in the Kubernetes documentation.
The Redpanda operator requires cert-manager to create certificates for TLS communication. You can install cert-manager by applying its CRD manifests directly, but here's the command to install it using Helm:
helm repo add jetstack https://charts.jetstack.io && \
helm repo update && \
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.8.0 \
--set installCRDs=true
We recommend that you use the verification procedure in the cert-manager docs to verify that cert-manager is working correctly.
Use Helm to install the Redpanda operator
Using Helm, add the Redpanda chart repository and update it:
helm repo add redpanda https://charts.vectorized.io/ && \
helm repo update
To simplify the commands, create a variable to hold the latest version number:
export VERSION=$(curl -s https://api.github.com/repos/redpanda-data/redpanda/releases/latest | jq -r .tag_name)
You can find information about the versions of the operator in the list of operator releases.
This command uses jq to parse the response. If you don't have jq installed, run one of the following:
On Debian/Ubuntu:
sudo apt-get update && \
sudo apt-get install jq
On macOS:
brew install jq
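As an aside, here is what the jq filter in the VERSION command does, sketched against a canned API response (the tag value below is made up) instead of a live call to api.github.com:

```shell
# Canned stand-in for the GitHub releases API response; only the field
# the command actually reads is included here.
response='{"tag_name":"v22.1.3"}'
VERSION=$(echo "$response" | jq -r .tag_name)
echo "$VERSION"   # -r strips the JSON quotes, leaving the bare tag
```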
Install the Redpanda operator CRD:
kubectl apply \
If you use zsh, prefix the command with noglob:
noglob kubectl apply \
Install the Redpanda operator on your Kubernetes cluster with:
helm install \
--namespace redpanda-system \
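The kubectl apply and helm install commands above are truncated in this copy of the guide. As a hypothetical sketch of the full sequence (the CRD URL, the chart name redpanda/redpanda-operator, and the release name are assumptions; take the exact values from the Redpanda operator docs for your version), with each command echoed so it can be previewed without a live cluster:

```shell
# Hypothetical sketch -- verify the URL and chart name against the Redpanda
# operator docs before running for real (drop the leading echo to execute).
VERSION=${VERSION:-v21.11.15}   # arbitrary example pin if VERSION is unset

# CRD install; zsh users need the noglob prefix because the '?' in the URL
# would otherwise be treated as a glob pattern.
echo kubectl apply \
  -k "https://github.com/redpanda-data/redpanda/src/go/k8s/config/crd?ref=${VERSION}"

# Operator install into the redpanda-system namespace.
echo helm install \
  redpanda-operator \
  redpanda/redpanda-operator \
  --namespace redpanda-system \
  --create-namespace \
  --version "${VERSION}"
```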
Install and connect to a Redpanda cluster
After you set up Redpanda in your Kubernetes cluster, you can use our samples to install a cluster and see Redpanda in action.
Let's try setting up a Redpanda topic to handle a stream of events from a chat application with 5 chat rooms:
Create a namespace for your cluster:
kubectl create ns chat-with-me
Install a cluster from our sample files, for example the single-node cluster:
kubectl apply \
-n chat-with-me \
You can see the resource configuration options in the cluster_types file.
Check that your /etc/hosts file maps 0.local.rp to 127.0.0.1. It will contain a line similar to this:
127.0.0.1 0.local.rp
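A quick way to check for the mapping from the shell (a sketch; 0.local.rp is the hostname used by the sample cluster configuration):

```shell
# Look for the 0.local.rp entry in /etc/hosts and report what to do if
# it is missing. The check itself needs no special privileges.
if grep -q '0\.local\.rp' /etc/hosts; then
  echo "0.local.rp mapping present"
else
  echo "missing - add this line to /etc/hosts: 127.0.0.1 0.local.rp"
fi
```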
Do some streaming
Here are some sample commands to produce and consume streams:
Create a topic. We'll call it "twitch_chat":
rpk topic create twitch_chat --brokers=0.local.rp:30001
Produce messages to the topic:
rpk topic produce twitch_chat --brokers=0.local.rp:30001
Type text into the topic and press Ctrl + D to separate messages.
Press Ctrl + C to exit the produce command.
Consume (or read) the messages in the topic:
rpk topic consume twitch_chat --brokers=0.local.rp:30001
Each message is shown with its metadata, like this:
"message": "How do you stream with Redpanda?\n",
- Check out our in-depth explanation of how to connect external clients to a Redpanda Kubernetes deployment.
- Contact us in our Slack community so we can work together to implement your Kubernetes use cases.