Version: 23.1

Connecting Remotely to Kubernetes

This section shows how to set up Kubernetes with the Redpanda operator on Google GKE, Amazon EKS, or DigitalOcean, so that you can work with Redpanda from outside the Kubernetes network.

Create a Kubernetes cluster

Create a three-node cluster for your Redpanda deployment. The following example uses Amazon EKS; use the EKS Getting Started guide to set it up. When you finish, you have eksctl installed, so that you can create and delete clusters in EKS.

To create a cluster:

eksctl create cluster \
--name redpanda \
--nodegroup-name standard-workers \
--node-type m5.xlarge \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4

The process takes about 10-15 minutes to finish.
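Once creation finishes, a quick way to confirm that the worker nodes joined the cluster is to list them; this sketch assumes eksctl updated your kubeconfig to point at the new cluster:

```shell
# All three worker nodes should report STATUS "Ready".
kubectl get nodes
```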

Check the kubectl context

Most cloud utility tools automatically change your kubectl config file.

To check if you're in the correct context:

kubectl config current-context

For DigitalOcean, for example, the context name looks similar to do-<region>-<cluster-name>.

If you're running multiple clusters, or if the config file wasn't set up automatically, see the Kubernetes documentation.
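If the current context doesn't point at the cluster you just created, you can list the available contexts and switch explicitly; the context name below is a placeholder:

```shell
# Show every context kubectl knows about, then switch to your cluster's.
kubectl config get-contexts
kubectl config use-context <your-cluster-context>
```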

Prepare TLS certificate infrastructure

The Redpanda cluster uses cert-manager to create TLS certificates for communication between the cluster nodes.

To use Helm to install cert-manager:

helm repo add jetstack https://charts.jetstack.io && \
helm repo update && \
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.4.4 \
--set installCRDs=true
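Before installing the operator, you can confirm that cert-manager came up cleanly; this sketch just checks that its pods reach the Running state in the namespace used above:

```shell
# The cert-manager, cainjector, and webhook pods should all be Running.
kubectl get pods --namespace cert-manager
```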

Install the Redpanda operator and cluster

  1. To simplify the commands, create a variable to hold the latest version number:

    export VERSION=$(curl -s https://api.github.com/repos/redpanda-data/redpanda/releases/latest | jq -r .tag_name)

    This section uses jq to parse the JSON response. If you don't have jq installed:

    sudo apt-get update && \
    sudo apt-get install jq

    You can also get operator versions from the list of operator releases.

  2. To install the latest Redpanda operator:

    kubectl apply -k https://github.com/redpanda-data/redpanda/src/go/k8s/config/crd?ref=$VERSION && \
    helm repo add redpanda https://charts.vectorized.io/ && \
    helm repo update && \
    helm install \
    --namespace redpanda-system \
    --create-namespace redpanda-operator \
    redpanda/redpanda-operator \
    --version $VERSION
  3. To install a cluster with external connectivity:

    kubectl apply -f https://raw.githubusercontent.com/redpanda-data/redpanda/$VERSION/src/go/k8s/config/samples/external_connectivity.yaml
  4. To get the addresses of the brokers:

    kubectl get clusters external-connectivity -o=jsonpath='{.status.nodes.external}'

    The broker addresses are shown in the command output as a list of <address>:<port> entries.

    If you don't get any response from this command, check that the pods are healthy and running without errors.

    The following commands help you better understand what's happening:

    kubectl describe statefulset external-connectivity
    kubectl describe pods external-connectivity-0
  5. To configure security access:

    When you run eksctl, it automatically creates many resources for you (a dedicated VPC, a new Security Group, and others). Because of that, you must update your security configuration and open the ports that external connectivity uses before you can complete the next steps. The easiest way to do that is to:

    a. Get the ports that you need to open from the output of the command in the previous step.

    b. Go to your Security Group configuration and check the newly created rule for your cluster.

    c. Open TCP traffic to the ports.

    For more information, see the AWS guide for configuring VPCs and Security Groups.
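The jsonpath query in step 4 returns the external broker addresses; queried as {.status.nodes.external[*]}, they print space-separated. As a minimal sketch (using made-up placeholder addresses), you can join them into the comma-separated string that rpk's --brokers flag expects:

```shell
# Placeholder addresses standing in for the output of the jsonpath query.
external="203.0.113.10:30249 203.0.113.11:30249 203.0.113.12:30249"

# Join the space-separated addresses with commas for rpk --brokers.
BROKERS=$(echo "$external" | tr ' ' ',')
echo "$BROKERS"
# 203.0.113.10:30249,203.0.113.11:30249,203.0.113.12:30249
```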

Verify the connection

  1. From a remote machine that has rpk installed, get information about the cluster:

    rpk --brokers <address1>:<port>,<address2>:<port>,<address3>:<port> \
    cluster info

    Check that you're using the correct addresses and ports. Otherwise, you may run into errors like the following:

    unable to create topics [chat-rooms]: invalid large response size 1213486160 > limit 104857600
  2. To create a topic in your Redpanda cluster:

    rpk --brokers <address1>:<port>,<address2>:<port>,<address3>:<port> \
    topic create chat-rooms -p 5
  3. To show the list of topics:

    rpk --brokers <address1>:<port>,<address2>:<port>,<address3>:<port> \
    topic list
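As a final end-to-end check, you can produce a message to the new topic and read it back. rpk topic produce reads records from stdin and rpk topic consume prints them (the --num flag limits how many records are consumed, assuming a recent rpk); the broker list is a placeholder as above:

```shell
# Produce one message to the topic, then consume a single record back.
echo "hello redpanda" | rpk --brokers <address1>:<port> topic produce chat-rooms
rpk --brokers <address1>:<port> topic consume chat-rooms --num 1
```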

