Deploy a Redpanda Cluster in Amazon Elastic Kubernetes Service
Deploy a secure Redpanda cluster and Redpanda Console in Amazon Elastic Kubernetes Service (EKS) using the Helm chart. Then, use rpk both as an internal client and an external client to interact with your Redpanda cluster from the command line.
The Redpanda cluster has the following security features:
- SASL for authenticating users' connections.
- TLS with self-signed certificates for secure communication between the cluster and clients.
Looking for the Redpanda Operator?
If you’re an existing user of the Redpanda Operator, see the Redpanda Operator documentation.
The Redpanda Operator is for experienced users. It was built for Redpanda Cloud and has unique features and workflows for that specific use case. Redpanda Data recommends the Redpanda Helm chart for new users and for those who are getting started.
Prerequisites
Before you begin, you must have the following prerequisites.
IAM user
You need an IAM user with at least the following policies. Replace <account-id> with your own AWS account ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "ec2:*",
"Effect": "Allow",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "elasticloadbalancing:*",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "cloudwatch:*",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "autoscaling:*",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "iam:CreateServiceLinkedRole",
"Resource": "*",
"Condition": {
"StringEquals": {
"iam:AWSServiceName": [
"autoscaling.amazonaws.com",
"ec2scheduled.amazonaws.com",
"elasticloadbalancing.amazonaws.com",
"spot.amazonaws.com",
"spotfleet.amazonaws.com",
"transitgateway.amazonaws.com"
]
}
}
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"cloudformation:*"
],
"Resource": "*"
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "eks:*",
"Resource": "*"
},
{
"Action": [
"ssm:GetParameter",
"ssm:GetParameters"
],
"Resource": [
"arn:aws:ssm:*:<account-id>:parameter/aws/*",
"arn:aws:ssm:*::parameter/aws/*"
],
"Effect": "Allow"
},
{
"Action": [
"kms:CreateGrant",
"kms:DescribeKey"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Action": [
"logs:PutRetentionPolicy"
],
"Resource": "*",
"Effect": "Allow"
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"iam:CreateInstanceProfile",
"iam:DeleteInstanceProfile",
"iam:GetInstanceProfile",
"iam:RemoveRoleFromInstanceProfile",
"iam:GetRole",
"iam:CreateRole",
"iam:DeleteRole",
"iam:AttachRolePolicy",
"iam:PutRolePolicy",
"iam:ListInstanceProfiles",
"iam:AddRoleToInstanceProfile",
"iam:ListInstanceProfilesForRole",
"iam:PassRole",
"iam:DetachRolePolicy",
"iam:DeleteRolePolicy",
"iam:GetRolePolicy",
"iam:GetOpenIDConnectProvider",
"iam:CreateOpenIDConnectProvider",
"iam:DeleteOpenIDConnectProvider",
"iam:TagOpenIDConnectProvider",
"iam:ListAttachedRolePolicies",
"iam:TagRole",
"iam:GetPolicy",
"iam:CreatePolicy",
"iam:DeletePolicy",
"iam:ListPolicyVersions"
],
"Resource": [
"arn:aws:iam::<account-id>:instance-profile/eksctl-*",
"arn:aws:iam::<account-id>:role/eksctl-*",
"arn:aws:iam::<account-id>:policy/eksctl-*",
"arn:aws:iam::<account-id>:oidc-provider/*",
"arn:aws:iam::<account-id>:role/aws-service-role/eks-nodegroup.amazonaws.com/AWSServiceRoleForAmazonEKSNodegroup",
"arn:aws:iam::<account-id>:role/eksctl-managed-*",
"arn:aws:iam::<account-id>:role/AmazonEKS_EBS_CSI_DriverRole"
]
},
{
"Effect": "Allow",
"Action": [
"iam:GetRole"
],
"Resource": [
"arn:aws:iam::<account-id>:role/*"
]
},
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"iam:AWSServiceName": [
"eks.amazonaws.com",
"eks-nodegroup.amazonaws.com",
"eks-fargate.amazonaws.com"
]
}
}
}
]
}
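If you prefer to set these up from the command line rather than the AWS console, one approach (the policy name and file name here are illustrative, not required) is to save each JSON document to a file, create a managed policy from it, and attach the policy to your IAM user:
# Create a managed policy from one of the JSON documents above.
aws iam create-policy \
  --policy-name redpanda-eks-quickstart \
  --policy-document file://eks-policy.json

# Attach it to your IAM user. Repeat for each policy document.
aws iam attach-user-policy \
  --user-name <iam-user> \
  --policy-arn arn:aws:iam::<account-id>:policy/redpanda-eks-quickstart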
See the AWS documentation for help creating IAM users or for help troubleshooting IAM.
AWS CLI
You need the AWS CLI to get your AWS account ID and to configure your kubeconfig.
Install the AWS CLI. After you’ve installed the AWS CLI, make sure to configure it with credentials for your IAM user.
If your account uses an identity provider in the IAM Identity Center (previously AWS SSO), authenticate with the IAM Identity Center (aws sso login).
For troubleshooting, see the AWS CLI documentation.
eksctl
You need eksctl to create an EKS cluster. Install eksctl.
jq
You need jq to parse JSON results and store values in environment variables. Install jq.
kubectl
You must have kubectl with the following minimum required Kubernetes version: 1.21.
To check if you have kubectl installed:
kubectl version --short --client
Create an EKS cluster
In this step, you create an EKS cluster with three worker nodes: one for each Redpanda broker.
You also configure your EKS cluster to allow external access to the node ports on which the Redpanda deployment will be exposed. You’ll use these node ports in later steps to configure external access to your Redpanda cluster.
The Helm chart configures podAntiAffinity rules to make sure that only one Redpanda broker Pod is scheduled on each worker node. For more information, see Kubernetes Cluster Requirements.
- Create an EKS cluster in your default region and give it a unique name:
  eksctl create cluster --name <cluster-name> \
    --external-dns-access \
    --nodegroup-name standard-workers \
    --node-type m5.xlarge \
    --nodes 3 \
    --nodes-min 3 \
    --nodes-max 4
  If your account is configured for OIDC, add the --with-oidc flag to the create cluster command:
  eksctl create cluster --with-oidc --name <cluster-name> \
    --external-dns-access \
    --nodegroup-name standard-workers \
    --node-type m5.xlarge \
    --nodes 3 \
    --nodes-min 3 \
    --nodes-max 4
  To see all options that you can specify when creating a cluster, use the following command:
  eksctl create cluster --help
  Or, for help creating an EKS cluster, see the EKS documentation.
- Make sure that your local kubeconfig file points to your EKS cluster:
  kubectl get service
  You should see a ClusterIP Service called kubernetes.
  If the kubectl command cannot connect to your cluster, update your local kubeconfig file to point to your EKS cluster. Your default region is located in the ~/.aws/credentials file.
  aws eks update-kubeconfig --region <region> --name <cluster-name>
- Create the IAM role needed for the Amazon Elastic Block Store (EBS) Container Storage Interface (CSI) driver:
  eksctl create iamserviceaccount \
    `# Do not change the name. It is required by EKS.` \
    --name ebs-csi-controller-sa \
    `# Do not change the namespace. It is required by EKS.` \
    --namespace kube-system \
    --cluster <cluster-name> \
    --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
    --approve \
    --role-only \
    --role-name AmazonEKS_EBS_CSI_DriverRole-<cluster-name>
- Get your AWS account ID:
  AWS_ACCOUNT_ID=`aws sts get-caller-identity | jq -r '.Account'`
- Add the EBS CSI add-on:
  eksctl create addon \
    --name aws-ebs-csi-driver \
    --cluster <cluster-name> \
    --service-account-role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/AmazonEKS_EBS_CSI_DriverRole-<cluster-name> \
    --force
- Get the ID of the security group associated with the nodes in your EKS cluster:
  AWS_SECURITY_GROUP_ID=`aws eks describe-cluster --name <cluster-name> | jq -r '.cluster.resourcesVpcConfig.clusterSecurityGroupId'`
- Add inbound firewall rules to your EC2 instances so that external traffic can reach the node ports exposed on all Kubernetes worker nodes in the cluster:
  aws ec2 authorize-security-group-ingress \
    --group-id ${AWS_SECURITY_GROUP_ID} \
    --ip-permissions "[
      {\"IpProtocol\": \"tcp\", \"FromPort\": 30081, \"ToPort\": 30081, \"IpRanges\": [{\"CidrIp\": \"0.0.0.0/0\"}]},
      {\"IpProtocol\": \"tcp\", \"FromPort\": 30082, \"ToPort\": 30082, \"IpRanges\": [{\"CidrIp\": \"0.0.0.0/0\"}]},
      {\"IpProtocol\": \"tcp\", \"FromPort\": 31644, \"ToPort\": 31644, \"IpRanges\": [{\"CidrIp\": \"0.0.0.0/0\"}]},
      {\"IpProtocol\": \"tcp\", \"FromPort\": 31092, \"ToPort\": 31092, \"IpRanges\": [{\"CidrIp\": \"0.0.0.0/0\"}]}
    ]"
  If you use 0.0.0.0/0, you enable all IPv4 addresses to access your instances on those node ports. In production, you should authorize only a specific IP address or range of addresses to access your instances. For help creating firewall rules, see the Amazon EC2 documentation.
Deploy Redpanda and Redpanda Console
In this step, you deploy Redpanda with SASL authentication and self-signed TLS certificates. Redpanda Console is included as a subchart in the Redpanda Helm chart.
- Install cert-manager using Helm:
  helm repo add jetstack https://charts.jetstack.io
  helm repo update
  helm install cert-manager jetstack/cert-manager \
    --set installCRDs=true \
    --namespace cert-manager \
    --create-namespace
  TLS is enabled by default, and the Redpanda Helm chart uses cert-manager to manage the TLS certificates.
- Install Redpanda with SASL enabled:
  helm repo add redpanda https://charts.redpanda.com
  export DOMAIN=customredpandadomain.local && \
  helm install redpanda redpanda/redpanda \
    --namespace redpanda --create-namespace \
    --set auth.sasl.enabled=true \
    --set "auth.sasl.users[0].name=superuser" \
    --set "auth.sasl.users[0].password=secretpassword" \
    --set external.domain=${DOMAIN} --wait
  Here, you create a superuser called superuser that can grant permissions to new users in your cluster using access control lists (ACLs).
  The installation displays some tips for getting started.
  If the installation is taking a long time, see the Troubleshoot section.
- Verify that each Redpanda broker is scheduled on only one Kubernetes node:
  kubectl get pod -n redpanda \
    -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name \
    -l app.kubernetes.io/component=redpanda-statefulset
  Example output:
  example-worker3   redpanda-0
  example-worker2   redpanda-1
  example-worker    redpanda-2
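This one-broker-per-node layout comes from the chart's podAntiAffinity rules. If you want to see the rule that produced it, you can print it from a broker Pod's spec (the output is a single JSON object):
kubectl -n redpanda get pod redpanda-0 \
  -o jsonpath='{.spec.affinity.podAntiAffinity}'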
Create a user
In this step, you use rpk to create a new user. Then, you authenticate to Redpanda with the superuser to grant permissions to the new user. You’ll authenticate to Redpanda with this new user to create a topic in the next steps.
As a security best practice, use the superuser only to grant permissions to new users through ACLs. Never delete the superuser; you need it to grant permissions to new users.
- Create a new user called redpanda-twitch-account with the password changethispassword:
  kubectl -n redpanda exec -ti redpanda-0 -c redpanda -- \
    rpk acl user create redpanda-twitch-account \
    -p changethispassword \
    --admin-api-tls-enabled \
    --admin-api-tls-truststore /etc/tls/certs/default/ca.crt \
    --api-urls redpanda-0.redpanda.redpanda.svc.cluster.local.:9644,redpanda-1.redpanda.redpanda.svc.cluster.local.:9644,redpanda-2.redpanda.redpanda.svc.cluster.local.:9644
  Example output:
  Created user "redpanda-twitch-account".
- Use the superuser to grant the redpanda-twitch-account user permission to execute all operations only for a topic called twitch_chat:
  kubectl exec -n redpanda -c redpanda redpanda-0 -- \
    rpk acl create --allow-principal User:redpanda-twitch-account \
    --operation all \
    --topic twitch_chat \
    --tls-enabled \
    --tls-truststore /etc/tls/certs/default/ca.crt \
    --user=superuser --password=secretpassword --sasl-mechanism SCRAM-SHA-512 \
    --brokers redpanda-0.redpanda.redpanda.svc.cluster.local:9093
  Example output:
  PRINCIPAL                     RESOURCE-TYPE  RESOURCE-NAME  OPERATION  PERMISSION
  User:redpanda-twitch-account  TOPIC          twitch_chat    ALL        ALLOW
Start streaming
In this step, you authenticate to Redpanda with the redpanda-twitch-account user to create a topic called twitch_chat. This topic is the only one that the redpanda-twitch-account user has permission to access. Then, you produce messages to the topic and consume messages from it.
- Create an alias to simplify the rpk commands:
  alias rpk-topic="kubectl -n redpanda exec -i -t redpanda-0 -c redpanda -- rpk topic --brokers redpanda-0.redpanda.redpanda.svc.cluster.local.:9093,redpanda-1.redpanda.redpanda.svc.cluster.local.:9093,redpanda-2.redpanda.redpanda.svc.cluster.local.:9093 --tls-truststore /etc/tls/certs/default/ca.crt --tls-enabled --user=redpanda-twitch-account --password=changethispassword --sasl-mechanism SCRAM-SHA-256"
- Create a topic called twitch_chat:
  rpk-topic create twitch_chat
  Example output:
  TOPIC        STATUS
  twitch_chat  OK
- Describe the topic:
  rpk-topic describe twitch_chat
  Example output:
  SUMMARY
  =======
  NAME        twitch_chat
  PARTITIONS  1
  REPLICAS    1

  CONFIGS
  =======
  KEY                           VALUE       SOURCE
  cleanup.policy                delete      DYNAMIC_TOPIC_CONFIG
  compression.type              producer    DEFAULT_CONFIG
  max.message.bytes             1048576     DEFAULT_CONFIG
  message.timestamp.type        CreateTime  DEFAULT_CONFIG
  redpanda.remote.delete        true        DEFAULT_CONFIG
  redpanda.remote.read          false       DEFAULT_CONFIG
  redpanda.remote.write         false       DEFAULT_CONFIG
  retention.bytes               -1          DEFAULT_CONFIG
  retention.local.target.bytes  -1          DEFAULT_CONFIG
  retention.local.target.ms     86400000    DEFAULT_CONFIG
  retention.ms                  604800000   DEFAULT_CONFIG
  segment.bytes                 134217728   DEFAULT_CONFIG
  segment.ms                    1209600000  DEFAULT_CONFIG
- Produce a message to the topic:
  rpk-topic produce twitch_chat
- Type a message, then press Enter:
  Pandas are fabulous!
  Example output:
  Produced to partition 0 at offset 0 with timestamp 1663282629789.
- Press Ctrl+C to finish producing messages to the topic.
- Consume one message from the topic:
  rpk-topic consume twitch_chat --num 1
  Example output:
  {
    "topic": "twitch_chat",
    "value": "Pandas are fabulous!",
    "timestamp": 1663282629789,
    "partition": 0,
    "offset": 0
  }
Explore your topic in Redpanda Console
Redpanda Console is a developer-friendly web UI for managing and debugging your Redpanda cluster and your applications.
In this step, you use port-forwarding to access Redpanda Console on your local network.
Because you’re using the Community Edition of Redpanda Console, you should not expose Redpanda Console outside your local network. The Community Edition does not provide authentication, and it connects to the Redpanda cluster as the superuser. To use the Enterprise Edition, you need a license key; see Redpanda Licensing.
- Expose Redpanda Console to your localhost:
  kubectl -n redpanda port-forward svc/redpanda-console 8080:8080
  The kubectl port-forward command actively runs in the command-line window. To execute other commands while the command is running, open another command-line window.
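As a quick check that the forward is working, you can request the Console from the second window and confirm that it returns an HTTP 200 status code:
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080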
- Open Redpanda Console on http://localhost:8080.
  All your Redpanda brokers are listed along with their IP addresses and IDs.
- Go to Topics > twitch_chat.
  The message that you produced to the topic is displayed along with some other details about the topic.
- Press Ctrl+C in the command-line window to stop the port-forwarding process.
Configure external access to Redpanda
If you want to connect to the Redpanda cluster with external clients, Redpanda brokers must advertise an externally accessible address that external clients can connect to. External clients are common in Internet of Things (IoT) environments, or if you use external services that do not implement VPC peering in your network.
When you created the cluster, you set the external.domain configuration to customredpandadomain.local, which means that your Redpanda brokers are advertising the following addresses:
- redpanda-0.customredpandadomain.local
- redpanda-1.customredpandadomain.local
- redpanda-2.customredpandadomain.local
To access your Redpanda brokers externally, you can map your worker nodes' IP addresses to these domains.
IP addresses can change. If the IP addresses of your worker nodes change, you must update your /etc/hosts file with the new mappings. In a production environment, it’s best practice to use ExternalDNS to manage DNS records for your brokers. See Use ExternalDNS for external access.
- Add mappings in your /etc/hosts file between your worker nodes' IP addresses and their custom domain names:
  sudo true && \
  kubectl -n redpanda get endpoints,node -A -o go-template='{{ range $_ := .items }}{{ if and (eq .kind "Endpoints") (eq .metadata.name "redpanda-external") }}{{ range $_ := (index .subsets 0).addresses }}{{ $nodeName := .nodeName }}{{ $podName := .targetRef.name }}{{ range $node := $.items }}{{ if and (eq .kind "Node") (eq .metadata.name $nodeName) }}{{ range $_ := .status.addresses }}{{ if eq .type "ExternalIP" }}{{ .address }} {{ $podName }}.${DOMAIN}{{ "\n" }}{{ end }}{{ end }}{{ end }}{{ end }}{{ end }}{{ end }}{{ end }}' | envsubst | sudo tee -a /etc/hosts
  Example lines in /etc/hosts:
  203.0.113.3 redpanda-0.customredpandadomain.local
  203.0.113.5 redpanda-1.customredpandadomain.local
  203.0.113.7 redpanda-2.customredpandadomain.local
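To confirm that the new mappings resolve on your machine, you can look one of them up (getent is available on most Linux systems; this check is optional):
getent hosts redpanda-0.customredpandadomain.local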
- Save the root certificate authority (CA) to your local file system outside Kubernetes:
  kubectl -n redpanda get secret redpanda-default-root-certificate -o go-template='{{ index .data "ca.crt" | base64decode }}' > ca.crt
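If you want to check what you saved, you can print the CA certificate's subject and validity window with openssl (assuming openssl is installed locally):
openssl x509 -in ca.crt -noout -subject -dates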
- Install rpk on your local machine, not on a Pod:
  Linux
  - Download the rpk archive for Linux, and make sure the version matches your Redpanda version.
    To download the latest version of rpk:
    curl -LO \
      https://github.com/redpanda-data/redpanda/releases/latest/download/rpk-linux-amd64.zip
    To download a version other than the latest:
    curl -LO \
      https://github.com/redpanda-data/redpanda/releases/download/v<version>/rpk-linux-amd64.zip
  - Ensure that you have the folder ~/.local/bin:
    mkdir -p ~/.local/bin
  - Add it to your $PATH:
    export PATH="$HOME/.local/bin:$PATH"
  - Unzip the rpk files to your ~/.local/bin/ directory:
    unzip rpk-linux-amd64.zip -d ~/.local/bin/
  - Run rpk version to display the version of the rpk binary:
    rpk version
    23.1.13 (rev 9eefb907c)
  macOS
  - If you don’t have Homebrew installed, install it.
  - Install rpk:
    brew install redpanda-data/tap/redpanda
  - Run rpk version to display the version of the rpk binary:
    rpk version
    23.1.13 (rev 9eefb907c)
  This method installs the latest version of rpk, which is supported only with the latest version of Redpanda.
- Set the REDPANDA_BROKERS environment variable to the custom domains of your Redpanda brokers:
  export REDPANDA_BROKERS=redpanda-0.customredpandadomain.local:31092,redpanda-1.customredpandadomain.local:31092,redpanda-2.customredpandadomain.local:31092
  31092 is the Kafka API port that’s exposed by the default NodePort Service.
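Before describing the topic, you can sanity-check external connectivity with rpk cluster info, which reads REDPANDA_BROKERS and prints the brokers it reached (the flags mirror the ones used in the next step):
rpk cluster info \
  --tls-enabled --tls-truststore=ca.crt \
  --user=redpanda-twitch-account --password=changethispassword \
  --sasl-mechanism SCRAM-SHA-256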
- Describe the topic:
  rpk topic describe twitch_chat \
    --tls-enabled \
    --tls-truststore=ca.crt \
    --user=redpanda-twitch-account \
    --password=changethispassword \
    --sasl-mechanism SCRAM-SHA-256
  Example output:
  SUMMARY
  =======
  NAME        twitch_chat
  PARTITIONS  1
  REPLICAS    1

  CONFIGS
  =======
  KEY                     VALUE                        SOURCE
  cleanup.policy          delete                       DYNAMIC_TOPIC_CONFIG
  compression.type        producer                     DEFAULT_CONFIG
  message.timestamp.type  CreateTime                   DEFAULT_CONFIG
  partition_count         1                            DYNAMIC_TOPIC_CONFIG
  redpanda.datapolicy     function_name: script_name:  DEFAULT_CONFIG
  redpanda.remote.read    false                        DEFAULT_CONFIG
  redpanda.remote.write   false                        DEFAULT_CONFIG
  replication_factor      1                            DYNAMIC_TOPIC_CONFIG
  retention.bytes         -1                           DEFAULT_CONFIG
  retention.ms            604800000                    DEFAULT_CONFIG
  segment.bytes           1073741824                   DEFAULT_CONFIG
Explore the default Kubernetes components
By default, the Redpanda Helm chart deploys the following Kubernetes components:
- A StatefulSet with three Pods.
- One PersistentVolumeClaim for each Pod, each with a capacity of 20Gi.
- A headless ClusterIP Service and a NodePort Service for each Kubernetes node that runs a Redpanda broker.
StatefulSet
Redpanda is a stateful application. Each Redpanda broker needs to store its own state (topic partitions) in its own storage volume. As a result, the Helm chart deploys a StatefulSet to manage the Pods in which the Redpanda brokers are running.
kubectl get statefulset -n redpanda
Example output:
NAME      READY  AGE
redpanda  3/3    3m11s
StatefulSets ensure that the state associated with a particular Pod replica is always the same, no matter how often the Pod is recreated. Each Pod is also given a unique ordinal number in its name, such as redpanda-0. A Pod with a particular ordinal number is always associated with a PersistentVolumeClaim with the same number. When a Pod in the StatefulSet is deleted and recreated, it is given the same ordinal number, and so it mounts the same storage volume as the deleted Pod that it replaced.
kubectl get pod -n redpanda
Example output:
NAME                               READY  STATUS     RESTARTS  AGE
redpanda-0                         1/1    Running    0         6m9s
redpanda-1                         1/1    Running    0         6m9s
redpanda-2                         1/1    Running    0         6m9s
redpanda-console-5ff45cdb9b-6z2vs  1/1    Running    0         5m
redpanda-configuration-smqv7       0/1    Completed  0         6m9s
The redpanda-configuration Job updates the Redpanda runtime configuration.
PersistentVolumeClaim
Redpanda brokers must be able to store their data on disk. By default, the Helm chart uses the default StorageClass in the Kubernetes cluster to create a PersistentVolumeClaim for each Pod. The default StorageClass in your Kubernetes cluster depends on the Kubernetes platform that you are using.
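To see which StorageClass your cluster treats as the default (it is marked with "(default)" next to its name), you can list the available classes:
kubectl get storageclass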
kubectl get persistentvolumeclaims -n redpanda
Example output:
NAME                STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
datadir-redpanda-0  Bound   pvc-3311ade3-de84-4027-80c6-3d8347302962  20Gi      RWO           standard      75s
datadir-redpanda-1  Bound   pvc-4ea8bc03-89a6-41e4-b985-99f074995f08  20Gi      RWO           standard      75s
datadir-redpanda-2  Bound   pvc-45c3555f-43bc-48c2-b209-c284c8091c45  20Gi      RWO           standard      75s
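You can also print the exact claim that a given Pod mounts to see the Pod-to-PVC pairing directly (datadir is the chart's default volume name, matching the claim names above):
kubectl -n redpanda get pod redpanda-0 \
  -o jsonpath="{.spec.volumes[?(@.name=='datadir')].persistentVolumeClaim.claimName}"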
Service
The clients writing to or reading from a given partition have to connect directly to the leader broker that hosts the partition. As a result, clients need to be able to connect directly to each Pod. To allow internal and external clients to connect to each Pod that hosts a Redpanda broker, the Helm chart configures two Services:
- Internal, using the headless ClusterIP Service
- External, using the NodePort Service
kubectl get service -n redpanda
Example output:
NAME               TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)                                                      AGE
redpanda           ClusterIP  None           <none>       <none>                                                       5m37s
redpanda-console   ClusterIP  10.0.251.204   <none>       8080                                                         5m
redpanda-external  NodePort   10.96.137.220  <none>       9644:31644/TCP,9094:31092/TCP,8083:30082/TCP,8080:30081/TCP  5m37s
Headless ClusterIP Service
The headless Service associated with a StatefulSet gives the Pods their network identity in the form of a fully qualified domain name (FQDN). Both Redpanda brokers in the same Redpanda cluster and clients within the same Kubernetes cluster use this FQDN to communicate with each other.
An important requirement of distributed applications such as Redpanda is peer discovery: The ability for each broker to find other brokers in the same cluster.
When each Pod is rolled out, its seed_servers field is updated with the FQDN of each Pod in the cluster so that the brokers can discover each other.
kubectl -n redpanda exec redpanda-0 -c redpanda -- cat etc/redpanda/redpanda.yaml
redpanda:
data_directory: /var/lib/redpanda/data
empty_seed_starts_cluster: false
seed_servers:
- host:
address: redpanda-0.redpanda.redpanda.svc.cluster.local.
port: 33145
- host:
address: redpanda-1.redpanda.redpanda.svc.cluster.local.
port: 33145
- host:
address: redpanda-2.redpanda.redpanda.svc.cluster.local.
port: 33145
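To confirm that these FQDNs resolve inside the cluster, you could run a one-off lookup Pod (the busybox image is an arbitrary choice for a quick DNS check):
kubectl -n redpanda run dns-test --rm -i --restart=Never \
  --image=busybox:1.36 -- nslookup redpanda-0.redpanda.redpanda.svc.cluster.local.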
NodePort Service
External access is made available by a NodePort Service that opens the following ports by default for the listeners:

Node port  Pod port  Listener
30081      8081      Schema Registry
30082      8083      HTTP Proxy
31092      9094      Kafka API
31644      9644      Admin API
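You can cross-check this table against the live Service; the jsonpath below prints each listener's name with its port and node port:
kubectl -n redpanda get service redpanda-external \
  -o jsonpath='{range .spec.ports[*]}{.name}: {.port} -> {.nodePort}{"\n"}{end}'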
To learn more, see Networking and Connectivity in Kubernetes.
TLS Certificates
By default, TLS is enabled in the Redpanda Helm chart. The Helm chart uses cert-manager to generate two Certificate resources that provide Redpanda with self-signed certificates:
- The redpanda-default-cert Certificate is the TLS certificate that is used by all listeners.
- The redpanda-default-root-certificate Certificate is the root certificate authority for the TLS certificates.
kubectl get certificate -n redpanda
NAME                               READY  SECRET                             AGE
redpanda-default-cert              True   redpanda-default-cert              10m
redpanda-default-root-certificate  True   redpanda-default-root-certificate  10m
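If you want to look inside the certificate used by the listeners, you can decode the secret that backs it and inspect it with openssl, reusing the go-template pattern from earlier:
kubectl -n redpanda get secret redpanda-default-cert \
  -o go-template='{{ index .data "tls.crt" | base64decode }}' \
  | openssl x509 -noout -subject -issuer -dates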
Troubleshoot
Before troubleshooting your cluster, make sure that you have all the prerequisites.
For troubleshooting steps, see Troubleshoot Redpanda in Kubernetes.
Next steps
When you’re ready to use a registered domain, make sure to remove your entries from the /etc/hosts file, and see Configure External Access through a NodePort Service.
Suggested reading
- See the Redpanda Console README on GitHub.
- Explore the Redpanda Helm chart’s values.yaml file to learn what else you can configure.
- Explore the Redpanda Console Helm chart’s values.yaml file to learn what else you can configure.