# Deploy Redpanda Console on Kubernetes

This page shows you how to deploy Redpanda Console as a standalone service on Kubernetes using the Redpanda Operator (Console custom resource), Helm charts, or YAML manifests.

When you deploy a Redpanda cluster using the Redpanda Operator or the Redpanda Helm chart, Redpanda Console is automatically deployed alongside your cluster. Use this standalone deployment guide only when you need to:

- Connect to a Redpanda cluster running outside Kubernetes.
- Deploy Redpanda Console independently from your Redpanda cluster.
- Deploy multiple Redpanda Console instances for different environments.

After reading this page, you will be able to:

- Deploy Redpanda Console on Kubernetes using the Redpanda Operator, Helm charts, or YAML manifests.
- Configure TLS and SASL authentication for Redpanda Console.
- Verify and scale a Redpanda Console deployment.

## Prerequisites

- You must have a running Redpanda or Kafka cluster available to connect to. Redpanda Console requires a cluster to function. For instructions on deploying a Redpanda cluster, see Deploy on Kubernetes.
- Review the system requirements for Redpanda Console on Kubernetes.

## Install Redpanda Console

Choose your deployment method.

**Operator**

The Redpanda Operator provides a Console custom resource (CR) that lets you deploy and manage Redpanda Console declaratively. The operator handles the lifecycle of the Console deployment, including creating the underlying Deployment, Service, and ConfigMap resources.
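Before the full production example, it can help to see the smallest useful shape of this resource: a name and a reference to the cluster CR. Everything else (replicas, resources, service, ingress) falls back to defaults. This is a minimal sketch, assuming an operator-managed cluster named `redpanda` in the `redpanda` namespace:

```yaml
# Minimal Console CR: only a cluster reference is required.
# The cluster name "redpanda" is an assumption; use your own CR's name.
apiVersion: cluster.redpanda.com/v1alpha2
kind: Console
metadata:
  name: redpanda-console
  namespace: redpanda
spec:
  cluster:
    clusterRef:
      name: redpanda
```

The production-oriented example that follows builds on this by pinning the replica count, resource limits, and external exposure.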
Create a Console custom resource:

`console.yaml`

```yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: Console
metadata:
  name: redpanda-console
  namespace: redpanda
spec:
  cluster:
    clusterRef: # (1)
      name: redpanda
  replicaCount: 2 # (2)
  resources: # (3)
    requests:
      cpu: 100m
      memory: 512Mi
    limits:
      cpu: 4000m
      memory: 2Gi
  service: # (4)
    type: LoadBalancer
    port: 8080
  ingress: # (5)
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
    className: nginx
    hosts:
      - host: console.example.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: console-tls
        hosts:
          - console.example.com
```

1. Reference to your Redpanda cluster CR. The operator automatically configures broker addresses, TLS, and authentication based on the referenced cluster. If your Redpanda cluster is not managed by the operator, use `staticConfiguration` instead of `clusterRef`. See the TLS section for `staticConfiguration` examples.
2. For production, run at least two replicas for high availability and rolling upgrades.
3. Adjust resource requests and limits based on your expected workload and available node resources.
4. Use `LoadBalancer` for cloud environments or when you want Redpanda Console to be accessible from outside the cluster. Use `ClusterIP` for internal-only access.
5. Enable and configure Ingress if you want to expose Redpanda Console through a domain name with TLS/HTTPS. Make sure your cluster has an Ingress controller installed.

Apply the Console CR:

```bash
kubectl apply -f console.yaml --namespace redpanda
```

The operator reconciles the Console CR and creates the necessary Deployment, Service, and ConfigMap resources.

**Helm**

Create a values file. The values file is where you configure how Redpanda Console connects to your Redpanda or Kafka cluster. You must specify the broker addresses in the `config.kafka.brokers` section.
`console-values.yaml`

```yaml
config:
  kafka:
    brokers: # (1)
      - redpanda-0.redpanda.redpanda.svc.cluster.local:9092
      - redpanda-1.redpanda.redpanda.svc.cluster.local:9092
      - redpanda-2.redpanda.redpanda.svc.cluster.local:9092

# Resource configuration
resources: # (2)
  requests:
    cpu: 100m
    memory: 512Mi
  limits:
    cpu: 4000m
    memory: 2Gi

# High availability configuration
replicaCount: 2 # (3)

# Pod anti-affinity for node separation
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: console
          topologyKey: kubernetes.io/hostname

# Service configuration
service: # (4)
  type: LoadBalancer
  port: 8080

# Ingress configuration (optional)
ingress: # (5)
  enabled: true
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  ingressClassName: nginx
  hosts:
    - host: console.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: console-tls
      hosts:
        - console.example.com
```

1. Replace these addresses with the internal DNS names or external addresses of your Redpanda brokers. If you deployed Redpanda using the Redpanda Helm chart or Redpanda Operator, you can find the broker service names by running `kubectl get svc -n <redpanda-namespace>`. Look for services named like `redpanda-0`, `redpanda-1`, and so on. The port is typically 9092 for Kafka traffic. If your brokers are outside the cluster, use their reachable addresses instead.
2. Adjust resource requests and limits based on your expected workload and available node resources.
3. For production, run at least two replicas for high availability and rolling upgrades.
4. Use `LoadBalancer` for cloud environments or when you want Redpanda Console to be accessible from outside the cluster. Use `ClusterIP` for internal-only access.
5. Enable and configure Ingress if you want to expose Redpanda Console through a domain name with TLS/HTTPS. Make sure your cluster has an Ingress controller installed.
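The install command that follows references the chart as `redpanda/console`, which assumes the Redpanda chart repository is already registered with your Helm client. If it is not, add it first (`https://charts.redpanda.com` is Redpanda's public chart repository):

```bash
# Register the Redpanda chart repository and refresh the local index
helm repo add redpanda https://charts.redpanda.com
helm repo update
```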
Install the chart:

```bash
helm install redpanda-console redpanda/console \
  --namespace redpanda \
  --create-namespace \
  --values console-values.yaml
```

## Connect to Redpanda clusters with TLS

If your Redpanda cluster uses TLS encryption (the default for Helm deployments), you must configure Redpanda Console to connect securely.

**Operator**

When you use `clusterRef` to reference a Redpanda cluster managed by the operator, TLS is configured automatically. No additional steps are required.

If you use `staticConfiguration` to connect to an external cluster with TLS:

1. Extract the CA certificate:

   ```bash
   kubectl get secret redpanda-default-root-certificate -n redpanda -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
   ```

2. Create a secret with the CA certificate:

   ```bash
   kubectl create secret generic redpanda-console-tls --from-file=ca.crt=ca.crt -n redpanda
   ```

3. Configure the Console CR:

   ```yaml
   apiVersion: cluster.redpanda.com/v1alpha2
   kind: Console
   metadata:
     name: redpanda-console
     namespace: redpanda
   spec:
     cluster:
       staticConfiguration:
         kafka:
           brokers:
             - redpanda-0.redpanda.redpanda.svc.cluster.local:9093
           tls:
             caCertSecretRef:
               name: redpanda-console-tls
               key: ca.crt
     secretMounts:
       - name: redpanda-console-tls
         secretName: redpanda-console-tls
         path: /etc/console/secrets
   ```

4. Apply the updated Console CR:

   ```bash
   kubectl apply -f console.yaml --namespace redpanda
   ```

**Helm**

1. Run the following command to extract the CA certificate from the Redpanda Helm deployment:

   ```bash
   kubectl get secret redpanda-default-root-certificate -n redpanda -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
   ```

2. Create a secret named `redpanda-console` in the `redpanda` namespace with the CA certificate:

   ```bash
   kubectl create secret generic redpanda-console --from-file=ca.crt=ca.crt -n redpanda
   ```

3. In your `console-values.yaml`:

   ```yaml
   config:
     kafka:
       brokers:
         - redpanda-0.redpanda.redpanda.svc.cluster.local:9093
       tls:
         enabled: true
         caFilepath: /etc/console/secrets/ca.crt
         insecureSkipTlsVerify: true # For local/testing only

   secretMounts:
     - name: redpanda-console
       secretName: redpanda-console
       path: /etc/console/secrets
   ```

4. Upgrade or install Redpanda Console:

   ```bash
   helm upgrade --install redpanda-console redpanda/console \
     --namespace redpanda \
     --values console-values.yaml
   ```

Redpanda Console now connects securely to your Redpanda cluster using TLS. For production, set `insecureSkipTlsVerify: false` and use a trusted CA.

## Deploy Redpanda Console as a standalone service with YAML manifests

If you prefer to deploy using YAML manifests, create the following resources:

`console-deployment.yaml`

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redpanda-console
  namespace: redpanda
  labels:
    app.kubernetes.io/name: console
    app.kubernetes.io/component: console
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: console
  template:
    metadata:
      labels:
        app.kubernetes.io/name: console
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: console
                topologyKey: kubernetes.io/hostname
      containers:
        - name: console
          image: docker.redpanda.com/redpandadata/console:v3.7.0
          ports:
            - containerPort: 8080
              name: http
          resources:
            requests:
              cpu: 200m
              memory: 512Mi
            limits:
              cpu: 1000m
              memory: 2Gi
          env:
            - name: KAFKA_BROKERS
              value: "redpanda-0.redpanda.redpanda.svc.cluster.local:9092,redpanda-1.redpanda.redpanda.svc.cluster.local:9092,redpanda-2.redpanda.redpanda.svc.cluster.local:9092"
          livenessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 5
            periodSeconds: 5
```

`console-service.yaml`

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redpanda-console
  namespace: redpanda
  labels:
    app.kubernetes.io/name: console
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: console
```

For more complex configurations, create a ConfigMap:

`console-config.yaml`

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redpanda-console-config
  namespace: redpanda
data:
  config.yaml: |
    kafka:
      brokers:
        - redpanda-0.redpanda.redpanda.svc.cluster.local:9092
        - redpanda-1.redpanda.redpanda.svc.cluster.local:9092
        - redpanda-2.redpanda.redpanda.svc.cluster.local:9092
    server:
      listenPort: 8080
    console:
      enabled: true
```

Apply the manifests:

```bash
kubectl apply -f console-config.yaml
kubectl apply -f console-deployment.yaml
kubectl apply -f console-service.yaml
```

## Configuration

Make sure to configure the following settings in your Console CR, values file, or ConfigMap.

### Connect to Redpanda

Configure the connection to your Redpanda cluster by setting the broker addresses in your Console CR or values file. See Configure Redpanda Console to Connect to a Redpanda Cluster.

### Authentication and security

For production deployments, configure:

- **TLS encryption**: Enable TLS for secure communication.
- **SASL authentication**: Configure SASL if Redpanda uses authentication.
- **RBAC**: Set up role-based access control.

Configure authentication based on your deployment method.

**Operator**

When you use `clusterRef`, the operator automatically inherits SASL and TLS settings from the referenced Redpanda cluster. No additional Console configuration is needed.
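The `staticConfiguration` example below mounts the SASL password from a Kubernetes Secret named `console-kafka-credentials` with a `password` key. A sketch of creating that Secret (the Secret name and password value are placeholders to replace with your own):

```bash
# Create the Secret that the staticConfiguration example mounts.
# The key "password" must match the filename in passwordFilepath.
kubectl create secret generic console-kafka-credentials \
  --from-literal=password='<console-password>' \
  -n redpanda
```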
To configure SASL manually with `staticConfiguration`:

```yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: Console
metadata:
  name: redpanda-console
  namespace: redpanda
spec:
  cluster:
    staticConfiguration:
      kafka:
        brokers:
          - redpanda-0.redpanda.redpanda.svc.cluster.local:9092
        sasl:
          enabled: true
          mechanism: SCRAM-SHA-256
  secret:
    kafka:
      saslPassword: <console-password>
```

You can also reference an existing Kubernetes Secret for credentials:

```yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: Console
metadata:
  name: redpanda-console
  namespace: redpanda
spec:
  cluster:
    staticConfiguration:
      kafka:
        brokers:
          - redpanda-0.redpanda.redpanda.svc.cluster.local:9092
        sasl:
          enabled: true
          mechanism: SCRAM-SHA-256
          username: console-user
          passwordFilepath: /etc/console/secrets/password
  secretMounts:
    - name: kafka-credentials
      secretName: console-kafka-credentials
      path: /etc/console/secrets
```

**Helm**

Example with SASL authentication:

```yaml
config:
  kafka:
    brokers:
      - redpanda-0.redpanda.redpanda.svc.cluster.local:9092
    sasl:
      enabled: true
      mechanism: SCRAM-SHA-256
      username: console-user
      password: console-password
```

See Redpanda Console Security.

## Verify deployment

Use the following steps to confirm that Redpanda Console is running and accessible.

**Operator**

1. Check the Console CR status:

   ```bash
   kubectl get console -n redpanda
   ```

   The output shows the replica status of your Console deployment:

   ```
   NAME               REPLICAS   UPDATED   READY   AVAILABLE
   redpanda-console   2          2         2       2
   ```

2. Check pod status:

   ```bash
   kubectl get pods -n redpanda -l app.kubernetes.io/name=console
   ```

3. Check service status:

   ```bash
   kubectl get svc -n redpanda redpanda-console
   ```

4. Access Redpanda Console.

   If using LoadBalancer:

   ```bash
   kubectl get svc -n redpanda redpanda-console -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
   ```

   If using port-forward for testing:

   ```bash
   kubectl port-forward -n redpanda svc/redpanda-console 8080:8080
   ```

   Open http://localhost:8080 in your browser.
**Helm**

1. Check pod status:

   ```bash
   kubectl get pods -n redpanda -l app.kubernetes.io/name=console
   ```

2. Check service status:

   ```bash
   kubectl get svc -n redpanda redpanda-console
   ```

3. Access Redpanda Console.

   If using LoadBalancer:

   ```bash
   kubectl get svc -n redpanda redpanda-console -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
   ```

   If using port-forward for testing:

   ```bash
   kubectl port-forward -n redpanda svc/redpanda-console 8080:8080
   ```

   Open http://localhost:8080 in your browser.

## Scaling

For production deployments, consider the following scaling strategies.

### Horizontal scaling

Scale the deployment:

```bash
kubectl scale deployment redpanda-console -n redpanda --replicas=3
```

### Auto-scaling

Create a HorizontalPodAutoscaler (HPA) for automatic scaling:

`console-hpa.yaml`

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: redpanda-console-hpa
  namespace: redpanda
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: redpanda-console
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```

## Monitoring

Configure metrics exposure and Prometheus scraping for Redpanda Console.

Enable monitoring for Redpanda Console:

```yaml
config:
  server:
    metrics:
      enabled: true
      port: 9090
```

### Prometheus ServiceMonitor

If you use the Prometheus Operator, deploy a ServiceMonitor resource alongside Redpanda Console. Prometheus then discovers and scrapes Console metrics from the `/admin/metrics` endpoint.

**Operator**

To enable the ServiceMonitor in the Console custom resource, set `monitoring.enabled` to `true`:

```yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: Console
metadata:
  name: redpanda-console
  namespace: redpanda
spec:
  monitoring:
    enabled: true # (1)
    scrapeInterval: "30s" # (2)
    labels: # (3)
      release: kube-prometheus-stack
  cluster:
    clusterRef:
      name: redpanda
```

1. Set to `true` to create a ServiceMonitor resource. Default: `false`.
2. How often Prometheus scrapes the metrics endpoint. Default: `1m`.
3. Additional labels to apply to the ServiceMonitor. Match your Prometheus Operator's `serviceMonitorSelector` by applying the same labels here.

Apply the Console CR:

```bash
kubectl apply -f console.yaml --namespace redpanda
```

**Helm**

To enable the ServiceMonitor in the Console Helm chart, add the following to your `console-values.yaml`:

```yaml
monitoring:
  enabled: true # (1)
  scrapeInterval: "30s" # (2)
  labels: {} # (3)
```

1. Set to `true` to create a ServiceMonitor resource. Default: `false`.
2. How often Prometheus scrapes the metrics endpoint. Default: `1m`.
3. Additional labels to apply to the ServiceMonitor. Match your Prometheus Operator's `serviceMonitorSelector` by applying the same labels here. For example:

   ```yaml
   monitoring:
     enabled: true
     labels:
       release: kube-prometheus-stack
   ```

If you deploy Redpanda Console as a subchart of the Redpanda Helm chart, configure monitoring under the `console` key. All monitoring options are available under this key.

```yaml
console:
  monitoring:
    enabled: true
```

When the Console server is configured with TLS (`config.server.tls.enabled: true`), the ServiceMonitor uses HTTPS and configures CA validation for scraping.

## Troubleshooting

- **Connection refused**: Verify Redpanda broker addresses and network policies.
- **Authentication failed**: Check SASL credentials and configuration.
- **Resource limits**: Monitor CPU and memory usage, and adjust limits as needed.

### Logs

Check Redpanda Console logs:

```bash
kubectl logs -n redpanda -l app.kubernetes.io/name=console -f
```

## Next steps

- Configure Redpanda Console
- Authentication in Redpanda Console
- Authorization in Redpanda Console