Deploy for Production: Manual

This is documentation for Self-Managed v23.3, which is no longer supported. To view the latest available version of the docs, see v24.3.

You can deploy Redpanda for production with a default deployment, which uses recommended deployment tools, or with a custom deployment, which uses unsupported deployment tools.

See Deploy for Production: Automated to use Terraform and Ansible to deploy Redpanda. See the Redpanda Quickstart to try out Redpanda in Docker, or see Deploy for Development.

Prerequisites

Make sure you meet the hardware and software requirements.

TCP/IP ports

Redpanda uses the following default ports:

Port    Purpose
9092    Kafka API
8082    HTTP Proxy
8081    Schema Registry
9644    Admin API and Prometheus
33145   Internal RPC

Select deployment type

To start deploying Redpanda for production, choose your deployment type:

- Default deployment: Use recommended deployment tools.
- Custom deployment: Use unsupported deployment tools.

Default deployment

This section describes how to set up a production cluster of Redpanda.

Install Redpanda

Install Redpanda on each system you want to be part of your cluster. Binaries are available for Fedora/RedHat and Debian systems.

Fedora/RedHat:

curl -1sLf 'https://dl.redpanda.com/nzc4ZYQK3WRGd9sy/redpanda/cfg/setup/bash.rpm.sh' | \
sudo -E bash && sudo yum install redpanda -y

Debian/Ubuntu:

curl -1sLf 'https://dl.redpanda.com/nzc4ZYQK3WRGd9sy/redpanda/cfg/setup/bash.deb.sh' | \
sudo -E bash && sudo apt install redpanda -y

Install Redpanda Console

Redpanda Console is a developer-friendly web UI for managing and debugging your Redpanda cluster and your applications. For each new release, Redpanda compiles Redpanda Console to a single binary for Linux, macOS, and Windows. You can find the binaries in the attachments of each release on GitHub.

Fedora/RedHat:

curl -1sLf 'https://dl.redpanda.com/nzc4ZYQK3WRGd9sy/redpanda/cfg/setup/bash.rpm.sh' | \
sudo -E bash && sudo yum install redpanda-console -y

Debian/Ubuntu:

curl -1sLf 'https://dl.redpanda.com/nzc4ZYQK3WRGd9sy/redpanda/cfg/setup/bash.deb.sh' | \
sudo -E bash && sudo apt-get install redpanda-console -y

Tune the Linux kernel for production

To get the best performance from your hardware, set Redpanda to production mode on each node and run the autotuner tool. The autotuner identifies the hardware configuration of your node and optimizes the Linux kernel to give you the best performance. By default, Redpanda is installed in development mode, which turns off hardware optimization.

Make sure that your current Linux user has root privileges. The autotuner requires privileged access to the Linux kernel settings.

Set Redpanda to run in production mode:

sudo rpk redpanda mode production

Tune the Linux kernel:

sudo rpk redpanda tune all

Changes to the Linux kernel are not persisted. If a node restarts, make sure to run the autotuner again. To automatically tune the Linux kernel on a Redpanda broker after the node restarts, enable the redpanda-tuner service, which runs rpk redpanda tune all:

- For RHEL, after installing the rpm package, run systemctl to both start and enable the redpanda-tuner service:

  sudo systemctl start redpanda-tuner
  sudo systemctl enable redpanda-tuner

- For Ubuntu, after installing the apt package, run systemctl to start the redpanda-tuner service (which is already enabled):

  sudo systemctl start redpanda-tuner

For more details, see the autotuner reference.
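Because the kernel changes made by the autotuner are not persisted, it can be useful to confirm after a reboot that the tuners are still in place. This is a minimal sketch; it assumes your rpk version provides the rpk redpanda tune list subcommand, so check rpk redpanda tune --help on your installation first.

# List the available tuners and whether each one is enabled and supported on this node
sudo rpk redpanda tune list

# Re-apply all tuners manually if the redpanda-tuner service is not running
sudo rpk redpanda tune all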
Generate optimal I/O configuration settings

After tuning the Linux kernel, you can optimize Redpanda for the I/O capabilities of your worker node by using rpk to run benchmarks that capture its read/write IOPS and bandwidth capabilities. After running the benchmarks, rpk saves the results to an I/O configuration file (io-config.yaml) that Redpanda reads on startup to optimize itself for the node.

Unlike the autotuner, rpk iotune doesn't need to run each time Redpanda starts. Its output I/O configuration file can be reused on every node that runs on the same type of hardware.

Run rpk iotune:

sudo rpk iotune # takes about 10 minutes

For reference, a local NVMe SSD should yield around 1 GB/s of sustained writes. rpk iotune captures SSD wear and tear and gives accurate measurements of what your hardware is capable of delivering. Run it before benchmarking. If you're on AWS, GCP, or Azure, creating a new instance and upgrading to an image with a recent Linux kernel version is often the easiest way to work around bad devices.

Bootstrap broker configurations

Each broker requires a set of broker configurations that determine how all brokers communicate with each other and with clients. Bootstrapping a cluster configures the listener, the seed servers, and the advertised listener, which ensure proper network connectivity and accessibility. Starting in version 23.3.8, rpk enhances the bootstrapping process with additional flags for configuring advertised listener addresses directly.

Use the rpk redpanda config bootstrap command to bootstrap Redpanda:

sudo rpk redpanda config bootstrap --self <listener-address> --advertised-kafka <advertised-kafka-address> --ips <seed-server1-ip>,<seed-server2-ip>,<seed-server3-ip> && \
sudo rpk redpanda config set redpanda.empty_seed_starts_cluster false

Replace the following placeholders:

- <listener-address>: The --self flag tells Redpanda the interfaces to bind to for the Kafka API, the RPC API, and the Admin API. These addresses determine on which network interface and port Redpanda listens for incoming connections. Set the listener address to 0.0.0.0 to listen on all network interfaces available on the machine, or set it to a specific IP address to bind the listener to that address, restricting connections to that interface.
- <advertised-kafka-address>: The --advertised-kafka flag sets a different advertised Kafka address, which is useful when the address that clients can reach differs from the bind address. Redpanda does not allow advertised addresses set to 0.0.0.0; if you set any advertised address to 0.0.0.0, Redpanda outputs startup validation errors.
- <seed-server1-ip>,<seed-server2-ip>,<seed-server3-ip>: The --ips flag lists all the seed servers in the cluster, including the one being started. The --ips flag must be set identically (with nodes listed in identical order) on each node.
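For illustration, this is roughly what the command could look like on one broker of a three-broker cluster. The addresses are hypothetical: the broker's own private IP is passed to --self, a public address reachable by external Kafka clients is advertised, and the same three seed-server IPs are listed identically on every broker. Substitute your own addresses.

# Example for the broker at 10.0.1.1 (hypothetical addresses)
sudo rpk redpanda config bootstrap \
  --self 10.0.1.1 \
  --advertised-kafka 203.0.113.10 \
  --ips 10.0.1.1,10.0.1.2,10.0.1.3 && \
sudo rpk redpanda config set redpanda.empty_seed_starts_cluster false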
Bootstrapping Redpanda updates your /etc/redpanda/redpanda.yaml configuration file:

/etc/redpanda/redpanda.yaml

redpanda:
  data_directory: /var/lib/redpanda/data
  empty_seed_starts_cluster: false
  seed_servers:
    - host:
        address: <seed-server1-ip>
        port: 33145
    - host:
        address: <seed-server2-ip>
        port: 33145
    - host:
        address: <seed-server3-ip>
        port: 33145
  rpc_server:
    address: <listener-address>
    port: 33145
  kafka_api:
    - address: <listener-address>
      port: 9092
  admin:
    - address: <listener-address>
      port: 9644
  advertised_rpc_api:
    address: <listener-address>
    port: 33145
  advertised_kafka_api:
    - address: <advertised-kafka-address>
      port: 9092

Recommendations

- Redpanda Data strongly recommends at least three seed servers when forming a cluster. A larger number of seed servers increases the robustness of consensus and minimizes any chance that new clusters get spuriously formed after brokers are lost or restarted without any data.
- It's important to have one or more seed servers in each fault domain (for example, in each rack or cloud AZ). A higher number provides a stronger guarantee that clusters don't fracture unintentionally.
- It's possible to change the seed servers for a short period of time after a cluster has been created. For example, you may want to designate one additional broker as a seed server to increase availability. To do this without cluster downtime, add the new broker to the seed_servers property and restart Redpanda to apply the change on a broker-by-broker basis.

Listeners for mixed environments

For clusters serving both internal and external clients, configure multiple listeners for the Kafka API to separate internal from external traffic. For more details, see Configure Listeners.

Start Redpanda

To start Redpanda:

sudo systemctl start redpanda-tuner redpanda

When a Redpanda cluster starts, it instantiates a controller Raft group with all the seed servers specified in the --ips flag. After all seed servers complete their startup procedure and become accessible, the cluster is available. After that, non-seed servers start up and are added to the cluster.

Start Redpanda Console

Start Redpanda Console:

sudo systemctl start redpanda-console

Make sure that Redpanda Console is active and running:

sudo systemctl status redpanda-console

Verify the installation

To verify that the Redpanda cluster is up and running, use rpk to get information about the cluster:

rpk cluster info

You should see a list of advertised addresses.

To create a topic:

rpk topic create <topic-name>

If topics were initially created in a test environment with a replication factor of 1, use rpk topic alter-config to change the topic replication factor:

rpk topic alter-config <topic-names> --set replication.factor=3

Enable monitoring

Observability is essential in production environments. See Monitor Redpanda.

Custom deployment

This section provides information for creating your own automation for deploying Redpanda clusters without using any of the tools that Redpanda supports for setting up a cluster, such as the Ansible Playbook, Helm Chart, or Kubernetes Operator. Redpanda strongly recommends using one of these supported deployment tools. See Automate Deploying for Production.

Configure a bootstrap file

Redpanda cluster configuration is written with the Admin API and the rpk cluster config CLIs. In the special case where you want to provide configuration to Redpanda before it starts for the first time, you can write a .bootstrap.yaml file in the same directory as redpanda.yaml.
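As context for the ordinary, post-first-start path, the sketch below shows how a cluster property could be read and changed with rpk cluster config; the admin_api_require_auth property is used only as an example.

# Read the current value of a cluster property through the Admin API
rpk cluster config get admin_api_require_auth

# Change a cluster property; most properties take effect without a broker restart
rpk cluster config set admin_api_require_auth true

# Open the full cluster configuration in an editor
rpk cluster config edit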
The .bootstrap.yaml file is only read on the first startup of the cluster. Any subsequent changes to .bootstrap.yaml are ignored, so changes to cluster configuration must be made with the Admin API.

The content format is a YAML dictionary of cluster configuration properties. For example, to initialize a cluster with Admin API authentication enabled and a single superuser, the .bootstrap.yaml file would contain the following:

admin_api_require_auth: true
superusers:
  - alice

With this configuration, the Admin API is not accessible until you bootstrap a user account.

Bootstrap a user account

When using username/password authentication, it's helpful to be able to create one user before the cluster starts for the first time. Do this by setting the RP_BOOTSTRAP_USER environment variable when starting Redpanda for the first time. The value has the format <username>:<password>. For example, you could set RP_BOOTSTRAP_USER to alice:letmein.

RP_BOOTSTRAP_USER only creates a user account. You must still set up authentication using cluster configuration.

Secure the Admin API

The Admin API is used to create SASL user accounts and ACLs, so it's important to think about how you secure it when creating a cluster.

- No authentication, but listening only on 127.0.0.1: This may be appropriate if your Redpanda processes run in an environment where only administrators can access the host.
- mTLS authentication: You can generate client and server x509 certificates before starting Redpanda for the first time, refer to them in redpanda.yaml, and use the client certificate when accessing the Admin API.
- Username/password authentication: Use the combination of admin_api_require_auth, superusers, and RP_BOOTSTRAP_USER to access the Admin API with username/password authentication. You probably still want to enable TLS on the Admin API endpoint to protect credentials in flight.

Configure the seed servers

Seed servers help new brokers join a cluster by directing requests from newly started brokers to an existing cluster. The seed_servers broker configuration property controls how Redpanda finds its peers when initially forming a cluster. It is dependent on the empty_seed_starts_cluster broker configuration property.

Starting with Redpanda version 22.3, you should explicitly set empty_seed_starts_cluster to false on every broker, and every broker in the cluster should have the same value set for seed_servers. With this set of configurations, Redpanda clusters form with these guidelines:

- When a broker starts and it is a seed server (its address is in the seed_servers list), it waits for all other seed servers to start up, and it forms a cluster with all seed servers as members.
- When a broker starts and it is not a seed server, it sends requests to the seed servers to join the cluster.

It is essential that all seed servers have identical values for the seed_servers list. Redpanda strongly recommends at least three seed servers when forming a cluster. Each additional seed server decreases the likelihood of unintentionally forming a split-brain cluster. To ensure brokers can always discover the cluster, at least one seed server should be available at all times.
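As a sketch of how these two properties might be set consistently on every broker before first start, the commands below use hypothetical addresses; the --format json option for passing structured values to rpk redpanda config set is assumed to be available in your rpk version.

# On every broker: disable the legacy behavior in which an empty seed list starts a cluster
sudo rpk redpanda config set redpanda.empty_seed_starts_cluster false

# On every broker: set the identical seed_servers list (hypothetical addresses shown)
sudo rpk redpanda config set redpanda.seed_servers \
  '[{"host": {"address": "10.0.1.1", "port": 33145}}, {"host": {"address": "10.0.1.2", "port": 33145}}, {"host": {"address": "10.0.1.3", "port": 33145}}]' \
  --format json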
By default, for backward compatibility, empty_seed_starts_cluster is set to true, and Redpanda clusters form with the guidelines used prior to version 22.3:

- When a broker starts with an empty seed_servers list, it creates a single-broker cluster with itself as the only member.
- When a broker starts with a non-empty seed_servers list, it sends requests to the brokers in that list to join the cluster.

You should never have more than one broker with an empty seed_servers list, which would result in the creation of multiple clusters.

Redpanda expects its storage to be persistent, and it's an error to erase a broker's drive and restart it. However, in some environments (such as when migrating to a different node pool on Kubernetes), truly persistent storage is unavailable, and brokers may find their data volumes erased. For such environments, Redpanda recommends setting empty_seed_starts_cluster to false and designating a set of seed servers such that they cannot all lose their storage simultaneously.

Do not configure broker IDs

Redpanda automatically generates a unique ID for each new broker and assigns it to the node_id field in the broker configuration. This ensures safe and consistent cluster operations without requiring manual configuration.

Do not set node_id manually. Redpanda assigns unique IDs automatically to prevent issues such as:

- Brokers with empty disks rejoining the cluster.
- Conflicts during recovery or scaling.

Manually setting or reusing node_id values, even for decommissioned brokers, can cause cluster inconsistencies and operational failures.

Perform a self test

To understand the performance capabilities of your Redpanda cluster, Redpanda offers built-in self-test features that evaluate the performance of both disk and network operations. For more information, see Disk and network self-test benchmarks.

Upgrade considerations

Deployment automation should place each broker into maintenance mode and wait for it to drain leadership before restarting it with a newer version of Redpanda. For more information, see Upgrade.

If upgrading multiple feature release versions of Redpanda in succession, verify that each upgrade completes before proceeding to the next version. You can verify this by reading the /v1/features Admin API endpoint and checking that cluster_version has increased. Starting with Redpanda version 23.1, the /v1/features endpoint also includes a node_latest_version attribute, and installers can verify that the cluster has activated any new functionality from a previous upgrade by checking that cluster_version == node_latest_version.

Next steps

- If clients connect from a different subnet, see Configure Listeners.
- Observability is essential in production environments. See Monitor Redpanda.

Suggested reading

- Configure Cluster Properties
- Redpanda Console Configuration
- Schema Registry