Upgrade in Kubernetes
To benefit from Redpanda's new features and enhancements, upgrade to the latest version. Redpanda recommends that you perform a rolling upgrade on production clusters, in which each broker is placed into maintenance mode and restarted separately, one after the other.
Redpanda version numbers follow the convention AB.C.D, where AB is the two-digit year, C is the feature release, and D is the patch release. For example, version 22.3.1 indicates the first patch release on the third feature release of the year 2022.
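For illustration, the AB.C.D parts can be pulled apart with plain shell string operations. The version string below is a made-up example:

```shell
# Split a Redpanda version string (AB.C.D) into its parts using POSIX
# parameter expansion. "22.3.1" is a hypothetical example input.
version="22.3.1"
year="${version%%.*}"       # AB: two-digit year -> 22
rest="${version#*.}"
feature="${rest%%.*}"       # C: feature release -> 3
patch="${rest#*.}"          # D: patch release   -> 1
echo "year=20${year} feature=${feature} patch=${patch}"
```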
- New features are enabled after all brokers (nodes) in the cluster are upgraded. You can stop the upgrade process and roll back to the original version as long as you have not upgraded every broker and restarted the cluster.
- Redpanda only supports upgrading one sequential feature release at a time. For example, you can upgrade from the 22.2 feature release to 22.3. You cannot skip feature releases.
- Redpanda only supports downgrading between sequential patch releases. For example, you can downgrade from the 22.2.2 patch release to 22.2.1, but you cannot downgrade to 22.1.7.
Prerequisites
- A running Redpanda cluster.
- jq for listing available versions.
- An understanding of the impact of broker restarts on clients, node CPU, and any alerting systems you use.
Find a new version
Before you perform a rolling upgrade, you must find out which Redpanda version you are currently running, whether you can upgrade straight to the new version, and what's changed since your original version.
Find your current version:
TLS enabled:

kubectl exec redpanda-0 -n redpanda -c redpanda -- \
  rpk redpanda admin brokers list \
  --admin-api-tls-enabled \
  --admin-api-tls-truststore <path-to-admin-api-ca-certificate> \
  --hosts <broker-url>:<admin-api-port>

TLS disabled:

kubectl exec redpanda-0 -n redpanda -c redpanda -- \
  rpk redpanda admin brokers list \
  --hosts <broker-url>:<admin-api-port>

For all available flags, see the rpk redpanda admin brokers list command reference.

Example output

The Redpanda version for each broker is listed under BROKER-VERSION.

NODE-ID  BROKER-VERSION
0        v22.2.10
1        v22.2.10
2        v22.2.10

Find the Redpanda version that's used in the latest Redpanda Helm chart:
helm repo update && \
helm show chart redpanda/redpanda | grep appVersion

Example output

appVersion: v22.2.10
Note: If your current version is more than one feature release behind the version in the latest Redpanda Helm chart, you must first upgrade to an intermediate version. To list all available versions:
curl -s 'https://hub.docker.com/v2/repositories/redpandadata/redpanda/tags/?ordering=last_updated&page=1&page_size=50' | jq -r '.results[].name'
Example output
v22.3.13
latest
v22.3.13-arm64
v22.3.13-amd64
v22.2.10
v22.2.10-arm64
v22.2.10-amd64
v22.3.12
v22.3.11
v22.3.10
...

Check the release notes to find information about what has changed between Redpanda versions.
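The tag list mixes plain version tags with the floating latest tag and architecture-specific variants. If you only want the plain versions, a filter like the following can be appended to the curl command. The tag list below is a hard-coded sample standing in for the live output:

```shell
# Keep only tags of the form vAB.C.D, dropping "latest" and the
# -arm64/-amd64 variants. The sample list mirrors the output shown above.
printf '%s\n' v22.3.13 latest v22.3.13-arm64 v22.3.13-amd64 v22.2.10 \
  | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$'
```

Against the live registry, pipe the jq output through the same grep instead of the printf sample.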
Impact of broker restarts
When brokers restart, clients may experience higher latency, nodes may experience CPU spikes when the broker becomes available again, and you may receive alerts about under-replicated partitions. Topics that weren't using replication (that is, topics with replication.factor=1) are unavailable while their broker is offline.
Temporary increase in latency on clients (producers and consumers)
When you restart one or more brokers in a cluster, clients (consumers and producers) may experience higher latency due to partition leadership reassignment. Because clients must communicate with the leader of a partition, they may send a request to a broker whose leadership has been transferred, and receive a NOT_LEADER_FOR_PARTITION error. In this case, clients must request metadata from the cluster to find the address of the new leader. Clients refresh their metadata periodically, or when they receive a retryable error that indicates the metadata may be stale. For example:
- Broker A shuts down.
- A client sends a request to broker A and receives NOT_LEADER_FOR_PARTITION.
- The client requests metadata and learns that the new leader is broker B.
- The client sends the request to broker B.
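To watch leadership move during a restart, you can list a topic's partition table before and after. This is a sketch: the topic name my-topic is a placeholder, and TLS flags are omitted.

```shell
# Hypothetical helper: print a topic's partition table, whose LEADER column
# shows the current leader broker for each partition. "my-topic" is a
# placeholder; add TLS flags if your cluster requires them.
show_leaders() {
  topic=$1
  kubectl exec redpanda-0 -n redpanda -c redpanda -- \
    rpk topic describe "$topic" -p
}

# Against a live cluster:
#   show_leaders my-topic
```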
CPU spikes upon broker restart
When a restarted broker becomes available again, you may see your nodes' CPU usage increase temporarily. This temporary increase in CPU usage is due to the cluster rebalancing the partition replicas.
Under-replicated partitions
When a broker is in maintenance mode, Redpanda continues to replicate updates to that broker. When a broker is taken offline during a restart, partitions with replicas on the broker could become out of sync until it is brought back online. Once the broker is available again, data is copied to its under-replicated replicas until all affected partitions are in sync with the partition leader.
Perform a rolling upgrade
A rolling upgrade involves putting a broker into maintenance mode, upgrading the broker, taking the broker out of maintenance mode, and then repeating the process on the next broker in the cluster. Placing brokers into maintenance mode ensures a smooth upgrade of your cluster while reducing the risk of interruption or degradation in service.
When a broker is placed into maintenance mode, it reassigns its partition leadership to other brokers for all topics that have a replication factor greater than one. Reassigning partition leadership involves draining leadership from the broker and transferring that leadership to another broker. If you have topics with replication.factor=1, and if you have sufficient disk space, Redpanda recommends temporarily increasing the replication factor. This can help limit outages for these topics during the rolling upgrade. Do this before the upgrade to give the data time to replicate to other brokers. For more information, see Change topic replication factor.
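A minimal sketch of raising the replication factor, assuming your Redpanda version supports changing replication.factor through rpk topic alter-config (see the Change topic replication factor page for version requirements). The topic name my-topic and the target factor of 3 are placeholders:

```shell
# Sketch: temporarily raise a topic's replication factor before the upgrade.
# "my-topic" and the factor 3 are placeholders; whether replication.factor
# can be changed this way depends on your Redpanda version.
raise_rf() {
  topic=$1
  rf=$2
  kubectl exec redpanda-0 -n redpanda -c redpanda -- \
    rpk topic alter-config "$topic" --set "replication.factor=${rf}"
}

# Against a live cluster:
#   raise_rf my-topic 3
```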
To perform a rolling upgrade:
- Deploy an upgraded StatefulSet with your desired Redpanda version.
- Upgrade and restart each broker separately, one after the other.
Redpanda Data does not recommend using the kubectl rollout restart statefulset command to perform rolling upgrades. Although the chart's preStop lifecycle hook puts the broker into maintenance mode before a Pod is deleted, the terminationGracePeriod may not be long enough for maintenance mode to finish. If maintenance mode does not finish before the Pod is deleted, you may lose data. The default terminationGracePeriod is 30 seconds and cannot be configured in the chart. After the terminationGracePeriod, the container is forcefully stopped with a SIGKILL signal.
Deploy an upgraded StatefulSet
To deploy an upgraded StatefulSet, you need to delete the existing StatefulSet, then upgrade the Redpanda Helm chart deployment with your desired Redpanda version.
Delete the existing StatefulSet, but leave the Pods running:
kubectl delete statefulset redpanda --cascade=orphan -n redpanda
Example output
statefulset.apps "redpanda" deleted
Upgrade the Redpanda version by overriding the image.tag setting. Replace <new-version> with a valid version tag.

helm upgrade --install redpanda redpanda/redpanda \
  --namespace redpanda \
  --create-namespace \
  --set image.tag=<new-version> \
  --set statefulset.updateStrategy.type=OnDelete

Important: Make sure to include all your configuration overrides in the helm upgrade command. Otherwise, the upgrade may fail. For example, if you already enabled SASL, include the same SASL overrides. Do not use the --reuse-values flag, otherwise Helm won't include any new values from the upgraded chart.

The statefulset.updateStrategy.type=OnDelete setting stops the StatefulSet from upgrading all the Pods automatically. Changing the updateStrategy to OnDelete allows you to keep the existing Pods running and upgrade each broker separately. For more details, see the Kubernetes documentation.

Tip: To use the Redpanda version in the latest version of the Redpanda Helm chart, set image.tag to "" (empty string).
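Before moving on, you can optionally confirm that the new update strategy took effect; a minimal check:

```shell
# Optional check: print the StatefulSet's update strategy. After the helm
# upgrade above, this should report OnDelete rather than RollingUpdate.
check_update_strategy() {
  kubectl get statefulset redpanda -n redpanda \
    -o jsonpath='{.spec.updateStrategy.type}'
}

# Against a live cluster:
#   check_update_strategy
```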
Upgrade the brokers
To upgrade the Redpanda brokers, you must do the following to each broker, one at a time:
- Place the broker into maintenance mode.
- Wait for maintenance mode to finish.
- Delete the Pod that the broker was running in.
Before placing a broker into maintenance mode, you may want to temporarily disable or ignore alerts related to under-replicated partitions. When a broker is taken offline during a restart, replicas can become under-replicated.
Select a broker that has not been upgraded yet and place it into maintenance mode.
In this example, the command is executed on a Pod called redpanda-0 in the redpanda namespace. The ordinal of the StatefulSet replica, 0 in this example, is the same as the broker's ID. This ID is used to enable maintenance mode on the broker: rpk cluster maintenance enable 0.

TLS enabled:

kubectl exec redpanda-0 -n redpanda -c redpanda -- \
  rpk cluster maintenance enable 0 --wait \
  --admin-api-tls-enabled \
  --admin-api-tls-truststore <path-to-admin-api-ca-certificate> \
  --api-urls <broker-url>:<admin-api-port>

TLS disabled:

kubectl exec redpanda-0 -n redpanda -c redpanda -- \
  rpk cluster maintenance enable 0 --wait \
  --api-urls <broker-url>:<admin-api-port>

The --wait flag ensures that the cluster is healthy before putting the broker into maintenance mode.

The draining process won't start until the cluster is healthy. The amount of time it takes to drain a broker and reassign partition leadership depends on the number of partitions and how healthy the cluster is. For healthy clusters, draining leadership should take less than a minute. If the cluster is unhealthy, such as when a follower is not in sync with the leader, draining the broker can take even longer.
Example output

NODE-ID  DRAINING  FINISHED  ERRORS  PARTITIONS  ELIGIBLE  TRANSFERRING  FAILED
0        true      true      false   1           0         1             0

Wait until the cluster is healthy before continuing:
TLS enabled:

kubectl exec redpanda-0 -n redpanda -c redpanda -- \
  rpk cluster health \
  --admin-api-tls-enabled \
  --admin-api-tls-truststore <path-to-admin-api-ca-certificate> \
  --api-urls <broker-url>:<admin-api-port>

TLS disabled:

kubectl exec redpanda-0 -n redpanda -c redpanda -- \
  rpk cluster health \
  --api-urls <broker-url>:<admin-api-port>

Example output
CLUSTER HEALTH OVERVIEW
=======================
Healthy: true
Controller ID: 0
All nodes: [0 2 1]
Nodes down: []
Leaderless partitions: []

Note: You can also evaluate external metrics to determine cluster health. If the cluster has any issues, take the broker out of maintenance mode by running the following command before proceeding with other operations, such as decommissioning or retrying the rolling upgrade:

rpk cluster maintenance disable <node-id>
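If you want to re-check draining progress after the fact, rpk also exposes a maintenance status command; a sketch with TLS flags omitted:

```shell
# Print the maintenance/draining status for the cluster's brokers. Run this
# from any broker Pod; TLS flags are omitted for brevity.
maintenance_status() {
  kubectl exec redpanda-0 -n redpanda -c redpanda -- \
    rpk cluster maintenance status
}

# Against a live cluster, re-run it until draining reports finished:
#   maintenance_status
```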
Delete the Pod in which the broker was running:
kubectl delete pod redpanda-0 -n redpanda
Expected output
pod "redpanda-0" deleted
When the Pod restarts, make sure that it's now running the upgraded version of Redpanda:
TLS enabled:

kubectl exec redpanda-0 -n redpanda -c redpanda -- \
  rpk redpanda admin brokers list \
  --admin-api-tls-enabled \
  --admin-api-tls-truststore <path-to-admin-api-ca-certificate> \
  --hosts <broker-url>:<admin-api-port>

TLS disabled:

kubectl exec redpanda-0 -n redpanda -c redpanda -- \
  rpk redpanda admin brokers list \
  --hosts <broker-url>:<admin-api-port>

Expected output
NODE-ID  BROKER-VERSION
0        v22.3.13
1        v22.2.10
2        v22.2.10

Repeat this process for all the other brokers in the cluster.
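The per-broker steps above can be sketched as one script. This is a sketch, not the documented procedure: it assumes a three-broker cluster, broker IDs equal to Pod ordinals, TLS disabled, and the OnDelete update strategy set earlier.

```shell
#!/usr/bin/env bash
# Sketch: enable maintenance mode, delete the Pod, and wait for the
# replacement to become Ready, one broker at a time. Assumes broker ID ==
# StatefulSet ordinal and omits the Admin API TLS flags.
set -eu

upgrade_broker() {
  ordinal=$1
  # Drain leadership and wait for maintenance mode to finish.
  kubectl exec "redpanda-${ordinal}" -n redpanda -c redpanda -- \
    rpk cluster maintenance enable "$ordinal" --wait
  # Delete the Pod; with updateStrategy OnDelete, the StatefulSet recreates
  # it with the upgraded image.
  kubectl delete pod "redpanda-${ordinal}" -n redpanda
  # Block until the replacement Pod is Ready again.
  kubectl wait --for=condition=Ready "pod/redpanda-${ordinal}" \
    -n redpanda --timeout=300s
}

# Against a live three-broker cluster:
#   for ordinal in 0 1 2; do upgrade_broker "$ordinal"; done
```

Check cluster health between brokers, as described above, before moving on to the next ordinal.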
Verify that the upgrade was successful
When you've upgraded all brokers, verify that the cluster is healthy. If the cluster is unhealthy, the upgrade may still be in progress. Try waiting a few moments, then run the command again.
TLS enabled:

kubectl exec redpanda-0 -n redpanda -c redpanda -- \
  rpk cluster health \
  --admin-api-tls-enabled \
  --admin-api-tls-truststore <path-to-admin-api-ca-certificate> \
  --api-urls <broker-url>:<admin-api-port>

TLS disabled:

kubectl exec redpanda-0 -n redpanda -c redpanda -- \
  rpk cluster health \
  --api-urls <broker-url>:<admin-api-port>
Example output
CLUSTER HEALTH OVERVIEW
=======================
Healthy: true
Controller ID: 1
All nodes: [2,1,0]
Nodes down: []
Leaderless partitions: []
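If you script the verification, a bounded retry loop avoids re-running the command by hand. A sketch with TLS flags omitted and illustrative retry defaults:

```shell
# Retry `rpk cluster health` until it reports "Healthy: true", up to a
# bounded number of attempts. The attempt count and interval are
# illustrative defaults, not recommendations.
wait_until_healthy() {
  attempts=${1:-30}
  interval=${2:-10}
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if kubectl exec redpanda-0 -n redpanda -c redpanda -- \
         rpk cluster health | grep -q 'Healthy:.*true'; then
      return 0
    fi
    i=$((i + 1))
    sleep "$interval"
  done
  return 1
}

# Against a live cluster:
#   wait_until_healthy
```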
Rollbacks
If something does not go as planned during a rolling upgrade, you can roll back to the original version as long as you have not upgraded every broker. The StatefulSet uses the default RollingUpdate
strategy, which means all Pods in the StatefulSet are restarted in reverse-ordinal order. For more details, see the Kubernetes documentation.
Find the previous revision:
helm history redpanda -n redpanda
Example output
REVISION  UPDATED                  STATUS      CHART            APP VERSION  DESCRIPTION
1         Fri Mar 3 15:16:24 year  superseded  redpanda-2.12.2  v22.3.13     Install complete
2         Fri Mar 3 15:19:41 year  deployed    redpanda-2.12.2  v22.3.13     Upgrade complete

Roll back to the previous revision:
helm rollback redpanda <previous-revision> -n redpanda
Example output
REVISION  UPDATED                  STATUS      CHART            APP VERSION  DESCRIPTION
1         Fri Mar 3 15:16:24 year  superseded  redpanda-2.12.2  v22.3.13     Install complete
2         Fri Mar 3 15:19:41 year  superseded  redpanda-2.12.2  v22.3.13     Upgrade complete
3         Fri Mar 3 15:28:41 year  deployed    redpanda-2.12.2  v22.3.13     Rollback to 1

Verify that the cluster is healthy. If the cluster is unhealthy, the rollback may still be in progress. Try waiting a few moments, then run the command again.
TLS enabled:

kubectl exec redpanda-0 -n redpanda -c redpanda -- \
  rpk cluster health \
  --admin-api-tls-enabled \
  --admin-api-tls-truststore <path-to-admin-api-ca-certificate> \
  --api-urls <broker-url>:<admin-api-port>

TLS disabled:

kubectl exec redpanda-0 -n redpanda -c redpanda -- \
  rpk cluster health \
  --api-urls <broker-url>:<admin-api-port>

Example output
CLUSTER HEALTH OVERVIEW
=======================
Healthy: true
Controller ID: 1
All nodes: [2,1,0]
Nodes down: []
Leaderless partitions: []
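As an aside, the revision to roll back to can be picked programmatically instead of reading the helm history table by eye. This sketch assumes jq (already a prerequisite on this page) and takes the second-to-last revision:

```shell
# Print the second-to-last Helm revision number for the redpanda release,
# which is usually the revision to roll back to after a failed upgrade.
previous_revision() {
  helm history redpanda -n redpanda -o json | jq '.[-2].revision'
}

# Against a live cluster:
#   helm rollback redpanda "$(previous_revision)" -n redpanda
```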
Suggested reading
To set up a real-time dashboard that monitors your cluster health, see Monitor Redpanda.