Failover Runbook

This feature requires an enterprise license. To get a trial license key or extend your trial period, generate a new trial license key. To purchase a license, contact Redpanda Sales.

If Redpanda has enterprise features enabled and it cannot find a valid license, restrictions apply.

This guide provides step-by-step procedures for emergency failover when your primary Redpanda cluster becomes unavailable. Follow these procedures only during active disasters when immediate failover is required.

This is an emergency procedure. For planned failover testing or day-to-day shadow link management, see Configure Failover. Ensure you have completed the disaster readiness checklist in ./overview.adoc#disaster-readiness-checklist before an emergency occurs.

Emergency failover procedure

Follow these steps during an active disaster:

Assess the situation

Confirm that failover is necessary:

# Check if the primary cluster is responding
rpk cluster info --brokers prod-cluster-1.example.com:9092,prod-cluster-2.example.com:9092

# If primary cluster is down, check shadow cluster health
rpk cluster info --brokers shadow-cluster-1.example.com:9092,shadow-cluster-2.example.com:9092

Decision point: If the primary cluster is responsive, consider whether failover is actually needed. Partial outages may not require full disaster recovery.

Examples that require full failover:

  • Primary cluster is completely unreachable (network partition, regional outage)

  • Multiple broker failures preventing writes to critical topics

  • Data center failure affecting majority of brokers

  • Persistent authentication or authorization failures across the cluster

Examples that may NOT require failover:

  • Single broker failure with sufficient replicas remaining

  • Temporary network connectivity issues affecting some clients

  • High latency or performance degradation (but cluster still functional)

  • Non-critical topic or partition unavailability
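
The following sketch automates that first reachability check, using the same example broker addresses shown earlier; treat it as an illustration to adapt, not a substitute for judgment during an incident:

# Quick reachability check (example hostnames; adapt to your environment)
if rpk cluster info --brokers prod-cluster-1.example.com:9092,prod-cluster-2.example.com:9092 > /dev/null 2>&1; then
  echo "Primary cluster responded; full failover may not be required"
else
  echo "Primary cluster unreachable; continue with the failover procedure"
fi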

Verify shadow cluster status

Check the health of your shadow links:

# List all shadow links
rpk shadow list

# Check the configuration of your shadow link
rpk shadow describe <shadow-link-name>

# Check the status of your disaster recovery link
rpk shadow status <shadow-link-name>

Verify that the following conditions exist before proceeding with failover:

  • Shadow link state should be ACTIVE.

  • Topics should be in ACTIVE state (not FAULTED).

  • Replication lag should be reasonable for your RPO requirements.

Understanding replication lag:

Use rpk shadow status <shadow-link-name> to check lag, which shows the message count difference between source and shadow partitions:

  • Acceptable lag examples: 0-1000 messages for low-throughput topics, 0-10000 messages for high-throughput topics

  • Concerning lag examples: Growing lag over 50,000 messages, or lag that continuously increases without recovering

  • Critical lag examples: Lag exceeding your data loss tolerance (for example, if you can only afford to lose 1 minute of data, lag should represent less than 1 minute of typical message volume)
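
As a rough worked example of converting message lag into a time-based estimate, assuming you know the topic's typical throughput (the numbers below are illustrative, not recommendations):

# Estimate the data-loss window from observed lag (illustrative values)
LAG_MESSAGES=12000        # total lag reported by rpk shadow status
AVG_MSGS_PER_SEC=500      # your topic's typical produce rate
echo "Estimated data-loss window: $((LAG_MESSAGES / AVG_MSGS_PER_SEC)) seconds"
# 12000 / 500 = 24 seconds, acceptable only if your RPO allows it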

Document current state

Record the current lag and status before proceeding:

# Capture current status for post-mortem analysis
rpk shadow status <shadow-link-name> > failover-status-$(date +%Y%m%d-%H%M%S).log

Example output showing healthy replication before failover:

Shadow Link: <shadow-link-name>

Overview:
NAME                 <shadow-link-name>
UID                  <uid>
STATE                ACTIVE

Tasks:
Name                 Broker_ID  State   Reason
<task-name>          1          ACTIVE
<task-name>          2          ACTIVE

Topics:
Name: <topic-name>, State: ACTIVE

 Partition  SRC_LSO  SRC_HWM  DST_HWM  Lag
 0          1234     1468     1456     12
 1          2345     2579     2568     11

Note the replication lag to estimate potential data loss during failover.

Initiate failover

A complete cluster failover is appropriate if the source cluster is no longer reachable:

# Fail over all topics in the shadow link
rpk shadow failover <shadow-link-name> --all

For selective topic failover (when only specific services are affected):

# Fail over individual topics
rpk shadow failover <shadow-link-name> --topic <topic-name-1>
rpk shadow failover <shadow-link-name> --topic <topic-name-2>

Monitor failover progress

Track the failover process:

# Monitor status until all topics show FAILED_OVER
watch -n 5 "rpk shadow status <shadow-link-name>"

# Check detailed topic status and lag during emergency
rpk shadow status <shadow-link-name> --print-topic

Example output during successful failover:

Shadow Link: <shadow-link-name>

Overview:
NAME                 <shadow-link-name>
UID                  <uid>
STATE                ACTIVE

Tasks:
Name                 Broker_ID  State   Reason
<task-name>          1          ACTIVE
<task-name>          2          ACTIVE

Topics:
Name: <topic-name>, State: FAILED_OVER
Name: <topic-name>, State: FAILED_OVER
Name: <topic-name>, State: FAILING_OVER

Wait for all critical topics to reach the FAILED_OVER state before proceeding.
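
If you prefer a scripted wait over watching the output manually, a loop along these lines can poll until no topics report FAILING_OVER. It assumes the status output format shown above, so verify the grep pattern against your rpk version:

# Poll until no topic reports FAILING_OVER (assumes the output format shown above)
while rpk shadow status <shadow-link-name> --print-topic | grep -q "State: FAILING_OVER"; do
  echo "Failover still in progress..."
  sleep 5
done
echo "No topics remain in FAILING_OVER state"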

Update application configuration

Redirect your applications to the shadow cluster:

  • Update connection strings in your applications to point to shadow cluster brokers.

  • If you use DNS-based service discovery, update DNS records accordingly.

  • Restart applications to pick up the new connection settings.

  • Verify connectivity from application hosts to the shadow cluster.
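
How you redirect applications depends on how they are configured. As one hypothetical illustration, for an application that reads its bootstrap servers from an environment variable (the variable name and addresses below are placeholders):

# Hypothetical example: the application reads brokers from an environment variable
export KAFKA_BOOTSTRAP_SERVERS="shadow-cluster-1.example.com:9092,shadow-cluster-2.example.com:9092"

# Confirm the shadow cluster is reachable from the application host
rpk cluster info --brokers "$KAFKA_BOOTSTRAP_SERVERS"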

Verify application functionality

Test critical application workflows:

# Verify applications can produce messages
rpk topic produce <topic-name> --brokers <shadow-cluster-address>:9092

# Verify applications can consume messages
rpk topic consume <topic-name> --brokers <shadow-cluster-address>:9092 --num 1

Test message production and consumption, consumer group functionality, and critical business workflows to ensure everything is working properly.
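
To spot-check consumer group behavior specifically, you can consume with a throwaway group and then inspect its offsets; the group name below is arbitrary:

# Consume one message with a throwaway consumer group
rpk topic consume <topic-name> --brokers <shadow-cluster-address>:9092 --group failover-smoke-test --num 1

# Inspect the group's offsets and lag
rpk group describe failover-smoke-test --brokers <shadow-cluster-address>:9092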

Clean up and stabilize

After all applications are running normally:

# Optional: Delete the shadow link (no longer needed)
rpk shadow delete <shadow-link-name>

Document the failover initiation and completion times, the applications affected and their recovery times, data loss estimates based on replication lag, and any issues encountered during failover.
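
One simple way to capture this while details are fresh is to start from a small template, for example (the file name and fields are only a suggestion):

# Start a post-incident record (template fields are only a suggestion)
cat > failover-postmortem-$(date +%Y%m%d).md <<'EOF'
Failover initiated at:
Failover completed at:
Applications affected and recovery times:
Estimated data loss (based on replication lag):
Issues encountered:
EOF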

Troubleshoot common issues

Topics stuck in FAILING_OVER state

Problem: Topics remain in FAILING_OVER state for extended periods

Solution: Check shadow cluster logs for specific error messages and ensure sufficient cluster resources (CPU, memory, disk space) are available on the shadow cluster. Verify network connectivity between shadow cluster nodes and confirm that all shadow topic partitions have elected leaders and the controller partition is properly replicated with an active leader.
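
A few commands that may help with these checks on the shadow cluster (verify the flags against your rpk version):

# Check overall shadow cluster health (down brokers, leaderless partitions)
rpk cluster health --brokers <shadow-cluster-address>:9092

# Confirm the shadow topic's partitions have elected leaders
rpk topic describe <topic-name> --print-partitions --brokers <shadow-cluster-address>:9092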

If topics remain stuck after addressing these cluster health issues and you need immediate failover, you can force delete the shadow link to failover all topics:

# Force delete the shadow link to failover all topics
rpk shadow delete <shadow-link-name> --force

Force deleting a shadow link immediately fails over all topics in the link. This action is irreversible and should only be used when topics are stuck and you need immediate access to all replicated data.

Topics in FAULTED state

Problem: Topics show FAULTED state and are not replicating

Solution: Check for authentication issues, network connectivity problems, or source cluster unavailability. Verify that the shadow link service account still has the required permissions on the source cluster. Review shadow cluster logs for specific error messages about the faulted topics.
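
If the source cluster is reachable at all, one way to separate connectivity problems from credential problems is to connect to it directly with the shadow link's service account; the SASL settings below are placeholders for whatever mechanism your source cluster uses:

# Test connectivity and credentials against the source cluster (placeholder values)
rpk cluster info --brokers <source-cluster-address>:9092 \
  -X user=<service-account> -X pass=<password> -X sasl.mechanism=SCRAM-SHA-256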

Application connection failures

Problem: Applications cannot connect to shadow cluster after failover

Solution: Verify shadow cluster broker endpoints are correct and check security group and firewall rules. Confirm authentication credentials are valid for the shadow cluster and test network connectivity from application hosts.
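
Basic reachability checks from an application host, assuming standard tooling is available there:

# Test TCP reachability to a shadow broker (requires netcat)
nc -vz <shadow-cluster-address> 9092

# Confirm the Kafka API responds with your client configuration
rpk cluster info --brokers <shadow-cluster-address>:9092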

Consumer group offset issues

Problem: Consumers start from beginning or wrong positions

Solution: Verify consumer group offsets were replicated (check your filters) and use rpk group describe <group-name> to check offset positions. If necessary, manually reset offsets to appropriate positions. See How to manage consumer group offsets in Redpanda for detailed reset procedures.
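
For example, to move a group back to the earliest available offsets on the shadow cluster, a reset along these lines may apply; stop the group's consumers first and confirm the rpk group seek options against your rpk version:

# Reset a consumer group to the earliest available offsets (stop its consumers first)
rpk group seek <group-name> --to start --brokers <shadow-cluster-address>:9092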

Next steps

After successful failover, focus on recovery planning and process improvement. Begin by assessing the source cluster failure and determining whether to restore the original cluster or permanently promote the shadow cluster as your new primary.

Immediate recovery planning:

  1. Assess source cluster: Determine root cause of the outage

  2. Plan recovery: Decide whether to restore source cluster or promote shadow cluster permanently

  3. Data synchronization: Plan how to synchronize any data produced during failover

  4. Fail forward: Create a new shadow link with the failed-over shadow cluster as the source to maintain a DR cluster

Process improvement:

  1. Document the incident: Record timeline, impact, and lessons learned

  2. Update runbooks: Improve procedures based on what you learned

  3. Test regularly: Schedule regular disaster recovery drills

  4. Review monitoring: Ensure monitoring caught the issue appropriately