# Configure Shadowing

You can create and manage shadow links with the Redpanda Cloud UI, the Cloud API, or rpk, giving you flexibility in how you interact with your disaster recovery infrastructure. Deploy clusters in different geographic regions to protect against regional disasters.

## Prerequisites

### License and cluster requirements

Shadowing is supported on BYOC and Dedicated clusters running Redpanda version 25.3 and later.

### Cluster properties

Both source and shadow clusters must have the `enable_shadow_linking` cluster property set to `true`. Starting with Redpanda v25.3, this cluster property is enabled by default on new Redpanda Cloud clusters. Existing clusters created before v25.3 must enable this property manually. See Configure Cluster Properties.

### Replication service permissions

You must configure a service account on the source cluster with the following ACL permissions for shadow link replication:

- **Topics**: `read` permission on all topics you want to replicate
- **Topic configurations**: `describe_configs` permission on topics for configuration synchronization
- **Consumer groups**: `describe` and `read` permission on consumer groups for offset replication
- **ACLs**: `describe` permission on ACL resources to replicate security policies
- **Cluster**: `describe` permission on the cluster resource to access ACLs

This service account authenticates from the shadow cluster to the source cluster and performs the actual data replication. You provide the credentials for this account when you set up the shadow link.

### Network and authentication

You must configure network connectivity between clusters with appropriate firewall rules so that the shadow cluster can connect to the source cluster for data replication. Shadowing uses a pull-based architecture in which the shadow cluster fetches data from the source cluster.
For detailed networking configuration, see Networking.

If you use authentication for the shadow link connection, configure the source cluster with your chosen authentication method (SASL/SCRAM, TLS, mTLS) and ensure the shadow cluster has the proper credentials to authenticate to the source cluster.

## Set up Shadowing

To set up Shadowing, you need to create a shadow link and configure filters to select which topics, consumer groups, ACLs, and Schema Registry data to replicate.

If you use the Cloud API to set up Shadowing, you must authenticate to the API by including an access token in your requests.

### Create a shadow link

Any BYOC or Dedicated cluster can create a shadow link to a source cluster.

You can use rpk to generate a sample configuration file with common filter patterns:

```bash
# Generate a sample configuration file with placeholder values
rpk shadow config generate --for-cloud -o shadow-config.yaml
```

This creates a complete YAML configuration file that you can customize for your environment. The template includes all available fields with comments explaining their purpose. For detailed command options, see `rpk shadow config generate --for-cloud`.

#### Explore the configuration file

```yaml
# Sample ShadowLinkConfig YAML with all fields
name: <shadow-link-name>                  # Unique name for this shadow link, example: "production-dr"

cloud_options:
  # Use either source_redpanda_id or bootstrap_servers: only one is required.
  source_redpanda_id: <source-cluster-id> # Optional: 20-character lowercase ID of the cluster
                                          # Example: m7xtv2qq5njbhwruk88f
  shadow_redpanda_id: <shadow-cluster-id> # 20-character lowercase ID of the cluster
                                          # Example: m7xtv2qq5njbhwruk88f

client_options:
  bootstrap_servers:                      # Source cluster brokers to connect to
    - <source-broker-1>:<port>            # Example: "prod-kafka-1.example.com:9092"
    - <source-broker-2>:<port>            # Example: "prod-kafka-2.example.com:9092"
    - <source-broker-3>:<port>            # Example: "prod-kafka-3.example.com:9092"
  source_cluster_id: <cluster-id>         # Optional: UUID assigned by Redpanda
                                          # Example: a882bc98-7aca-40f6-a657-36a0b4daf1fd
                                          # This UUID is not available in Redpanda Cloud.

  # TLS settings using PEM strings
  tls_settings:
    enabled: true
    tls_pem_settings:
      ca: |-
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
      key: ${secrets.<key-from-shadow-cluster-secret-store>}
      cert: |-
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----

  # Create SASL credentials in the source cluster.
  # Then, with this configuration, ensure the shadow cluster uses the credentials
  # to authenticate to the source cluster.
  authentication_configuration:
    # SASL/SCRAM authentication
    scram_configuration:
      username: <sasl-username>           # SASL/SCRAM username, example: "shadow-replication-user"
      password: ${secrets.<sasl-password-secret-id>}  # ID of secret containing SASL/SCRAM password
      scram_mechanism: SCRAM_SHA_256      # SCRAM mechanism: "SCRAM_SHA_256" or "SCRAM_SHA_512"

  # Connection tuning - adjust based on network characteristics
  metadata_max_age_ms: 10000              # How often to refresh cluster metadata (default: 10000ms)
  connection_timeout_ms: 1000             # Connection timeout (default: 1000ms, increase for high latency)
  retry_backoff_ms: 100                   # Backoff between retries (default: 100ms)
  fetch_wait_max_ms: 500                  # Max time to wait for fetch requests (default: 500ms)
  fetch_min_bytes: 5242880                # Min bytes per fetch (default: 5MB)
  fetch_max_bytes: 20971520               # Max bytes per fetch (default: 20MB)
  fetch_partition_max_bytes: 1048576      # Max bytes per partition fetch (default: 1MB)

topic_metadata_sync_options:
  interval: 30s                           # How often to sync topic metadata (examples: "30s", "1m", "5m")
  auto_create_shadow_topic_filters:       # Filters for automatic topic creation
    - pattern_type: LITERAL               # Include all topics (wildcard)
      filter_type: INCLUDE
      name: '*'
    - pattern_type: PREFIX                # Exclude topics with specific prefix
      filter_type: EXCLUDE
      name: <topic-prefix-to-exclude>     # Examples: "temp-", "test-", "debug-"
  synced_shadow_topic_properties:         # Additional topic properties to sync (beyond defaults)
    - retention.ms                        # Topic retention time
    - segment.ms                          # Segment roll time
  exclude_default: false                  # Include default properties (compression, retention, etc.)
  start_at_earliest: {}                   # Start from the beginning of source topics (default)
  paused: false                           # Enable topic metadata synchronization

consumer_offset_sync_options:
  interval: 30s                           # How often to sync consumer group offsets
  paused: false                           # Enable consumer offset synchronization
  group_filters:                          # Filters for consumer groups to sync
    - pattern_type: LITERAL
      filter_type: INCLUDE
      name: '*'                           # Include all consumer groups

security_sync_options:
  interval: 30s                           # How often to sync security settings
  paused: false                           # Enable security settings synchronization
  acl_filters:                            # Filters for ACLs to sync
    - resource_filter:
        resource_type: TOPIC              # Resource type: "TOPIC", "GROUP", "CLUSTER"
        pattern_type: PREFIXED            # Pattern type: "LITERAL", "PREFIXED"
        name: <resource-pattern>          # Examples: "prod-", "app-data-"
      access_filter:
        principal: User:<username>        # Principal name, example: "User:app-service"
        operation: ANY                    # Operation: "READ", "WRITE", "CREATE", "DELETE", "ALTER", "DESCRIBE", "ANY"
        permission_type: ALLOW            # Permission: "ALLOW" or "DENY"
        host: '*'                         # Host pattern, examples: "*", "10.0.0.0/8", "app-server.example.com"

schema_registry_sync_options:             # Schema Registry synchronization options
  shadow_schema_registry_topic: {}        # Enable byte-for-byte _schemas topic replication
```

Because the shadow cluster pulls from the source cluster, the shadow cluster requires credentials to connect to the source cluster. Because you cannot store plaintext passwords in Redpanda Cloud, you must create a secret to hold the password for the user on the source cluster. If you use mTLS, you must also create a secret to hold the key of the client certificate used for client authentication. Reference that secret in `client_options.tls_settings.tls_pem_settings.key` in the configuration file.

In the shadow cluster, create the secret:

**Cloud UI**

In the shadow cluster, go to the **Secrets Store** page and create a secret for the source cluster user, scoped to **Redpanda Cluster**.
If necessary, first create the user with all ACLs enabled in the source cluster.

**rpk**

In the shadow cluster, create a secret to store the authentication credential that the cluster will use (`scram_configuration.password` in the example configuration in the next step). Your secret must be scoped to **Redpanda Cluster**. Use `rpk security secret create` to create the secret from the command line.

**Data Plane API**

In the shadow cluster, create a secret to store the authentication credential that the cluster will use (`scram_configuration.password` in the example configuration in the next step). Your secret must be scoped to **Redpanda Cluster**. Use the Data Plane API to programmatically create the secret.

Then, in the shadow cluster, create a shadow link to the source cluster:

**Cloud UI**

1. At the organization level of the Cloud UI, navigate to **Shadow Link**.
2. Click **Create shadow link**.
3. Enter a unique name for the shadow link. The name must start and end with lowercase alphanumeric characters; hyphens are allowed.
4. Select the source cluster from which data will be replicated. You can select an existing Redpanda Cloud cluster, or you can enter a bootstrap server URL to connect to any Kafka-compatible cluster. For an existing Redpanda Cloud cluster, you select the specific cluster on the next page.
5. Enter the authorization and authentication details from the source cluster, including the user and the name of the secret containing the password created in the previous step.
6. Optionally, expand **Advanced options** to configure client connection properties.
7. Click **Save** to apply changes.

**rpk**

Run `rpk cloud login`. Select your shadow cluster when prompted. Then, to create a shadow link with the source cluster, run the following commands from the shadow cluster:

```bash
# When logged in, optionally create a new rpk profile to easily
# switch to the shadow cluster
rpk profile create --from-cloud <shadow-cluster-id> shadow-cluster

# Use the generated configuration file to create the shadow link
rpk shadow create --config-file shadow-config.yaml
```

For detailed command options, see `rpk shadow create`.

Use `rpk profile` to save your cluster connection details and credentials for both source and shadow clusters. This allows you to easily switch between the two configurations.

**Control Plane API**

To create a shadow link using the Control Plane API, make a `POST /shadow-links` request from the shadow cluster:

```bash
curl -X POST 'https://api.redpanda.com/v1/shadow-links' \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${RP_CLOUD_TOKEN}" \
  -d '{
    "shadow_link": {
      "shadow_redpanda_id": "<destination-redpanda-cluster-id>",
      "name": "<shadow-link-name>",
      "client_options": {
        "bootstrap_servers": ["<source-broker-1>:<port>", "<source-broker-2>:<port>", "<source-broker-3>:<port>"],
        "tls_settings": { "enabled": true },
        "authentication_configuration": {
          "scram_configuration": {
            "username": "<sasl-username>",
            "password": "${secrets.<sasl-password-secret-id>}",
            "scram_mechanism": "SCRAM_MECHANISM_SCRAM_SHA_256"
          }
        }
      },
      "topic_metadata_sync_options": {
        "interval": "30s",
        "auto_create_shadow_topic_filters": [
          {
            "name": "*",
            "filter_type": "FILTER_TYPE_INCLUDE",
            "pattern_type": "PATTERN_TYPE_LITERAL"
          },
          {
            "name": "<topic-prefix-to-exclude>",
            "filter_type": "FILTER_TYPE_EXCLUDE",
            "pattern_type": "PATTERN_TYPE_PREFIX"
          }
        ],
        "start_at_earliest": {},
        "paused": false
      },
      "consumer_offset_sync_options": { "paused": true },
      "security_sync_options": { "paused": true }
    }
  }'
```

Replace the placeholders with your own values:

- `<destination-redpanda-cluster-id>`: ID of the shadow (destination) cluster.
- `<shadow-link-name>`: Unique name for this shadow link, for example, `production-dr`.
- `<source-broker-1>:<port>`, `<source-broker-2>:<port>`, …: Source cluster brokers to connect to, for example, `prod-kafka-1.example.com:9092`, `prod-kafka-2.example.com:9092`.
- `<sasl-username>`: SASL/SCRAM username, for example, `shadow-replication-user`. You create this user in the source cluster.
- `<sasl-password-secret-id>`: The name of the secret containing the SASL/SCRAM password from the source cluster.
- `<topic-prefix-to-exclude>`: Exclude topics that use this prefix, for example, `temp-`, `test-`, `debug-`.

The response object represents the long-running operation of creating a shadow link. For the full API reference, see the Control Plane API reference.

## Set filters

Filters determine which resources Shadowing automatically creates when establishing your shadow link. Topic filters select which topics Shadowing automatically creates as shadow topics when they appear on the source cluster. After Shadowing creates a shadow topic, it continues replicating until you fail over the topic, delete it, or delete the entire shadow link. Consumer group and ACL filters control which groups and security policies replicate to maintain application functionality.

### Filter types and patterns

Each filter uses two key settings:

- **Pattern type**: Determines how names are matched.
  - `LITERAL`: Matches names exactly (including the special wildcard `*` to match all items)
  - `PREFIX`: Matches names that start with the specified string
- **Filter type**: Specifies whether to include or exclude matching items.
  - `INCLUDE`: Replicate items that match the pattern
  - `EXCLUDE`: Skip items that match the pattern

### Filter processing rules

Redpanda processes filters in the order you define them, with `EXCLUDE` filters taking precedence. Design your filter lists carefully:

- **Exclude filters win**: If any `EXCLUDE` filter matches a resource, that resource is excluded regardless of `INCLUDE` filters.
- **Order matters for INCLUDE filters**: Among `INCLUDE` filters, the first match determines the result.
- **Default behavior**: Items that don't match any filter are excluded from replication.
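The processing rules above can be sketched in a few lines of Python. This is an illustrative model for reasoning about filter lists, not Redpanda code; the `Filter` tuple and function names are assumptions for demonstration.

```python
# Illustrative sketch only: models the documented filter precedence rules
# (EXCLUDE wins, any matching INCLUDE selects, unmatched names are excluded).
from typing import NamedTuple

class Filter(NamedTuple):
    pattern_type: str  # "LITERAL" or "PREFIX"
    filter_type: str   # "INCLUDE" or "EXCLUDE"
    name: str

def matches(f: Filter, name: str) -> bool:
    if f.pattern_type == "LITERAL":
        return f.name == "*" or f.name == name
    return name.startswith(f.name)  # PREFIX

def is_replicated(name: str, filters: list[Filter]) -> bool:
    # Any matching EXCLUDE filter wins outright.
    if any(matches(f, name) for f in filters if f.filter_type == "EXCLUDE"):
        return False
    # Otherwise a matching INCLUDE filter selects the item;
    # items that match nothing are excluded by default.
    return any(matches(f, name) for f in filters if f.filter_type == "INCLUDE")

filters = [
    Filter("PREFIX", "EXCLUDE", "test-"),
    Filter("LITERAL", "INCLUDE", "*"),
]
print(is_replicated("orders", filters))       # True
print(is_replicated("test-orders", filters))  # False
```

Note that this simplified model omits the system-topic restrictions described under "System topic filtering rules."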
### Common filtering patterns

Replicate all topics except test topics:

```yaml
topic_metadata_sync_options:
  auto_create_shadow_topic_filters:
    - pattern_type: PREFIX
      filter_type: EXCLUDE
      name: test-            # Exclude all test topics
    - pattern_type: LITERAL
      filter_type: INCLUDE
      name: '*'              # Include all other topics
```

Replicate only production topics:

```yaml
topic_metadata_sync_options:
  auto_create_shadow_topic_filters:
    - pattern_type: PREFIX
      filter_type: INCLUDE
      name: prod-            # Include production topics
    - pattern_type: PREFIX
      filter_type: INCLUDE
      name: production-      # Alternative production prefix
```

Replicate specific consumer groups:

```yaml
consumer_offset_sync_options:
  group_filters:
    - pattern_type: LITERAL
      filter_type: INCLUDE
      name: critical-app-consumers  # Include specific consumer group
    - pattern_type: PREFIX
      filter_type: INCLUDE
      name: prod-consumer-          # Include production consumers
```

### Schema Registry synchronization

Shadowing can replicate Schema Registry data by shadowing the `_schemas` system topic. When enabled, this provides byte-for-byte replication of schema definitions, versions, and compatibility settings.

To enable Schema Registry synchronization, add the following to your shadow link configuration:

```yaml
schema_registry_sync_options:
  shadow_schema_registry_topic: {}
```

Requirements:

- The `_schemas` topic must exist on the source cluster.
- The `_schemas` topic must not exist on the shadow cluster, or must be empty.
- Once enabled, the `_schemas` topic is replicated completely.

Important: After the `_schemas` topic becomes a shadow topic, its replication cannot be stopped without either failing over the topic or deleting it entirely.

### System topic filtering rules

Redpanda system topics have the following filtering restrictions:

- Literal filters for `__consumer_offsets` and `_redpanda.audit_log` are rejected.
- Prefix filters for topics starting with `_redpanda` or `__redpanda` are rejected.
- Wildcard `*` filters will not match topics that start with `_redpanda` or `__redpanda`.
To shadow specific system topics, you must provide explicit literal filters for those individual topics.

### ACL filtering

ACLs are replicated by the Security Migrator task. Replicating ACLs is recommended so that your shadow cluster enforces the same permissions as your source cluster. To configure ACL filters:

```yaml
security_sync_options:
  acl_filters:
    # Include read permissions for production topics
    - resource_filter:
        resource_type: TOPIC      # Filter by topic resource
        pattern_type: PREFIXED    # Match by prefix
        name: prod-               # Production topic prefix
      access_filter:
        principal: User:app-user  # Application service user
        operation: READ           # Read operation
        permission_type: ALLOW    # Allow permission
        host: '*'                 # Any host
    # Include consumer group permissions
    - resource_filter:
        resource_type: GROUP      # Filter by consumer group
        pattern_type: LITERAL     # Exact match
        name: '*'                 # All consumer groups
      access_filter:
        principal: User:app-user  # Same application user
        operation: READ           # Read operation
        permission_type: ALLOW    # Allow permission
        host: '*'                 # Any host
```

### Consumer group filtering and behavior

Consumer group filters determine which consumer groups have their offsets replicated to the shadow cluster by the Consumer Group Shadowing task.

Offset replication operates selectively within each consumer group. Only committed offsets for active shadow topics are synchronized, even if the consumer group has offsets for additional topics that aren't being shadowed. For example, if consumer group `app-consumers` has committed offsets for the `orders`, `payments`, and `inventory` topics, but only `orders` is an active shadow topic, then only the `orders` offsets are replicated to the shadow cluster.
```yaml
consumer_offset_sync_options:
  interval: 30s   # How often to sync consumer group offsets
  paused: false   # Enable consumer offset synchronization
  group_filters:
    - pattern_type: PREFIX
      filter_type: INCLUDE
      name: prod-consumer-       # Include production consumer groups
    - pattern_type: LITERAL
      filter_type: EXCLUDE
      name: test-consumer-group  # Exclude specific test groups
```

#### Important consumer group considerations

- **Avoid name conflicts**: If you plan to consume data from the shadow cluster, do not use the same consumer group names as those used on the source cluster. While this won't break shadow linking, it can impact your RPO/RTO because conflicting group names may interfere with offset replication and consumer resumption during disaster recovery.
- **Offset clamping**: When Redpanda replicates consumer group offsets from the source cluster, offsets are automatically "clamped" during the commit process on the shadow cluster. If a committed offset from the source cluster is above the high watermark (HWM) of the corresponding shadow partition, Redpanda clamps the offset to the shadow partition's HWM before committing it to the shadow cluster. This ensures offsets remain valid and prevents consumers from seeking beyond available data on the shadow cluster.

### Starting offset for new shadow topics

When the Source Topic Sync task creates a shadow topic for the first time, you can control where replication begins on the source topic. This setting only applies to empty shadow partitions and is crucial for disaster recovery planning. Changing this configuration only affects new shadow topics; existing shadow topics continue replicating from their current position.
```yaml
topic_metadata_sync_options:
  start_at_earliest: {}
```

Alternatively, to start from the most recent offset:

```yaml
topic_metadata_sync_options:
  start_at_latest: {}
```

Or to start from a specific timestamp:

```yaml
topic_metadata_sync_options:
  start_at_timestamp: 2024-01-01T00:00:00Z
```

Starting offset options:

- **earliest** (default): Replicates all existing data from the source topic. Use this for complete disaster recovery where you need full data history.
- **latest**: Starts replication from the current end of the source topic, skipping existing data. Use this when you only need new data for disaster recovery and want to minimize initial replication time.
- **timestamp**: Starts replication from the first record with a timestamp at or after the specified time. Use this for point-in-time disaster recovery scenarios.

The starting offset only affects new shadow topics. After a shadow topic exists and has data, changing this setting has no effect on that topic's replication.

## Networking

Configure network connectivity between your source and shadow clusters to enable shadow link replication. The shadow cluster initiates connections to the source cluster using a pull-based architecture. For additional details, see Network and authentication.

### Connection requirements

- **Direction**: The shadow cluster connects to the source cluster (outbound from shadow, inbound to source).
- **Protocol**: Kafka protocol over TCP (default port 9092, or your configured listener ports).
- **Persistence**: Connections remain active for continuous replication.

### Firewall configuration

You must configure firewall rules to allow the shadow cluster to reach the source cluster.

On the source cluster network:

- Allow inbound TCP connections on Kafka listener ports (typically 9092).
- Allow connections from the shadow cluster's IP addresses or subnets.

On the shadow cluster network:

- Allow outbound TCP connections to the source cluster's Kafka listener ports.
- Ensure DNS resolution works for source cluster hostnames.
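Before creating the shadow link, you can sanity-check the firewall rules above by confirming that each source broker is reachable over TCP from the shadow cluster's network. A minimal Python sketch; the broker hostnames are placeholders, and this checks network reachability only, not Kafka-level authentication:

```python
# Minimal connectivity check: attempt a TCP connection to each source broker.
# A DNS failure or firewall block simply reports the broker as unreachable.
import socket

def broker_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder addresses; substitute your source cluster's brokers and ports.
for broker in ["prod-kafka-1.example.com", "prod-kafka-2.example.com"]:
    print(broker, "reachable:", broker_reachable(broker, 9092))
```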
### Bootstrap servers

Specify multiple bootstrap servers in your shadow link configuration for high availability:

```yaml
client_options:
  bootstrap_servers:             # Source cluster brokers to connect to
    - <source-broker-1>:<port>   # Example: "prod-kafka-1.example.com:9092"
    - <source-broker-2>:<port>   # Example: "prod-kafka-2.example.com:9092"
    - <source-broker-3>:<port>   # Example: "prod-kafka-3.example.com:9092"
```

The shadow cluster uses these addresses to discover all brokers in the source cluster. If one bootstrap server is unavailable, the shadow cluster tries the next one in the list.

### Network security

For production deployments, secure the network connection between clusters.

TLS encryption:

```yaml
client_options:
  tls_settings:
    enabled: true                   # Enable TLS
    tls_pem_settings:
      ca: |-                        # CA certificate in PEM format
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
      key: ${secrets.<key-from-shadow-cluster-secret-store>}  # Client private key (can use secrets reference)
      cert: |-                      # Optional: Client certificate in PEM format for mutual TLS
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
    do_not_set_sni_hostname: false  # Optional: Skip SNI hostname when using TLS (default: false)
```

Authentication:

```yaml
client_options:
  authentication_configuration:
    # SASL/SCRAM authentication.
    # Create SASL credentials in the source cluster.
    # Then, with this configuration, ensure the shadow cluster uses the credentials
    # to authenticate to the source cluster.
    scram_configuration:
      username: <sasl-username>       # SASL/SCRAM username, example: "shadow-replication-user"
      password: ${secrets.<sasl-password-secret-id>}  # ID of secret containing SASL/SCRAM password
      scram_mechanism: SCRAM_SHA_256  # SCRAM mechanism: "SCRAM_SHA_256" or "SCRAM_SHA_512"
```

### Connection tuning

Adjust connection parameters based on your network characteristics.
For example:

```yaml
client_options:
  # Connection and metadata settings
  connection_timeout_ms: 1000        # Default 1000ms, increase for high-latency networks
  retry_backoff_ms: 100              # Default 100ms, backoff between connection retries
  metadata_max_age_ms: 10000         # Default 10000ms, how often to refresh cluster metadata

  # Fetch request settings
  fetch_wait_max_ms: 500             # Default 500ms, max time to wait for fetch requests
  fetch_min_bytes: 5242880           # Default 5MB, minimum bytes to fetch per request
  fetch_max_bytes: 20971520          # Default 20MB, maximum bytes to fetch per request
  fetch_partition_max_bytes: 1048576 # Default 1MB, maximum bytes to fetch per partition
```

## Update an existing shadow link

To modify a shadow link configuration after creation:

**Cloud UI**

1. At the organization level of the Cloud UI, navigate to **Shadow Link**.
2. Select the shadow link you want to modify, and click **Edit**.
3. Edit the shadow link settings or the shadowing behavior by specifying which content from the source cluster to shadow (topics, ACLs, consumer groups, Schema Registry). You can also enable additional topic properties to be shadowed, or disable optional topic properties from being included in the shadowing.
4. Click **Save** to apply changes.

**rpk**

```bash
rpk shadow update <shadow-link-name>
```

This opens your default editor to modify the shadow link configuration. Only changed fields are updated on the server. For detailed command options, see `rpk shadow update`.

The shadow link name cannot be changed: you must delete and recreate the link to rename it.

**Control Plane API**

```bash
curl -X PATCH 'https://api.redpanda.com/v1/shadow-links/<shadow-link-id>' \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${RP_CLOUD_TOKEN}" \
  -d '{
    "security_sync_options": { "paused": false }
  }'
```

This endpoint returns a long-running operation. For the full API reference, see the Control Plane API reference.
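The PATCH request above sends only the fields to change. Assuming merge-patch-like semantics for partial updates (an assumption made for illustration, not a statement about the Control Plane API internals), the effect on the stored configuration can be sketched as:

```python
# Illustrative sketch: recursively overlay a partial-update payload onto a
# stored configuration, leaving unmentioned fields untouched.
def merge_patch(current: dict, patch: dict) -> dict:
    out = dict(current)  # shallow copy; nested dicts replaced as needed below
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge_patch(out[key], value)  # recurse into nested objects
        else:
            out[key] = value  # scalar or new field: overwrite
    return out

stored = {"security_sync_options": {"paused": True, "interval": "30s"}, "name": "production-dr"}
print(merge_patch(stored, {"security_sync_options": {"paused": False}}))
# {'security_sync_options': {'paused': False, 'interval': '30s'}, 'name': 'production-dr'}
```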