# Redpanda Migrator

With Redpanda Migrator, you can move your workloads from any Apache Kafka system to Redpanda using a single command. It lets you migrate Kafka messages, schemas, and ACLs quickly and efficiently.

Redpanda Connect's Redpanda Migrator uses the unified migrator components (available in Redpanda Connect 4.67.5+):

- The `redpanda_migrator` input connects to the source Kafka cluster and Schema Registry.
- The `redpanda_migrator` output handles all migration logic, including topic creation, schema synchronization, and consumer group offset translation.

If you're currently using the legacy `redpanda_migrator_bundle` components, see Migrate to the Unified Redpanda Migrator for migration instructions.

## Create a Kafka cluster and a Redpanda Cloud cluster

First, provision two clusters: a Kafka cluster called `source` and a Redpanda Cloud cluster called `destination`. The following sample connection details are used throughout this cookbook:

Source:

- Broker: `source.cloud.kafka.com:9092`
- Schema Registry: `https://schema-registry-source.cloud.kafka.com:30081`
- Username: `kafka`
- Password: `testpass`

Destination:

- Broker: `destination.cloud.redpanda.com:9092`
- Schema Registry: `https://schema-registry-destination.cloud.redpanda.com:30081`
- Username: `redpanda`
- Password: `testpass`

Then create two topics in the source Kafka cluster, `foo` and `bar`, and an ACL for each topic:

```bash
cat > ./config.properties <<EOF
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="kafka" password="testpass";
EOF

kafka-topics.sh --bootstrap-server source.cloud.kafka.com:9092 --command-config config.properties --create --if-not-exists --topic foo --replication-factor=3 --partitions=2
kafka-topics.sh --bootstrap-server source.cloud.kafka.com:9092 --command-config config.properties --create --if-not-exists --topic bar --replication-factor=3 --partitions=2
kafka-topics.sh --bootstrap-server source.cloud.kafka.com:9092 --command-config config.properties --list --exclude-internal

kafka-acls.sh --bootstrap-server source.cloud.kafka.com:9092 --command-config config.properties --add --allow-principal User:redpanda --operation Read --topic foo
kafka-acls.sh --bootstrap-server source.cloud.kafka.com:9092 --command-config config.properties --add --deny-principal User:redpanda --operation Read --topic bar
kafka-acls.sh --bootstrap-server source.cloud.kafka.com:9092 --command-config config.properties --list
```

## Create schemas

When the demo clusters are up and running, use `curl` to create a schema for each topic in the source cluster:

```bash
curl -X POST -u "kafka:testpass" -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"schema": "{\"name\": \"Foo\", \"type\": \"record\", \"fields\": [{\"name\": \"data\", \"type\": \"int\"}]}"}' \
  https://schema-registry-source.cloud.kafka.com:30081/subjects/foo/versions

curl -X POST -u "kafka:testpass" -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"schema": "{\"name\": \"Bar\", \"type\": \"record\", \"fields\": [{\"name\": \"data\", \"type\": \"int\"}]}"}' \
  https://schema-registry-source.cloud.kafka.com:30081/subjects/bar/versions
```
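As an optional sanity check (not part of the original steps), you can list the registered subjects and inspect the latest schema versions through the standard Schema Registry REST API, using the same sample credentials:

```bash
# List all registered subjects; expect ["bar","foo"] once both schemas exist
curl -u "kafka:testpass" https://schema-registry-source.cloud.kafka.com:30081/subjects

# Inspect the latest registered version of the foo schema
curl -u "kafka:testpass" https://schema-registry-source.cloud.kafka.com:30081/subjects/foo/versions/latest
```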
## Generate messages

Let's simulate an application with a producer and consumer actively publishing and reading messages on the source cluster. You can use Redpanda Connect to generate some Avro-encoded messages and push them to the two topics on the source cluster.

1. Go to the Connect page on your cluster and click Create pipeline.
2. In Pipeline name, enter a name and add a short description.
3. For Compute units, leave the default value of 1. Compute units allocate server resources to a pipeline. One compute unit is equivalent to 0.1 CPU and 400 MB of memory.
4. For Configuration, paste the following configuration.

   `generate_data.yaml`:

   ```yaml
   http:
     enabled: false

   input:
     sequence:
       inputs:
         - generate:
             mapping: |
               let msg = counter()
               root.data = $msg
               meta kafka_topic = match $msg % 2 {
                 0 => "foo"
                 1 => "bar"
               }
             interval: 1s
             count: 0
             batch_size: 1
           processors:
             - schema_registry_encode:
                 url: "https://schema-registry-source.cloud.kafka.com:30081"
                 subject: ${! metadata("kafka_topic") }
                 avro_raw_json: true
                 basic_auth:
                   enabled: true
                   username: kafka
                   password: testpass

   output:
     kafka_franz:
       seed_brokers: [ "source.cloud.kafka.com:9092" ]
       topic: ${! @kafka_topic }
       partitioner: manual
       partition: ${! random_int(min:0, max:1) }
       tls:
         enabled: true
       sasl:
         - mechanism: SCRAM-SHA-256
           username: kafka
           password: testpass
   ```

5. Click Create. Your pipeline details are displayed and the pipeline state changes from Starting to Running, which may take a few minutes. If you don't see this state change, refresh your page.

Next, add a Redpanda Connect consumer, which reads messages from the source cluster topics, and leave it running. This consumer uses the `foobar` consumer group, which is reused in a later step when consuming from the destination cluster.

1. Go to the Connect page on your cluster and click Create pipeline.
2. In Pipeline name, enter a name and add a short description.
3. For Compute units, leave the default value of 1.
4. For Configuration, paste the following configuration.

   `read_data_source.yaml`:

   ```yaml
   http:
     enabled: false

   input:
     kafka_franz:
       seed_brokers: [ "source.cloud.kafka.com:9092" ]
       topics:
         - '^[^_]' # Skip topics which start with `_`
       regexp_topics: true
       consumer_group: foobar
       tls:
         enabled: true
       sasl:
         - mechanism: SCRAM-SHA-256
           username: kafka
           password: testpass
     processors:
       - schema_registry_decode:
           url: "https://schema-registry-source.cloud.kafka.com:30081"
           avro_raw_json: true
           basic_auth:
             enabled: true
             username: kafka
             password: testpass

   output:
     stdout: {}
     processors:
       - mapping: |
           root = this.merge({"count": counter(), "topic": @kafka_topic, "partition": @kafka_partition})
   ```

5. Click Create. Your pipeline details are displayed and the pipeline state changes from Starting to Running, which may take a few minutes. If you don't see this state change, refresh your page.

At this point, the source cluster has some data in both the `foo` and `bar` topics, and the consumer prints the messages it reads from these topics to stdout.
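Each line the consumer prints is the decoded Avro document merged with the `count`, `topic`, and `partition` fields added by the output mapping. The values below are hypothetical and depend on timing and the random partitioner, but the shape should look similar to this:

```
{"count":1,"data":1,"partition":0,"topic":"bar"}
{"count":2,"data":2,"partition":1,"topic":"foo"}
{"count":3,"data":3,"partition":0,"topic":"bar"}
```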
## Configure and start Redpanda Migrator

The unified Redpanda Migrator does the following:

- The `redpanda_migrator` input connects to the source Kafka cluster and Schema Registry to consume messages and schema information.
- The `redpanda_migrator` output handles all migration logic:
  - Schema migration: reads schemas from the source Schema Registry and synchronizes them to the destination.
  - Topic creation: automatically creates destination topics that don't exist, with the proper configurations.
  - ACL migration: migrates access control lists according to the migration rules.
  - Message streaming: processes and routes messages from source to destination topics.
  - Consumer group offset translation: maps source consumer group offsets to equivalent destination positions.

If new topics are created in the source cluster while the migrator is running, they are migrated when messages are written to them.

ACL migration for topics adheres to the following principles:

- `ALLOW WRITE` ACLs for topics are not migrated.
- `ALLOW ALL` ACLs for topics are downgraded to `ALLOW READ`.
- Group ACLs are not migrated.

Changing topic configurations, such as partition count, isn't currently supported.

Now, use the following unified Redpanda Migrator configuration. See the `redpanda_migrator` input and `redpanda_migrator` output docs for details.

1. Go to the Connect page on your cluster and click Create pipeline.
2. In Pipeline name, enter a name and add a short description.
3. For Compute units, leave the default value of 1.
4. For Configuration, paste the following configuration.

   `redpanda_migrator.yaml`:

   ```yaml
   input:
     label: "migration_pipeline" # (1)
     redpanda_migrator:
       # Source Kafka settings
       seed_brokers: [ "source.cloud.kafka.com:9092" ]
       topics:
         - '^[^_]' # Skip internal topics which start with `_`
       regexp_topics: true
       consumer_group: migrator
       tls:
         enabled: true
       sasl:
         - mechanism: SCRAM-SHA-256
           username: kafka
           password: testpass
       # Source Schema Registry settings
       schema_registry:
         url: "https://schema-registry-source.cloud.kafka.com:30081"
         basic_auth:
           enabled: true
           username: kafka
           password: testpass

   output:
     label: "migration_pipeline" # (2)
     redpanda_migrator:
       # Destination Redpanda settings
       seed_brokers: [ "destination.cloud.redpanda.com:9092" ]
       tls:
         enabled: true
       sasl:
         - mechanism: SCRAM-SHA-256
           username: redpanda
           password: testpass
       # Destination Schema Registry and migration settings
       schema_registry:
         url: https://schema-registry-destination.cloud.redpanda.com:30081
         include_deleted: true
         translate_ids: true
         basic_auth:
           enabled: true
           username: redpanda
           password: testpass
       # Consumer group migration settings
       consumer_groups:
         enabled: true
         interval: 30s
       serverless: false # (3)
   ```

   (1) Labels are used for pairing input and output components.

   (2) A matching label pairs the input and output components. Labels are required only when using multiple input/output pairs (for example, when migrating from multiple Kafka clusters to multiple Redpanda clusters). However, it's recommended to always use labels for clarity and consistency, even with a single input/output configuration. Label names must be between 3 and 128 characters and can contain only alphanumeric characters, hyphens, and underscores (A-Za-z0-9-_).

   (3) Enable this flag when migrating to Redpanda Serverless. It automatically limits the configuration to features supported by Serverless clusters. If you're migrating to a BYOC cluster, you can omit this field.

5. Click Create. Your pipeline details are displayed and the pipeline state changes from Starting to Running, which may take a few minutes. If you don't see this state change, refresh your page.

## Check the status of migrated topics

You can use the Redpanda `rpk` CLI to check which topics and ACLs have been migrated to the destination cluster. You can quickly install `rpk` if you don't already have it; see the sketch after this paragraph.

For now, users require manual migration; however, this step isn't required for the current demo. Similarly, roles are specific to Redpanda and, for now, also require manual migration if the source cluster is based on Redpanda.
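If you need to install `rpk`, the following sketch shows one common route on Linux (downloading the binary from the Redpanda GitHub releases and placing it on your `PATH`); see the Redpanda installation docs for the full, platform-specific instructions:

```bash
# Download the latest rpk release for Linux (amd64)
curl -LO https://github.com/redpanda-data/redpanda/releases/latest/download/rpk-linux-amd64.zip

# Unpack it into a directory that's on your PATH, then verify
mkdir -p ~/.local/bin
unzip rpk-linux-amd64.zip -d ~/.local/bin/
rpk version
```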
```bash
rpk -X brokers=destination.cloud.redpanda.com:9092 -X tls.enabled=true -X sasl.mechanism=SCRAM-SHA-256 -X user=redpanda -X pass=testpass topic list
```

```
NAME      PARTITIONS  REPLICAS
_schemas  1           1
bar       2           1
foo       2           1
```

```bash
rpk -X brokers=destination.cloud.redpanda.com:9092 -X tls.enabled=true -X sasl.mechanism=SCRAM-SHA-256 -X user=redpanda -X pass=testpass security acl list
```

```
PRINCIPAL      HOST  RESOURCE-TYPE  RESOURCE-NAME  RESOURCE-PATTERN-TYPE  OPERATION  PERMISSION  ERROR
User:redpanda  *     TOPIC          bar            LITERAL                READ       DENY
User:redpanda  *     TOPIC          foo            LITERAL                READ       ALLOW
```

## Check metrics to monitor progress

Redpanda Connect provides a comprehensive suite of metrics in various formats, such as Prometheus, which you can use to monitor its performance in your observability stack. Besides the standard Redpanda Connect metrics, the `redpanda_migrator` input also emits an `input_redpanda_migrator_lag` metric for monitoring the migration progress of each topic and partition.

To monitor the migration progress, use the Redpanda Cloud OpenMetrics endpoint, which exposes all Redpanda and connector metrics for your cluster. You can integrate this endpoint with Prometheus, Datadog, or other observability platforms. For step-by-step instructions on configuring monitoring and connecting your observability tool, see Monitor Redpanda Cloud.

After ingesting the metrics, search for the `input_redpanda_migrator_lag` metric in your monitoring tool and filter by topic and partition as needed to track migration lag.

## Read from the migrated topics

Stop the `read_data_source.yaml` consumer you started earlier and then start a similar consumer against the destination cluster. Before you start the consumer on the destination cluster, give the migrator some time to replicate the translated offsets.

1. On the Connect page, stop the `read_data_source` pipeline you created earlier.
2. Go to the Connect page on your cluster and click Create pipeline.
3. In Pipeline name, enter a name and add a short description.
4. For Compute units, leave the default value of 1.
5. For Configuration, paste the following configuration.

   `read_data_destination.yaml`:

   ```yaml
   http:
     enabled: false

   input:
     kafka_franz:
       seed_brokers: [ "destination.cloud.redpanda.com:9092" ]
       topics:
         - '^[^_]' # Skip topics which start with `_`
       regexp_topics: true
       consumer_group: foobar
       tls:
         enabled: true
       sasl:
         - mechanism: SCRAM-SHA-256
           username: redpanda
           password: testpass
     processors:
       - schema_registry_decode:
           url: "https://schema-registry-destination.cloud.redpanda.com:30081"
           avro_raw_json: true
           basic_auth:
             enabled: true
             username: redpanda
             password: testpass

   output:
     stdout: {}
     processors:
       - mapping: |
           root = this.merge({"count": counter(), "topic": @kafka_topic, "partition": @kafka_partition})
   ```

6. Click Create. Your pipeline details are displayed and the pipeline state changes from Starting to Running, which may take a few minutes. If you don't see this state change, refresh your page.

This consumer uses the same `foobar` consumer group as the source cluster consumer, so it resumes reading messages from where the source consumer left off.
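To confirm what the migrator committed for the `foobar` group, you can optionally describe the group on the destination cluster with `rpk`, which prints the committed offset, log end offset, and lag for each topic and partition:

```bash
rpk -X brokers=destination.cloud.redpanda.com:9092 -X tls.enabled=true \
  -X sasl.mechanism=SCRAM-SHA-256 -X user=redpanda -X pass=testpass \
  group describe foobar
```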
Redpanda Migrator performs offset remapping when it migrates consumer group offsets to the destination cluster. While more sophisticated approaches are possible, Redpanda chose a simple timestamp-based approach: for each migrated offset, the destination cluster is queried for the latest offset before the received offset's timestamp, and Redpanda Migrator writes that offset as the destination consumer group offset for the corresponding topic and partition pair. Although the timestamp-based approach doesn't guarantee exactly-once delivery, it minimizes the likelihood of message duplication and avoids the need for complex and error-prone offset remapping logic.
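To build intuition for this timestamp lookup, you can reproduce a close analogue of it by hand with Apache Kafka's `kafka-get-offsets.sh` tool, which returns the first offset at or after a given timestamp. This is an illustration, not part of the migration itself: the timestamp below is a hypothetical value, and `destination.properties` is an assumed properties file holding the destination cluster's SASL/TLS settings, analogous to the `config.properties` created earlier.

```bash
# Ask the destination cluster which foo offsets correspond to a source
# message timestamp of 1700000000000 (Unix epoch milliseconds, hypothetical).
# Output is printed as topic:partition:offset, for example foo:0:41.
kafka-get-offsets.sh --bootstrap-server destination.cloud.redpanda.com:9092 \
  --command-config destination.properties \
  --topic foo --time 1700000000000
```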