# redpanda_migrator

Beta

Type: Output

Available in: Cloud, Self-Managed

Writes a batch of messages to an Apache Kafka broker and waits for acknowledgement before propagating any acknowledgements back to the input. Use this connector in conjunction with the `redpanda_migrator` input to migrate topics between Kafka brokers.

The `redpanda_migrator` output uses the Franz Kafka client library.

## Common

```yaml
# Common configuration fields, showing default values
output:
  label: ""
  redpanda_migrator:
    seed_brokers: [] # No default (required)
    topic: "" # No default (required)
    key: "" # No default (optional)
    partition: ${! meta("partition") } # No default (optional)
    metadata:
      include_prefixes: []
      include_patterns: []
    max_in_flight: 256
```

## Advanced

```yaml
# All configuration fields, showing default values
output:
  label: ""
  redpanda_migrator:
    seed_brokers: [] # No default (required)
    client_id: benthos
    tls:
      enabled: false
      skip_cert_verify: false
      enable_renegotiation: false
      root_cas: ""
      root_cas_file: ""
      client_certs: []
    sasl: [] # No default (optional)
    topic: "" # No default (required)
    key: "" # No default (optional)
    partitioner: "" # No default (optional)
    partition: ${! meta("partition") } # No default (optional)
    metadata:
      include_prefixes: []
      include_patterns: []
    metadata_max_age: 5m
    timestamp_ms: ${! timestamp_unix_milli() } # No default (optional)
    max_in_flight: 256
    input_resource: redpanda_migrator_input
    replication_factor_override: true
    replication_factor: 3
    translate_schema_ids: true
    schema_registry_output_resource: schema_registry_output
    rack_id: ""
    idempotent_write: true
    compression: "" # No default (optional)
    timeout: 10s
    max_message_bytes: 1MiB
    broker_write_max_bytes: 100MiB
```

You must specify the label of the associated `redpanda_migrator` input in the `input_resource` field. When connected, this output queries the `redpanda_migrator` input for topic and ACL configurations. If the broker does not contain the current message topic, this output attempts to create the topic along with its ACLs.

ACL migration adheres to the following principles:

- `ALLOW WRITE` ACLs for topics are not migrated
- `ALLOW ALL` ACLs for topics are downgraded to `ALLOW READ`
- Only topic ACLs are migrated; group ACLs are not migrated

## Examples

### Transfer data

Writes messages to the configured broker and creates topics and topic ACLs if they don't exist. It also ensures that message order is preserved.

```yaml
output:
  redpanda_migrator:
    seed_brokers: [ "127.0.0.1:9093" ]
    topic: ${! metadata("kafka_topic").or(throw("missing kafka_topic metadata")) }
    key: ${! metadata("kafka_key") }
    partitioner: manual
    partition: ${! metadata("kafka_partition").or(throw("missing kafka_partition metadata")) }
    timestamp_ms: ${! metadata("kafka_timestamp_ms").or(timestamp_unix_milli()) }
    input_resource: redpanda_migrator_input
    max_in_flight: 256
```
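### Pair the input and output

Because this output resolves topic and ACL configurations through the input named in `input_resource`, the two connectors are typically declared in the same pipeline. The following is a minimal sketch of such a pairing; the input-side `topics` field and the broker addresses are assumptions for illustration only, so check the `redpanda_migrator` input reference for its exact schema.

```yaml
# Minimal paired-pipeline sketch. Input-side fields and broker addresses
# are assumptions; consult the redpanda_migrator input reference.
input:
  label: redpanda_migrator_input
  redpanda_migrator:
    seed_brokers: [ "source-broker:9092" ] # hypothetical source cluster address
    topics: [ "orders" ]                   # assumed input field

output:
  redpanda_migrator:
    seed_brokers: [ "dest-broker:9093" ]   # hypothetical destination cluster address
    topic: ${! metadata("kafka_topic") }
    key: ${! metadata("kafka_key") }
    input_resource: redpanda_migrator_input # must match the input label above
```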
## Fields

### seed_brokers

A list of broker addresses to connect to. Use commas to separate multiple addresses in a single list item.

Type: `array`

```yaml
# Examples
seed_brokers:
  - localhost:9092

seed_brokers:
  - foo:9092
  - bar:9092

seed_brokers:
  - foo:9092,bar:9092
```

### client_id

An identifier for the client connection.

Type: `string`
Default: `benthos`

### tls

Override system defaults with custom TLS settings.

Type: `object`

### tls.enabled

Whether custom TLS settings are enabled.

Type: `bool`
Default: `false`

### tls.skip_cert_verify

Whether to skip server-side certificate verification.

Type: `bool`
Default: `false`

### tls.enable_renegotiation

Whether to allow the remote server to repeatedly request renegotiation. Enable this option if you're seeing the error message `local error: tls: no renegotiation`.

Type: `bool`
Default: `false`

### tls.root_cas

Specify a certificate authority to use (optional). This is a string that represents a certificate chain from the parent-trusted root certificate, through possible intermediate signing certificates, to the host certificate.

This field contains sensitive information that usually shouldn't be added to a configuration directly. For more information, see Manage Secrets before adding it to your configuration.

Type: `string`
Default: `""`

```yaml
# Examples
root_cas: |-
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----
```

### tls.root_cas_file

Specify the path to a root certificate authority file (optional). This is a file, often with a `.pem` extension, which contains a certificate chain from the parent-trusted root certificate, through possible intermediate signing certificates, to the host certificate.

Type: `string`
Default: `""`

```yaml
# Examples
root_cas_file: ./root_cas.pem
```

### tls.client_certs

A list of client certificates to use. For each certificate, specify values for either the `cert` and `key` fields, or the `cert_file` and `key_file` fields.

Type: `array`
Default: `[]`

```yaml
# Examples
client_certs:
  - cert: foo
    key: bar

client_certs:
  - cert_file: ./example.pem
    key_file: ./example.key
```

### tls.client_certs[].cert

A plain text certificate to use.

Type: `string`
Default: `""`

### tls.client_certs[].key

A plain text certificate key to use.

This field contains sensitive information that usually shouldn't be added to a configuration directly. For more information, see Manage Secrets before adding it to your configuration.

Type: `string`
Default: `""`

### tls.client_certs[].cert_file

The path of a certificate to use.

Type: `string`
Default: `""`

### tls.client_certs[].key_file

The path of a certificate key to use.

Type: `string`
Default: `""`

### tls.client_certs[].password

A plain text password for when the private key is password encrypted in PKCS#1 or PKCS#8 format. The obsolete pbeWithMD5AndDES-CBC algorithm is not supported for the PKCS#8 format.

The pbeWithMD5AndDES-CBC algorithm does not authenticate ciphertext and is vulnerable to padding oracle attacks that may allow an attacker to recover the plain text password.

This field contains sensitive information that usually shouldn't be added to a configuration directly. For more information, see Manage Secrets before adding it to your configuration.

Type: `string`
Default: `""`

```yaml
# Examples
password: foo

password: ${KEY_PASSWORD}
```

### sasl

Specify one or more methods of SASL authentication, which are tried in order. If the broker supports the first mechanism, all connections use that mechanism. If the first mechanism fails, the client picks the first supported mechanism. Connections fail if the broker does not support any client mechanisms.

Type: `array`

```yaml
# Examples
sasl:
  - mechanism: SCRAM-SHA-512
    password: bar
    username: foo
```

### sasl[].mechanism

The SASL mechanism to use.

Type: `string`

| Option | Summary |
| --- | --- |
| `AWS_MSK_IAM` | AWS IAM-based authentication as specified by the aws-msk-iam-auth Java library. |
| `OAUTHBEARER` | OAuth bearer-based authentication. |
| `PLAIN` | Plain text authentication. |
| `SCRAM-SHA-256` | SCRAM-based authentication as specified in RFC 5802. |
| `SCRAM-SHA-512` | SCRAM-based authentication as specified in RFC 5802. |
| `none` | Disable SASL authentication. |

### sasl[].username

A username to provide for PLAIN or SCRAM-* authentication.

Type: `string`
Default: `""`

### sasl[].password

A password to provide for PLAIN or SCRAM-* authentication.

This field contains sensitive information that usually shouldn't be added to a configuration directly. For more information, see Manage Secrets before adding it to your configuration.

Type: `string`
Default: `""`
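As an aside, the TLS and SASL fields above are commonly combined when the target cluster exposes an authenticated, encrypted listener. The following is a hedged sketch, assuming a SCRAM-SHA-512 listener, a CA bundle at `./root_cas.pem`, and a `SASL_PASSWORD` environment variable; all three are placeholders, not requirements.

```yaml
output:
  redpanda_migrator:
    seed_brokers: [ "broker:9093" ] # hypothetical TLS-enabled listener
    topic: ${! metadata("kafka_topic") }
    tls:
      enabled: true
      root_cas_file: ./root_cas.pem # hypothetical path to a CA bundle
    sasl:
      - mechanism: SCRAM-SHA-512
        username: foo
        password: ${SASL_PASSWORD} # resolved from an environment variable, per Manage Secrets
```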
### sasl[].token

The token to use for a single session's OAUTHBEARER authentication.

Type: `string`
Default: `""`

### sasl[].extensions

Key/value pairs to add to OAUTHBEARER authentication requests.

Type: `object`

### sasl[].aws

Contains AWS-specific fields for when the mechanism is set to `AWS_MSK_IAM`.

Type: `object`

### sasl[].aws.region

The AWS region to target.

Type: `string`
Default: `""`

### sasl[].aws.endpoint

Specify a custom endpoint for the AWS API.

Type: `string`
Default: `""`

### sasl[].aws.credentials

Manually configure the AWS credentials to use (optional). For more information, see Amazon Web Services.

Type: `object`

### sasl[].aws.credentials.profile

A profile from `~/.aws/credentials` to use.

Type: `string`
Default: `""`

### sasl[].aws.credentials.id

The ID of credentials to use.

Type: `string`
Default: `""`

### sasl[].aws.credentials.secret

The secret for the credentials being used.

This field contains sensitive information that usually shouldn't be added to a configuration directly. For more information, see Manage Secrets before adding it to your configuration.

Type: `string`
Default: `""`

### sasl[].aws.credentials.token

The token for the credentials being used. The token is required when using short-term credentials.

Type: `string`
Default: `""`

### sasl[].aws.credentials.from_ec2_role

Use the credentials of a host EC2 machine configured to assume an IAM role associated with the instance.

Type: `bool`
Default: `false`

### sasl[].aws.credentials.role

A role ARN to assume.

Type: `string`
Default: `""`

### sasl[].aws.credentials.role_external_id

An external ID to provide when assuming a role.

Type: `string`
Default: `""`

### topic

A topic to write messages to.

This field supports interpolation functions.

Type: `string`

### key

An optional key to populate for each message.

This field supports interpolation functions.

Type: `string`

### partitioner

Override the default murmur2 hashing partitioner.

Type: `string`

| Option | Summary |
| --- | --- |
| `least_backup` | Choose the partition with the fewest buffered records. Partitions are selected per batch. |
| `manual` | Manually select a partition for each message. You must also specify a value for the `partition` field. |
| `murmur2_hash` | Kafka's default hash algorithm, which uses a 32-bit murmur2 hash of the key to compute the partition for the record. |
| `round_robin` | Round-robin messages through all available partitions. This algorithm has lower throughput and causes higher CPU load on brokers, but is useful if you want to ensure an even distribution of records to partitions. |

### partition

An optional explicit partition to set for each message. This field is only relevant when the `partitioner` is set to `manual`. The provided interpolation string must evaluate to a valid integer.

This field supports interpolation functions.

Type: `string`

```yaml
# Examples
partition: ${! meta("partition") }
```

### metadata

Specify which (if any) metadata values are added to messages as headers.

Type: `object`

### metadata.include_prefixes

Provide a list of explicit metadata key prefixes to match against.

Type: `array`
Default: `[]`

```yaml
# Examples
include_prefixes:
  - foo_
  - bar_

include_prefixes:
  - kafka_

include_prefixes:
  - content-
```

### metadata.include_patterns

Provide a list of explicit metadata key regular expression (RE2) patterns to match against.

Type: `array`
Default: `[]`

```yaml
# Examples
include_patterns:
  - .*

include_patterns:
  - _timestamp_unix$
```

### metadata_max_age

The maximum period of time after which metadata is refreshed. Reducing this value increases the frequency with which newly-created topics are identified.

Type: `string`
Default: `5m`
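To illustrate the partitioning and metadata fields above: during a migration, the `manual` partitioner is typically paired with the source partition carried in metadata (as in the Transfer data example), so records land on the same partition numbers on the destination cluster, and a metadata filter restricts which keys become headers. A hedged sketch follows; the broker address and the `app_` prefix are hypothetical choices, not part of this connector's schema.

```yaml
output:
  redpanda_migrator:
    seed_brokers: [ "dest-broker:9093" ]         # hypothetical destination address
    topic: ${! metadata("kafka_topic") }
    partitioner: manual
    partition: ${! metadata("kafka_partition") } # must resolve to a valid integer
    metadata:
      include_prefixes:
        - app_ # hypothetical prefix; only matching metadata keys become headers
```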
### timestamp_ms

Set a timestamp (in milliseconds) for each message (optional). When left empty, the current timestamp is used.

This field supports interpolation functions.

Type: `string`

```yaml
# Examples
timestamp_ms: ${! timestamp_unix_milli() }

timestamp_ms: ${! metadata("kafka_timestamp_ms") }
```

### max_in_flight

The maximum number of batches to send in parallel at any given time.

Type: `int`
Default: `256`

### input_resource

The label of the `redpanda_migrator` input from which to read the configurations of topics and ACLs for creation.

Type: `string`
Default: `redpanda_migrator_input`

### replication_factor_override

Whether to override the replication factor of input topics when creating copies of them on the output cluster. You must specify the new replication factor in the `replication_factor` field.

Type: `bool`
Default: `true`

### replication_factor

The replication factor for created topics. This is only used when `replication_factor_override` is set to `true`.

Type: `int`
Default: `3`

### translate_schema_ids

When set to `true`, the schema ID in each message is translated to the corresponding schema ID in the destination schema registry before the message is written.

Type: `bool`
Default: `true`

### schema_registry_output_resource

The label of the `schema_registry` output to use for fetching schema IDs.

Type: `string`
Default: `schema_registry_output`

### rack_id

A rack identifier for this client.

Type: `string`
Default: `""`

### idempotent_write

Enable the idempotent write producer option. This requires the `IDEMPOTENT_WRITE` permission on `CLUSTER`. Disable this option if the `IDEMPOTENT_WRITE` permission is unavailable.

Type: `bool`
Default: `true`

### compression

Set an explicit compression type (optional). The default preference is to use `snappy` when the broker supports it, and `none` otherwise.

Type: `string`
Options: `lz4`, `snappy`, `gzip`, `none`, `zstd`

### timeout

The maximum period of time to wait for message sends before abandoning the request and retrying.

Type: `string`
Default: `10s`

### max_message_bytes

The maximum space in bytes that an individual message may use. Messages larger than this value are rejected. This field corresponds to Kafka's `max.message.bytes`.

Type: `string`
Default: `1MiB`

```yaml
# Examples
max_message_bytes: 100MiB

max_message_bytes: 50MiB
```

### broker_write_max_bytes

The maximum number of bytes this output can write to a broker connection in a single write. This field corresponds to Kafka's `socket.request.max.bytes`.

Type: `string`
Default: `100MiB`

```yaml
# Examples
broker_write_max_bytes: 128MiB

broker_write_max_bytes: 50MiB
```
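Finally, to exercise the schema translation fields described above, a companion `schema_registry` output must be declared in the same pipeline under the label given in `schema_registry_output_resource`. The following is a minimal sketch of the migrator side of that wiring; the companion output's own configuration is elided because it is documented separately, and the broker address is a placeholder.

```yaml
output:
  redpanda_migrator:
    seed_brokers: [ "dest-broker:9093" ] # hypothetical destination address
    topic: ${! metadata("kafka_topic") }
    input_resource: redpanda_migrator_input
    translate_schema_ids: true                              # default; rewrites embedded schema IDs
    schema_registry_output_resource: schema_registry_output # label of the companion schema_registry output
```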