redpanda_migrator

Beta

Use this connector in conjunction with the redpanda_migrator output to migrate topics between Apache Kafka brokers. The redpanda_migrator input uses the franz-go Kafka client library.


# Common configuration fields, showing default values
input:
  label: ""
  redpanda_migrator:
    seed_brokers: [] # No default (required)
    topics: [] # No default (required)
    regexp_topics: false
    consumer_group: "" # No default (optional)
    auto_replay_nacks: true

# All configuration fields, showing default values
input:
  label: ""
  redpanda_migrator:
    seed_brokers: [] # No default (required)
    topics: [] # No default (required)
    regexp_topics: false
    output_resource: redpanda_migrator_output
    replication_factor_override: true
    replication_factor: 3
    consumer_group: "" # No default (optional)
    client_id: benthos
    rack_id: ""
    batch_size: 1024
    auto_replay_nacks: true
    commit_period: 5s
    start_from_oldest: true
    tls:
      enabled: false
      skip_cert_verify: false
      enable_renegotiation: false
      root_cas: ""
      root_cas_file: ""
      client_certs: []
    sasl: [] # No default (optional)
    multi_header: false
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
      processors: [] # No default (optional)
    topic_lag_refresh_period: 5s
    metadata_max_age: 5m

The redpanda_migrator input:

  • Reads a batch of messages from a Kafka broker.

  • Attempts to create all selected topics along with their associated ACLs in the broker that the output points to, identified by the label specified in output_resource.

  • Waits for the redpanda_migrator output to acknowledge the writes before updating the Kafka consumer group offset.

Specify a consumer group for this input to consume one or more topics and automatically balance the topic partitions across any other connected clients with the same consumer group. Otherwise, topics are consumed in their entirety or with explicit partitions.
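
For example, here is a minimal sketch of the pairing between this input and a redpanda_migrator output. The broker addresses, topic name, and consumer group are placeholders, only the fields relevant to the pairing are shown, and the output's full configuration is covered by the redpanda_migrator output documentation:

input:
  redpanda_migrator:
    seed_brokers: [ "source-cluster:9092" ]        # placeholder source brokers
    topics: [ "orders" ]                           # placeholder topic
    consumer_group: migrator_cg                    # placeholder consumer group
    output_resource: migrator_out                  # must match the output label below

output:
  label: migrator_out
  redpanda_migrator:
    seed_brokers: [ "destination-cluster:9092" ]   # placeholder destination brokers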

Metrics

This input emits an input_redpanda_migrator_lag metric with topic and partition labels for each consumed topic.

Metadata

This input adds the following metadata fields to each message:

- kafka_key
- kafka_topic
- kafka_partition
- kafka_offset
- kafka_lag
- kafka_timestamp_ms
- kafka_timestamp_unix
- kafka_tombstone_message
- All record headers
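
These metadata fields can be read with standard Bloblang metadata syntax. The following mapping is a minimal sketch; the payload field names it writes to are illustrative:

pipeline:
  processors:
    - mapping: |
        root = this
        root.source_topic = @kafka_topic          # topic the record was consumed from
        root.source_partition = @kafka_partition  # partition of that topic
        root.source_offset = @kafka_offset        # record offset within the partition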

Fields

seed_brokers

A list of broker addresses to connect to. Use commas to separate multiple addresses in a single list item.

Type: array

# Examples

seed_brokers:
  - localhost:9092

seed_brokers:
  - foo:9092
  - bar:9092

seed_brokers:
  - foo:9092,bar:9092

topics

A list of topics to consume from. Use commas to separate multiple topics in a single element.

When a consumer_group is specified, partitions are automatically distributed across consumers of a topic. Otherwise, all partitions are consumed.

Alternatively, you can specify explicit partitions to consume by adding a colon after the topic name. For example, foo:0 consumes partition 0 of the topic foo. This syntax supports ranges: foo:0-10 consumes partitions 0 through 10 inclusive.

You can also specify an explicit offset to consume from by adding another colon after the partition. For example, foo:0:10 consumes partition 0 of the topic foo starting from offset 10. If no offset is specified, the start_from_oldest field determines which offset to start from.

Type: array

# Examples

topics:
  - foo
  - bar

topics:
  - things.*

topics:
  - foo,bar

topics:
  - foo:0
  - bar:1
  - bar:3

topics:
  - foo:0,bar:1,bar:3

topics:
  - foo:0-5

regexp_topics

Whether listed topics are interpreted as regular expression patterns for matching multiple topics. When topics are specified with explicit partitions, this field must remain set to false.

Type: bool

Default: false
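
For example, the following sketch consumes every topic whose name starts with foo (the pattern itself is illustrative):

topics:
  - ^foo.*
regexp_topics: true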

consumer_group

An optional consumer group. When specified, the partitions of the specified topics are automatically distributed across consumers that share this consumer group, and partition offsets are automatically committed and resumed under this group name. Consumer groups are not supported when explicit partitions to consume from are specified in the topics field.

Type: string

output_resource

The label of the redpanda_migrator output where the currently selected topics must be created before this input attempts to read messages.

Type: string

Default: redpanda_migrator_output

replication_factor_override

Whether to override the replication factor of the input topics when creating copies of them on the output cluster. Specify the new replication factor in the replication_factor field.

Type: bool

Default: true

replication_factor

Replication factor for created topics. This is only used when replication_factor_override is set to true.

Type: int

Default: 3
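
For example, here is a sketch of the two modes. The values are illustrative, and the second variant assumes that topics keep their source replication factor when the override is disabled:

# Create all destination topics with a replication factor of 3
replication_factor_override: true
replication_factor: 3

# Keep each source topic's own replication factor
replication_factor_override: false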

client_id

An identifier for the client connection.

Type: string

Default: benthos

rack_id

A rack identifier for this client.

Type: string

Default: ""

batch_size

The maximum number of messages that can accumulate in each batch.

Type: int

Default: 1024

auto_replay_nacks

Whether to automatically replay messages that are rejected (nacked) at the output level. If the cause of rejections is persistent, leaving this option enabled can result in back pressure.

Set auto_replay_nacks to false to delete rejected messages. Disabling auto replays can greatly improve memory efficiency of high throughput streams as the original shape of the data is discarded immediately upon consumption and mutation.

Type: bool

Default: true

commit_period

The period of time between each commit of the current partition offsets. Offsets are always committed during shutdown.

Type: string

Default: 5s

start_from_oldest

If set to true, messages are consumed from the oldest available offset; otherwise, they are consumed from the latest offset. This setting is applied when creating a new consumer group or when the saved offset no longer exists.

Type: bool

Default: true

tls

Override system defaults with custom TLS settings.

Type: object
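
As a sketch with placeholder file paths, a typical custom TLS setup enables TLS and supplies a root CA plus a client certificate:

tls:
  enabled: true
  root_cas_file: ./ca.pem          # placeholder CA bundle
  client_certs:
    - cert_file: ./client.pem      # placeholder client certificate
      key_file: ./client.key       # placeholder client key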

tls.enabled

Whether custom TLS settings are enabled.

Type: bool

Default: false

tls.skip_cert_verify

Whether to skip server-side certificate verification.

Type: bool

Default: false

tls.enable_renegotiation

Whether to allow the remote server to repeatedly request renegotiation. Enable this option if you’re seeing the error message local error: tls: no renegotiation.

Type: bool

Default: false

tls.root_cas

Specify a certificate authority to use (optional). This is a string that represents a certificate chain from the parent trusted root certificate, through possible intermediate signing certificates, to the host certificate.

This field contains sensitive information that usually shouldn’t be added to a configuration directly. For more information, see Manage Secrets before adding it to your configuration.

Type: string

Default: ""

# Examples

root_cas: |-
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----

tls.root_cas_file

Specify the path to a root certificate authority file (optional). This is a file, often with a .pem extension, which contains a certificate chain from the parent trusted root certificate, through possible intermediate signing certificates, to the host certificate.

Type: string

Default: ""

# Examples

root_cas_file: ./root_cas.pem

tls.client_certs

A list of client certificates to use. For each certificate, specify values for either the cert and key fields, or the cert_file and key_file fields.

Type: array

Default: []

# Examples

client_certs:
  - cert: foo
    key: bar

client_certs:
  - cert_file: ./example.pem
    key_file: ./example.key

tls.client_certs[].cert

A plain text certificate to use.

Type: string

Default: ""

tls.client_certs[].key

A plain text certificate key to use.

This field contains sensitive information that usually shouldn’t be added to a configuration directly. For more information, see Manage Secrets before adding it to your configuration.

Type: string

Default: ""

tls.client_certs[].cert_file

The path of a certificate to use.

Type: string

Default: ""

tls.client_certs[].key_file

The path of a certificate key to use.

Type: string

Default: ""

tls.client_certs[].password

A plain text password for when the private key is password encrypted in PKCS#1 or PKCS#8 format. The obsolete pbeWithMD5AndDES-CBC algorithm is not supported for the PKCS#8 format.

Because the obsolete pbeWithMD5AndDES-CBC algorithm does not authenticate the ciphertext, it is vulnerable to padding oracle attacks that can let an attacker recover the plaintext.

This field contains sensitive information that usually shouldn’t be added to a configuration directly. For more information, see Manage Secrets before adding it to your configuration.

Type: string

Default: ""

# Examples

password: foo

password: ${KEY_PASSWORD}

sasl

Specify one or more methods of SASL authentication. SASL mechanisms are tried in order. If the broker supports the first mechanism, all connections use that mechanism. If the first mechanism fails, the client picks the first supported mechanism. Connections fail if the broker does not support any client mechanisms.

Type: array

# Examples

sasl:
  - mechanism: SCRAM-SHA-512
    password: bar
    username: foo

sasl[].mechanism

The SASL mechanism to use.

Type: string

Option Summary

AWS_MSK_IAM

AWS IAM-based authentication as specified by the aws-msk-iam-auth Java library.

OAUTHBEARER

OAuth Bearer based authentication.

PLAIN

Plain text authentication.

SCRAM-SHA-256

SCRAM based authentication as specified in RFC5802.

SCRAM-SHA-512

SCRAM based authentication as specified in RFC5802.

none

Disables SASL authentication.

sasl[].username

A username to provide for PLAIN or SCRAM-* authentication.

Type: string

Default: ""

sasl[].password

A password to provide for PLAIN or SCRAM-* authentication.

This field contains sensitive information that usually shouldn’t be added to a configuration directly. For more information, see Manage Secrets before adding it to your configuration.

Type: string

Default: ""

sasl[].token

The token to use for a single session’s OAUTHBEARER authentication.

Type: string

Default: ""

sasl[].extensions

Key/value pairs to add to OAUTHBEARER authentication requests.

Type: object
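
For example, here is a sketch of OAUTHBEARER authentication. The environment variable and the extension key/value pair are hypothetical:

sasl:
  - mechanism: OAUTHBEARER
    token: ${OAUTH_TOKEN}           # hypothetical environment variable holding the token
    extensions:
      logicalCluster: lkc-example   # hypothetical extension key/value pair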

sasl[].aws

Contains AWS-specific fields used when the mechanism is set to AWS_MSK_IAM.

Type: object
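
For example, here is a sketch of IAM-based authentication against an Amazon MSK cluster. The region is a placeholder, and credentials are resolved from the environment unless the optional credentials fields below are set:

sasl:
  - mechanism: AWS_MSK_IAM
    aws:
      region: us-east-1   # placeholder region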

sasl[].aws.region

The AWS region to target.

Type: string

Default: ""

sasl[].aws.endpoint

Specify a custom endpoint for the AWS API.

Type: string

Default: ""

sasl[].aws.credentials

Manually configure the AWS credentials to use (optional). For more information, see Amazon Web Services.

Type: object

sasl[].aws.credentials.profile

A profile from ~/.aws/credentials to use.

Type: string

Default: ""

sasl[].aws.credentials.id

The ID of credentials to use.

Type: string

Default: ""

sasl[].aws.credentials.secret

The secret for the credentials being used.

This field contains sensitive information that usually shouldn’t be added to a configuration directly. For more information, see Manage Secrets before adding it to your configuration.

Type: string

Default: ""

sasl[].aws.credentials.token

The token for the credentials being used. This is required when using short-term credentials.

Type: string

Default: ""

sasl[].aws.credentials.from_ec2_role

Use the credentials of a host EC2 machine configured to assume an IAM role associated with the instance.

Type: bool

Default: false

sasl[].aws.credentials.role

A role ARN to assume.

Type: string

Default: ""

sasl[].aws.credentials.role_external_id

An external ID to provide when assuming a role.

Type: string

Default: ""

multi_header

Decode headers into lists to allow handling of multiple values with the same key.

Type: bool

Default: false

batching

Configure a batching policy that applies to individual topic partitions in order to batch messages together before flushing them for processing. Batching can be beneficial for performance as well as useful for windowed processing, and batching per partition preserves the ordering of topic partitions.

Type: object

# Examples

batching:
  byte_size: 5000
  count: 0
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  check: this.contains("END BATCH")
  count: 0
  period: 1m

batching.count

The number of messages after which the batch is flushed. Set to 0 to disable count-based batching.

Type: int

Default: 0

batching.byte_size

The number of bytes at which the batch is flushed. Set to 0 to disable size-based batching.

Type: int

Default: 0

batching.period

The period after which an incomplete batch is flushed regardless of its size.

Type: string

Default: ""

# Examples

period: 1s

period: 1m

period: 500ms

batching.check

A Bloblang query that should return a boolean value indicating whether a message should end a batch.

Type: string

Default: ""

# Examples

check: this.type == "end_of_transaction"

batching.processors

A list of processors to apply to a batch as it is flushed. This allows you to aggregate and archive the batch. All resulting messages are flushed as a single batch. Splitting the batch into smaller batches using these processors is a no-op.

Type: array

# Examples

processors:
  - archive:
      format: concatenate

processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array

topic_lag_refresh_period

The period of time between each topic lag refresh cycle.

Type: string

Default: 5s

metadata_max_age

The maximum period of time after which metadata is refreshed. Reducing this value increases the frequency with which newly created topics are identified.

Type: string

Default: 5m