kafka_migrator

Writes a batch of messages to a Kafka broker and waits for an acknowledgement before propagating it back to the input.

Use this connector in conjunction with the kafka_migrator input to migrate topics between Kafka brokers. The kafka_migrator output uses the Franz Kafka client library.

# Common config fields, showing default values
output:
  label: ""
  kafka_migrator:
    seed_brokers: [] # No default (required)
    topic: "" # No default (required)
    key: "" # No default (optional)
    partition: ${! meta("partition") } # No default (optional)
    metadata:
      include_prefixes: []
      include_patterns: []
    max_in_flight: 10
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""

# All config fields, showing default values
output:
  label: ""
  kafka_migrator:
    seed_brokers: [] # No default (required)
    topic: "" # No default (required)
    key: "" # No default (optional)
    partitioner: "" # No default (optional)
    partition: ${! meta("partition") } # No default (optional)
    client_id: benthos
    rack_id: ""
    idempotent_write: true
    metadata:
      include_prefixes: []
      include_patterns: []
    max_in_flight: 10
    timeout: 10s
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
      processors: [] # No default (optional)
    max_message_bytes: 1MB
    broker_write_max_bytes: 100MB
    compression: "" # No default (optional)
    tls:
      enabled: false
      skip_cert_verify: false
      enable_renegotiation: false
      root_cas: ""
      root_cas_file: ""
      client_certs: []
    sasl: [] # No default (optional)
    timestamp: ${! timestamp_unix() } # No default (optional)
    input_resource: kafka_migrator_input

This output can query the kafka_migrator input for topic and ACL configurations.

If the configured broker does not contain the current message topic, this output attempts to create it along with the topic ACLs, which are automatically read from the kafka_migrator input, identified by the label specified in input_resource.
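
For example, the following sketch pairs this output with a kafka_migrator input through a matching label. The broker addresses and topic list are placeholders, and the input fields shown are assumptions taken from the companion kafka_migrator input; check that connector's documentation for the exact field names.

input:
  label: kafka_migrator_input
  kafka_migrator: # input fields below are assumptions
    seed_brokers: [ "source:9092" ] # placeholder source broker
    topics: [ "foo" ] # placeholder topic list
    consumer_group: migrator_cg
output:
  kafka_migrator:
    seed_brokers: [ "destination:9092" ] # placeholder destination broker
    topic: ${! metadata("kafka_topic") }
    input_resource: kafka_migrator_input # must match the input label above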

Examples

  • Transfer data

Writes messages to the configured broker, creating topics and topic ACLs if they don't exist. Setting max_in_flight to 1 also ensures that message order is preserved.

output:
  kafka_migrator:
    seed_brokers: [ "127.0.0.1:9093" ]
    topic: ${! metadata("kafka_topic").or(throw("missing kafka_topic metadata")) }
    key: ${! metadata("kafka_key") }
    partitioner: manual
    partition: ${! metadata("kafka_partition").or(throw("missing kafka_partition metadata")) }
    timestamp: ${! metadata("kafka_timestamp_unix").or(timestamp_unix()) }
    input_resource: kafka_migrator_input
    max_in_flight: 1

Fields

seed_brokers

A list of broker addresses to connect to. Use commas to separate multiple addresses in a single list item.

Type: array

# Examples

seed_brokers:
  - localhost:9092

seed_brokers:
  - foo:9092
  - bar:9092

seed_brokers:
  - foo:9092,bar:9092

topic

A topic to write messages to.

This field supports interpolation functions.

Type: string
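
For example, to write each message back to the topic it was consumed from, you can interpolate the kafka_topic metadata key set by the kafka_migrator input, as in the Transfer data example:

topic: ${! metadata("kafka_topic") }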

key

An optional key to populate for each message.

This field supports interpolation functions.

Type: string
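
For example, to carry over the original message key captured as metadata by the kafka_migrator input:

key: ${! metadata("kafka_key") }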

partitioner

Override the default murmur2 hashing partitioner.

Type: string

Option Summary

least_backup

Chooses the least backed up partition (the partition with the fewest buffered records). Partitions are selected per batch.

manual

Manually select a partition for each message. You must also specify a value for the partition field.

murmur2_hash

Kafka’s default hash algorithm that uses a 32-bit murmur2 hash of the key to compute the partition for the record.

round_robin

Round-robins messages through all available partitions. This algorithm has lower throughput and causes higher CPU load on brokers, but is useful if you want to ensure an even distribution of records to partitions.
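
For example, to reproduce the source partitioning during a migration, select the manual partitioner and populate the partition field from metadata (a minimal sketch, mirroring the Transfer data example above):

partitioner: manual
partition: ${! metadata("kafka_partition") }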

partition

An optional explicit partition to set for each message. This field is only relevant when the partitioner is set to manual. The provided interpolation string must be a valid integer.

This field supports interpolation functions.

Type: string

# Examples

partition: ${! meta("partition") }

client_id

An identifier for the client connection.

Type: string

Default: "benthos"

rack_id

A rack identifier for this client.

Type: string

Default: ""

idempotent_write

Enable the idempotent write producer option. This requires the IDEMPOTENT_WRITE permission on CLUSTER. Disable this option if the IDEMPOTENT_WRITE permission is not available.

Type: bool

Default: true

metadata

Determine which (if any) metadata values are added to messages as headers.

Type: object
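
For example, to forward only metadata keys that begin with kafka_ as record headers (a minimal sketch):

metadata:
  include_prefixes: [ kafka_ ]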

metadata.include_prefixes

Provide a list of explicit metadata key prefixes to match against.

Type: array

Default: []

# Examples

include_prefixes:
  - foo_
  - bar_

include_prefixes:
  - kafka_

include_prefixes:
  - content-

metadata.include_patterns

Provide a list of explicit metadata key regular expression (re2) patterns to match against.

Type: array

Default: []

# Examples

include_patterns:
  - .*

include_patterns:
  - _timestamp_unix$

max_in_flight

The maximum number of batches to send in parallel at any given time. Set this field to 1 to preserve message order, as shown in the Transfer data example.

Type: int

Default: 10

timeout

The maximum period of time to wait for message sends before abandoning the request and retrying.

Type: string

Default: "10s"

batching

Configure a batching policy.

Type: object

# Examples

batching:
  byte_size: 5000
  count: 0
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  check: this.contains("END BATCH")
  count: 0
  period: 1m

batching.count

The number of messages after which the batch is flushed. Set to 0 to disable count-based batching.

Type: int

Default: 0

batching.byte_size

The number of bytes at which the batch is flushed. Set to 0 to disable size-based batching.

Type: int

Default: 0

batching.period

The period after which an incomplete batch is flushed regardless of its size.

Type: string

Default: ""

# Examples

period: 1s

period: 1m

period: 500ms

batching.check

A Bloblang query that should return a boolean value indicating whether a message should end a batch.

Type: string

Default: ""

# Examples

check: this.type == "end_of_transaction"

batching.processors

A list of processors to apply to a batch as it is flushed. This allows you to aggregate and archive the batch. All resulting messages are flushed as a single batch. Splitting the batch into smaller batches using these processors is a no-op.

Type: array

# Examples

processors:
  - archive:
      format: concatenate

processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array

max_message_bytes

The maximum space in bytes that an individual message may use. Messages larger than this value are rejected. This field corresponds to Kafka’s max.message.bytes.

Type: string

Default: "1MB"

# Examples

max_message_bytes: 100MB

max_message_bytes: 50mib

broker_write_max_bytes

The upper bound for the number of bytes written to a broker connection in a single write. This field corresponds to Kafka’s socket.request.max.bytes.

Type: string

Default: "100MB"

# Examples

broker_write_max_bytes: 128MB

broker_write_max_bytes: 50mib

compression

An optional explicit compression type. By default, the client prefers snappy when the broker supports it, and falls back to none otherwise.

Type: string

Options: lz4, snappy, gzip, none, zstd.
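
For example, to request snappy compression explicitly instead of relying on the default preference:

compression: snappy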

tls

Override system defaults with custom TLS settings.

Type: object
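
As a minimal sketch, a custom TLS setup with a root certificate authority file and a client certificate might look like this (the file paths are placeholders):

tls:
  enabled: true
  root_cas_file: ./root_cas.pem # placeholder path
  client_certs:
    - cert_file: ./example.pem # placeholder path
      key_file: ./example.key # placeholder path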

tls.enabled

Whether custom TLS settings are enabled.

Type: bool

Default: false

tls.skip_cert_verify

Whether to skip server-side certificate verification.

Type: bool

Default: false

tls.enable_renegotiation

Whether to allow the remote server to repeatedly request renegotiation. Enable this option if you’re seeing the error message local error: tls: no renegotiation.

Type: bool

Default: false

tls.root_cas

Specify a certificate authority to use (optional). This is a string that represents a certificate chain from the parent trusted root certificate, through possible intermediate signing certificates, to the host certificate.

This field contains sensitive information. Review your cluster security before adding it to your configuration.

Type: string

Default: ""

# Examples

root_cas: |-
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----

tls.root_cas_file

Specify the path to a root certificate authority file (optional). This is a file, often with a .pem extension, which contains a certificate chain from the parent trusted root certificate, through possible intermediate signing certificates, to the host certificate.

Type: string

Default: ""

# Examples

root_cas_file: ./root_cas.pem

tls.client_certs

A list of client certificates to use. For each certificate specify values for either the cert and key fields, or cert_file and key_file fields.

Type: array

Default: []

# Examples

client_certs:
  - cert: foo
    key: bar

client_certs:
  - cert_file: ./example.pem
    key_file: ./example.key

tls.client_certs[].cert

A plain text certificate to use.

Type: string

Default: ""

tls.client_certs[].key

A plain text certificate key to use.

This field contains sensitive information. Review your cluster security before adding it to your configuration.

Type: string

Default: ""

tls.client_certs[].cert_file

The path of a certificate to use.

Type: string

Default: ""

tls.client_certs[].key_file

The path of a certificate key to use.

Type: string

Default: ""

tls.client_certs[].password

A plain text password for when the private key is password encrypted in PKCS#1 or PKCS#8 format. The obsolete pbeWithMD5AndDES-CBC algorithm is not supported for the PKCS#8 format.

Because the obsolete pbeWithMD5AndDES-CBC algorithm does not authenticate the ciphertext, it is vulnerable to padding oracle attacks that can let an attacker recover the plaintext.

This field contains sensitive information. Review your cluster security before adding it to your configuration.

Type: string

Default: ""

# Examples

password: foo

password: ${KEY_PASSWORD}

sasl

Specify one or more methods of SASL authentication. Mechanisms are tried in order: if the broker supports the first mechanism, all connections use it; otherwise, the client falls back to the first mechanism the broker does support. Connections fail if the broker supports none of the configured mechanisms.

Type: array

# Examples

sasl:
  - mechanism: SCRAM-SHA-512
    password: bar
    username: foo

sasl[].mechanism

The SASL mechanism to use.

Type: string

Option Summary

AWS_MSK_IAM

AWS IAM based authentication, as specified by the aws-msk-iam-auth Java library.

OAUTHBEARER

OAuth Bearer based authentication.

PLAIN

Plain text authentication.

SCRAM-SHA-256

SCRAM-based authentication as specified in RFC5802.

SCRAM-SHA-512

SCRAM-based authentication as specified in RFC5802.

none

Disables SASL authentication.

sasl[].username

A username to provide for PLAIN or SCRAM-* authentication.

Type: string

Default: ""

sasl[].password

A password to provide for PLAIN or SCRAM-* authentication.

This field contains sensitive information. Review your cluster security before adding it to your configuration.

Type: string

Default: ""

sasl[].token

The token to use for a single session’s OAUTHBEARER authentication.

Type: string

Default: ""

sasl[].extensions

Key/value pairs to add to OAUTHBEARER authentication requests.

Type: object
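
For example, a single-session OAUTHBEARER setup might look like the following sketch, where the token is read from an environment variable and the extension key/value pair is a placeholder:

sasl:
  - mechanism: OAUTHBEARER
    token: ${OAUTH_TOKEN} # placeholder environment variable
    extensions:
      foo: bar # placeholder extension pair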

sasl[].aws

Contains AWS-specific fields, used when the mechanism is set to AWS_MSK_IAM.

Type: object
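
For example, authenticating to Amazon MSK with IAM might look like the following sketch (the region and profile values are placeholders):

sasl:
  - mechanism: AWS_MSK_IAM
    aws:
      region: us-east-1 # placeholder region
      credentials:
        profile: default # placeholder profile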

sasl[].aws.region

The AWS region to target.

Type: string

Default: ""

sasl[].aws.endpoint

Specify a custom endpoint for the AWS API.

Type: string

Default: ""

sasl[].aws.credentials

Manually configure the AWS credentials to use (optional). For more information, see Amazon Web Services.

Type: object

sasl[].aws.credentials.profile

A profile from ~/.aws/credentials to use.

Type: string

Default: ""

sasl[].aws.credentials.id

The ID of credentials to use.

Type: string

Default: ""

sasl[].aws.credentials.secret

The secret for the credentials being used.

This field contains sensitive information. Review your cluster security before adding it to your configuration.

Type: string

Default: ""

sasl[].aws.credentials.token

The token for the credentials being used. The token is required when using short-term credentials.

Type: string

Default: ""

sasl[].aws.credentials.from_ec2_role

Use the credentials of a host EC2 machine configured to assume an IAM role associated with the instance.

Type: bool

Default: false

sasl[].aws.credentials.role

A role ARN to assume.

Type: string

Default: ""

sasl[].aws.credentials.role_external_id

An external ID to provide when assuming a role.

Type: string

Default: ""

timestamp

An optional timestamp to set for each message. When left empty, the current timestamp is used. This field supports interpolation functions.

Type: string

# Examples

timestamp: ${! timestamp_unix() }

timestamp: ${! metadata("kafka_timestamp_unix") }

input_resource

The label of the kafka_migrator input from which to read the configurations of topics and ACLs for creation.

Type: string

Default: "kafka_migrator_input"