Broker Configuration Properties

Broker configuration properties are applied individually to each broker in a cluster. You can find and modify these properties in the redpanda.yaml configuration file.

Broker properties are organized under different top-level sections in your configuration file:

  • redpanda

  • pandaproxy

  • pandaproxy_client

  • schema_registry

  • schema_registry_client

  • audit_log_client

Simple properties are set with a single key-value pair, while complex properties (such as listeners and TLS configurations) include YAML examples that show their structure.

For information on how to edit broker properties, see Configure Broker Properties.

All broker properties require that you restart Redpanda for any update to take effect.

Redpanda

Configuration properties for the core Redpanda broker.

Example
redpanda:
  data_directory: /var/lib/redpanda/data
  kafka_api:
    - address: 0.0.0.0
      port: 9092
      authentication_method: sasl
  admin:
    - address: 0.0.0.0
      port: 9644
  rpc_server:
    address: 0.0.0.0
    port: 33145
  seed_servers:
    - host:
        address: redpanda-1
        port: 33145

admin

Network address for the Admin API server.

Property Value

Type

object

Default

{address: "127.0.0.1", port: 9644}

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

User

Example

redpanda:
  admin:
    - name: <admin-api-name>
      address: <external-broker-hostname>
      port: <admin-api-port>

Replace the following placeholders with your values:

  • <admin-api-name>: Name for the Admin API listener (TLS configuration is handled separately in the admin_api_tls broker property)

  • <external-broker-hostname>: The externally accessible hostname or IP address that clients use to connect to this broker

  • <admin-api-port>: The port number for the Admin API endpoint

admin_api_doc_dir

Path to the API specifications for the Admin API.

Property Value

Type

string

Default

/usr/share/redpanda/admin-api-doc

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

User

admin_api_tls

Specifies the TLS configuration for the HTTP Admin API.

Property Value

Type

array

Default

[]

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

User

Example

redpanda:
  admin_api_tls:
    - name: <admin-api-tls-name>
      enabled: true
      cert_file: <path-to-cert-file>
      key_file: <path-to-key-file>
      truststore_file: <path-to-truststore-file>
      require_client_auth: true

Replace the following placeholders with your values:

  • <admin-api-tls-name>: Name that matches your Admin API listener (defined in the admin broker property)

  • <path-to-cert-file>: Full path to the TLS certificate file

  • <path-to-key-file>: Full path to the TLS private key file

  • <path-to-truststore-file>: Full path to the Certificate Authority file

advertised_kafka_api

Address of the Kafka API published to the clients. If not set, the kafka_api broker property is used. When behind a load balancer or in containerized environments, this should be the externally-accessible address that clients use to connect.

Property Value

Type

string

Default

null

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

User

Example

redpanda:
  advertised_kafka_api:
    - name: <kafka-api-name>
      address: <external-broker-hostname>
      port: <kafka-port>

Replace the following placeholders with your values:

  • <kafka-api-name>: Name that matches your Kafka API listener (defined in the kafka_api broker property)

  • <external-broker-hostname>: The externally accessible hostname or IP address that clients use to connect to this broker

  • <kafka-port>: The port number for the Kafka API endpoint

advertised_rpc_api

Address of RPC endpoint published to other cluster members. If not set, the rpc_server broker property is used. This should be the address other brokers can use to communicate with this broker.

Property Value

Type

string

Default

null

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

User

Example

redpanda:
  advertised_rpc_api:
    address: <external-broker-hostname>
    port: <rpc-port>

Replace the following placeholders with your values:

  • <external-broker-hostname>: The externally accessible hostname or IP address that other brokers use to communicate with this broker

  • <rpc-port>: The port number for the RPC endpoint (default is 33145)

cloud_storage_cache_directory

Directory for the archival cache. Set this property when the cloud_storage_enabled cluster property is enabled. If not specified, Redpanda uses a default path within the data directory.

Property Value

Type

string

Default

null

Nullable

Yes

Restored on Whole Cluster Restore

Yes

Visibility

User

Example

redpanda:
  cloud_storage_cache_directory: <cache-directory-path>

Replace <cache-directory-path> with the full path to your desired cache directory.

cloud_storage_inventory_hash_store

Directory in which to store inventory report hashes for use by the cloud storage scrubber. If not specified, Redpanda uses a default path within the data directory.

Property Value

Type

string

Default

null

Nullable

Yes

Restored on Whole Cluster Restore

Yes

Visibility

User

Example

redpanda:
  cloud_storage_inventory_hash_store: <inventory-hash-directory-path>

crash_loop_limit

A limit on the number of consecutive times a broker can crash within one hour before its crash-tracking logic is reset. This limit prevents a broker from getting stuck in an infinite cycle of crashes.

If null, the property is disabled and no limit is applied.

The crash-tracking logic is reset (to zero consecutive crashes) by any of the following conditions:

  • The broker shuts down cleanly.

  • One hour passes since the last crash.

  • The redpanda.yaml broker configuration file is updated.

  • The startup_log file in the broker’s data_directory broker property is manually deleted.

Property Value

Type

integer

Maximum

4294967295

Default

5

Nullable

Yes

Restored on Whole Cluster Restore

Yes

Visibility

User
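
Example

A minimal sketch of setting this property in redpanda.yaml; the value 10 is illustrative only:
redpanda:
  crash_loop_limit: 10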

crash_loop_sleep_sec

Introduced in v24.3.4

The amount of time the broker sleeps before terminating when the limit on consecutive broker crashes (crash_loop_limit) is reached. This property provides a debugging window for you to access the broker before it terminates, and is particularly useful in Kubernetes environments.

If null, the property is disabled, and the broker terminates immediately after reaching the crash loop limit.

For information about how to reset the crash loop limit, see the crash_loop_limit broker property.

Property Value

Type

integer

Range

[-17179869184, 17179869183]

Default

null

Nullable

Yes

Unit

Seconds

Restored on Whole Cluster Restore

Yes

Visibility

User
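
Example

A sketch that pairs the crash loop limit with a sleep window; the 300-second value is an arbitrary illustration of a debugging window:
redpanda:
  crash_loop_limit: 5
  crash_loop_sleep_sec: 300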

data_directory

Path to the directory for storing Redpanda’s streaming data files.

Property Value

Type

string

Default

null

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

User
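
Example

A minimal example that follows the pattern of the section-level example above:
redpanda:
  data_directory: <data-directory-path>

Replace <data-directory-path> with the full path to the directory where Redpanda stores its streaming data files (for example, /var/lib/redpanda/data).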

developer_mode

Enabling developer_mode isn’t recommended for production use.

Enable developer mode, which skips most of the checks performed at startup.

Property Value

Type

boolean

Default

false

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

Tunable

emergency_disable_data_transforms

Override the cluster property data_transforms_enabled and disable Wasm-powered data transforms. This is an emergency shutoff button.

Property Value

Type

boolean

Default

false

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

User
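
Example

A sketch of temporarily disabling data transforms on a broker regardless of the data_transforms_enabled cluster property:
redpanda:
  emergency_disable_data_transforms: true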

empty_seed_starts_cluster

Controls how a new cluster is formed. All brokers in a cluster must have the same value.

For backward compatibility, true is the default. Redpanda recommends using false in production environments to prevent accidental cluster formation.

Property Value

Type

boolean

Default

true

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

User

fips_mode

Controls whether Redpanda starts in FIPS mode. This property accepts three values:

  • Disabled - Redpanda does not start in FIPS mode.

  • Permissive - Redpanda performs the same check as enabled, but a warning is logged, and Redpanda continues to run. Redpanda loads the OpenSSL FIPS provider into the OpenSSL library. After this completes, Redpanda is operating in FIPS mode, which means that the TLS cipher suites available to users are limited to the TLSv1.2 and TLSv1.3 NIST-approved cryptographic methods.

  • Enabled - Redpanda verifies that the operating system is enabled for FIPS by checking /proc/sys/crypto/fips_enabled. If the file does not exist or does not return 1, Redpanda immediately exits.

Property Value

Type

string (enum)

Accepted values

disabled, permissive, enabled

Default

disabled

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

User
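
Example

A sketch that enables FIPS mode together with the related OpenSSL broker properties described later in this section; the paths are placeholders, not defaults:
redpanda:
  fips_mode: enabled
  openssl_config_file: <path-to-openssl-config-file>
  openssl_module_directory: <path-to-fips-module-directory>

Replace the following placeholders with your values:

  • <path-to-openssl-config-file>: Full path to the OpenSSL configuration file

  • <path-to-fips-module-directory>: Full path to the directory that contains the fips.so module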

kafka_api

IP address and port of the Kafka API endpoint that handles requests. Supports multiple listeners with different configurations.

Property Value

Type

array

Default

[127.0.0.1:9092, null]

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

User

Example

Basic example
redpanda:
  kafka_api:
    - address: <bind-address>
      port: <kafka-port>
      authentication_method: sasl
Multiple listeners example (for different networks or authentication methods)
redpanda:
  kafka_api:
    - name: <internal-listener-name>
      address: <internal-bind-address>
      port: <internal-kafka-port>
      authentication_method: none
    - name: <external-listener-name>
      address: <external-bind-address>
      port: <external-kafka-port>
      authentication_method: sasl
    - name: <mtls-listener-name>
      address: <mtls-bind-address>
      port: <mtls-kafka-port>
      authentication_method: mtls_identity

Replace the following placeholders with your values:

  • <bind-address>: The IP address to bind the listener to (typically 0.0.0.0 for all interfaces)

  • <kafka-port>: The port number for the Kafka API endpoint

  • <internal-listener-name>: Name for internal network connections (for example, internal)

  • <external-listener-name>: Name for external network connections (for example, external)

  • <mtls-listener-name>: Name for mTLS connections (for example, mtls)

  • <internal-bind-address>: The IP address for internal connections

  • <internal-kafka-port>: The port number for internal Kafka API connections

  • <external-bind-address>: The IP address for external connections

  • <external-kafka-port>: The port number for external Kafka API connections

  • <mtls-bind-address>: The IP address for mTLS connections

  • <mtls-kafka-port>: The port number for mTLS Kafka API connections

kafka_api_tls

Transport Layer Security (TLS) configuration for the Kafka API endpoint.

Property Value

Type

array

Default

[]

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

User

Example

redpanda:
  kafka_api_tls:
    - name: <kafka-api-listener-name>
      enabled: true
      cert_file: <path-to-cert-file>
      key_file: <path-to-key-file>
      truststore_file: <path-to-truststore-file>
      require_client_auth: false

Replace the following placeholders with your values:

  • <kafka-api-listener-name>: Name that matches your Kafka API listener (defined in the kafka_api broker property)

  • <path-to-cert-file>: Full path to the TLS certificate file

  • <path-to-key-file>: Full path to the TLS private key file

  • <path-to-truststore-file>: Full path to the Certificate Authority file

Set require_client_auth: true for mutual TLS (mTLS) authentication, or false for server-side TLS only.

memory_allocation_warning_threshold

Threshold for log messages that contain a larger memory allocation than specified.

Property Value

Type

integer

Default

131072

Nullable

Yes

Restored on Whole Cluster Restore

Yes

Visibility

Tunable
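
Example

A sketch, assuming the threshold is expressed in bytes (as the default of 131072 suggests); the value shown is arbitrary:
redpanda:
  memory_allocation_warning_threshold: 262144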

node_id

A number that uniquely identifies the broker within the cluster. If null (the default value), Redpanda automatically assigns an ID. If set, it must be a non-negative value.

Do not set node_id manually.

Redpanda assigns unique IDs automatically to prevent issues such as:

  • Brokers with empty disks rejoining the cluster.

  • Conflicts during recovery or scaling.

Manually setting or reusing node_id values, even for decommissioned brokers, can cause cluster inconsistencies and operational failures.

Broker IDs are immutable. After a broker joins the cluster, its node_id cannot be changed.

Property Value

Type

integer

Default

null

Nullable

Yes

Restored on Whole Cluster Restore

Yes

Visibility

User

node_id_overrides

List of node ID and UUID overrides applied at broker startup. Each entry includes the current UUID, the desired new ID and UUID, and an ignore flag. An entry applies only if current_uuid matches the broker’s actual UUID.

Remove this property after the cluster restarts successfully and operates normally. This prevents reapplication and maintains consistent configuration across brokers.

Property Value

Type

array

Default

[]

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

User

Example

redpanda:
  node_id_overrides:
    - current_uuid: "<current-broker-uuid>"
      new_id: <new-broker-id>
      new_uuid: "<new-broker-uuid>"
      ignore_existing_node_id: <ignore-existing-flag>
    - current_uuid: "<another-current-uuid>"
      new_id: <another-new-broker-id>
      new_uuid: "<another-new-uuid>"
      ignore_existing_node_id: <another-ignore-flag>

Replace the following placeholders with your values:

  • <current-broker-uuid>: The current UUID of the broker to override

  • <new-broker-id>: The new broker ID to assign

  • <new-broker-uuid>: The new UUID to assign to the broker

  • <ignore-existing-flag>: Set to true to force override on brokers that already have a node ID, or false to apply override only to brokers without existing node IDs

  • <another-current-uuid>: Additional broker UUID for multiple overrides

  • <another-new-broker-id>: Additional new broker ID

  • <another-new-uuid>: Additional new UUID

  • <another-ignore-flag>: Additional ignore existing node ID flag

openssl_config_file

Path to the configuration file used by OpenSSL to properly load the FIPS-compliant module.

Property Value

Type

string

Default

null

Nullable

Yes

Restored on Whole Cluster Restore

Yes

Visibility

User

openssl_module_directory

Path to the directory that contains the OpenSSL FIPS-compliant module. The filename that Redpanda looks for is fips.so.

Property Value

Type

string

Default

null

Nullable

Yes

Restored on Whole Cluster Restore

Yes

Visibility

User

rack

A label that identifies a failure zone. Apply the same label to all brokers in the same failure zone. When enable_rack_awareness is set to true at the cluster level, the system uses the rack labels to spread partition replicas across different failure zones.

Property Value

Type

string

Default

null

Nullable

Yes

Restored on Whole Cluster Restore

Yes

Visibility

User
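
Example

A sketch of labeling a broker with its failure zone; the label is a free-form string, and the value shown is illustrative:
redpanda:
  rack: <failure-zone-label>

Replace <failure-zone-label> with the label of the failure zone this broker runs in (for example, us-east-1a). Apply the same label to every broker in that zone.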

recovery_mode_enabled

If true, start Redpanda in recovery mode, where user partitions are not loaded and only administrative operations are allowed.

Property Value

Type

boolean

Default

false

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

User
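
Example

A minimal sketch of starting a broker in recovery mode:
redpanda:
  recovery_mode_enabled: true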

rpc_server

IP address and port for the Remote Procedure Call (RPC) server.

Property Value

Type

object

Default

{address: "127.0.0.1", port: 33145}

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

User
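
Example

A sketch that mirrors the section-level example above:
redpanda:
  rpc_server:
    address: <bind-address>
    port: <rpc-port>

Replace the following placeholders with your values:

  • <bind-address>: The IP address to bind the RPC server to (typically 0.0.0.0 for all interfaces)

  • <rpc-port>: The port number for the RPC endpoint (default is 33145)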

rpc_server_tls

TLS configuration for the RPC server.

Property Value

Type

object

Default

{crl_file: null, enable_renegotiation: null, enabled: null, key_cert: null, min_tls_version: null, require_client_auth: null, tls_v1_2_cipher_suites: null, tls_v1_3_cipher_suites: null, truststore_file: null}

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

User

Example

redpanda:
  rpc_server_tls:
    enabled: true
    cert_file: "<path-to-cert-file>"
    key_file: "<path-to-key-file>"
    truststore_file: "<path-to-truststore-file>"
    require_client_auth: true

Replace the following placeholders with your values:

  • <path-to-cert-file>: Full path to the RPC TLS certificate file

  • <path-to-key-file>: Full path to the RPC TLS private key file

  • <path-to-truststore-file>: Full path to the certificate authority file

seed_servers

List of seed servers used to join the current cluster. If the seed_servers list is empty, the broker is a cluster root and forms a new cluster.

  • When empty_seed_starts_cluster is true, Redpanda enables one broker with an empty seed_servers list to initiate a new cluster. The broker with an empty seed_servers becomes the cluster root, to which other brokers must connect to join the cluster. Brokers looking to join the cluster should have their seed_servers populated with the cluster root’s address, facilitating their connection to the cluster.

    Only one broker, the designated cluster root, should have an empty seed_servers list during the initial cluster bootstrapping. This ensures a single initiation point for cluster formation.

  • When empty_seed_starts_cluster is false, Redpanda requires all brokers to start with a known set of brokers listed in seed_servers. The seed_servers list must not be empty and should be identical across these initial seed brokers, containing the addresses of all seed brokers. Brokers not included in the seed_servers list use it to discover and join the cluster, allowing for expansion beyond the foundational members.

    The seed_servers list must be consistent across all seed brokers to prevent cluster fragmentation and ensure stable cluster formation.

Property Value

Type

array

Default

[]

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

User

Example

Example with empty_seed_starts_cluster: true
# Cluster root broker (seed starter)
redpanda:
  empty_seed_starts_cluster: true
  seed_servers: []
# Additional brokers joining the cluster
redpanda:
  empty_seed_starts_cluster: true
  seed_servers:
    - host:
        address: <seed-broker-ip>
        port: <rpc-port>
Example with empty_seed_starts_cluster: false
# All initial seed brokers use the same configuration
redpanda:
  empty_seed_starts_cluster: false
  seed_servers:
    - host:
        address: <seed-broker-1-ip>
        port: <rpc-port>
    - host:
        address: <seed-broker-2-ip>
        port: <rpc-port>
    - host:
        address: <seed-broker-3-ip>
        port: <rpc-port>

Replace the following placeholders with your values:

  • <seed-broker-ip>: IP address of the cluster root broker

  • <seed-broker-x-ip>: IP addresses of each seed broker in the cluster

  • <rpc-port>: RPC port for brokers (default: 33145)

storage_failure_injection_config_path

Path to the configuration file used for low-level storage failure injection.

Property Value

Type

string

Default

null

Nullable

Yes

Restored on Whole Cluster Restore

Yes

Visibility

Tunable

storage_failure_injection_enabled

If true, inject low-level storage failures on the write path. Do not use for production instances.

Property Value

Type

boolean

Default

false

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

Tunable

upgrade_override_checks

Whether to bypass safety checks when starting a Redpanda version newer than the cluster’s consensus version.

Property Value

Type

boolean

Default

false

Nullable

No

Restored on Whole Cluster Restore

Yes

Visibility

Tunable

verbose_logging_timeout_sec_max

Maximum duration in seconds for verbose (TRACE or DEBUG) logging. Values configured above this limit are clamped. If null (the default), there is no limit. Can be overridden in the Admin API on a per-request basis.

Property Value

Type

integer

Range

[-17179869184, 17179869183]

Default

null

Nullable

Yes

Restored on Whole Cluster Restore

Yes

Visibility

Tunable
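
Example

A sketch of capping verbose logging at one hour; the value is illustrative:
redpanda:
  verbose_logging_timeout_sec_max: 3600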

HTTP-Based Authentication

The authentication_method property configures authentication for HTTP-based API listeners (Schema Registry and HTTP Proxy).

Accepted values:

  • none - No authentication required (allows anonymous access).

  • http_basic - Authentication required. The specific authentication method (Basic vs OIDC) depends on the http_authentication cluster property and the client’s Authorization header type.

Default: none

This property works together with the cluster property http_authentication:

  • authentication_method (broker property): Controls whether a specific listener requires authentication (http_basic) or allows anonymous access (none)

  • http_authentication (cluster property): Controls which authentication methods are available globally (["BASIC"], ["OIDC"], or ["BASIC", "OIDC"])

When authentication_method: http_basic is set on a listener, clients can use any authentication method that is enabled in the http_authentication cluster property.

For detailed authentication configuration, see Configure Authentication.
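
Example

A sketch showing the two settings together, assuming a Schema Registry listener. The http_authentication cluster property is not set in redpanda.yaml; it appears below only as a comment (set separately, for example with rpk cluster config set):
schema_registry:
  schema_registry_api:
    address: 0.0.0.0
    port: 8081
    authentication_method: http_basic
# Cluster-wide setting, applied separately (for example):
# rpk cluster config set http_authentication '["BASIC"]'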

Schema Registry

The Schema Registry provides configuration properties that enable producers and consumers to share the information needed to serialize and deserialize messages.

For information on how to edit broker properties for the Schema Registry, see Configure Broker Properties.

Schema Registry shares some configuration property patterns with HTTP Proxy (such as API listeners and authentication methods), but also has additional schema-specific properties that control schema storage and validation behavior.

Example
schema_registry:
  schema_registry_api:
    address: 0.0.0.0
    port: 8081
    authentication_method: http_basic
  schema_registry_replication_factor: 3
  mode_mutability: true

mode_mutability

Enable modifications to the read-only mode of the Schema Registry. When set to true, the entire Schema Registry or its subjects can be switched to READONLY or READWRITE. This property is useful for preventing unwanted changes to the entire Schema Registry or specific subjects.

Property Value

Type

boolean

Default

true

Nullable

No

Requires restart

Yes

Restored on Whole Cluster Restore

Yes
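
Example

A sketch of locking the Schema Registry in its current mode by rejecting mode changes:
schema_registry:
  mode_mutability: false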

schema_registry_api

Schema Registry API listener address and port.

Property Value

Type

array

Default

[0.0.0.0:8081, null]

Nullable

No

Requires restart

Yes

Restored on Whole Cluster Restore

Yes

Example

schema_registry:
  schema_registry_api:
    address: 0.0.0.0
    port: 8081
    authentication_method: http_basic

schema_registry_api_tls

TLS configuration for Schema Registry API.

Property Value

Type

array

Default

[]

Nullable

No

Requires restart

Yes

Restored on Whole Cluster Restore

Yes
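
Example

A sketch that follows the same pattern as the admin_api_tls and kafka_api_tls examples; the field names are assumed to match those listeners' TLS configuration:
schema_registry:
  schema_registry_api_tls:
    - name: <schema-registry-listener-name>
      enabled: true
      cert_file: <path-to-cert-file>
      key_file: <path-to-key-file>
      truststore_file: <path-to-truststore-file>
      require_client_auth: false

Replace the following placeholders with your values:

  • <schema-registry-listener-name>: Name that matches your Schema Registry API listener (defined in the schema_registry_api property)

  • <path-to-cert-file>: Full path to the TLS certificate file

  • <path-to-key-file>: Full path to the TLS private key file

  • <path-to-truststore-file>: Full path to the Certificate Authority file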

schema_registry_replication_factor

Replication factor for the internal _schemas topic. If unset, defaults to the default_topic_replication cluster property.

Property Value

Type

integer

Range

[-32768, 32767]

Default

null

Nullable

Yes

Requires restart

Yes

Restored on Whole Cluster Restore

Yes

HTTP Proxy

Configuration properties for the HTTP Proxy (pandaproxy), which exposes a REST interface to the Kafka API.

Example

pandaproxy:
  pandaproxy_api:
    address: 0.0.0.0
    port: 8082
    authentication_method: http_basic
  client_cache_max_size: 10
  client_keep_alive: 300000
  consumer_instance_timeout_ms: 300000

HTTP Proxy Client

Configuration options for how the HTTP Proxy connects to Kafka brokers.

The HTTP Proxy acts as a bridge: external clients make REST API calls to the HTTP Proxy server (configured in the pandaproxy section), and the HTTP Proxy uses these client settings to connect to your Kafka cluster.

Example
pandaproxy_client:
  brokers:
    - address: <kafka-broker-1>
      port: <kafka-port>
    - address: <kafka-broker-2>
      port: <kafka-port>
  sasl_mechanism: <scram-mechanism>
  scram_username: <username>
  scram_password: <password>
  produce_ack_level: -1
  retries: 5

Replace the following placeholders with your values:

  • <kafka-broker-1>, <kafka-broker-2>: IP addresses or hostnames of your Kafka brokers

  • <kafka-port>: Port number for the Kafka API (default 9092)

  • <scram-mechanism>: SCRAM authentication mechanism (SCRAM-SHA-256 or SCRAM-SHA-512)

  • <username>: SCRAM username for authentication

  • <password>: SCRAM password for authentication

broker_tls

TLS configuration for the Kafka API servers to which the HTTP Proxy client should connect.

Property Value

Type

object

Default

{crl_file: null, enable_renegotiation: null, enabled: null, key_cert: null, min_tls_version: null, require_client_auth: null, tls_v1_2_cipher_suites: null, tls_v1_3_cipher_suites: null, truststore_file: null}

Nullable

No

Requires restart

Yes

Restored on Whole Cluster Restore

Yes
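
Example

A minimal sketch that enables TLS verification of the brokers, using only field names that appear in this property's default value (enabled, truststore_file):
pandaproxy_client:
  broker_tls:
    enabled: true
    truststore_file: <path-to-truststore-file>

Replace <path-to-truststore-file> with the full path to the Certificate Authority file used to verify the brokers' certificates.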

brokers

Network addresses of the Kafka API servers to which the HTTP Proxy client should connect.

Property Value

Type

array

Default

[]

Nullable

No

Requires restart

Yes

Restored on Whole Cluster Restore

Yes

client_identifier

Custom identifier to include in the Kafka request header for the HTTP Proxy client. This identifier can help debug or monitor client activities.

Property Value

Type

string

Default

test_client

Nullable

Yes

Requires restart

Yes

Restored on Whole Cluster Restore

Yes

consumer_heartbeat_interval_ms

Interval (in milliseconds) for consumer heartbeats.

Property Value

Type

string

Default

null

Nullable

No

Unit

Milliseconds

Restored on Whole Cluster Restore

Yes

Visibility

User

consumer_rebalance_timeout_ms

Timeout (in milliseconds) for consumer rebalance.

Property Value

Type

string

Default

null

Nullable

No

Unit

Milliseconds

Restored on Whole Cluster Restore

Yes

Visibility

User

consumer_request_max_bytes

Maximum bytes to fetch per request.

Property Value

Type

integer

Range

[-2147483648, 2147483647]

Default

1048576

Nullable

No

Unit

Bytes

Requires restart

Yes

Restored on Whole Cluster Restore

Yes

consumer_request_min_bytes

Minimum bytes to fetch per request.

Property Value

Type

integer

Range

[-2147483648, 2147483647]

Default

1

Nullable

No

Unit

Bytes

Requires restart

Yes

Restored on Whole Cluster Restore

Yes

consumer_request_timeout_ms

Timeout (in milliseconds) for consumer requests.

Property Value

Type

string

Default

null

Nullable

No

Unit

Milliseconds

Restored on Whole Cluster Restore

Yes

Visibility

User

consumer_session_timeout_ms

Timeout (in milliseconds) for consumer session.

Property Value

Type

string

Default

null

Nullable

No

Unit

Milliseconds

Restored on Whole Cluster Restore

Yes

Visibility

User
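
Example

A sketch of tuning the consumer-related timing properties together; the values are illustrative and expressed in milliseconds:
pandaproxy_client:
  consumer_heartbeat_interval_ms: 500
  consumer_rebalance_timeout_ms: 2000
  consumer_request_timeout_ms: 100
  consumer_session_timeout_ms: 10000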

produce_ack_level

Number of acknowledgments the producer requires the leader to have received before considering a request complete.

Property Value

Type

integer

Range

[-32768, 32767]

Default

-1

Nullable

No

Requires restart

Yes

Restored on Whole Cluster Restore

Yes

produce_batch_delay_ms

Delay (in milliseconds) to wait before sending a batch.

Property Value

Type

string

Default

null

Nullable

No

Unit

Milliseconds

Restored on Whole Cluster Restore

Yes

Visibility

User

produce_batch_record_count

Number of records to batch before sending to the broker.

Property Value

Type

integer

Range

[-2147483648, 2147483647]

Default

1000

Nullable

No

Requires restart

Yes

Restored on Whole Cluster Restore

Yes

produce_batch_size_bytes

Number of bytes to batch before sending to the broker.

Property Value

Type

integer

Range

[-2147483648, 2147483647]

Default

1048576

Nullable

No

Unit

Bytes

Requires restart

Yes

Restored on Whole Cluster Restore

Yes

produce_compression_type

Enable or disable compression by the Kafka client. Specify none to disable compression, or one of the supported types: gzip, snappy, lz4, or zstd.

Property Value

Type

string

Default

none

Nullable

No

Requires restart

Yes

Restored on Whole Cluster Restore

Yes

produce_shutdown_delay_ms

Delay (in milliseconds) to allow for final flush of buffers before shutting down.

Property Value

Type

string

Default

null

Nullable

No

Unit

Milliseconds

Restored on Whole Cluster Restore

Yes

Visibility

User
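
Example

A sketch of tuning the producer batching properties together; the values are illustrative:
pandaproxy_client:
  produce_batch_delay_ms: 100
  produce_batch_record_count: 1000
  produce_batch_size_bytes: 1048576
  produce_compression_type: none
  produce_ack_level: -1
  produce_shutdown_delay_ms: 3000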

retries

Number of times to retry a request to a broker.

Property Value

Type

integer

Default

5

Nullable

No

Requires restart

Yes

Restored on Whole Cluster Restore

Yes

retry_base_backoff_ms

Delay (in milliseconds) for initial retry backoff.

Property Value

Type

string

Default

null

Nullable

No

Unit

Milliseconds

Restored on Whole Cluster Restore

Yes

Visibility

User

sasl_mechanism

The SASL mechanism to use when the HTTP Proxy client connects to the Kafka API. These credentials are used when the HTTP Proxy API listener has authentication_method: none but the cluster requires authenticated access to the Kafka API.

This property specifies which individual SASL mechanism the HTTP Proxy client should use, while the cluster-wide available mechanisms are configured using the sasl_mechanisms cluster property.

Breaking change in Redpanda 25.2: Ephemeral credentials for HTTP Proxy are removed. If your HTTP Proxy API listeners use authentication_method: none, you must configure explicit SASL credentials (scram_username, scram_password, and sasl_mechanism) for HTTP Proxy to authenticate with the Kafka API.

Configuring shared credentials in this way allows any HTTP API user to access Kafka using those credentials. Redpanda Data recommends enabling HTTP Proxy authentication instead.

For configuration instructions, see Configure HTTP Proxy to connect to Redpanda with SASL.

For details about this breaking change, see What’s new.

While the cluster-wide sasl_mechanisms property may support additional mechanisms (PLAIN, GSSAPI, OAUTHBEARER), HTTP Proxy client connections only support SCRAM mechanisms.

Property Value

Type

string (enum)

Accepted values

SCRAM-SHA-256, SCRAM-SHA-512, OAUTHBEARER (Enterprise)

Default

null

Nullable

No

Requires restart

Yes

Restored on Whole Cluster Restore

Yes

scram_password

Password to use for SCRAM authentication mechanisms when the HTTP Proxy client connects to the Kafka API. This property is required when the HTTP Proxy API listener has authentication_method: none but the cluster requires authenticated access to the Kafka API.

Breaking change in Redpanda 25.2: Ephemeral credentials for HTTP Proxy are removed. If your HTTP Proxy API listeners use authentication_method: none, you must configure explicit SASL credentials (scram_username, scram_password, and sasl_mechanism) for HTTP Proxy to authenticate with the Kafka API.

Configuring shared credentials in this way allows any HTTP API user to access Kafka using those credentials. Redpanda Data recommends enabling HTTP Proxy authentication instead.

For configuration instructions, see Configure HTTP Proxy to connect to Redpanda with SASL.

For details about this breaking change, see What’s new.

Property Value

Type

string

Default

null

Nullable

No

Requires restart

Yes

Restored on Whole Cluster Restore

Yes

scram_username

Username to use for SCRAM authentication mechanisms when the HTTP Proxy client connects to the Kafka API. This property is required when the HTTP Proxy API listener has authentication_method: none but the cluster requires authenticated access to the Kafka API.

Breaking change in Redpanda 25.2: Ephemeral credentials for HTTP Proxy are removed. If your HTTP Proxy API listeners use authentication_method: none, you must configure explicit SASL credentials (scram_username, scram_password, and sasl_mechanism) for HTTP Proxy to authenticate with the Kafka API.

Configuring shared credentials in this way allows any HTTP API user to access Kafka using those credentials. Redpanda Data recommends enabling HTTP Proxy authentication instead.

For configuration instructions, see Configure HTTP Proxy to connect to Redpanda with SASL.

For details about this breaking change, see What’s new.

Property Value

Type

string

Default

null

Nullable

No

Requires restart

Yes

Restored on Whole Cluster Restore

Yes

Schema Registry Client

Configuration options for Schema Registry Client connections to Kafka brokers.

Example
schema_registry_client:
  brokers:
    - address: <kafka-broker-1>
      port: <kafka-port>
    - address: <kafka-broker-2>
      port: <kafka-port>
  sasl_mechanism: <scram-mechanism>
  scram_username: <username>
  scram_password: <password>
  produce_batch_delay_ms: 0
  produce_batch_record_count: 0
  client_identifier: schema_registry_client

Replace the following placeholders with your values:

  • <kafka-broker-1>, <kafka-broker-2>: IP addresses or hostnames of your Kafka brokers

  • <kafka-port>: Port number for the Kafka API (typically 9092)

  • <scram-mechanism>: SCRAM authentication mechanism (SCRAM-SHA-256 or SCRAM-SHA-512)

  • <username>: SCRAM username for authentication

  • <password>: SCRAM password for authentication

Schema Registry Client uses the same configuration properties as HTTP Proxy Client but with different defaults optimized for Schema Registry operations. The client uses immediate batching (0 ms delay, 0 record count) for low-latency schema operations.

Audit Log Client

Configuration options for Audit Log Client connections to Kafka brokers.

Example
audit_log_client:
  brokers:
    - address: <kafka-broker-1>
      port: <kafka-port>
    - address: <kafka-broker-2>
      port: <kafka-port>
  produce_batch_delay_ms: 0
  produce_batch_record_count: 0
  produce_batch_size_bytes: 0
  produce_compression_type: zstd
  produce_ack_level: 1
  produce_shutdown_delay_ms: 3000
  client_identifier: audit_log_client

Replace the following placeholders with your values:

  • <kafka-broker-1>, <kafka-broker-2>: IP addresses or hostnames of your Kafka brokers

  • <kafka-port>: Port number for the Kafka API (typically 9092)

Audit log client uses the same configuration properties as HTTP Proxy client but with different defaults optimized for audit logging operations. The client uses immediate batching (0 ms delay, 0 record count) with compression enabled (zstd) and acknowledgment level 1 for reliable audit log delivery.