Broker Configuration Properties

Broker configuration properties are applied individually to each broker in a cluster. You can find and modify these properties in the redpanda.yaml configuration file.

Broker properties are organized under different top-level sections in your configuration file:

- redpanda
- pandaproxy
- pandaproxy_client
- schema_registry
- schema_registry_client
- audit_log_client

Simple properties use a single key-value pair, while complex properties (such as listeners and TLS configurations) include examples.

For information on how to edit broker properties, see Configure Broker Properties.

All broker properties require that you restart Redpanda for any update to take effect.

Redpanda

Configuration properties for the core Redpanda broker.

Example

redpanda:
  data_directory: /var/lib/redpanda/data
  kafka_api:
    - address: 0.0.0.0
      port: 9092
      authentication_method: sasl
  admin:
    - address: 0.0.0.0
      port: 9644
  rpc_server:
    address: 0.0.0.0
    port: 33145
  seed_servers:
    - host:
        address: redpanda-1
        port: 33145

admin

Network address for the Admin API server.

Visibility: user
Type: array
Default: [{ address: "127.0.0.1", port: 9644 }]

Example

redpanda:
  admin:
    - name: <admin-api-name>
      address: <external-broker-hostname>
      port: <admin-api-port>

Replace the following placeholders with your values:

- <admin-api-name>: Name for the Admin API listener (TLS configuration is handled separately in the admin_api_tls broker property)
- <external-broker-hostname>: The externally accessible hostname or IP address that clients use to connect to this broker
- <admin-api-port>: The port number for the Admin API endpoint

admin_api_doc_dir

Path to the API specifications for the Admin API.

Visibility: user
Type: string
Default: /usr/share/redpanda/admin-api-doc

admin_api_tls

Specifies the TLS configuration for the HTTP Admin API.

Visibility: user
Default: []

Example

redpanda:
  admin_api_tls:
    - name: <admin-api-tls-name>
      enabled: true
      cert_file: <path-to-cert-file>
      key_file: <path-to-key-file>
      truststore_file: <path-to-truststore-file>
      require_client_auth: true

Replace the following placeholders with your values:

- <admin-api-tls-name>: Name that matches your Admin API listener (defined in the admin broker property)
- <path-to-cert-file>: Full path to the TLS certificate file
- <path-to-key-file>: Full path to the TLS private key file
- <path-to-truststore-file>: Full path to the Certificate Authority file

advertised_kafka_api

Address of the Kafka API published to clients. If not set, the kafka_api broker property is used. When the broker is behind a load balancer or runs in a containerized environment, this should be the externally accessible address that clients use to connect.

Visibility: user
Type: array
Default: []

Example

redpanda:
  advertised_kafka_api:
    - name: <kafka-api-name>
      address: <external-broker-hostname>
      port: <kafka-port>

Replace the following placeholders with your values:

- <kafka-api-name>: Name that matches your Kafka API listener (defined in the kafka_api broker property)
- <external-broker-hostname>: The externally accessible hostname or IP address that clients use to connect to this broker
- <kafka-port>: The port number for the Kafka API endpoint
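To illustrate how advertised_kafka_api pairs with kafka_api, the following sketch shows one possible configuration for a broker that binds to all interfaces but advertises a public hostname to clients. The listener name, hostname, and port are assumptions for this example, not required values.

redpanda:
  kafka_api:
    - name: external
      address: 0.0.0.0
      port: 9092
  advertised_kafka_api:
    - name: external                  # must match the kafka_api listener name
      address: broker-0.example.com   # assumed externally resolvable hostname
      port: 9092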
advertised_rpc_api

Address of the RPC endpoint published to other cluster members. If not set, the rpc_server broker property is used. This should be the address other brokers can use to communicate with this broker.

Visibility: user
Type: string
Default: null

Example

redpanda:
  advertised_rpc_api:
    address: <external-broker-hostname>
    port: <rpc-port>

Replace the following placeholders with your values:

- <external-broker-hostname>: The externally accessible hostname or IP address that other brokers use to communicate with this broker
- <rpc-port>: The port number for the RPC endpoint (default is 33145)

cloud_storage_cache_directory

Directory for the archival cache. Set this property when the cloud_storage_enabled cluster property is enabled. If not specified, Redpanda uses a default path within the data directory.

Visibility: user
Type: string
Default: null

Example

redpanda:
  cloud_storage_cache_directory: <cache-directory-path>

Replace <cache-directory-path> with the full path to your desired cache directory.

cloud_storage_inventory_hash_store

Directory to store inventory report hashes for use by the cloud storage scrubber. If not specified, Redpanda uses a default path within the data directory.

Visibility: user
Type: string
Default: null

Example

redpanda:
  cloud_storage_inventory_hash_store: <inventory-hash-directory-path>

Replace <inventory-hash-directory-path> with the full path to your desired inventory hash storage directory.

crash_loop_limit

A limit on the number of consecutive times a broker can crash within one hour before its crash-tracking logic is reset. This limit prevents a broker from getting stuck in an infinite cycle of crashes. If null, the property is disabled and no limit is applied.

The crash-tracking logic is reset (to zero consecutive crashes) by any of the following conditions:

- The broker shuts down cleanly.
- One hour passes since the last crash.
- The redpanda.yaml broker configuration file is updated.
- The startup_log file in the broker's data directory (set by the data_directory broker property) is manually deleted.

Unit: number of consecutive crashes of a broker
Visibility: user
Type: integer
Accepted values: [0, 4294967295]
Default: 5

crash_loop_sleep_sec

Introduced in v24.3.4

The amount of time the broker sleeps before terminating when the limit on consecutive broker crashes (crash_loop_limit) is reached. This property provides a debugging window for you to access the broker before it terminates, and is particularly useful in Kubernetes environments. If null, the property is disabled, and the broker terminates immediately after reaching the crash loop limit.

For information about how to reset the crash loop limit, see the crash_loop_limit broker property.

Unit: seconds
Visibility: user
Type: integer or null
Accepted values: [0, 4294967295] or null
Default: null

data_directory

Path to the directory for storing Redpanda's streaming data files.

Visibility: user
Type: string
Default: null

developer_mode

Enabling developer_mode isn't recommended for production use.

Enable developer mode, which skips most of the checks performed at startup.

Visibility: tunable
Type: boolean
Default: false

emergency_disable_data_transforms

Override the data_transforms_enabled cluster property and disable Wasm-powered data transforms. This is an emergency shutoff button.

Visibility: user
Type: boolean
Default: false

empty_seed_starts_cluster

Controls how a new cluster is formed. All brokers in a cluster must have the same value. See how the empty_seed_starts_cluster broker property works with the seed_servers broker property to form a cluster.

For backward compatibility, true is the default. Redpanda recommends using false in production environments to prevent accidental cluster formation.

Visibility: user
Type: boolean
Default: true
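A minimal sketch of the recommended production setting, assuming a three-broker cluster. The addresses are placeholders; see the seed_servers property later in this section for complete bootstrap examples.

redpanda:
  empty_seed_starts_cluster: false
  seed_servers:
    - host:
        address: <seed-broker-1-ip>
        port: 33145
    - host:
        address: <seed-broker-2-ip>
        port: 33145
    - host:
        address: <seed-broker-3-ip>
        port: 33145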
fips_mode

Controls whether Redpanda starts in FIPS mode. This property allows for three values:

- Disabled: Redpanda does not start in FIPS mode.
- Permissive: Redpanda performs the same check as enabled, but a warning is logged, and Redpanda continues to run. Redpanda loads the OpenSSL FIPS provider into the OpenSSL library. After this completes, Redpanda is operating in FIPS mode, which means that the TLS cipher suites available to users are limited to the TLSv1.2 and TLSv1.3 NIST-approved cryptographic methods.
- Enabled: Redpanda verifies that the operating system is enabled for FIPS by checking /proc/sys/crypto/fips_enabled. If the file does not exist or does not return 1, Redpanda immediately exits.

Visibility: user
Accepted values: 0 (disabled), 1 (permissive), 2 (enabled)
Default: 0 (disabled)

kafka_api

IP address and port of the Kafka API endpoint that handles requests. Supports multiple listeners with different configurations.

Visibility: user
Type: array
Default: [{ address: "127.0.0.1", port: 9092 }]

Basic example

redpanda:
  kafka_api:
    - address: <bind-address>
      port: <kafka-port>
      authentication_method: sasl

Multiple listeners example (for different networks or authentication methods)

redpanda:
  kafka_api:
    - name: <internal-listener-name>
      address: <internal-bind-address>
      port: <internal-kafka-port>
      authentication_method: none
    - name: <external-listener-name>
      address: <external-bind-address>
      port: <external-kafka-port>
      authentication_method: sasl
    - name: <mtls-listener-name>
      address: <mtls-bind-address>
      port: <mtls-kafka-port>
      authentication_method: mtls_identity

Replace the following placeholders with your values:

- <bind-address>: The IP address to bind the listener to (typically 0.0.0.0 for all interfaces)
- <kafka-port>: The port number for the Kafka API endpoint
- <internal-listener-name>: Name for internal network connections (for example, internal)
- <external-listener-name>: Name for external network connections (for example, external)
- <mtls-listener-name>: Name for mTLS connections (for example, mtls)
- <internal-bind-address>: The IP address for internal connections
- <internal-kafka-port>: The port number for internal Kafka API connections
- <external-bind-address>: The IP address for external connections
- <external-kafka-port>: The port number for external Kafka API connections
- <mtls-bind-address>: The IP address for mTLS connections
- <mtls-kafka-port>: The port number for mTLS Kafka API connections

Authentication

The authentication_method property configures authentication for Kafka API listeners.

Accepted values:

- none - No authentication required
- sasl - SASL authentication (specific mechanisms are configured using the sasl_mechanisms cluster property)
- mtls_identity - Mutual TLS authentication using client certificates

Default: none

When using authentication_method: sasl, you must also configure the available SASL mechanisms (such as SCRAM, PLAIN, GSSAPI, or OAUTHBEARER) using the sasl_mechanisms cluster property. For detailed authentication configuration, see Configure Authentication.
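To show how the listener setting ties to the cluster-side configuration described above, the following sketch requires SASL on a single listener. The listener name is an assumption for this example, and it assumes that SCRAM is enabled in the sasl_mechanisms cluster property (configured at the cluster level, not in redpanda.yaml).

redpanda:
  kafka_api:
    - name: external                # assumed listener name
      address: 0.0.0.0
      port: 9092
      authentication_method: sasl   # clients authenticate with a mechanism enabled in sasl_mechanisms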
kafka_api_tls

Transport Layer Security (TLS) configuration for the Kafka API endpoint.

Visibility: user
Default: []

Example

redpanda:
  kafka_api_tls:
    - name: <kafka-api-listener-name>
      enabled: true
      cert_file: <path-to-cert-file>
      key_file: <path-to-key-file>
      truststore_file: <path-to-truststore-file>
      require_client_auth: false

Replace the following placeholders with your values:

- <kafka-api-listener-name>: Name that matches your Kafka API listener (defined in the kafka_api broker property)
- <path-to-cert-file>: Full path to the TLS certificate file
- <path-to-key-file>: Full path to the TLS private key file
- <path-to-truststore-file>: Full path to the Certificate Authority file

Set require_client_auth: true for mutual TLS (mTLS) authentication, or false for server-side TLS only.

memory_allocation_warning_threshold

Threshold, in bytes, above which a memory allocation triggers a log message.

Unit: bytes
Visibility: tunable
Type: integer
Default: 131073 (128 KiB + 1)

node_id

A number that uniquely identifies the broker within the cluster. If null (the default value), Redpanda automatically assigns an ID. If set, it must be a non-negative value.

Do not set node_id manually. Redpanda assigns unique IDs automatically to prevent issues such as:

- Brokers with empty disks rejoining the cluster.
- Conflicts during recovery or scaling.

Manually setting or reusing node_id values, even for decommissioned brokers, can cause cluster inconsistencies and operational failures.

Broker IDs are immutable. After a broker joins the cluster, its node_id cannot be changed.

Accepted values: [0, 4294967295]
Type: integer
Visibility: user
Default: null

node_id_overrides

List of broker ID and UUID overrides to apply at broker startup. Each entry includes the broker's current UUID and the desired ID and UUID. An entry applies to a given broker only if current_uuid matches that broker's current UUID.

Visibility: user
Type: array
Default: []

Example

redpanda:
  node_id_overrides:
    - current_uuid: "<current-broker-uuid>"
      new_id: <new-broker-id>
      new_uuid: "<new-broker-uuid>"
      ignore_existing_node_id: <ignore-existing-flag>
    - current_uuid: "<another-current-uuid>"
      new_id: <another-new-broker-id>
      new_uuid: "<another-new-uuid>"
      ignore_existing_node_id: <another-ignore-flag>

Replace the following placeholders with your values:

- <current-broker-uuid>: The current UUID of the broker to override
- <new-broker-id>: The new broker ID to assign
- <new-broker-uuid>: The new UUID to assign to the broker
- <ignore-existing-flag>: Set to true to force the override on brokers that already have a node ID, or false to apply the override only to brokers without existing node IDs
- <another-current-uuid>: Additional broker UUID for multiple overrides
- <another-new-broker-id>: Additional new broker ID
- <another-new-uuid>: Additional new UUID
- <another-ignore-flag>: Additional ignore existing node ID flag

openssl_config_file

Path to the configuration file used by OpenSSL to properly load the FIPS-compliant module.

Visibility: user
Type: string
Default: null

openssl_module_directory

Path to the directory that contains the OpenSSL FIPS-compliant module. The filename that Redpanda looks for is fips.so. For a sketch that combines this property with fips_mode and openssl_config_file, see the example after the recovery_mode_enabled entry.

Visibility: user
Type: string
Default: null

rack

A label that identifies a failure zone. Apply the same label to all brokers in the same failure zone. When enable_rack_awareness is set to true at the cluster level, the system uses the rack labels to spread partition replicas across different failure zones.

Visibility: user
Default: null

recovery_mode_enabled

If true, start Redpanda in recovery mode, where user partitions are not loaded and only administrative operations are allowed.

Visibility: user
Type: boolean
Default: false
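The FIPS-related properties in this section (fips_mode, openssl_config_file, and openssl_module_directory) are typically set together. The following is a minimal sketch; the OpenSSL paths are placeholders because they vary by installation.

redpanda:
  fips_mode: 2                                                  # enabled; the OS must report FIPS via /proc/sys/crypto/fips_enabled
  openssl_config_file: <path-to-openssl-config>                 # for example, an openssl.cnf that loads the FIPS provider
  openssl_module_directory: <path-to-openssl-module-directory>  # directory containing fips.so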
rpc_server

IP address and port for the Remote Procedure Call (RPC) server.

Visibility: user
Default: 127.0.0.1:33145

rpc_server_tls

TLS configuration for the RPC server.

Visibility: user
Default: {}

Example

redpanda:
  rpc_server_tls:
    enabled: true
    cert_file: "<path-to-cert-file>"
    key_file: "<path-to-key-file>"
    truststore_file: "<path-to-truststore-file>"
    require_client_auth: true

Replace the following placeholders with your values:

- <path-to-cert-file>: Full path to the RPC TLS certificate file
- <path-to-key-file>: Full path to the RPC TLS private key file
- <path-to-truststore-file>: Full path to the Certificate Authority file

seed_servers

List of the seed servers used to join the current cluster. If the seed_servers list is empty, the broker is a cluster root and forms a new cluster.

When empty_seed_starts_cluster is true, Redpanda enables one broker with an empty seed_servers list to initiate a new cluster. The broker with an empty seed_servers list becomes the cluster root, to which other brokers must connect to join the cluster. Brokers looking to join the cluster should have their seed_servers populated with the cluster root's address, facilitating their connection to the cluster. Only one broker, the designated cluster root, should have an empty seed_servers list during the initial cluster bootstrapping. This ensures a single initiation point for cluster formation.

When empty_seed_starts_cluster is false, Redpanda requires all brokers to start with a known set of brokers listed in seed_servers. The seed_servers list must not be empty and should be identical across these initial seed brokers, containing the addresses of all seed brokers. Brokers not included in the seed_servers list use it to discover and join the cluster, allowing for expansion beyond the foundational members. The seed_servers list must be consistent across all seed brokers to prevent cluster fragmentation and ensure stable cluster formation.

Visibility: user
Type: array
Default: []

Example with empty_seed_starts_cluster: true

# Cluster root broker (seed starter)
redpanda:
  empty_seed_starts_cluster: true
  seed_servers: []

# Additional brokers joining the cluster
redpanda:
  empty_seed_starts_cluster: true
  seed_servers:
    - host:
        address: <seed-broker-ip>
        port: <rpc-port>

Example with empty_seed_starts_cluster: false

# All initial seed brokers use the same configuration
redpanda:
  empty_seed_starts_cluster: false
  seed_servers:
    - host:
        address: <seed-broker-1-ip>
        port: <rpc-port>
    - host:
        address: <seed-broker-2-ip>
        port: <rpc-port>
    - host:
        address: <seed-broker-3-ip>
        port: <rpc-port>

Replace the following placeholders with your values:

- <seed-broker-ip>: IP address of the cluster root broker
- <seed-broker-x-ip>: IP addresses of each seed broker in the cluster
- <rpc-port>: RPC port for brokers (default: 33145)

storage_failure_injection_config_path

Path to the configuration file used for low-level storage failure injection.

Visibility: tunable
Type: string
Default: null

storage_failure_injection_enabled

If true, inject low-level storage failures on the write path. Do not use for production instances.

Visibility: tunable
Type: boolean
Default: false

upgrade_override_checks

Whether to violate safety checks when starting a Redpanda version newer than the cluster's consensus version.

Visibility: tunable
Type: boolean
Default: false

verbose_logging_timeout_sec_max

Maximum duration in seconds for verbose (TRACE or DEBUG) logging. Values configured above this limit are clamped.
If null (the default), there is no limit. This value can be overridden in the Admin API on a per-request basis.

Unit: seconds
Visibility: tunable
Type: integer
Accepted values: [-17179869184, 17179869183]
Default: null

HTTP-Based Authentication

The authentication_method property configures authentication for HTTP-based API listeners (Schema Registry and HTTP Proxy).

Accepted values:

- none - No authentication required (allows anonymous access).
- http_basic - Authentication required. The specific authentication method (Basic vs OIDC) depends on the http_authentication cluster property and the client's Authorization header type.

Default: none

This property works together with the http_authentication cluster property:

- authentication_method (broker property): Controls whether a specific listener requires authentication (http_basic) or allows anonymous access (none)
- http_authentication (cluster property): Controls which authentication methods are available globally (["BASIC"], ["OIDC"], or ["BASIC", "OIDC"])

When authentication_method: http_basic is set on a listener, clients can use any authentication method that is enabled in the http_authentication cluster property. For detailed authentication configuration, see Configure Authentication.

Schema Registry

The Schema Registry provides configuration properties that enable producers and consumers to share the information needed to serialize and deserialize messages. For information on how to edit broker properties for the Schema Registry, see Configure Broker Properties.

Schema Registry shares some configuration property patterns with HTTP Proxy (such as API listeners and authentication methods), but also has additional schema-specific properties for managing schema storage and validation behavior.

Shared properties:

- api_doc_dir - API documentation directory (independent from HTTP Proxy's same-named property)
- schema_registry_api - API listener configuration (similar to HTTP Proxy's pandaproxy_api)
- schema_registry_api_tls - TLS configuration (similar to HTTP Proxy's pandaproxy_api_tls)

Example

schema_registry:
  schema_registry_api:
    address: 0.0.0.0
    port: 8081
    authentication_method: http_basic
  schema_registry_replication_factor: 3
  mode_mutability: true

mode_mutability

Enable modifications to the read-only mode of the Schema Registry. When set to true, the entire Schema Registry or its subjects can be switched to READONLY or READWRITE. This property is useful for preventing unwanted changes to the entire Schema Registry or specific subjects.

Visibility: user
Type: boolean
Default: true

schema_registry_api

Schema Registry API listener address and port.

Visibility: user
Type: array
Default: [{ address: "0.0.0.0", port: 8081 }]

Example

schema_registry:
  schema_registry_api:
    address: 0.0.0.0
    port: 8081
    authentication_method: http_basic

Authentication

For authentication configuration options, see HTTP-Based Authentication.

schema_registry_api_tls

TLS configuration for the Schema Registry API. For an example, see the sketch at the end of this section.

Visibility: user
Default: []

schema_registry_replication_factor

Replication factor for the internal _schemas topic. If unset, defaults to the default_topic_replication cluster property.

Visibility: user
Type: integer
Accepted values: [-32768, 32767]
Default: null

Related topics:

- Cluster property default_topic_replication
- Topic property default_topic_replication
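The schema_registry_api_tls property above has no inline example; it follows the same shape as kafka_api_tls. The following is a hedged sketch with placeholder paths; the listener name is an assumption and should match a named schema_registry_api listener if you use one.

schema_registry:
  schema_registry_api_tls:
    - name: <schema-registry-listener-name>
      enabled: true
      cert_file: <path-to-cert-file>
      key_file: <path-to-key-file>
      truststore_file: <path-to-truststore-file>
      require_client_auth: false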
HTTP Proxy (pandaproxy)

Redpanda HTTP Proxy (formerly called Pandaproxy) allows access to your data through a REST API. For example, you can list topics or brokers, get events, produce events, subscribe to events from topics using consumer groups, and commit offsets for a consumer.

These properties configure the HTTP Proxy server, the REST API endpoint that external clients connect to. Configure these settings to control how clients authenticate to your HTTP Proxy, which network interfaces it listens on, and how it manages client connections. See Use Redpanda with the HTTP Proxy API.

HTTP Proxy shares some configuration property patterns with Schema Registry (such as API listeners and authentication methods), but focuses on client management and proxy functionality.

Example

pandaproxy:
  pandaproxy_api:
    address: 0.0.0.0
    port: 8082
    authentication_method: http_basic
  client_cache_max_size: 10
  client_keep_alive: 300000
  consumer_instance_timeout_ms: 300000

api_doc_dir

Path to the API specifications directory. This directory contains API documentation for both the HTTP Proxy API and the Schema Registry API.

Requires restart: Yes
Visibility: user
Type: string
Default: /usr/share/redpanda/proxy-api-doc

Both HTTP Proxy and Schema Registry have independent api_doc_dir properties that can be configured separately. However, they both default to the same path (/usr/share/redpanda/proxy-api-doc) since they typically use the same API documentation directory.

advertised_pandaproxy_api

Network address for the HTTP Proxy API server to publish to clients.

Visibility: user
Default: null

client_cache_max_size

The maximum number of Kafka client connections that Redpanda can cache in the LRU (least recently used) cache. The LRU cache helps optimize resource utilization by keeping the most recently used clients in memory, facilitating quicker reconnections for frequent clients while limiting memory usage.

Visibility: user
Type: integer
Default: 10

client_keep_alive

Time, in milliseconds, that an idle client connection may remain open to the HTTP Proxy API.

Unit: milliseconds
Visibility: user
Type: integer
Accepted values: [-17592186044416, 17592186044415]
Default: 300000 (5 min)

consumer_instance_timeout_ms

How long to wait for an idle consumer before removing it. A consumer is considered idle when it's not making requests or sending heartbeats.

Unit: milliseconds
Visibility: user
Type: integer
Accepted values: [-17592186044416, 17592186044415]
Default: 300000

pandaproxy_api

REST API listener address and port.

Visibility: user
Type: array
Default: [{ address: "0.0.0.0", port: 8082 }]

Example

pandaproxy:
  pandaproxy_api:
    address: 0.0.0.0
    port: 8082
    authentication_method: http_basic

Authentication

For authentication configuration options, see HTTP-Based Authentication.

pandaproxy_api_tls

TLS configuration for the Pandaproxy API.

Visibility: user
Default: []
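The pandaproxy_api_tls property has no inline example; it follows the same shape as kafka_api_tls. The following is a hedged sketch with placeholder paths; the listener name is an assumption and should match a named pandaproxy_api listener if you use one.

pandaproxy:
  pandaproxy_api_tls:
    - name: <pandaproxy-listener-name>
      enabled: true
      cert_file: <path-to-cert-file>
      key_file: <path-to-key-file>
      truststore_file: <path-to-truststore-file>
      require_client_auth: false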
HTTP Proxy Client

Configuration options for how the HTTP Proxy connects to Kafka brokers. The HTTP Proxy acts as a bridge: external clients make REST API calls to the HTTP Proxy server (configured in the pandaproxy section), and the HTTP Proxy uses these client settings to connect to your Kafka cluster.

Example

pandaproxy_client:
  brokers:
    - address: <kafka-broker-1>
      port: <kafka-port>
    - address: <kafka-broker-2>
      port: <kafka-port>
  sasl_mechanism: <scram-mechanism>
  scram_username: <username>
  scram_password: <password>
  produce_ack_level: -1
  retries: 5

Replace the following placeholders with your values:

- <kafka-broker-1>, <kafka-broker-2>: IP addresses or hostnames of your Kafka brokers
- <kafka-port>: Port number for the Kafka API (default 9092)
- <scram-mechanism>: SCRAM authentication mechanism (SCRAM-SHA-256 or SCRAM-SHA-512)
- <username>: SCRAM username for authentication
- <password>: SCRAM password for authentication

broker_tls

TLS configuration for the Kafka API servers to which the HTTP Proxy client should connect. For an example, see the sketch at the end of this section.

Visibility: user

brokers

Network addresses of the Kafka API servers to which the HTTP Proxy client should connect.

Visibility: user
Type: array
Default: ['127.0.0.1:9092']

client_identifier

Custom identifier to include in the Kafka request header for the HTTP Proxy client. This identifier can help debug or monitor client activities.

Visibility: user
Type: string
Default: test_client

consumer_heartbeat_interval_ms

Interval (in milliseconds) for consumer heartbeats.

Unit: milliseconds
Visibility: user
Type: integer
Accepted values: [-17592186044416, 17592186044415]
Default: 500

consumer_rebalance_timeout_ms

Timeout (in milliseconds) for consumer rebalance.

Unit: milliseconds
Visibility: user
Type: integer
Accepted values: [-17592186044416, 17592186044415]
Default: 2000

consumer_request_max_bytes

Maximum bytes to fetch per request.

Unit: bytes
Visibility: user
Type: integer
Accepted values: [-2147483648, 2147483647]
Default: 1048576

consumer_request_min_bytes

Minimum bytes to fetch per request.

Unit: bytes
Visibility: user
Type: integer
Accepted values: [-2147483648, 2147483647]
Default: 1

consumer_request_timeout_ms

Timeout (in milliseconds) for consumer requests.

Unit: milliseconds
Visibility: user
Type: integer
Accepted values: [-17592186044416, 17592186044415]
Default: 100

consumer_session_timeout_ms

Timeout (in milliseconds) for the consumer session.

Unit: milliseconds
Visibility: user
Type: integer
Accepted values: [-17592186044416, 17592186044415]
Default: 10000

produce_ack_level

Number of acknowledgments the producer requires the leader to have received before considering a request complete.

Visibility: user
Type: integer
Accepted values: -1, 0, 1
Default: -1

produce_batch_delay_ms

Delay (in milliseconds) to wait before sending a batch.

Unit: milliseconds
Visibility: user
Type: integer
Accepted values: [-17592186044416, 17592186044415]
Default: 100

produce_batch_record_count

Number of records to batch before sending to the broker.

Visibility: user
Type: integer
Accepted values: [-2147483648, 2147483647]
Default: 1000

produce_batch_size_bytes

Number of bytes to batch before sending to the broker.

Unit: bytes
Visibility: user
Type: integer
Accepted values: [-2147483648, 2147483647]
Default: 1048576

produce_compression_type

Enable or disable compression by the Kafka client. Specify none to disable compression, or one of the supported types: gzip, snappy, lz4, or zstd.

Visibility: user
Type: string
Default: none

produce_shutdown_delay_ms

Delay (in milliseconds) to allow for a final flush of buffers before shutting down.

Unit: milliseconds
Visibility: user
Type: integer
Accepted values: [-17592186044416, 17592186044415]
Default: 0

retries

Number of times to retry a request to a broker.

Visibility: user
Type: integer
Default: 5
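The produce_* client properties above can be tuned together when throughput matters more than latency. The following is a hedged sketch; the values are assumptions for illustration, not recommendations (the defaults are 100 ms, 1000 records, 1048576 bytes, and none).

pandaproxy_client:
  produce_batch_delay_ms: 250          # wait longer so batches fill up
  produce_batch_record_count: 2000     # allow larger batches than the default
  produce_batch_size_bytes: 2097152    # 2 MiB batch ceiling
  produce_compression_type: lz4        # any of gzip, snappy, lz4, zstd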
retry_base_backoff_ms

Delay (in milliseconds) for the initial retry backoff.

Unit: milliseconds
Visibility: user
Type: integer
Accepted values: [-17592186044416, 17592186044415]
Default: 100

sasl_mechanism

The SASL mechanism to use when the HTTP Proxy client connects to the Kafka API. These credentials are used when the HTTP Proxy API listener has authentication_method: none but the cluster requires authenticated access to the Kafka API. This property specifies which individual SASL mechanism the HTTP Proxy client should use, while the cluster-wide available mechanisms are configured using the sasl_mechanisms cluster property.

Breaking change in Redpanda 25.2: Ephemeral credentials for HTTP Proxy are removed. If your HTTP Proxy API listeners use authentication_method: none, you must configure explicit SASL credentials (scram_username, scram_password, and sasl_mechanism) for HTTP Proxy to authenticate with the Kafka API. This allows any HTTP API user to access Kafka using shared credentials. Redpanda Data recommends enabling HTTP Proxy authentication instead. For configuration instructions, see Configure HTTP Proxy to connect to Redpanda with SASL. For details about this breaking change, see What's new.

Visibility: user
Type: string
Accepted values: SCRAM-SHA-256, SCRAM-SHA-512

While the cluster-wide sasl_mechanisms property may support additional mechanisms (PLAIN, GSSAPI, OAUTHBEARER), HTTP Proxy client connections only support SCRAM mechanisms.

Default: null

scram_password

Password to use for SCRAM authentication mechanisms when the HTTP Proxy client connects to the Kafka API. This property is required when the HTTP Proxy API listener has authentication_method: none but the cluster requires authenticated access to the Kafka API.

Breaking change in Redpanda 25.2: Ephemeral credentials for HTTP Proxy are removed. If your HTTP Proxy API listeners use authentication_method: none, you must configure explicit SASL credentials (scram_username, scram_password, and sasl_mechanism) for HTTP Proxy to authenticate with the Kafka API. This allows any HTTP API user to access Kafka using shared credentials. Redpanda Data recommends enabling HTTP Proxy authentication instead. For configuration instructions, see Configure HTTP Proxy to connect to Redpanda with SASL. For details about this breaking change, see What's new.

Visibility: user
Type: string
Default: null

scram_username

Username to use for SCRAM authentication mechanisms when the HTTP Proxy client connects to the Kafka API. This property is required when the HTTP Proxy API listener has authentication_method: none but the cluster requires authenticated access to the Kafka API.

Breaking change in Redpanda 25.2: Ephemeral credentials for HTTP Proxy are removed. If your HTTP Proxy API listeners use authentication_method: none, you must configure explicit SASL credentials (scram_username, scram_password, and sasl_mechanism) for HTTP Proxy to authenticate with the Kafka API. This allows any HTTP API user to access Kafka using shared credentials. Redpanda Data recommends enabling HTTP Proxy authentication instead. For configuration instructions, see Configure HTTP Proxy to connect to Redpanda with SASL. For details about this breaking change, see What's new.

Visibility: user
Type: string
Default: null
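The broker_tls property earlier in this section controls TLS for the HTTP Proxy client's connections to the Kafka API. The following is a hedged sketch; it assumes the Kafka listeners present certificates signed by a private Certificate Authority, and the client certificate lines apply only if the brokers require mTLS.

pandaproxy_client:
  broker_tls:
    enabled: true
    truststore_file: <path-to-truststore-file>   # CA used to verify the brokers
    cert_file: <path-to-client-cert-file>        # only needed for mTLS
    key_file: <path-to-client-key-file>          # only needed for mTLS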
Schema Registry Client

Configuration options for Schema Registry Client connections to Kafka brokers.

Example

schema_registry_client:
  brokers:
    - address: <kafka-broker-1>
      port: <kafka-port>
    - address: <kafka-broker-2>
      port: <kafka-port>
  sasl_mechanism: <scram-mechanism>
  scram_username: <username>
  scram_password: <password>
  produce_batch_delay_ms: 0
  produce_batch_record_count: 0
  client_identifier: schema_registry_client

Replace the following placeholders with your values:

- <kafka-broker-1>, <kafka-broker-2>: IP addresses or hostnames of your Kafka brokers
- <kafka-port>: Port number for the Kafka API (typically 9092)
- <scram-mechanism>: SCRAM authentication mechanism (SCRAM-SHA-256 or SCRAM-SHA-512)
- <username>: SCRAM username for authentication
- <password>: SCRAM password for authentication

Schema Registry Client uses the same configuration properties as HTTP Proxy Client but with different defaults optimized for Schema Registry operations. The client uses immediate batching (0 ms delay, 0 record count) for low-latency schema operations.

Audit Log Client

Configuration options for Audit Log Client connections to Kafka brokers.

Example

audit_log_client:
  brokers:
    - address: <kafka-broker-1>
      port: <kafka-port>
    - address: <kafka-broker-2>
      port: <kafka-port>
  produce_batch_delay_ms: 0
  produce_batch_record_count: 0
  produce_batch_size_bytes: 0
  produce_compression_type: zstd
  produce_ack_level: 1
  produce_shutdown_delay_ms: 3000
  client_identifier: audit_log_client

Replace the following placeholders with your values:

- <kafka-broker-1>, <kafka-broker-2>: IP addresses or hostnames of your Kafka brokers
- <kafka-port>: Port number for the Kafka API (typically 9092)

Audit Log Client uses the same configuration properties as HTTP Proxy Client but with different defaults optimized for audit logging operations. The client uses immediate batching (0 ms delay, 0 record count) with compression enabled (zstd) and acknowledgment level 1 for reliable audit log delivery.