Cluster Configuration Properties

Cluster configuration properties are the same for all brokers in a cluster, and are set at the cluster level.

For information on how to edit cluster properties, see Configure Cluster Properties or Configure Cluster Properties in Kubernetes.

Some cluster properties require that you restart the cluster for any updates to take effect. See the specific property details to identify whether or not a restart is required.
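For example, you can change a property with rpk and then check whether any brokers need a restart for the change to take effect (the property and value here are illustrative):

    rpk cluster config set fetch_max_bytes 57671680
    rpk cluster config status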

Cluster configuration

abort_index_segment_size

Capacity (in number of transactions) of an abort index segment.

Each partition tracks the offset ranges of aborted transactions to help service client requests. When the number of tracked aborted transactions exceeds this threshold, they are flushed to disk to ease memory pressure and then loaded back on demand.

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [0, 4294967295]

Default: 50000


abort_timed_out_transactions_interval_ms

Interval, in milliseconds, at which Redpanda looks for inactive transactions and aborts them.

Unit: milliseconds

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 10000 (10s)


admin_api_require_auth

Whether Admin API clients must provide HTTP basic authentication headers.

Requires restart: No

Visibility: user

Type: boolean

Default: false
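For example, to require HTTP basic authentication on the Admin API (assuming rpk is already configured against your cluster and an admin user exists):

    rpk cluster config set admin_api_require_auth true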


aggregate_metrics

Enable aggregation of metrics returned by the /metrics endpoint. Aggregation can simplify monitoring by providing summarized data instead of raw, per-instance metrics. Metric aggregation is performed by summing sample values across labels and, where it makes sense, across the shard and/or partition labels.

Requires restart: No

Visibility: user

Type: boolean

Default: false
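As a sketch, assuming the default Admin API address localhost:9644, you can compare the raw and aggregated output by scraping the endpoint before and after enabling the property:

    curl http://localhost:9644/metrics
    rpk cluster config set aggregate_metrics true
    curl http://localhost:9644/metrics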


alive_timeout_ms

The maximum amount of time that can pass since a broker's last status heartbeat before that broker is considered offline and not alive.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 5000


alter_topic_cfg_timeout_ms

The duration, in milliseconds, that Redpanda waits for the replication of entries in the controller log when executing a request to alter topic configurations. This timeout ensures that configuration changes are replicated across the cluster before the alteration request is considered complete.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 5000 (5s)


append_chunk_size

Size of direct write operations to disk, in bytes. A larger chunk size can improve performance for write-heavy workloads but can also increase latency for those writes, as more data is collected before each write operation. A smaller chunk size can decrease write latency but potentially increase the number of disk I/O operations.

Requires restart: Yes

Visibility: tunable

Type: integer

Default: 16384


audit_client_max_buffer_size

Defines the number of bytes allocated by the internal audit client for audit messages. When changing this, you must disable audit logging and then re-enable it for the change to take effect. Consider increasing this if your system generates a very large number of audit records in a short amount of time.

Requires restart: No

Visibility: user

Type: integer

Default: 16777216


audit_enabled

Some values for this property require an Enterprise license. Refer to the Enterprise license required field for specific requirements. For license details, see Redpanda Licenses and Enterprise Features.

Enables or disables audit logging. When you set this to true, Redpanda checks for an existing topic named _redpanda.audit_log. If none is found, Redpanda automatically creates one for you.

Requires restart: No

Visibility: user

Type: boolean

Enterprise license required: true

Default: false


audit_enabled_event_types

List of strings in JSON style identifying the event types to include in the audit log. This may include any of the following: management, produce, consume, describe, heartbeat, authenticate, schema_registry, admin.

Requires restart: No

Visibility: user

Type: array

Default: [management, authenticate, admin]
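For example, to audit produce events in addition to the defaults (array values can be passed as a JSON-style list; event names are taken from the list above):

    rpk cluster config set audit_enabled_event_types '["management","authenticate","admin","produce"]'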


audit_excluded_principals

List of user principals to exclude from auditing.

Requires restart: No

Visibility: user

Type: array

Default: null


audit_excluded_topics

List of topics to exclude from auditing.

Requires restart: No

Visibility: user

Type: array

Default: null


audit_log_num_partitions

Defines the number of partitions used by a newly created audit topic. This configuration applies only to the audit log topic and may be different from the cluster or other topic configurations. It cannot be altered for existing audit log topics.

Unit: number of partitions per topic

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-2147483648, 2147483647]

Default: 12


audit_log_replication_factor

Defines the replication factor for a newly created audit log topic. This configuration applies only to the audit log topic and may be different from the cluster or other topic configurations. It cannot be altered for existing audit log topics. Setting this value is optional. If a value is not provided, Redpanda uses the value specified for internal_topic_replication_factor.

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-32768, 32767]

Default: null


audit_queue_drain_interval_ms

Interval, in milliseconds, at which Redpanda flushes the queued audit log messages to the audit log topic. Longer intervals may help prevent duplicate messages, especially in high throughput scenarios, but they also increase the risk of data loss during shutdowns where the queue is lost.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 500


audit_queue_max_buffer_size_per_shard

Defines the maximum amount of memory in bytes used by the audit buffer in each shard. Once this size is reached, requests to log additional audit messages will return a non-retryable error. Limiting the buffer size per shard helps prevent any single shard from consuming excessive memory due to audit log messages.

Requires restart: Yes

Visibility: tunable

Type: integer

Default: 1048576


auto_create_topics_enabled

Allow automatic topic creation. To prevent excess topics, this property is not supported on Redpanda Cloud BYOC and Dedicated clusters. You should explicitly manage topic creation for these Redpanda Cloud clusters.

If this property is enabled and you produce to a topic that doesn't exist, the topic is created with default settings.

Requires restart: No

Visibility: user

Type: boolean

Default: false


cluster_id

Cluster identifier.

Requires restart: No

Visibility: user

Type: string

Default: null


compacted_log_segment_size

Size (in bytes) for each compacted log segment.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 18446744073709551615]

Default: 268435456


compaction_ctrl_backlog_size

Target backlog size for the compaction controller. If not set, the maximum backlog size is configured to 80% of the total available disk space.

Requires restart: Yes

Visibility: tunable

Type: integer

Default: null


compaction_ctrl_d_coeff

Derivative coefficient for compaction PID controller.

Requires restart: Yes

Visibility: tunable

Type: number

Default: 0.2


compaction_ctrl_i_coeff

Integral coefficient for compaction PID controller.

Requires restart: Yes

Visibility: tunable

Type: number

Default: 0.0


compaction_ctrl_max_shares

Maximum number of I/O and CPU shares that the compaction process can use.

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-32768, 32767]

Default: 1000


compaction_ctrl_min_shares

Minimum number of I/O and CPU shares that the compaction process can use.

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-32768, 32767]

Default: 10


compaction_ctrl_p_coeff

Proportional coefficient for compaction PID controller. This must be negative, because the compaction backlog should decrease when the number of compaction shares increases.

Requires restart: Yes

Visibility: tunable

Type: number

Default: -12.5


controller_backend_housekeeping_interval_ms

Interval between iterations of the controller backend housekeeping loop.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 1000 (1s)


controller_log_accummulation_rps_capacity_acls_and_users_operations

Maximum capacity of rate limit accumulation in controller ACLs and users operations limit.

Requires restart: No

Visibility: tunable

Type: integer

Default: null


controller_log_accummulation_rps_capacity_configuration_operations

Maximum capacity of rate limit accumulation in controller configuration operations limit.

Requires restart: No

Visibility: tunable

Type: integer

Default: null


controller_log_accummulation_rps_capacity_move_operations

Maximum capacity of rate limit accumulation in controller move operations limit.

Requires restart: No

Visibility: tunable

Type: integer

Default: null


controller_log_accummulation_rps_capacity_node_management_operations

Maximum capacity of rate limit accumulation in controller node management operations limit.

Requires restart: No

Visibility: tunable

Type: integer

Default: null


controller_log_accummulation_rps_capacity_topic_operations

Maximum capacity of rate limit accumulation in controller topic operations limit.

Requires restart: No

Visibility: tunable

Type: integer

Default: null


controller_snapshot_max_age_sec

Maximum amount of time before Redpanda attempts to create a controller snapshot after a new controller command appears.

Unit: seconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17179869184, 17179869183]

Default: 60


core_balancing_continuous

Some values for this property require an Enterprise license. Refer to the Enterprise license required field for specific requirements. For license details, see Redpanda Licenses and Enterprise Features.

If set to true, Redpanda moves partitions between cores at runtime to maintain balanced partition distribution.

Requires restart: No

Visibility: user

Type: boolean

Enterprise license required: true

Default: false


core_balancing_debounce_timeout

Interval, in milliseconds, between trigger and invocation of core balancing.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 10000 (10s)


core_balancing_on_core_count_change

If set to true and the number of cores changes after a restart, Redpanda moves partitions between cores to maintain balanced partition distribution.

Requires restart: No

Visibility: user

Type: boolean

Default: true


cpu_profiler_enabled

Enables CPU profiling for Redpanda.

Requires restart: No

Visibility: user

Type: boolean

Default: false


cpu_profiler_sample_period_ms

The sample period for the CPU profiler.

Unit: milliseconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 100
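For example, to enable profiling with a coarser sample period (the values here are illustrative):

    rpk cluster config set cpu_profiler_enabled true
    rpk cluster config set cpu_profiler_sample_period_ms 200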


create_topic_timeout_ms

Timeout, in milliseconds, to wait for new topic creation.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 2000


data_transforms_binary_max_size

The maximum size for a deployable WebAssembly binary that the broker can store.

Requires restart: No

Visibility: tunable

Type: integer

Default: 10485760


data_transforms_commit_interval_ms

The interval at which data transforms commit their progress.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 3000


data_transforms_enabled

Enables WebAssembly-powered data transforms directly in the broker. When data_transforms_enabled is set to true, Redpanda reserves memory for data transforms, even if no transform functions are currently deployed. This memory reservation ensures that adequate resources are available for transform functions when they are needed, but it also means that some memory is allocated regardless of usage.

Requires restart: Yes

Visibility: user

Type: boolean

Default: false


data_transforms_logging_buffer_capacity_bytes

Buffer capacity for transform logs, per shard. Buffer occupancy is calculated as the total size of buffered log messages; that is, logs emitted but not yet produced.

Unit: bytes

Requires restart: Yes

Visibility: tunable

Type: integer

Default: 512000


data_transforms_logging_flush_interval_ms

Flush interval for transform logs. When a timer expires, pending logs are collected and published to the transform_logs topic.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 500


data_transforms_logging_line_max_bytes

Transform log lines are truncated to this length. Truncation occurs after any character escaping.

Unit: bytes

Requires restart: No

Visibility: tunable

Type: integer

Default: 1024


data_transforms_per_core_memory_reservation

The amount of memory to reserve per core for data transform (Wasm) virtual machines. Memory is reserved on boot. The maximum number of functions that can be deployed to a cluster is equal to data_transforms_per_core_memory_reservation / data_transforms_per_function_memory_limit.

Requires restart: Yes

Visibility: user

Type: integer

Default: 20971520
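For example, with the default values of this property and data_transforms_per_function_memory_limit (described next), 20971520 / 2097152 = 10, so up to 10 transform functions can be deployed per core.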


data_transforms_per_function_memory_limit

The amount of memory to give an instance of a data transform (Wasm) virtual machine. The maximum number of functions that can be deployed to a cluster is equal to data_transforms_per_core_memory_reservation / data_transforms_per_function_memory_limit.

Requires restart: Yes

Visibility: user

Type: integer

Default: 2097152


data_transforms_read_buffer_memory_percentage

This property is for Redpanda internal use only. Do not use or modify this property unless specifically instructed to do so by Redpanda Support. Using this property without explicit guidance from Redpanda Support could result in data loss.

The percentage of available memory in the transform subsystem to use for read buffers.

Requires restart: Yes

Visibility: tunable

Type: integer

Default: 45


data_transforms_runtime_limit_ms

The maximum amount of time allowed for a data transform to start up, and for a single record to be transformed.

Unit: milliseconds

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 3000


data_transforms_write_buffer_memory_percentage

This property is for Redpanda internal use only. Do not use or modify this property unless specifically instructed to do so by Redpanda Support. Using this property without explicit guidance from Redpanda Support could result in data loss.

The percentage of available memory in the transform subsystem to use for write buffers.

Requires restart: Yes

Visibility: tunable

Type: integer

Default: 45


datalake_coordinator_snapshot_max_delay_secs

Maximum amount of time the coordinator waits to snapshot after a command appears in the log.

Unit: seconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17179869184, 17179869183]

Default: 900


debug_bundle_auto_removal_seconds

If set, how long debug bundles are kept in the debug bundle storage directory after they are created. If not set, debug bundles are kept indefinitely.

Unit: seconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17179869184, 17179869183]

Default: null


debug_bundle_storage_dir

Path to the debug bundle storage directory. Note: Changing this path does not clean up existing debug bundles. If not set, the debug bundle is stored in the Redpanda data directory specified in the redpanda.yaml broker configuration file.

Requires restart: No

Visibility: user

Type: string

Default: null


debug_load_slice_warning_depth

The recursion depth after which debug logging is enabled automatically for the log reader.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 4294967295]

Default: null


default_leaders_preference

Some values for this property require an Enterprise license. Refer to the Enterprise license required field for specific requirements. For license details, see Redpanda Licenses and Enterprise Features.

Default setting for the preferred location of topic partition leaders. It can be either "none" (no preference) or "racks:<rack1>,<rack2>,..." (prefer brokers with a rack ID from the list).

The list can contain one or more rack IDs. If you specify multiple IDs, Redpanda tries to distribute the partition leader locations equally across brokers in these racks.

If enable_rack_awareness is set to false, leader pinning is disabled across the cluster.

Requires restart: No

Visibility: user

Enterprise license required: Any value other than the default none

Default: none

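For example, to prefer leaders in two racks (the rack IDs here are illustrative; brokers must have matching rack IDs, and enable_rack_awareness must be true):

    rpk cluster config set default_leaders_preference 'racks:rack-a,rack-b'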


default_num_windows

Default number of quota tracking windows.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-32768, 32767]

Default: 10


default_topic_partitions

Default number of partitions per topic.

Unit: number of partitions per topic

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-2147483648, 2147483647]

Default: 1


default_topic_replications

Default replication factor for new topics.

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-32768, 32767]

Default: 1

In Redpanda Cloud, all new topics are created with a replication factor of 3.
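For example, to change the default replication factor for new topics in a self-managed cluster:

    rpk cluster config set default_topic_replications 3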

default_window_sec

Default quota tracking window size in milliseconds.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 1000


development_enable_cloud_topics

Enable cloud topics.

Requires restart: No

Visibility: user

Type: boolean

Default: false


disable_batch_cache

Disable batch cache in log manager.

Requires restart: Yes

Visibility: tunable

Type: boolean

Default: false


disable_cluster_recovery_loop_for_tests

This property is for Redpanda internal use only. Do not use or modify this property unless specifically instructed to do so by Redpanda Support. Using this property without explicit guidance from Redpanda Support could result in data loss.

Disables the cluster recovery loop.

Requires restart: No

Visibility: tunable

Type: boolean

Default: false


disable_metrics

Disable registering the metrics exposed on the internal /metrics endpoint.

Requires restart: Yes

Visibility: user

Type: boolean

Default: false


disable_public_metrics

Disable registering the metrics exposed on the /public_metrics endpoint.

Requires restart: Yes

Visibility: user

Type: boolean

Default: false


disk_reservation_percent

The percentage of total disk capacity that Redpanda will avoid using. This applies both when cloud cache and log data share a disk, as well as when cloud cache uses a dedicated disk.

It is recommended not to run disks near capacity, both to avoid blocking I/O due to low disk space and to avoid performance issues associated with SSD garbage collection.

Unit: percentage of total disk size.

Requires restart: No

Visibility: tunable

Type: number

Default: 25.0


enable_cluster_metadata_upload_loop

Enables cluster metadata uploads. Required for whole cluster restore.

Requires restart: Yes

Visibility: tunable

Type: boolean

Default: true


enable_controller_log_rate_limiting

Limits the write rate for the controller log.

Requires restart: No

Visibility: user

Type: boolean

Default: false


enable_idempotence

Enable idempotent producers.

Requires restart: Yes

Visibility: user

Type: boolean

Default: true


enable_leader_balancer

Enable automatic leadership rebalancing.

Requires restart: No

Visibility: user

Type: boolean

Default: true


enable_metrics_reporter

Enable the cluster metrics reporter. If true, the metrics reporter collects and exports to Redpanda Data a set of customer usage metrics at the interval set by metrics_reporter_report_interval.

The cluster metrics of the metrics reporter are different from monitoring metrics.

  • The metrics reporter exports customer usage metrics for consumption by Redpanda Data.

  • Monitoring metrics are exported for consumption by Redpanda users.

Requires restart: No

Visibility: user

Type: boolean

Default: true


enable_mpx_extensions

Enable Redpanda extensions for MPX.

Requires restart: No

Visibility: tunable

Type: boolean

Default: false


enable_pid_file

Enable the PID file. You should not need to change this setting.

Requires restart: Yes

Visibility: tunable

Type: boolean

Default: true


enable_rack_awareness

Enable rack-aware replica assignment.

Requires restart: No

Visibility: user

Type: boolean

Default: false


enable_sasl

Enable SASL authentication for Kafka connections. Authorization is required to modify this property. See also kafka_enable_authorization.

Requires restart: No

Visibility: user

Type: boolean

Default: false


enable_schema_id_validation

Some values for this property require an Enterprise license. Refer to the Enterprise license required field for specific requirements. For license details, see Redpanda Licenses and Enterprise Features.

Mode to enable server-side schema ID validation.


Requires restart: No

Visibility: user

Accepted Values:

  • none: Schema validation is disabled (no schema ID checks are done). Associated topic properties cannot be modified.

  • redpanda: Schema validation is enabled. Only Redpanda topic properties are accepted.

  • compat: Schema validation is enabled. Both Redpanda and compatible topic properties are accepted.

Enterprise license required: compat, redpanda

Default: none


enable_transactions

Enable transactions (atomic writes).

Requires restart: Yes

Visibility: user

Type: boolean

Default: true


enable_usage

Enables the usage tracking mechanism, which stores a windowed history of kafka/cloud_storage metrics over time.

Requires restart: No

Visibility: user

Type: boolean

Default: false


features_auto_enable

Whether new feature flags auto-activate after upgrades (true) or must wait for manual activation via the Admin API (false).

Requires restart: No

Visibility: tunable

Type: boolean

Default: true


fetch_max_bytes

Maximum number of bytes returned in a fetch request.

Unit: bytes

Requires restart: No

Visibility: user

Type: integer

Default: 57671680


fetch_pid_d_coeff

Derivative coefficient for fetch PID controller.

Requires restart: No

Visibility: tunable

Type: number

Default: 0.0


fetch_pid_i_coeff

Integral coefficient for fetch PID controller.

Requires restart: No

Visibility: tunable

Type: number

Default: 0.01


fetch_pid_max_debounce_ms

The maximum debounce time the fetch PID controller will apply, in milliseconds.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 100


fetch_pid_p_coeff

Proportional coefficient for fetch PID controller.

Requires restart: No

Visibility: tunable

Type: number

Default: 100.0


fetch_pid_target_utilization_fraction

A fraction, between 0 and 1, for the target reactor utilization of the fetch scheduling group.

Unit: fraction

Requires restart: No

Visibility: tunable

Type: number

Default: 0.2


fetch_read_strategy

The strategy used to fulfill fetch requests.

  • polling: Repeatedly polls every partition in the request for new data. The polling interval is set by fetch_reads_debounce_timeout (deprecated).

  • non_polling: The backend is signaled when a partition has new data, so Redpanda doesn’t need to repeatedly read from every partition in the fetch. Redpanda Data recommends using this value for most workloads, because it can improve fetch latency and CPU utilization.

  • non_polling_with_debounce: This option behaves like non_polling, but it includes a debounce mechanism with a fixed delay specified by fetch_reads_debounce_timeout at the start of each fetch. By introducing this delay, Redpanda can accumulate more data before processing, leading to fewer fetch operations and returning larger amounts of data. Enabling this option reduces reactor utilization, but it may also increase end-to-end latency.

Requires restart: No

Visibility: tunable

Accepted Values: polling, non_polling, non_polling_with_debounce

Default: non_polling
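For example, to switch to the debounced strategy (remember to also tune fetch_reads_debounce_timeout, which sets the added delay):

    rpk cluster config set fetch_read_strategy non_polling_with_debounce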


fetch_reads_debounce_timeout

Time to wait for the next read in fetch requests when the requested minimum bytes was not reached.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 1


fetch_session_eviction_timeout_ms

Time after which an inactive fetch session is removed from the fetch session cache. Fetch sessions implement incremental fetch requests, where a consumer does not send the full list of requested partitions with each request; instead, the server tracks them for the consumer.

Unit: milliseconds

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 60000


group_initial_rebalance_delay

Delay added to the rebalance phase to wait for new members.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 3000


group_max_session_timeout_ms

The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.

Unit: milliseconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 300000


group_min_session_timeout_ms

The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection at the cost of more frequent consumer heartbeating, which can overwhelm broker resources.

Unit: milliseconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 6000


group_new_member_join_timeout

Timeout for new member joins.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 30000


group_offset_retention_check_ms

How often the system checks for expired group offsets.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 600000 (10min)


group_offset_retention_sec

Consumer group offset retention, in seconds. To disable offset retention, set this to null.

Unit: seconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17179869184, 17179869183]

Default: 604800 (one week)


group_topic_partitions

Number of partitions in the internal group membership topic.

Unit: number of partitions per topic

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-2147483648, 2147483647]

Default: 16


health_manager_tick_interval

How often the health manager runs.

Unit: milliseconds

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 180000 (3min)


health_monitor_max_metadata_age

Maximum age of the metadata cached in the health monitor of a non-controller broker.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 10000


http_authentication

Some values for this property require an Enterprise license. Refer to the Enterprise license required field for specific requirements. For license details, see Redpanda Licenses and Enterprise Features.

A list of supported HTTP authentication mechanisms.

Requires restart: No

Visibility: user

Type: array

Accepted Values: BASIC, OIDC

Enterprise license required: OIDC

Default: [BASIC]


iceberg_catalog_base_location

Base path for the object storage backed Iceberg catalog. After Iceberg is enabled, do not change this value.

Requires restart: Yes

Visibility: user

Type: string

Default: redpanda-iceberg-catalog


iceberg_catalog_commit_interval_ms

The frequency at which the Iceberg coordinator commits topic files to the catalog. This is the interval between commit transactions across all topics monitored by the coordinator, not the interval between individual commits.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 60000


iceberg_catalog_type

Iceberg catalog type that Redpanda will use to commit table metadata updates. Supported types: 'rest', 'object_storage'.

Requires restart: Yes

Visibility: user

Accepted values: rest, object_storage

Default: object_storage


iceberg_delete

Default value for the redpanda.iceberg.delete topic property that determines if the corresponding Iceberg table is deleted upon deleting the topic.

Requires restart: No

Visibility: user

Type: boolean

Default: true


iceberg_enabled

Some values for this property require an Enterprise license. Refer to the Enterprise license required field for specific requirements. For license details, see Redpanda Licenses and Enterprise Features.

Enables the translation of topic data into Iceberg tables. Setting iceberg_enabled to true activates the feature at the cluster level, but each topic must also set the redpanda.iceberg.enabled topic-level property to true to use it. If iceberg_enabled is set to false, then the feature is disabled for all topics in the cluster, overriding any topic-level settings.

Requires restart: Yes

Visibility: user

Type: boolean

Enterprise license required: true

Default: false
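As a sketch, enabling the feature takes both a cluster-level and a topic-level step (the topic name is illustrative; the cluster property requires a restart):

    rpk cluster config set iceberg_enabled true
    rpk topic alter-config my-topic --set redpanda.iceberg.enabled=true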


iceberg_rest_catalog_client_id

Iceberg REST catalog user ID. This ID is used to query the catalog API for the OAuth token. Required if catalog type is set to rest.

Requires restart: Yes

Visibility: user

Type: string

Default: null


iceberg_rest_catalog_client_secret

Secret to authenticate against Iceberg REST catalog. Required if catalog type is set to rest.

Requires restart: Yes

Visibility: user

Type: string

Default: null


iceberg_rest_catalog_crl_file

Path to certificate revocation list for iceberg_rest_catalog_trust_file.

Requires restart: Yes

Visibility: user

Type: string

Default: null


iceberg_rest_catalog_endpoint

URL of Iceberg REST catalog endpoint.

Requires restart: Yes

Visibility: user

Type: string

Default: null


iceberg_rest_catalog_prefix

Prefix part of the Iceberg REST catalog URL. Prefix is appended to the catalog path, for example /v1/{prefix}/namespaces.

Requires restart: Yes

Visibility: user

Type: string

Default: null


iceberg_rest_catalog_request_timeout_ms

Maximum length of time that Redpanda waits for a response from the REST catalog before aborting the request.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 10000


iceberg_rest_catalog_token

Token used to access the REST Iceberg catalog. If the token is present, Redpanda ignores credentials stored in the properties iceberg_rest_catalog_client_id and iceberg_rest_catalog_client_secret.

Requires restart: Yes

Visibility: user

Type: string

Default: null


iceberg_rest_catalog_trust_file

Path to a file containing a certificate chain to trust for the REST Iceberg catalog.

Requires restart: Yes

Visibility: user

Type: string

Default: null


id_allocator_batch_size

The ID allocator allocates messages in batches (each batch is one log record) and then serves requests from memory without touching the log until the batch is exhausted.

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-32768, 32767]

Default: 1000


id_allocator_log_capacity

Capacity of the id_allocator log, in number of batches. After this capacity is reached, the id_allocator state machine truncates the log's prefix.

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-32768, 32767]

Default: 100


initial_retention_local_target_bytes_default

Initial local retention size target for partitions of topics with Tiered Storage enabled. If no initial local retention target is configured, all locally retained data is delivered to a learner when it joins the partition replica set.

Unit: bytes

Requires restart: No

Visibility: user

Type: integer

Default: null


initial_retention_local_target_ms_default

Initial local retention time target for partitions of topics with Tiered Storage enabled. If no initial local retention target is configured, all locally retained data is delivered to a learner when it joins the partition replica set.

Unit: milliseconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: null


internal_topic_replication_factor

Target replication factor for internal topics.

Unit: number of replicas per topic

Requires restart: Yes

Visibility: user

Type: integer

Accepted values: [-2147483648, 2147483647]

Default: 3


join_retry_timeout_ms

Time between cluster join retries in milliseconds.

Unit: milliseconds

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 5000


kafka_admin_topic_api_rate

Target quota rate (partition mutations per default_window_sec).

Requires restart: No

Visibility: user

Type: integer

Accepted values: [0, 4294967295]

Default: null


kafka_batch_max_bytes

Maximum size of a batch processed by the server. If the batch is compressed, the limit applies to the compressed batch size.

Unit: bytes

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 4294967295]

Default: 1048576


kafka_client_group_byte_rate_quota

Per-group target produce quota byte rate (bytes per second). A client is considered part of the group if its client_id contains the group's clients_prefix.

Requires restart: No

Visibility: user

Default: null


kafka_client_group_fetch_byte_rate_quota

Per-group target fetch quota byte rate (bytes per second). A client is considered part of the group if its client_id contains the group's clients_prefix.

Requires restart: No

Visibility: user

Default: null


kafka_connection_rate_limit

Maximum connections per second for one core. If null (the default), then the number of connections per second is unlimited.

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-9223372036854775808, 9223372036854775807]

Default: null


kafka_connection_rate_limit_overrides

Overrides the maximum connections per second for one core for the specified IP addresses (for example, ['127.0.0.1:90', '50.20.1.1:40']).


Requires restart: No

Visibility: user

Type: array

Default: null
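For example, to set per-core connection rates for two addresses (reusing the addresses from the description above; the rates are illustrative):

    rpk cluster config set kafka_connection_rate_limit_overrides '["127.0.0.1:90","50.20.1.1:40"]'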


kafka_connections_max

Maximum number of Kafka client connections per broker. If null, the property is disabled.

Unit: number of Kafka client connections per broker

Requires restart: No

Visibility: user

Type: integer

Accepted values: [0, 4294967295]

Default: null



kafka_connections_max_overrides

A list of IP addresses for which Kafka client connection limits are overridden and don't apply. For example: ['127.0.0.1:90', '50.20.1.1:40'].

Requires restart: No

Visibility: user

Type: array

Default: {} (empty list)



kafka_connections_max_per_ip

Maximum number of Kafka client connections per IP address, per broker. If null, the property is disabled.

Requires restart: No

Visibility: user

Type: integer

Accepted values: [0, 4294967295]

Default: null



kafka_enable_authorization

Flag to require authorization for Kafka connections. If null, the property is disabled, and authorization is instead enabled by enable_sasl.

Requires restart: No

Visibility: user

Type: boolean

Default: null

Accepted Values:

  • null: Ignored. Authorization is instead controlled by enable_sasl: authorization is enabled if enable_sasl is true.

  • true: Authorization is required.

  • false: Authorization is disabled.



kafka_enable_describe_log_dirs_remote_storage

Whether to include Tiered Storage as a special remote:// directory in DescribeLogDirs Kafka API requests.

Requires restart: No

Visibility: user

Type: boolean

Default: true


kafka_enable_partition_reassignment

Enable the Kafka partition reassignment API.

Requires restart: No

Visibility: user

Type: boolean

Default: true


kafka_group_recovery_timeout_ms

Kafka group recovery timeout.

Unit: milliseconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 30000 (30 sec)


kafka_max_bytes_per_fetch

Limit fetch responses to this many bytes, even if the total of partition bytes limits is higher.

Requires restart: No

Visibility: tunable

Type: integer

Default: 67108864


kafka_memory_batch_size_estimate_for_fetch

The size of the batch used to estimate memory consumption for fetch requests, in bytes. Smaller sizes allow more concurrent fetch requests per shard. Larger sizes prevent running out of memory because of too many concurrent fetch requests.

Requires restart: No

Visibility: user

Type: integer

Default: 1048576


kafka_memory_share_for_fetch

The share of Kafka subsystem memory that can be used for fetch read buffers, as a fraction of the Kafka subsystem memory amount.

Requires restart: Yes

Visibility: user

Type: number

Default: 0.5


kafka_mtls_principal_mapping_rules

Principal mapping rules for mTLS authentication on the Kafka API. If null, the property is disabled.

Requires restart: No

Visibility: user

Type: array

Default: null


kafka_nodelete_topics

A list of topics that are protected from deletion and configuration changes by Kafka clients. Set by default to a list of Redpanda internal topics.

Requires restart: No

Visibility: user

Type: string array

Default: ['_redpanda.audit_log', '__consumer_offsets', '_schemas']

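For example, to additionally protect a topic of your own (the topic name is illustrative; setting the property replaces the whole list, so include the defaults you want to keep):

    rpk cluster config set kafka_nodelete_topics '["_redpanda.audit_log","__consumer_offsets","_schemas","billing-events"]'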


kafka_noproduce_topics

A list of topics that are protected from being produced to by Kafka clients. Set by default to a list of Redpanda internal topics.

Requires restart: No

Visibility: user

Type: array

Default: ['_redpanda.audit_log']


kafka_qdc_depth_alpha

Smoothing factor for Kafka queue depth control depth tracking.

Requires restart: Yes

Visibility: tunable

Type: number

Default: 0.8


kafka_qdc_depth_update_ms

Update frequency for Kafka queue depth control.

Unit: milliseconds

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 7000


kafka_qdc_enable

Enable Kafka queue depth control.

Requires restart: Yes

Visibility: user

Type: boolean

Default: false


kafka_qdc_idle_depth

Queue depth when idleness is detected in Kafka queue depth control.

Requires restart: Yes

Visibility: tunable

Type: integer

Default: 10


kafka_qdc_latency_alpha

Smoothing parameter for Kafka queue depth control latency tracking.

Requires restart: Yes

Visibility: tunable

Type: number

Default: 0.002


kafka_qdc_max_depth

Maximum queue depth used in Kafka queue depth control.

Requires restart: Yes

Visibility: tunable

Type: integer

Default: 100


kafka_qdc_max_latency_ms

Maximum latency threshold for Kafka queue depth control depth tracking.

Unit: milliseconds

Requires restart: Yes

Visibility: user

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 80


kafka_qdc_min_depth

Minimum queue depth used in Kafka queue depth control.

Requires restart: Yes

Visibility: tunable

Type: integer

Default: 1


kafka_qdc_window_count

Number of windows used in Kafka queue depth control latency tracking.

Requires restart: Yes

Visibility: tunable

Type: integer

Default: 12


kafka_qdc_window_size_ms

Window size for Kafka queue depth control latency tracking.

Unit: milliseconds

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 1500


kafka_quota_balancer_min_shard_throughput_bps

The minimum value of the throughput quota a shard can get in the process of quota balancing, expressed in bytes per second. The value applies equally to ingress and egress traffic.

kafka_quota_balancer_min_shard_throughput_bps doesn't override the limit settings kafka_throughput_limit_node_in_bps and kafka_throughput_limit_node_out_bps. Consequently, the value of kafka_throughput_limit_node_in_bps or kafka_throughput_limit_node_out_bps can result in a lower effective throughput than kafka_quota_balancer_min_shard_throughput_bps.

Both kafka_quota_balancer_min_shard_throughput_ratio and kafka_quota_balancer_min_shard_throughput_bps can be specified at the same time. In this case, the balancer will not decrease the effective shard quota below the largest bytes-per-second (bps) value of each of these two properties.

If set to 0, no minimum is enforced.

Unit: bytes per second


Requires restart: No

Visibility: user

Type: integer

Accepted values: [-9223372036854775808, 9223372036854775807]

Default: 256



kafka_quota_balancer_min_shard_throughput_ratio

The minimum value of the throughput quota a shard can get in the process of quota balancing, expressed as a ratio of default shard quota. While the value applies equally to ingress and egress traffic, the default shard quota can be different for ingress and egress and therefore result in different minimum throughput bytes-per-second (bps) values.

Both kafka_quota_balancer_min_shard_throughput_ratio and kafka_quota_balancer_min_shard_throughput_bps can be specified at the same time. In this case, the balancer will not decrease the effective shard quota below the largest bps value of each of these two properties.

If set to 0.0, the minimum is disabled. If set to 1.0, the balancer won’t be able to rebalance quota without violating this ratio, preventing the balancer from adjusting shards' quotas.

Unit: ratio of default shard quota


Requires restart: No

Visibility: user

Type: number

Default: 0.01



kafka_quota_balancer_node_period_ms

Intra-node throughput quota balancer invocation period, in milliseconds. When set to 0, the balancer is disabled and makes all the throughput quotas immutable.

Unit: milliseconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 0


kafka_quota_balancer_window_ms

Time window used to average current throughput measurement for quota balancer, in milliseconds.

Unit: milliseconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 5000


kafka_request_max_bytes

Maximum size of a single request processed using the Kafka API.

Unit: bytes

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 4294967295]

Default: 104857600


kafka_rpc_server_stream_recv_buf

Maximum size of the user-space receive buffer. If null, this limit is not applied.

Unit: bytes

Requires restart: Yes

Visibility: tunable

Type: integer

Default: null


kafka_rpc_server_tcp_recv_buf

Size of the Kafka server TCP receive buffer. If null, the property is disabled.

Unit: bytes

Requires restart: Yes

Visibility: user

Type: integer

Accepted values: [-2147483648, 2147483647]

Default: null


kafka_rpc_server_tcp_send_buf

Size of the Kafka server TCP transmit buffer. If null, the property is disabled.

Unit: bytes

Requires restart: Yes

Visibility: user

Type: integer

Accepted values: [-2147483648, 2147483647] aligned to 4096 bytes

Default: null


kafka_sasl_max_reauth_ms

The maximum time between Kafka client reauthentications. If a client has not reauthenticated a connection within this time frame, that connection is torn down.

If this property is not set (or set to null), session expiry is disabled, and a connection could live long after the client’s credentials are expired or revoked.

Unit: milliseconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: null


kafka_schema_id_validation_cache_capacity

Per-shard capacity of the cache for validating schema IDs.

Requires restart: No

Visibility: tunable

Type: integer

Default: 128


kafka_tcp_keepalive_timeout

TCP keepalive idle timeout in seconds for Kafka connections. This describes the timeout between TCP keepalive probes that the remote site successfully acknowledged. Refers to the TCP_KEEPIDLE socket option. When changed, applies to new connections only.

Unit: seconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17179869184, 17179869183]

Default: 120


kafka_tcp_keepalive_probe_interval_seconds

TCP keepalive probe interval in seconds for Kafka connections. This describes the timeout between unacknowledged TCP keepalives. Refers to the TCP_KEEPINTVL socket option. When changed, applies to new connections only.

Unit: seconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17179869184, 17179869183]

Default: 60


kafka_tcp_keepalive_probes

TCP keepalive unacknowledged probes until the connection is considered dead for Kafka connections. Refers to the TCP_KEEPCNT socket option. When changed, applies to new connections only.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 4294967295]

Default: 3


kafka_throughput_control

List of throughput control groups that define exclusions from node-wide throughput limits. Clients excluded from node-wide throughput limits are still potentially subject to client-specific throughput limits.

Each throughput control group consists of:

  • name (optional) - any unique group name

  • client_id - regex to match client_id

Example values:

  • [{'name': 'first_group','client_id': 'client1'}, {'client_id': 'consumer-\d+'}]

  • [{'name': 'catch all'}]

  • [{'name': 'missing_id', 'client_id': '+empty'}]

A connection is assigned to the first matching group and is then excluded from throughput control. A name is not required but can help you categorize the exclusions. Specifying +empty for the client_id matches clients that opt not to send a client_id. You can also omit the client_id and specify only a name, as shown. In that case, all clients match the rule, and Redpanda excludes all of them from node-wide throughput control.

Requires restart: No

Visibility: user

Type: string array

Accepted Values: list of control groups of the format {'name' : 'group name', 'client_id' : 'regex pattern'}

Default: [] (empty list)

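For example, to exempt trusted admin clients from node-wide limits (the group name and client_id pattern here are illustrative):

    rpk cluster config set kafka_throughput_control '[{"name": "admin_clients", "client_id": "rpk-.*"}]'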


kafka_throughput_controlled_api_keys

List of Kafka API keys that are subject to cluster-wide and node-wide throughput limit control.

Requires restart: No

Visibility: user

Type: string array

Default: ["produce", "fetch"]


kafka_throughput_limit_node_in_bps

The maximum rate of all ingress Kafka API traffic for a node. Includes all Kafka API traffic (requests, responses, headers, fetched data, produced data, etc.). If null, the property is disabled, and traffic is not limited.

Unit: bytes per second

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-9223372036854775808, 9223372036854775807]

Default: null

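For example, to cap a node's total Kafka ingress at about 100 MiB per second:

    rpk cluster config set kafka_throughput_limit_node_in_bps 104857600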


kafka_throughput_limit_node_out_bps

The maximum rate of all egress Kafka traffic for a node. Includes all Kafka API traffic (requests, responses, headers, fetched data, produced data, etc.). If null, the property is disabled, and traffic is not limited.

Unit: bytes per second

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-9223372036854775808, 9223372036854775807]

Default: null



kafka_throughput_replenish_threshold

Threshold for refilling the token bucket as part of enforcing throughput limits. This only applies when kafka_throughput_throttling_v2 is true.

This threshold is evaluated with each request for data. When the number of tokens to replenish exceeds this threshold, tokens are added to the token bucket. This avoids updating the atomic token count on every request. The range for this threshold is automatically clamped to the corresponding throughput limit for ingress and egress.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: For ingress, [1, kafka_throughput_limit_node_in_bps]. For egress, [1, kafka_throughput_limit_node_out_bps]

Default: 1



kafka_throughput_throttling_v2

Enables an updated algorithm for enforcing node throughput limits based on a shared token bucket, introduced with Redpanda v23.3.8. Set this property to false if you need to use the quota balancing algorithm from Redpanda v23.3.7 and older. This property defaults to true for all new or upgraded Redpanda clusters.

Disabling this property is not recommended. It causes your Redpanda cluster to use an outdated throughput throttling mechanism. Only set this to false when advised to do so by Redpanda support.

Requires restart: No

Visibility: tunable

Type: boolean

Default: true


kvstore_flush_interval

Key-value store flush interval (in milliseconds).

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 10


kvstore_max_segment_size

Key-value maximum segment size (in bytes).

Requires restart: Yes

Visibility: tunable

Type: integer

Default: 16777216


leader_balancer_idle_timeout

Leadership rebalancing idle timeout.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 120000 (2min)


leader_balancer_mute_timeout

Leadership rebalancing mute timeout.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 300000 (5min)


leader_balancer_node_mute_timeout

Leadership rebalancing node mute timeout.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 20000


leader_balancer_transfer_limit_per_shard

Per-shard limit for in-progress leadership transfers.

Requires restart: No

Visibility: tunable

Type: integer

Default: 512


legacy_group_offset_retention_enabled

Group offset retention is enabled by default starting in Redpanda version 23.1. To enable offset retention after upgrading from an older version, set this option to true.

Requires restart: No

Visibility: tunable

Type: boolean

Default: false


legacy_permit_unsafe_log_operation

Flag to enable a Redpanda cluster operator to use unsafe control characters within strings, such as consumer group names or user names. This flag applies only for Redpanda clusters that were originally on version 23.1 or earlier and have been upgraded to version 23.2 or later. Starting in version 23.2, newly-created Redpanda clusters ignore this property.

Requires restart: No

Visibility: user

Type: boolean

Default: true


legacy_unsafe_log_warning_interval_sec

Period at which to log a warning about using unsafe strings containing control characters. If unsafe strings are permitted by legacy_permit_unsafe_log_operation, a warning will be logged at an interval specified by this property.

Unit: seconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17179869184, 17179869183]

Default: 300


log_cleanup_policy

Default cleanup policy for topic logs.

The topic property cleanup.policy overrides the value of log_cleanup_policy at the topic level.

Requires restart: No

Visibility: user

Accepted Values: 'compact', 'delete', 'compact,delete'

Default: delete


log_compaction_interval_ms

How often to trigger background compaction.

Unit: milliseconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 10000


log_compaction_use_sliding_window

Use sliding window compaction.

Requires restart: Yes

Visibility: tunable

Type: boolean

Default: true


log_compression_type

Default topic compression type.

The topic property compression.type overrides the value of log_compression_type at the topic level.

Requires restart: No

Visibility: user

Accepted Values: gzip, snappy, lz4, zstd, producer, none.

Default: producer


log_disable_housekeeping_for_tests

Disables the housekeeping loop for local storage. This property is used to simplify testing, and should not be set in production.

Requires restart: Yes

Visibility: tunable

Type: boolean

Default: false


log_message_timestamp_alert_after_ms

Threshold in milliseconds for alerting on messages with a timestamp after the broker’s time, meaning the messages are in the future relative to the broker’s clock.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 7200000 (2h)


log_message_timestamp_alert_before_ms

Threshold in milliseconds for alerting on messages with a timestamp before the broker’s time, meaning the messages are in the past relative to the broker’s clock. To disable this check, set to null.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: null


log_message_timestamp_type

Default timestamp type for topic messages (CreateTime or LogAppendTime).

The topic property message.timestamp.type overrides the value of log_message_timestamp_type at the topic level.

Requires restart: No

Visibility: user

Accepted Values: CreateTime, LogAppendTime.

Default: CreateTime


log_retention_ms

The amount of time to keep a log file before deleting it (in milliseconds). If set to -1, no time limit is applied. This is a cluster-wide default when a topic does not set or disable retention.ms.

Unit: milliseconds

Requires restart: No

Visibility: user

Accepted values: [-17592186044416, 17592186044415]

Default: 604800000 (one week)


log_segment_ms

Default lifetime of log segments. If null, the property is disabled, and no default lifetime is set. Any value under 60 seconds (60000 ms) is rejected. This property can also be set in the Kafka API using the Kafka-compatible alias, log.roll.ms.

The topic property segment.ms overrides the value of log_segment_ms at the topic level.

Unit: milliseconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 1209600000 (2 weeks)



log_segment_ms_max

Upper bound on topic segment.ms: higher values will be clamped to this value.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 31536000000 (one year)


log_segment_ms_min

Lower bound on topic segment.ms: lower values will be clamped to this value.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 600000 (10min)
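For example, with the defaults above, a topic that sets segment.ms to 300000 (5 minutes) is clamped up to 600000 (10 minutes) by log_segment_ms_min, and a topic that sets segment.ms to 63072000000 (2 years) is clamped down to 31536000000 (one year) by log_segment_ms_max.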


log_segment_size

Default log segment size in bytes for topics which do not set segment.bytes.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 18446744073709551615]

Default: 134217728


log_segment_size_jitter_percent

Random variation to the segment size limit used for each partition.

Unit: percent

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [0, 65535]

Default: 5


log_segment_size_max

Upper bound on topic segment.bytes: higher values will be clamped to this limit.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 18446744073709551615]

Default: null


log_segment_size_min

Lower bound on topic segment.bytes: lower values will be clamped to this limit.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 18446744073709551615]

Default: 1048576


lz4_decompress_reusable_buffers_disabled

Disable reusable preallocated buffers for LZ4 decompression.

Requires restart: Yes

Visibility: tunable

Type: boolean

Default: false


max_compacted_log_segment_size

Maximum compacted segment size after consolidation.

Requires restart: No

Visibility: tunable

Type: integer

Default: 5368709120


max_concurrent_producer_ids

Maximum number of active producer sessions. When the threshold is passed, Redpanda terminates old sessions. When an idle producer corresponding to a terminated session wakes up and produces, its message batches are rejected, and an out-of-order sequence error is emitted. Consumers don't affect this setting.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 18446744073709551615]

Default: 18446744073709551615


max_in_flight_pandaproxy_requests_per_shard

Maximum number of in-flight HTTP requests to HTTP Proxy permitted per shard. Any additional requests above this limit will be rejected with a 429 error.

Requires restart: No

Visibility: tunable

Type: integer

Default: 500


max_in_flight_schema_registry_requests_per_shard

Maximum number of in-flight HTTP requests to Schema Registry permitted per shard. Any additional requests above this limit will be rejected with a 429 error.

Requires restart: No

Visibility: tunable

Type: integer

Default: 500


max_kafka_throttle_delay_ms

Fail-safe maximum throttle delay on Kafka requests.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 30000


max_transactions_per_coordinator

Specifies the maximum number of active transaction sessions per coordinator. When the threshold is passed, Redpanda terminates old sessions. When an idle producer corresponding to a terminated session wakes up and produces, its batches are rejected with an invalid producer epoch or invalid_producer_id_mapping error, depending on the transaction execution phase.

For details, see Transaction usage tips.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 18446744073709551615]

Default: 18446744073709551615


members_backend_retry_ms

Time between members backend reconciliation loop retries.

Unit: milliseconds

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 5000 (5s)


memory_abort_on_alloc_failure

If true, the Redpanda process will terminate immediately when an allocation cannot be satisfied due to memory exhaustion. If false, an exception is thrown.

Requires restart: No

Visibility: tunable

Type: boolean

Default: true


metadata_dissemination_interval_ms

Interval for metadata dissemination batching.

Unit: milliseconds

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 3000


metadata_dissemination_retries

Number of times a request handler retries a lookup of a topic's metadata-like shard before the request fails. Internal topic metadata (for topics such as the transaction and consumer offsets topics) may be missing because these topics are usually created on demand when the cluster is first used, and it can take some time for the creation to complete and the metadata to propagate to all brokers, particularly the broker handling the request. In the meantime, Redpanda waits and retries up to this many times.

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-32768, 32767]

Default: 30


metadata_dissemination_retry_delay_ms

Delay before retrying a topic lookup in a shard or other meta tables.

Unit: milliseconds

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 500


metadata_status_wait_timeout_ms

Maximum time to wait in a metadata request for cluster health to be refreshed.

Unit: milliseconds

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 2000


metrics_reporter_report_interval

Cluster metrics reporter report interval.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 86400000 (one day)


metrics_reporter_tick_interval

Cluster metrics reporter tick interval.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 60000 (1min)


metrics_reporter_url

URL of the cluster metrics reporter.

Requires restart: No

Visibility: tunable

Type: string


minimum_topic_replications

Minimum allowable replication factor for topics in this cluster. The set value must be positive, odd, and equal to or less than the number of available brokers. Changing this parameter only restricts newly-created topics. Redpanda returns an INVALID_REPLICATION_FACTOR error on any attempt to create a topic with a replication factor less than this property.

If you change the minimum_topic_replications setting, the replication factor of existing topics remains unchanged. However, Redpanda will log a warning on start-up with a list of any topics that have fewer replicas than this minimum. For example, you might see a message such as Topic X has a replication factor less than specified minimum: 1 < 3.

Unit: minimum number of replicas per topic

Requires restart: No

Visibility: user

Type: integer

Accepted values: [1, 32767]

Default: 1
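
For example, assuming a cluster with at least three brokers, raising the minimum to 3 causes creation of any topic with a lower replication factor (such as the hypothetical topic events below) to fail with INVALID_REPLICATION_FACTOR:

  rpk cluster config set minimum_topic_replications 3
  rpk topic create events -r 1
  # rejected: INVALID_REPLICATION_FACTOR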


node_isolation_heartbeat_timeout

How long after the last heartbeat request a node will wait before considering itself to be isolated.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-9223372036854775808, 9223372036854775807]

Default: 3000


node_management_operation_timeout_ms

Timeout for executing node management operations.

Unit: milliseconds

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 5000 (5s)


node_status_interval

Time interval between two node status messages. Node status messages establish liveness status outside of the Raft protocol.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 100


node_status_reconnect_max_backoff_ms

Maximum backoff (in milliseconds) to reconnect to an unresponsive peer during node status liveness checks.

Unit: milliseconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 15000


oidc_clock_skew_tolerance

The amount of clock skew, in seconds, to tolerate when validating the expiry claim in the token.

Unit: seconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17179869184, 17179869183]

Default: 30


oidc_discovery_url

The URL pointing to the well-known discovery endpoint for the OIDC provider.

Requires restart: No

Visibility: user

Type: string


oidc_keys_refresh_interval

The frequency of refreshing the JSON Web Keys (JWKS) used to validate access tokens.

Unit: milliseconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17179869184, 17179869183]

Default: 3600000


oidc_principal_mapping

Rule for mapping JWT payload claim to a Redpanda user principal.

Requires restart: No

Visibility: user

Type: string

Default: $.sub


oidc_token_audience

A string representing the intended recipient of the token.

Requires restart: No

Visibility: user

Type: string

Default: redpanda
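
A minimal sketch of wiring the OIDC properties together, assuming a hypothetical identity provider at idp.example.com:

  rpk cluster config set oidc_discovery_url https://idp.example.com/.well-known/openid-configuration
  rpk cluster config set oidc_token_audience redpanda
  rpk cluster config set oidc_principal_mapping '$.sub'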


partition_autobalancing_concurrent_moves

Number of partitions that can be reassigned at once.

Requires restart: No

Visibility: tunable

Type: integer

Default: 50


partition_autobalancing_max_disk_usage_percent

This property applies only when partition_autobalancing_mode is set to continuous.

When the disk usage of a node exceeds this threshold, it triggers Redpanda to move partitions off of the node.

Unit: percent of disk used

Requires restart: No

Visibility: user

Type: integer

Accepted values: [0, 4294967295]

Default: 80


partition_autobalancing_min_size_threshold

Minimum size of a partition that is prioritized when rebalancing a cluster after the disk size threshold is breached. By default, this value is calculated automatically.

Requires restart: No

Visibility: tunable

Type: integer

Default: null


partition_autobalancing_mode

Some values for this property require an Enterprise license. Refer to the Enterprise license required field for specific requirements. For license details, see Redpanda Licenses and Enterprise Features.

Mode of partition balancing for a cluster.

Requires restart: No

Visibility: user

Accepted values: off, node_add, continuous

Enterprise license required: continuous

Default: node_add
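
For example, to enable continuous balancing (which requires an Enterprise license) and confirm the change:

  rpk cluster config set partition_autobalancing_mode continuous
  rpk cluster config get partition_autobalancing_mode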


partition_autobalancing_node_availability_timeout_sec

This property applies only when partition_autobalancing_mode is set to continuous.

When a node is unavailable for at least this timeout duration, it triggers Redpanda to move partitions off of the node.

Unit: seconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17179869184, 17179869183]

Default: 900 (15min)


partition_autobalancing_tick_interval_ms

Partition autobalancer tick interval.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 30000 (30s)


partition_autobalancing_tick_moves_drop_threshold

If the number of scheduled tick moves drops by this ratio, a new tick is scheduled immediately. Valid values are (0, 1]. For example, with a value of 0.2 and 100 scheduled moves in a tick, a new tick is scheduled when the in-progress moves are fewer than 80.

Requires restart: No

Visibility: tunable

Type: number

Default: 0.2


partition_autobalancing_topic_aware

If true, Redpanda prioritizes balancing a topic’s partition replica count evenly across all brokers while it’s balancing the cluster’s overall partition count. Because different topics in a cluster can have vastly different load profiles, this better distributes the workload of the most heavily-used topics evenly across brokers.

Requires restart: No

Visibility: user

Type: boolean

Default: true


partition_manager_shutdown_watchdog_timeout

A threshold used to detect partitions that might have become stuck while shutting down. After this threshold elapses, a watchdog in the partition manager logs information about the partition shutdown not making progress.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 30000


pp_sr_smp_max_non_local_requests

Maximum number of cross-core (inter-shard) requests pending in the HTTP Proxy and Schema Registry seastar::smp group. For more details, see the seastar::smp_service_group documentation.

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [0, 4294967295]

Default: null


quota_manager_gc_sec

Quota manager GC frequency in milliseconds.

Unit: milliseconds

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 30000 (30s)


election_timeout_ms

Raft election timeout expressed in milliseconds.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 1500


raft_enable_longest_log_detection

Enables an additional step in leader election where a candidate is allowed to wait for all the replies from the brokers it requested votes from. This may introduce a small delay when recovering from failure, but it prevents truncation if any of the replicas have more data than the majority.

Requires restart: No

Visibility: tunable

Type: boolean

Default: true


raft_enable_lw_heartbeat

Enables the Raft lightweight heartbeat optimization.

Requires restart: No

Visibility: tunable

Type: boolean

Default: true


raft_heartbeat_disconnect_failures

The number of failed heartbeats after which an unresponsive TCP connection is forcibly closed. To disable forced disconnection, set to 0.

Requires restart: Yes

Visibility: tunable

Type: integer

Default: 3


raft_heartbeat_interval_ms

The interval, in milliseconds, between Raft leader heartbeats.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [1, 17592186044415]

Default: 150


raft_heartbeat_timeout_ms

Raft heartbeat RPC (remote procedure call) timeout. Raft uses a heartbeat mechanism to maintain leadership authority and to trigger leader elections. The raft_heartbeat_interval_ms is a periodic heartbeat sent by the partition leader to all followers to declare its leadership. If a follower does not receive a heartbeat within the raft_heartbeat_timeout_ms, then it triggers an election to choose a new partition leader.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 3000


raft_io_timeout_ms

Raft I/O timeout.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 10000


raft_learner_recovery_rate

Raft learner recovery rate limit. Throttles the rate of data communicated to nodes (learners) that need to catch up to leaders. This rate limit is placed on a node sending data to a recovering node. Each sending node is limited to this rate. The recovering node accepts data as fast as possible according to the combined limits of all healthy nodes in the cluster. For example, if two nodes are sending data to the recovering node, and raft_learner_recovery_rate is 100 MB/sec, then the recovering node will recover at a rate of 200 MB/sec.

Requires restart: No

Visibility: tunable

Type: integer

Default: 104857600
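
For example, to limit each sending node to 50 MB/sec (50 × 1024 × 1024 = 52428800 bytes):

  rpk cluster config set raft_learner_recovery_rate 52428800

With two healthy senders, a recovering node would then catch up at roughly 100 MB/sec.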


raft_max_concurrent_append_requests_per_follower

Maximum number of concurrent append entry requests sent by the leader to one follower.

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [0, 4294967295]

Default: 16


raft_max_recovery_memory

Maximum memory that can be used for reads in the Raft recovery process. If null (the default), the limit is 15% of total memory.

Requires restart: No

Visibility: tunable

Type: integer

Default: null


raft_recovery_concurrency_per_shard

Number of partitions that may simultaneously recover data to a particular shard. This number is limited to avoid overwhelming nodes when they come back online after an outage.

Requires restart: No

Visibility: tunable

Type: integer

Default: 64


raft_recovery_default_read_size

Specifies the default size of a read issued during Raft follower recovery.

Requires restart: No

Visibility: tunable

Type: integer

Default: 524288


raft_recovery_throttle_disable_dynamic_mode

This property is for Redpanda internal use only. Do not use or modify this property unless specifically instructed to do so by Redpanda Support. Using this property without explicit guidance from Redpanda Support could result in data loss.

Disables the cross-shard sharing used to throttle recovery traffic. Use this only to debug unexpected problems.

Requires restart: No

Visibility: tunable

Type: boolean

Default: false


raft_replica_max_flush_delay_ms

Maximum delay between two subsequent flushes. After this delay, the log is automatically force flushed.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 100


raft_replica_max_pending_flush_bytes

Maximum number of bytes that are not flushed per partition. If the configured threshold is reached, the log is automatically flushed even if it has not been explicitly requested.

Unit: bytes

Requires restart: No

Visibility: tunable

Type: integer

Default: 262144
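
This property and raft_replica_max_flush_delay_ms work as a pair: a flush is forced when either threshold is reached first. For example, to flush at least every 50 ms or every 131072 bytes (128 KiB) of unflushed data, whichever comes first:

  rpk cluster config set raft_replica_max_flush_delay_ms 50
  rpk cluster config set raft_replica_max_pending_flush_bytes 131072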


raft_replicate_batch_window_size

Maximum size of requests cached for replication.

Requires restart: Yes

Visibility: tunable

Type: integer

Default: 1048576


raft_smp_max_non_local_requests

Maximum number of cross-core (inter-shard) requests pending in the Raft seastar::smp group. For details, see the seastar::smp_service_group documentation.

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [0, 4294967295]

Default: null


raft_timeout_now_timeout_ms

Timeout for Raft’s timeout_now RPC. This RPC is used to force a follower to dispatch a round of votes immediately.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 1000


raft_transfer_leader_recovery_timeout_ms

Timeout waiting for follower recovery when transferring leadership.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 10000


readers_cache_eviction_timeout_ms

Duration after which inactive readers are evicted from cache.

Unit: milliseconds

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 30000


readers_cache_target_max_size

Maximum desired number of readers cached per NTP. This is a soft limit: the number of readers in the cache may temporarily exceed it while cleanup runs in the background.

Requires restart: No

Visibility: tunable

Type: integer

Default: 200


reclaim_batch_cache_min_free

Minimum amount of free memory maintained by the batch cache background reclaimer.

Requires restart: Yes

Visibility: tunable

Type: integer

Default: 67108864


reclaim_growth_window

Starting from the last point in time when memory was reclaimed from the batch cache, this is the duration during which the amount of memory to reclaim grows at a significant rate, based on heuristics about the amount of available memory.

Unit: milliseconds

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 3000


reclaim_max_size

Maximum batch cache reclaim size.

Requires restart: Yes

Visibility: tunable

Type: integer

Default: 4194304


reclaim_min_size

Minimum batch cache reclaim size.

Requires restart: Yes

Visibility: tunable

Type: integer

Default: 131072


reclaim_stable_window

If the duration since the last time memory was reclaimed is longer than the amount of time specified in this property, the memory usage of the batch cache is considered stable, so only the minimum size (reclaim_min_size) is set to be reclaimed.

Unit: milliseconds

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 10000


recovery_append_timeout_ms

Timeout for append entry requests issued while updating a stale follower.

Unit: milliseconds

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 5000


release_cache_on_segment_roll

Flag for specifying whether or not to release cache when a full segment is rolled.

Requires restart: No

Visibility: tunable

Type: boolean

Default: false


replicate_append_timeout_ms

Timeout for append entry requests issued while replicating entries.

Unit: milliseconds

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 3000


retention_bytes

Default maximum number of bytes per partition on disk before triggering deletion of the oldest messages. If null (the default value), no limit is applied.

The topic property retention.bytes overrides the value of retention_bytes at the topic level.

Unit: bytes per partition.

Requires restart: No

Visibility: user

Type: integer

Default: null
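
For example, to cap each partition at 10 GiB (10 × 1024³ = 10737418240 bytes) by default, while disabling size-based retention on a hypothetical topic named audit:

  rpk cluster config set retention_bytes 10737418240
  rpk topic alter-config audit --set retention.bytes=-1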


retention_local_strict

Flag that controls whether local retention is strictly enforced for Tiered Storage topics. When this flag is disabled (the default), topics may expand to consumable retention policy limits: non-local retention settings apply, and local retention settings only inform data removal policies in low-disk-space scenarios.

Requires restart: No

Visibility: user

Type: boolean

Default: false


retention_local_strict_override

Trim log data when a cloud topic reaches its local retention limit. When this option is disabled, Redpanda allows partitions to grow past the local retention limit, and data is trimmed automatically as storage reaches the configured target size.

Requires restart: No

Visibility: user

Type: boolean

Default: true


retention_local_target_bytes_default

Local retention size target for partitions of topics with object storage write enabled. If null, the property is disabled.

This property can be overridden on a per-topic basis by setting retention.local.target.bytes in each topic enabled for Tiered Storage. See Configure message retention.

Both retention_local_target_bytes_default and retention_local_target_ms_default can be set. The limit that is reached first is applied.

Unit: bytes

Requires restart: No

Visibility: user

Type: integer

Default: null


retention_local_target_capacity_bytes

The target capacity (in bytes) that log storage will try to use before additional retention rules take over to trim data to meet the target. When no target is specified, storage usage is unbounded.

Redpanda Data recommends setting only one of retention_local_target_capacity_bytes or retention_local_target_capacity_percent. If both are set, the minimum of the two is used as the effective target capacity.

Unit: bytes

Requires restart: No

Visibility: user

Type: integer

Accepted values: [0, 18446744073709551615]

Default: null


retention_local_target_capacity_percent

The target capacity, as a percent of unreserved space (see disk_reservation_percent), that log storage will try to use before additional retention rules take over to trim data to meet the target. When no target is specified, storage usage is unbounded.

Redpanda Data recommends setting only one of retention_local_target_capacity_bytes or retention_local_target_capacity_percent. If both are set, the minimum of the two is used as the effective target capacity.

Unit: percentage of total disk size

Requires restart: No

Visibility: user

Type: number

Default: 80.0


retention_local_target_ms_default

Local retention time target for partitions of topics with object storage write enabled.

This property can be overridden on a per-topic basis by setting retention.local.target.ms in each topic enabled for Tiered Storage. See Configure message retention.

Both retention_local_target_bytes_default and retention_local_target_ms_default can be set. The limit that is reached first is applied.

Unit: milliseconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 86400000 (one day)
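
For example, to keep at most one hour or 1 GiB of data locally per partition, whichever limit is reached first:

  rpk cluster config set retention_local_target_ms_default 3600000
  rpk cluster config set retention_local_target_bytes_default 1073741824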


retention_local_trim_interval

The interval at which disk usage is checked for disk pressure, with data optionally trimmed to meet the target.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 17592186044415]

Default: 30000 (30s)


retention_local_trim_overage_coeff

The space management control loop reclaims the overage multiplied by this coefficient to compensate for data that is written during the idle period between control loop invocations.

Requires restart: No

Visibility: tunable

Type: number

Default: 2.0


rm_sync_timeout_ms

Resource manager’s synchronization timeout. Specifies the maximum time for this node to wait for the internal state machine to catch up with all events written by previous leaders before rejecting a request.

Unit: milliseconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 10000


rpc_client_connections_per_peer

The maximum number of connections a broker will open to each of its peers.

Requires restart: Yes

Visibility: user

Type: integer

Accepted values: [-2147483648, 2147483647]

Default: 128


rpc_server_compress_replies

Enable compression for internal RPC (remote procedure call) server replies.

Requires restart: No

Visibility: tunable

Type: boolean

Default: false


rpc_server_listen_backlog

Maximum TCP connection queue length for Kafka server and internal RPC server. If null (the default value), no queue length is set.

Unit: number of queue entries

Requires restart: Yes

Visibility: user

Type: integer

Accepted values: [-2147483648, 2147483647]

Default: null


rpc_server_tcp_recv_buf

Internal RPC TCP receive buffer size. If null (the default value), no buffer size is set by Redpanda.

Unit: bytes

Requires restart: Yes

Visibility: user

Type: integer

Accepted values: [-2147483648, 2147483647]

Default: null


rpc_server_tcp_send_buf

Internal RPC TCP send buffer size. If null (the default value), then no buffer size is set by Redpanda.

Unit: bytes

Requires restart: Yes

Visibility: user

Type: integer

Accepted values: [-2147483648, 2147483647]

Default: null


rpk_path

Path to RPK binary.

Requires restart: No

Visibility: tunable

Type: string

Default: /usr/bin/rpk


rps_limit_acls_and_users_operations

Rate limit for controller ACL and user operations.

Requires restart: No

Visibility: tunable

Type: integer

Default: 1000


rps_limit_configuration_operations

Rate limit for controller configuration operations.

Requires restart: No

Visibility: tunable

Type: integer

Default: 1000


rps_limit_move_operations

Rate limit for controller move operations.

Requires restart: No

Visibility: tunable

Type: integer

Default: 1000


rps_limit_node_management_operations

Rate limit for controller node management operations.

Requires restart: No

Visibility: tunable

Type: integer

Default: 1000


rps_limit_topic_operations

Rate limit for controller topic operations.

Requires restart: No

Visibility: tunable

Type: integer

Default: 1000


memory_enable_memory_sampling

When true, memory allocations are sampled and tracked. A sampled live set of allocations can then be retrieved from the Admin API. Additionally, Redpanda will periodically log the top-n allocation sites.

Requires restart: No

Visibility: tunable

Type: boolean

Default: true


sasl_kerberos_config

The location of the Kerberos krb5.conf file for Redpanda.

Requires restart: No

Visibility: user

Type: string

Default: /etc/krb5.conf


sasl_kerberos_keytab

The location of the Kerberos keytab file for Redpanda.

Requires restart: No

Visibility: user

Type: string

Default: /var/lib/redpanda/redpanda.keytab


sasl_kerberos_principal

The primary of the Kerberos Service Principal Name (SPN) for Redpanda.

Requires restart: No

Visibility: user

Type: string

Default: redpanda


sasl_kerberos_principal_mapping

Rules for mapping Kerberos principal names to Redpanda user principals.

Requires restart: No

Visibility: user

Type: string array

Default: [default]


sasl_mechanisms

Some values for this property require an Enterprise license. Refer to the Enterprise license required field for specific requirements. For license details, see Redpanda Licenses and Enterprise Features.

A list of supported SASL mechanisms.

Requires restart: No

Visibility: user

Type: string array

Accepted values: SCRAM, GSSAPI, OAUTHBEARER

Enterprise license required: GSSAPI, OAUTHBEARER

Default: [SCRAM]
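
Array-valued properties take a list. For example, to enable OIDC authentication alongside SCRAM (OAUTHBEARER requires an Enterprise license):

  rpk cluster config set sasl_mechanisms '["SCRAM","OAUTHBEARER"]'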


schema_registry_normalize_on_startup

Normalize schemas as they are read from the topic on startup.

Requires restart: Yes

Visibility: user

Type: boolean

Default: false


segment_appender_flush_timeout_ms

Maximum delay until buffered data is written.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 1000 (1s)


segment_fallocation_step

Size, in bytes, of the file pre-allocation (fallocation) step for segments.

Requires restart: No

Visibility: tunable

Type: integer

Default: 33554432


space_management_enable

Option to explicitly disable automatic disk space management. If this property was explicitly disabled while using v23.2, it will remain disabled following an upgrade.

Requires restart: No

Visibility: user

Type: boolean

Default: true


space_management_enable_override

Enable automatic space management. This option is deprecated and ignored in versions v23.3 and later.

Requires restart: No

Visibility: user

Type: boolean

Default: false


space_management_max_log_concurrency

Maximum parallel logs inspected during space management process.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 65535]

Default: 20


space_management_max_segment_concurrency

Maximum parallel segments inspected during space management process.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 65535]

Default: 10


storage_compaction_index_memory

Maximum number of bytes that may be used on each shard by compaction index writers.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 18446744073709551615]

Default: 134217728


storage_compaction_key_map_memory

Maximum number of bytes that may be used on each shard by compaction key-offset maps. Only applies when log_compaction_use_sliding_window is set to true.

Requires restart: Yes

Visibility: tunable

Type: integer

Accepted values: [0, 18446744073709551615]

Default: 134217728


storage_compaction_key_map_memory_limit_percent

Limit on storage_compaction_key_map_memory, expressed as a percentage of memory per shard, that bounds the amount of memory used by compaction key-offset maps.

Memory per shard is computed after data_transforms_per_core_memory_reservation, and only applies when log_compaction_use_sliding_window is set to true.

Requires restart: Yes

Visibility: tunable

Type: number

Default: 12.0


storage_ignore_cstore_hints

When set, cstore hints are not used for data access, although they are still generated.

Requires restart: No

Visibility: tunable

Type: boolean

Default: false


storage_ignore_timestamps_in_future_sec

The maximum number of seconds that a record’s timestamp can be ahead of a Redpanda broker’s clock and still be used when deciding whether to clean up the record for data retention. This property makes possible the timely cleanup of records from clients with clocks that are drastically unsynchronized relative to Redpanda.

When determining whether to clean up a record with timestamp more than storage_ignore_timestamps_in_future_sec seconds ahead of the broker, Redpanda ignores the record’s timestamp and instead uses a valid timestamp of another record in the same segment, or (if another record’s valid timestamp is unavailable) the timestamp of when the segment file was last modified (mtime).

By default, storage_ignore_timestamps_in_future_sec is disabled (null).

To figure out whether to set storage_ignore_timestamps_in_future_sec for your system:

  1. Look for logs with segments that are unexpectedly large and not being cleaned up.

  2. In the logs, search for records with unsynchronized timestamps that are further into the future than tolerable by your data retention and storage settings. For example, timestamps 60 seconds or more into the future can be considered to be too unsynchronized.

  3. If you find unsynchronized timestamps throughout your logs, determine the number of seconds that the timestamps are ahead of their actual time, and set storage_ignore_timestamps_in_future_sec to that value so data retention can proceed.

  4. If you only find unsynchronized timestamps that are the result of transient behavior, you can disable storage_ignore_timestamps_in_future_sec.

Unit: seconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17179869184, 17179869183]

Default: null
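
For example, if step 3 above shows client timestamps running roughly 60 seconds ahead of the brokers:

  rpk cluster config set storage_ignore_timestamps_in_future_sec 60

To disable the check again, set the property back to null.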


storage_max_concurrent_replay

Maximum number of partitions' logs that will be replayed concurrently at startup, or flushed concurrently on shutdown.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 18446744073709551615]

Default: 1024


storage_min_free_bytes

Threshold of minimum free space, in bytes, before rejecting writes from producers.

Unit: bytes

Requires restart: No

Visibility: tunable

Type: integer

Default: 5368709120


storage_read_buffer_size

Size of each read buffer (one per in-flight read, per log segment).

Requires restart: No

Visibility: tunable

Type: integer

Default: 131072


storage_read_readahead_count

How many additional reads to issue ahead of current read location.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-32768, 32767]

Default: 10


storage_reserve_min_segments

The number of segments per partition that the system will attempt to reserve disk capacity for. For example, if the maximum segment size is configured to be 100 MB, and the value of this option is 2, then in a system with 10 partitions Redpanda will attempt to reserve at least 2 GB of disk space.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-32768, 32767]

Default: 2


storage_space_alert_free_threshold_bytes

Threshold of minimum free space, in bytes, below which the storage space alert is set.

Unit: bytes

Requires restart: No

Visibility: tunable

Type: integer

Default: 0


storage_space_alert_free_threshold_percent

Threshold of minimum free space, as a percent of total disk size, below which the storage space alert is set.

Unit: percent

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 4294967295]

Default: 5


storage_strict_data_init

Requires that an empty file named .redpanda_data_dir be present in the data directory. If set to true, Redpanda refuses to start if the file is not found in the data directory.

Requires restart: No

Visibility: user

Type: boolean

Default: false


storage_target_replay_bytes

Target bytes to replay from disk on startup after clean shutdown: controls frequency of snapshots and checkpoints.

Unit: bytes

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 18446744073709551615]

Default: 10737418240


superusers

List of superuser usernames.

Requires restart: No

Visibility: user

Type: string array

Default: null
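
For example, to grant superuser status to a hypothetical user named admin:

  rpk cluster config set superusers '["admin"]'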


target_fetch_quota_byte_rate

Target fetch size quota byte rate, in bytes per second. Disabled by default.

Requires restart: No

Visibility: user

Type: integer

Accepted values: [0, 4294967295]

Default: null


target_quota_byte_rate

Target request size quota byte rate (bytes per second).

Requires restart: No

Visibility: user

Type: integer

Accepted values: [0, 4294967295]

Default: target_produce_quota_byte_rate_default


tls_min_version

The minimum TLS version that Redpanda clusters support. This property prevents client applications from negotiating a downgrade to the TLS version when they make a connection to a Redpanda cluster.

Requires restart: Yes

Visibility: user

Accepted values: v1.0, v1.1, v1.2, v1.3

Type: string

Default: v1.2
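
For example, to require TLS 1.3 on all listeners; because this property requires a restart, perform a rolling restart of the cluster afterward:

  rpk cluster config set tls_min_version v1.3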


tm_sync_timeout_ms

Transaction manager’s synchronization timeout. Maximum time to wait for internal state machine to catch up before rejecting a request.

Unit: milliseconds

Requires restart: Yes

Visibility: user

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 10000


tombstone_retention_ms

The retention time for tombstone records in a compacted topic. Cannot be enabled at the same time as any of cloud_storage_enabled, cloud_storage_enable_remote_read, or cloud_storage_enable_remote_write. A typical default setting is 86400000, or 24 hours.

Unit: milliseconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [1, 17592186044415]

Default: null

Related topics: Tombstone record removal


topic_fds_per_partition

Required file handles per partition when creating topics.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-2147483648, 2147483647]

Default: 5


topic_memory_per_partition

Required memory per partition when creating topics.

Requires restart: No

Visibility: tunable

Type: integer

Default: 4194304


topic_partitions_per_shard

Maximum number of partitions which may be allocated to one shard (CPU core).

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 4294967295]

Default: 1000


topic_partitions_reserve_shard0

Reserved partition slots on shard (CPU core) 0 on each node. If this is greater than or equal to topic_partitions_per_shard, no data partitions will be scheduled on shard 0.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 4294967295]

Default: 0


transaction_coordinator_cleanup_policy

Cleanup policy for a transaction coordinator topic.

Requires restart: No

Visibility: user

Type: string array

Accepted values: compact, delete, ["compact","delete"], none

Default: delete


transaction_coordinator_delete_retention_ms

Delete segments older than this age. To ensure transaction state is retained as long as the longest-running transaction, make sure this is no less than transactional_id_expiration_ms.

Unit: milliseconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 604800000 (10080min)


transaction_coordinator_log_segment_size

The size, in bytes, of each log segment for the transaction coordinator topic.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 18446744073709551615]

Default: 1073741824 (1 GiB)


transaction_coordinator_partitions

Number of partitions for the transaction coordinator topic.

Unit: number of partitions per topic

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-2147483648, 2147483647]

Default: 50


transaction_max_timeout_ms

The maximum allowed timeout for transactions. If a client-requested transaction timeout exceeds this configuration, the broker returns an error during transactional producer initialization. This guardrail prevents hanging transactions from blocking consumer progress.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 900000


transactional_id_expiration_ms

Expiration time of producer IDs. Measured starting from the time of the last write until now for a given ID.

Unit: milliseconds

Requires restart: No

Visibility: user

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 604800000 (10080min)


tx_timeout_delay_ms

Delay before scheduling the next check for timed out transactions.

Unit: milliseconds

Requires restart: Yes

Visibility: user

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 1000


unsafe_enable_consumer_offsets_delete_retention

Enables delete retention of consumer offsets topic. This is an internal-only configuration and should be enabled only after consulting with Redpanda support.

Requires restart: Yes

Visibility: user

Type: boolean

Default: false


usage_disk_persistance_interval_sec

The interval, in seconds, at which all usage stats are written to disk.

Unit: seconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17179869184, 17179869183]

Default: 300 (5min)


usage_num_windows

The number of windows to persist in memory and disk.

Requires restart: No

Visibility: tunable

Type: integer

Default: 24


usage_window_width_interval_sec

The width, in seconds, of a usage window that tracks cloud and Kafka ingress/egress traffic in each interval.

Unit: seconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17179869184, 17179869183]

Default: 3600


use_fetch_scheduler_group

Use a separate scheduler group for fetch processing.

Requires restart: No

Visibility: tunable

Type: boolean

Default: true


virtual_cluster_min_producer_ids

Minimum number of active producers per virtual cluster.

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [0, 18446744073709551615]

Default: 18446744073709551615


wait_for_leader_timeout_ms

Timeout to wait for leadership in metadata cache.

Unit: milliseconds

Requires restart: No

Visibility: tunable

Type: integer

Accepted values: [-17592186044416, 17592186044415]

Default: 5000


write_caching_default

The default write caching mode to apply to user topics. Write caching acknowledges a message as soon as it is received and acknowledged on a majority of brokers, without waiting for it to be written to disk. With acks=all, this provides lower latency while still ensuring that a majority of brokers acknowledge the write. Fsyncs follow raft_replica_max_pending_flush_bytes and raft_replica_max_flush_delay_ms, whichever is reached first.

The write_caching_default cluster property can be overridden with the write.caching topic property.

Requires restart: No

Type: string

Accepted values:

  • true

  • false

  • disabled: This takes precedence over topic overrides and disables write caching for the entire cluster.

Default: For clusters in production mode, the default is false. For clusters in development mode, the default is true.
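
For example, to enable write caching cluster-wide while opting a hypothetical topic named payments back out:

  rpk cluster config set write_caching_default true
  rpk topic alter-config payments --set write.caching=false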


zstd_decompress_workspace_bytes

Size of the zstd decompression workspace.

Unit: bytes

Requires restart: Yes

Visibility: tunable

Type: integer

Default: 8388608