# Topic Configuration Properties

You are viewing the Self-Managed v24.3 beta documentation. We welcome your feedback in the Redpanda Community Slack #beta-feedback channel. To view the latest available version of the docs, see v24.2.

A topic-level property sets a Redpanda or Kafka configuration for a particular topic. Many topic-level properties have corresponding cluster properties that set a default value for all topics of a cluster. To customize the value for a topic, set a topic-level property; it overrides the value of the corresponding cluster property. All topic properties take effect immediately after being set.

| Topic property | Corresponding cluster property |
| --- | --- |
| `cleanup.policy` | `log_cleanup_policy` |
| `flush.bytes` | `raft_replica_max_pending_flush_bytes` |
| `flush.ms` | `raft_replica_max_flush_delay_ms` |
| `initial.retention.local.target.ms` | `initial_retention_local_target_ms_default` |
| `retention.bytes` | `retention_bytes` |
| `retention.ms` | `log_retention_ms` |
| `segment.ms` | `log_segment_ms` |
| `segment.bytes` | `log_segment_size` |
| `compression.type` | `log_compression_type` |
| `message.timestamp.type` | `log_message_timestamp_type` |
| `max.message.bytes` | `kafka_batch_max_bytes` |
| `replication.factor` | `default_topic_replication` |
| `write.caching` | `write_caching_default` |

The SOURCE column in the output of the `rpk topic describe <topic>` command shows how each property is set for the topic:

- `DEFAULT_CONFIG`: set by a Redpanda default.
- `DYNAMIC_TOPIC_CONFIG`: set by the user specifically for the topic; it overrides inherited default configurations, such as a Redpanda default or a cluster-level property.

Although `rpk topic describe` doesn't report `replication.factor` as a configuration, `replication.factor` can be set by using the `rpk topic alter-config` command.

## Examples

The following examples show how to configure topic-level properties. Set a topic-level property to override the value of the corresponding cluster property.
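To check how a property is currently sourced before overriding it, you can inspect the topic with `rpk topic describe`. In the following sketch, the topic name is a placeholder and the excerpted output is illustrative, not authoritative:

```shell
rpk topic describe <topic-name>
# Illustrative excerpt of the output (abridged; values hypothetical):
#   KEY              VALUE      SOURCE
#   retention.ms     604800000  DYNAMIC_TOPIC_CONFIG
#   cleanup.policy   delete     DEFAULT_CONFIG
```

Here `retention.ms` would have been set per topic, while `cleanup.policy` still comes from a Redpanda default.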
### Create a topic with topic properties

To set topic properties when creating a topic, use the `rpk topic create` command with the `-c` option. For example, to create a topic with the `cleanup.policy` property set to `compact`:

Local:

```shell
rpk topic create -c cleanup.policy=compact <topic-name>
```

Kubernetes:

```shell
kubectl exec <pod-name> -- rpk topic create -c cleanup.policy=compact <topic-name>
```

To configure multiple properties for a topic, use the `-c` option for each property. For example, to create a topic with all necessary properties for Tiered Storage:

Local:

```shell
rpk topic create -c redpanda.remote.recovery=true -c redpanda.remote.write=true -c redpanda.remote.read=true <topic-name>
```

Kubernetes:

```shell
kubectl exec <pod-name> -- rpk topic create -c redpanda.remote.recovery=true -c redpanda.remote.write=true -c redpanda.remote.read=true <topic-name>
```

### Modify topic properties

To modify the properties of an existing topic, use the `rpk topic alter-config` command. For example, to modify a topic's `retention.ms` property:

Local:

```shell
rpk topic alter-config <topic-name> --set retention.ms=<retention-time>
```

Kubernetes:

```shell
kubectl exec <pod-name> -- rpk topic alter-config <topic-name> --set retention.ms=<retention-time>
```

## Properties

This section describes all supported topic-level properties.

### Disk space properties

Configure properties to manage the disk space used by a topic:

- Clean up log segments by deletion and/or compaction (`cleanup.policy`).
- Retain logs up to a maximum size per partition before cleanup (`retention.bytes`).
- Retain logs for a maximum duration before cleanup (`retention.ms`).
- Periodically close an active log segment (`segment.ms`).
- Limit the maximum size of an active log segment (`segment.bytes`).
- Cache batches until the segment appender chunk is full, instead of fsyncing for every `acks=all` write (`write.caching`). With write caching enabled, fsyncs follow `flush.ms` and `flush.bytes`, whichever is reached first.

#### cleanup.policy

The cleanup policy to apply for log segments of a topic.
When `cleanup.policy` is set, it overrides the cluster property `log_cleanup_policy` for the topic.

**Default:** `[delete]`

**Values:**

- `[delete]`: Deletes data according to size-based or time-based retention limits, or both.
- `[compact]`: Deletes data according to a key-based retention policy, discarding all but the latest value for each key.
- `[compact,delete]`: Keeps the latest value for each key, and deletes the remaining data according to retention limits.

**Related topics:**

- Configure segment size
- Compacted topics in Tiered Storage

#### flush.ms

The maximum delay (in milliseconds) between two subsequent fsyncs. After this delay, the log is automatically fsynced.

**Default:** 100

**Related topics:** Configure Producers

#### flush.bytes

The maximum number of bytes not fsynced per partition. If this threshold is reached, the log is automatically fsynced, even though an fsync wasn't explicitly requested.

**Default:** 262144

**Related topics:** Configure Producers

#### retention.bytes

A size-based retention limit that configures the maximum size that a topic partition can grow to before becoming eligible for cleanup.

If `retention.bytes` is set to a positive value, it overrides the cluster property `retention_bytes` for the topic, and the total retained size for the topic is `retention.bytes` multiplied by the number of partitions for the topic.

When both size-based (`retention.bytes`) and time-based (`retention.ms`) retention limits are set, cleanup occurs when either limit is reached.

**Default:** null

**Related topics:** Configure message retention

#### retention.ms

A time-based retention limit that configures the maximum duration that a log segment file of a topic is retained before it becomes eligible for cleanup. To consume all data, a consumer of the topic must read from a segment before its `retention.ms` elapses; otherwise, the segment may be compacted and/or deleted. If set to a non-positive value, no per-topic limit is applied. If `retention.ms` is set to a positive value, it overrides the cluster property `log_retention_ms` for the topic.
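Because `retention.ms` is expressed in milliseconds, it can help to compute the value from a human-readable duration first. A minimal sketch, assuming a hypothetical seven-day retention target and using the `rpk topic alter-config` form shown earlier (`<topic-name>` is a placeholder):

```shell
# Hypothetical target: retain data for 7 days, expressed in milliseconds.
RETENTION_MS=$((7 * 24 * 60 * 60 * 1000))
echo "$RETENTION_MS"   # prints 604800000
# Then apply it to a topic (requires a running cluster):
#   rpk topic alter-config <topic-name> --set retention.ms=604800000
```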
When both size-based (`retention.bytes`) and time-based (`retention.ms`) retention limits are set, the limit that is reached first applies.

**Default:** null

**Related topics:** Configure message retention

#### segment.ms

The maximum duration that a log segment of a topic is active (open for writes and not deletable). A periodic event, with `segment.ms` as its period, forcibly closes the active segment and transitions, or rolls, to a new active segment. The closed (inactive) segment is then eligible to be cleaned up according to cleanup and retention properties.

If set to a positive duration, `segment.ms` overrides the cluster property `log_segment_ms`, subject to the lower and upper bounds set by `log_segment_ms_min` and `log_segment_ms_max`, respectively.

**Default:** null

**Related topics:** Log rolling

#### segment.bytes

The maximum size of an active log segment for a topic. When the size of an active segment exceeds `segment.bytes`, the segment is closed and a new active segment is created. The closed (inactive) segment is then eligible to be cleaned up according to retention properties.

When `segment.bytes` is set to a positive value, it overrides the cluster property `log_segment_size` for the topic.

**Default:** null

**Related topics:**

- Configure segment size
- Configure message retention
- Remote Read Replicas

#### write.caching

The write caching mode to apply to a topic. When `write.caching` is set, it overrides the cluster property `write_caching_default`.

Write caching acknowledges a message as soon as it is received and acknowledged on a majority of brokers, without waiting for it to be written to disk. With `acks=all`, this provides lower latency while still ensuring that a majority of brokers acknowledge the write. Fsyncs follow `flush.ms` and `flush.bytes`, whichever is reached first.

**Default:** false

**Values:**

- `true`: Enables write caching for the topic; fsyncs follow `flush.ms` and `flush.bytes`.
- `false`: Disables write caching for the topic.
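As a sketch of how these settings fit together, the following hypothetical commands enable write caching on an existing topic and then set explicit flush bounds, using the `rpk topic alter-config` form shown earlier. The topic name and the flush values are illustrative assumptions, not recommendations:

```shell
# Hypothetical: enable write caching for one topic.
rpk topic alter-config <topic-name> --set write.caching=true
# Illustrative flush bounds: fsync at least every 200 ms or every 512 KiB (524288 bytes)
# of unflushed data per partition, whichever is reached first.
rpk topic alter-config <topic-name> --set flush.ms=200
rpk topic alter-config <topic-name> --set flush.bytes=524288
```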
**Related topics:** Write caching

### Message properties

Configure properties for the messages of a topic:

- Compress a message or batch to reduce storage space and increase throughput (`compression.type`).
- Set the source of a message's timestamp (`message.timestamp.type`).
- Set the maximum size of a message (`max.message.bytes`).

#### compression.type

The type of compression algorithm to apply to all messages of a topic. When a compression type is set for a topic, producers compress and send messages, nodes (brokers) store and send the compressed messages, and consumers receive and decompress the messages.

Enabling compression reduces message size, which improves throughput and decreases storage use for messages with repetitive values and data structures. The trade-off is increased CPU utilization and network latency to perform the compression. You can also enable producer batching to increase compression efficiency, since the messages in a batch are likely to contain repeated data that compresses well.

When `compression.type` is set, it overrides the cluster property `log_compression_type` for the topic. The valid values of `compression.type` are taken from `log_compression_type` and differ from Kafka's compression types.

**Default:** none

**Values:**

- `none`
- `gzip`
- `lz4`
- `snappy`
- `zstd`
- `producer`

**Related topics:**

- Message batching
- Common producer configuration options

#### message.timestamp.type

The source of a message's timestamp: either the message's creation time or its log append time. When `message.timestamp.type` is set, it overrides the cluster property `log_message_timestamp_type` for the topic.

**Default:** CreateTime

**Values:**

- `CreateTime`
- `LogAppendTime`

#### max.message.bytes

The maximum size of a message or batch of a topic. If a compression type is enabled, `max.message.bytes` sets the maximum size of the compressed message or batch.

If `max.message.bytes` is set to a positive value, it overrides the cluster property `kafka_batch_max_bytes` for the topic.
**Default:** null

**Related topics:** Message batching

### Tiered Storage properties

Configure properties to manage topics for Tiered Storage:

- Upload and fetch data to and from object storage for a topic (`redpanda.remote.write` and `redpanda.remote.read`).
- Configure size-based and time-based retention properties for local storage of a topic (`retention.local.target.bytes` and `retention.local.target.ms`).
- Recover or reproduce data for a topic from object storage (`redpanda.remote.recovery`).
- Delete data from object storage for a topic when it's deleted from local storage (`redpanda.remote.delete`).

#### redpanda.remote.write

A flag that enables Redpanda to upload data for a topic from local storage to object storage. When set to `true` together with `redpanda.remote.read`, it enables the Tiered Storage feature.

**Default:** false

**Related topics:** Tiered Storage

#### redpanda.remote.read

A flag that enables Redpanda to fetch data for a topic from object storage to local storage. When set to `true` together with `redpanda.remote.write`, it enables the Tiered Storage feature.

**Default:** false

**Related topics:** Tiered Storage

#### initial.retention.local.target.bytes

A size-based initial retention limit for Tiered Storage that determines how much data in local storage is transferred to a partition replica when a cluster is resized. If null (the default), all locally retained data is transferred.

**Default:** null

**Related topics:** Fast commission and decommission through Tiered Storage

#### initial.retention.local.target.ms

A time-based initial retention limit for Tiered Storage that determines how much data in local storage is transferred to a partition replica when a cluster is resized. If null (the default), all locally retained data is transferred.

**Default:** null

**Related topics:** Fast commission and decommission through Tiered Storage

#### retention.local.target.bytes

A size-based retention limit for Tiered Storage that configures the maximum size that a topic partition in local storage can grow to before becoming eligible for cleanup.
This property applies per partition and is equivalent to `retention.bytes` without Tiered Storage.

**Default:** null

**Related topics:** Tiered Storage

#### retention.local.target.ms

A time-based retention limit for Tiered Storage that sets the maximum duration that a log segment file of a topic is retained in local storage before it becomes eligible for cleanup. This property is equivalent to `retention.ms` without Tiered Storage.

**Default:** 86400000

**Related topics:** Tiered Storage

#### redpanda.remote.recovery

A flag that enables the recovery or reproduction of a topic from object storage for Tiered Storage. The recovered data is saved in local storage, and the maximum amount of recovered data is determined by the local storage retention limits of the topic.

You can only configure `redpanda.remote.recovery` when you create a topic. You cannot apply this setting to existing topics.

**Default:** false

**Related topics:** Tiered Storage

#### redpanda.remote.delete

A flag that enables deletion of data from object storage for Tiered Storage when it's deleted from local storage for a topic.

`redpanda.remote.delete` doesn't apply to Remote Read Replica topics: a Remote Read Replica topic isn't deleted from object storage when this flag is `true`.

**Default:**

- `false` for topics created using Redpanda 22.2 or earlier.
- `true` for topics created in Redpanda 22.3 and later, including new topics on upgraded clusters.

**Related topics:** Tiered Storage

### Remote Read Replica properties

Configure properties to manage topics for Remote Read Replicas.

#### redpanda.remote.readreplica

The name of the object storage bucket for a Remote Read Replica topic. Setting `redpanda.remote.readreplica` together with either `redpanda.remote.read` or `redpanda.remote.write` results in an error.

**Default:** null

**Related topics:** Remote Read Replicas

### Redpanda topic properties

Configure Redpanda-specific topic properties.

#### redpanda.leaders.preference

The preferred location (rack) for partition leaders of a topic.
This property inherits its value from the `default_leaders_preference` cluster configuration property. You can override the cluster-wide setting by specifying a value for individual topics.

If the cluster configuration property `enable_rack_awareness` is set to `false`, leader pinning is disabled across the cluster.

**Default:** none

**Values:**

- `none`: Opts the topic out of leader pinning.
- `racks:<rack1>[,<rack2>,…]`: Specifies the preferred location (rack) of all topic partition leaders. The list can contain one or more rack IDs. If you specify multiple IDs, Redpanda tries to distribute the partition leader locations equally across brokers in these racks.

**Related topics:** Leader pinning

#### replication.factor

The number of replicas of a topic to save on different nodes (brokers) of a cluster.

If `replication.factor` is set to a positive value, it overrides the cluster property `default_topic_replication` for the topic.

Although `replication.factor` isn't returned or displayed by `rpk topic describe` as a valid Kafka property, you can set it using `rpk topic alter-config`. When the `replication.factor` of a topic is altered, it isn't simply a property value that's updated; the actual replica sets of the topic's partitions are changed.

**Default:** null

**Related topics:**

- Choose the replication factor
- Change the replication factor

## Related topics

- Configure Producers
- Manage Topics