Leader Pinning

Produce requests that write data to Redpanda topics go through the topic partition leader, which syncs messages across its follower replicas. For a Redpanda cluster deployed across multiple availability zones (AZs), leader pinning ensures that a topic’s partition leaders are geographically closer to clients, which helps decrease networking costs and lower latency.

If consumers are located in the same region or AZ that you designate as preferred for leader pinning, and you have not set up follower fetching, leader pinning can also reduce networking costs for consume requests.

Prerequisites

This feature requires an Enterprise license for self-managed deployments. To upgrade, contact Redpanda sales.

Before you can enable leader pinning, you must configure rack awareness on the cluster. If the enable_rack_awareness cluster configuration property is set to false, leader pinning is disabled across the cluster.
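
For example, assuming you use rpk to manage the cluster (the AZ ID us-east-1a below is hypothetical), you could verify and enable rack awareness like this:

  # Check whether rack awareness is enabled on the cluster.
  rpk cluster config get enable_rack_awareness

  # Enable it if it is not.
  rpk cluster config set enable_rack_awareness true

  # On each broker, set the rack ID in redpanda.yaml to the ID of the AZ
  # the broker runs in, then restart the broker for the change to take effect.
  rpk redpanda config set redpanda.rack us-east-1a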

Configure leader pinning

Leader pinning is controlled by a topic configuration property and a cluster configuration property.

You can set the topic property for individual topics only, or set the cluster-wide property to enable leader pinning by default for all topics. You can also combine the two: a default setting applies across the cluster, and you toggle the setting on or off for specific topics.

The following configuration is based on this scenario: you have Redpanda deployed in a multi-AZ or multi-region cluster, and you have set each broker’s rack configuration property to the ID of the AZ that the broker runs in:

  • Set the topic configuration property redpanda.leaders.preference (see the example commands after this list). The property accepts the following string values:

    • none: Opt out the topic from leader pinning.

    • racks:<rack1>[,<rack2>,…]: Specify the preferred location (rack) of all topic partition leaders. The list can contain one or more rack IDs, in any order. Spaces in the list are ignored; for example, racks:rack1,rack2 and racks: rack1, rack2 are equivalent. Empty rack IDs are not allowed; for example, racks: rack1,,rack2 is invalid. If you specify multiple IDs, Redpanda tries to distribute the partition leader locations equally across brokers in these racks.

    This property inherits the default value from the cluster property default_leaders_preference.

  • Set the cluster configuration property default_leaders_preference, which specifies the default leader pinning configuration for all topics that don’t have redpanda.leaders.preference explicitly set. It accepts values in the same format as redpanda.leaders.preference. Default: none.

    This property also affects internal topics, such as __consumer_offsets and transaction coordinator topics. The leaders that handle offset tracking and transaction coordination are then placed within the preferred regions or AZs, so clients see end-to-end latency and networking cost benefits.
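
As a sketch of both approaches, assuming you manage the cluster with rpk (the topic names my-topic and logs and the rack IDs us-east-1a and us-east-1b are hypothetical):

  # Set a cluster-wide default: pin all topic leaders to one AZ.
  rpk cluster config set default_leaders_preference 'racks:us-east-1a'

  # Override the default for one topic: spread its leaders across two AZs.
  rpk topic alter-config my-topic --set redpanda.leaders.preference='racks:us-east-1a,us-east-1b'

  # Opt another topic out of leader pinning entirely.
  rpk topic alter-config logs --set redpanda.leaders.preference=none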

If there is more than one broker in the preferred AZ (or AZs), leader pinning distributes partition leaders uniformly across brokers in the AZ.
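
To check where partition leaders actually landed, you can list per-partition leadership; for example, with rpk (the topic name my-topic is hypothetical):

  # Print each partition with its current leader broker ID.
  rpk topic describe my-topic -p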

Leader pinning failover across availability zones

Suppose there are three AZs, A, B, and C, and AZ A becomes unavailable. The failover behavior is as follows:

  • A topic with "A" as the preferred leader AZ will have its partition leaders uniformly distributed across B and C.

  • A topic with "A,B" as the preferred leader AZs will have its partition leaders in B.

  • A topic with "B" as the preferred leader AZ will have its partition leaders in B as well.
