Manage the throughput of Kafka traffic at the cluster level, with configurable properties that limit and protect the use of disk and network resources for individual brokers and for an entire cluster. Set broker-wide throughput limits for Kafka API traffic.
The network bandwidth and disk utilization of brokers can be overloaded by clients that produce or consume data at unconstrained rates. To prevent resource overload caused by unconstrained throughput and to apply back pressure, Redpanda provides runtime-configurable properties that limit and balance the throughput of Kafka API traffic.
To manage the volume of traffic going through a broker, Redpanda implements throughput quotas on the ingress and egress sides of every broker. The throughput quota accounts for all Kafka API traffic going in or out of a broker, with the quota value representing the allowed rate of data passing through in one direction. When a connection exceeds the quota, the throttler advises the client of the delay (throttle time) that would bring the rate back to the allowed level, and it applies that delay before handling further Kafka API requests. To control the quotas, Redpanda provides configurable rate limits for total ingress and egress traffic through a broker.
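To illustrate the throttle-time idea, the following sketch computes the delay that would bring an observed rate back down to a quota. It is a minimal model of the concept, not Redpanda's implementation; the function name and parameters are hypothetical.

```python
def throttle_delay_ms(bytes_in_window: int, window_ms: int, quota_bps: float) -> int:
    """Return the delay in milliseconds that brings the observed rate
    back to quota_bps, or 0 if the connection is within its quota.

    bytes_in_window: bytes transferred during the measurement window
    window_ms:       length of the measurement window, in milliseconds
    quota_bps:       allowed rate, in bytes per second
    """
    # Bytes the quota would have allowed during the window.
    allowed = quota_bps * window_ms / 1000.0
    excess = bytes_in_window - allowed
    if excess <= 0:
        return 0
    # Delaying long enough to "absorb" the excess at the quota rate
    # brings the average rate back to the allowed level.
    return int(excess / quota_bps * 1000)
```

For example, a connection that moved 2 MB in a 1-second window against a 1 MB/s quota has 1 MB of excess, which takes one extra second to absorb at the quota rate, so the throttler would advise a 1000 ms delay.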
With Redpanda’s thread-per-core model, the Kafka API traffic to and from a client connection is processed by a single core (shard). In order to manage throughput quotas efficiently, broker quotas are distributed between shards, and each per-shard quota is in turn shared by all connections served by the shard. Splitting broker quota optimally between shards is done behind the scenes by the quota balancer component.
To distribute the broker throughput quota, the balancer periodically monitors the throughput rate of a broker’s shards, and it distributes more quota to the shards that can make better use of it than the others. Each shard has a minimum throughput quota value, which is configurable both as a percentage of the default quota and as an absolute rate limit.
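A minimal sketch of this balancing idea, under stated assumptions (it is not Redpanda's actual algorithm, and the function and its parameters are hypothetical): guarantee every shard a configurable floor, then split the remaining broker quota in proportion to each shard's measured throughput.

```python
def balance(broker_quota: float, demand: list[float], min_ratio: float) -> list[float]:
    """Split broker_quota across shards in proportion to observed demand,
    guaranteeing each shard at least min_ratio of the default (even) share.

    broker_quota: total broker throughput quota, bytes per second
    demand:       recent measured throughput of each shard
    min_ratio:    minimum per-shard quota as a fraction of the even split
    """
    n = len(demand)
    floor = min_ratio * broker_quota / n   # minimum quota any shard keeps
    spare = broker_quota - floor * n       # quota distributed by demand
    total = sum(demand)
    if total == 0:
        # No measurable demand anywhere: fall back to an even split.
        return [broker_quota / n] * n
    return [floor + spare * d / total for d in demand]
```

With a 100 B/s broker quota, two shards, a 0.2 minimum ratio, and measured demand of 3:1, each shard keeps a 10 B/s floor and the remaining 80 B/s is split 60/20, so the busier shard ends up with 70 B/s.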
The properties for broker-wide throughput quota balancing are configured at the cluster level, for all brokers in a cluster:
A broker’s total throughput limit for ingress Kafka traffic.
A broker’s total throughput limit for egress Kafka traffic.
The period at which the quota balancer runs to balance throughput quota between a broker’s shards.
The lowest value of the throughput quota a shard can get in the process of quota balancing, expressed as a ratio of the default shard quota.
The lowest value of the throughput quota a shard can get in the process of quota balancing, in bytes per second.
The maximum delay inserted in the data path of Kafka API requests to throttle them down. Configuring this to be less than the Kafka client timeout helps ensure that the inserted delay alone won't cause a client timeout.
The time window the balancer uses to average the current throughput measurement.
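Taken together, these balancing properties might look like the following cluster configuration fragment. The property names follow Redpanda's cluster-property naming, and the values are purely illustrative; check the property reference for your Redpanda version before applying them.

```yaml
# Illustrative values only; verify property names against your version's reference.
kafka_throughput_limit_node_in_bps: 104857600     # 100 MiB/s total ingress per broker
kafka_throughput_limit_node_out_bps: 104857600    # 100 MiB/s total egress per broker
kafka_quota_balancer_node_period_ms: 750          # how often the quota balancer runs
kafka_quota_balancer_min_shard_throughput_ratio: 0.01  # floor as ratio of default shard quota
kafka_quota_balancer_min_shard_throughput_bps: 256     # floor as an absolute rate
max_kafka_throttle_delay_ms: 30000                # keep below the Kafka client timeout
kafka_quota_balancer_window_ms: 5000              # averaging window for rate measurement
```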
Similar to broker-wide throughput limits, but applied to clients, Redpanda provides configurable throughput quotas for an individual client or a group of clients.
The client throughput quotas limit the rates within each shard (logical CPU core) of a Redpanda broker's node. The quotas are neither shared nor balanced between shards and brokers.
The target_quota_byte_rate property applies to a producer client that isn't a member of a client group configured by kafka_client_group_byte_rate_quota. It sets the maximum throughput quota of a client sending to a Redpanda broker.
The target_fetch_quota_byte_rate property applies to a consumer client that isn't a member of a client group configured by kafka_client_group_fetch_byte_rate_quota. It sets the maximum throughput quota of a client fetching from a Redpanda broker.
The values of both target_quota_byte_rate and target_fetch_quota_byte_rate are throughput rate limits within a shard, in bytes per second.
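For example, both per-client defaults could be set in cluster configuration as follows (the values are illustrative):

```yaml
# Per-shard, per-client rate limits, in bytes per second (illustrative values).
target_quota_byte_rate: 2097152        # 2 MiB/s per producer, per shard
target_fetch_quota_byte_rate: 2097152  # 2 MiB/s per consumer, per shard
```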
The kafka_client_group_byte_rate_quota property applies to producer clients. It sets a maximum throughput quota of traffic sent to Redpanda from each producer in the group named by the property.
The kafka_client_group_fetch_byte_rate_quota property applies to consumer clients. It sets a maximum throughput quota of traffic fetched from Redpanda by each consumer in the group named by the property.
Both kafka_client_group_byte_rate_quota and kafka_client_group_fetch_byte_rate_quota have the following configuration value fields:
group_name: a name for the group of clients.
clients_prefix: a client belonging to the group has this prefix in its client ID.
quota: the maximum throughput rate of each client in the group in bytes per second.
For kafka_client_group_fetch_byte_rate_quota, a group of consumer clients is not a Kafka consumer group, and the group_name field is not a Kafka consumer group ID. Instead, the group is the set of clients fetching from a Redpanda broker that the property configures to be throughput limited.
An example configuration of kafka_client_group_byte_rate_quota for two groups of producers:
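A possible shape of that configuration, with hypothetical group names and client ID prefixes, and quota values in bytes per second:

```yaml
# Hypothetical group names, prefixes, and quota values.
kafka_client_group_byte_rate_quota:
  - group_name: producers_a
    clients_prefix: producer_a_
    quota: 10485760    # 10 MiB/s for each matching producer
  - group_name: producers_b
    clients_prefix: producer_b_
    quota: 20971520    # 20 MiB/s for each matching producer
```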