rpk redpanda tune list
List available Redpanda tuners and check if they are enabled and supported by your system.
To enable a tuner, set it in the redpanda.yaml configuration file under the rpk section, for example:

```yaml
rpk:
  tune_cpu: true
  tune_swappiness: true
```

You can use rpk redpanda config set to enable or disable a tuner.
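For example, to flip a tuner on or off from the command line rather than editing the file by hand (a minimal sketch; the key is the yaml path shown above):

```bash
# Enable the CPU tuner in redpanda.yaml
rpk redpanda config set rpk.tune_cpu true

# Disable the swappiness tuner
rpk redpanda config set rpk.tune_swappiness false
```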
Flags
| Value | Type | Description |
|---|---|---|
| --dirs | strings | List of data directories or places to store data (for example, /var/vectorized/redpanda/); usually your XFS filesystem on an NVMe SSD device. |
| --disks | strings | List of devices to tune, for example 'sda1'. |
| -h, --help | - | Help for list. |
| --mode | string | Operation mode: one of [sq, sq_split, mq]. |
| --nics | strings | Network Interface Controllers (NICs) to tune. |
| --reboot-allowed | - | Allow tuners to tune boot parameters and request a system reboot. |
| --config | string | Redpanda or rpk config file; default search paths are ~/.config/rpk/rpk.yaml, $PWD, and /etc/redpanda/redpanda.yaml. |
| -X, --config-opt | stringArray | Override rpk configuration settings; use '-X help' for detail or '-X list' for terser detail. |
| --profile | string | rpk profile to use. |
| -v, --verbose | - | Enable verbose logging. |
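A typical invocation combining several of the flags above (a sketch; the disk and NIC names are placeholders for your hardware):

```bash
# Check tuner support for specific devices, using an explicit config file
rpk redpanda tune list \
  --config /etc/redpanda/redpanda.yaml \
  --disks sda1 \
  --nics eth0
```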
Example output
The output of the command lists each available tuner and whether it’s enabled or supported (with a reason if it’s unsupported).
```
TUNER                  ENABLED  SUPPORTED  UNSUPPORTED-REASON
aio_events             true     true
ballast_file           true     true
clocksource            true     false      Clocksource setting not available for this architecture
coredump               false    true
cpu                    true     true
disk_irq               true     true
disk_nomerges          true     true
disk_scheduler         true     false      open : no such file or directory
disk_write_cache       true     false      Disk write cache tuner is only supported in GCP
fstrim                 false    false      dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
net                    true     true
swappiness             true     true
transparent_hugepages  false    true
```
Tuners
This section describes each tuner and its configuration settings in redpanda.yaml. The descriptions here overlap with, and extend, those returned by rpk redpanda tune help.
aio_events
The aio_events tuner maximizes throughput by increasing the number of simultaneous I/O requests. It sets the maximum number of outstanding asynchronous I/O operations above a calculated threshold.
Configuration (yaml):
- rpk.tune_aio_events: flag to enable the tuner (Default for production mode: true)
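The kernel-side limit involved here can be inspected with standard tools. A sketch, assuming the tuner's calculated threshold maps onto the fs.aio-max-nr sysctl (the exact target value is version-dependent):

```bash
# System-wide ceiling on outstanding async I/O requests
sysctl fs.aio-max-nr

# Raise it manually; the value below is illustrative only
sudo sysctl -w fs.aio-max-nr=1048576
```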
ballast_file
The ballast_file tuner improves the debuggability and recoverability of a system whose disk has filled up by creating a ballast file: if the disk becomes full, the ballast file can be deleted to unblock processes that are failing on writes, giving operators time to stop whatever is filling the disk, delete topics or records, and change retention settings.
Configuration (yaml):
- rpk.tune_ballast_file: flag to enable the tuner (Default for production mode: true)
- rpk.ballast_file_size: size of the ballast file in bytes (Default for production mode: 1 GiB)
- rpk.ballast_file_path: path to the ballast file (Default for production mode: /var/lib/redpanda/data/ballast)
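In a disk-full emergency, the recovery step described above is a single delete, using the default path:

```bash
# Reclaim space immediately so blocked processes can make progress
rm /var/lib/redpanda/data/ballast
```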
clocksource
The clocksource tuner improves the performance of reading system time by changing the clock source. It sets the clock source to the Time Stamp Counter (TSC) so time can be read in userspace via the virtual Dynamic Shared Object (vDSO). This is more efficient than the usual virtual machine clock source, xen, which involves the overhead of a system call.
Configuration (yaml):
- rpk.tune_clocksource: flag to enable the tuner (Default for production mode: true)
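The active and available clock sources are visible through standard sysfs paths, independent of rpk:

```bash
# Clock source currently in use (the tuner aims for tsc)
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

# Clock sources this machine supports
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
```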
coredump
The coredump tuner enables coredumps to be saved in production mode. By default, coredumps are not enabled for production. Enabling this tuner also sets the location where coredumps are saved.
Configuration (yaml):
- rpk.tune_coredump: flag to enable the tuner (Default for production mode: false)
- rpk.coredump_dir: path to the saved coredumps (Default for production mode: /var/lib/redpanda/coredump)
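A sketch of enabling this tuner and choosing the coredump directory with the rpk redpanda config set command mentioned earlier (the directory shown is just the default):

```bash
# Turn the coredump tuner on; it defaults to off in production mode
rpk redpanda config set rpk.tune_coredump true

# Pick where coredumps are written
rpk redpanda config set rpk.coredump_dir /var/lib/redpanda/coredump
```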
cpu
The cpu tuner maximizes CPU performance for I/O-intensive workloads by optimizing various CPU settings. It performs the following operations:
- Sets the ACPI-cpufreq governor to performance

If a system reboot is allowed (the rpk redpanda tune --reboot-allowed option is set), the system must be rebooted after the first pass of the tuner, and the tuner performs additional operations:

- Disables Intel P-states
- Disables Intel C-states
- Disables turbo boost
After tuning, the system CPUs operate at the maximum non-turbo frequency.
Configuration (yaml):
- rpk.tune_cpu: flag to enable the tuner (Default for production mode: true)
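Since some of these steps touch boot parameters, the tuner is typically run with the reboot flag, and the governor change can be verified afterwards (a sketch; rpk redpanda tune accepts individual tuner names):

```bash
# Run only the cpu tuner, allowing it to change boot parameters
sudo rpk redpanda tune cpu --reboot-allowed

# After tuning (and rebooting, if requested), confirm the governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # expect: performance
```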
disk_irq
The disk_irq tuner optimizes the handling of interrupt requests (IRQs) for disks by binding all disk IRQs to a requested set of CPUs. It tries to distribute IRQs according to the following guidelines:

- Distribute NVMe disk IRQs equally among all available CPUs.
- Distribute non-NVMe disk IRQs equally among designated CPUs, or among all available CPUs in the mq mode.
IRQs are distributed according to the operation mode set by rpk redpanda tune --mode <operation-mode>. The available operation modes:

- sq: set all IRQs of a given device to CPU0
- sq_split: divide all IRQs of a given device between CPU0 and its HT siblings
- mq: distribute device IRQs among all available CPUs instead of binding them all to CPU0
If no --mode is specified, a default mode is determined:

- If there are only NVMe disks, the mq mode is set as the default.
- For non-NVMe disks:
  - If the number of HT siblings is less than or equal to four, the mq mode is set as the default.
  - Otherwise, if the number of cores is less than or equal to four, the sq mode is set as the default.
  - For all other conditions, the sq_split mode is set as the default.
Configuration (yaml):
- rpk.tune_disk_irq: flag to enable the tuner (Default for production mode: true)
- rpk redpanda tune --mode <operation-mode> sets the IRQ distribution mode
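A sketch of running this tuner with an explicit mode and device (the device name is a placeholder):

```bash
# Split sda1's IRQs between CPU0 and its HT siblings
sudo rpk redpanda tune disk_irq --mode sq_split --disks sda1
```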
net
The net tuner optimizes the handling of interrupt requests (IRQs) for network interface controllers (NICs) by binding all NIC IRQs to a requested set of CPUs. Its IRQ distribution operation modes are the same as described for the disk_irq tuner, with NICs as the devices.
Configuration (yaml):
- rpk.tune_network: flag to enable the tuner (Default for production mode: true)
- rpk redpanda tune --mode <operation-mode> sets the IRQ distribution mode
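The same pattern applies to NICs (the interface name is a placeholder):

```bash
# Spread eth0's IRQs across all available CPUs
sudo rpk redpanda tune net --mode mq --nics eth0
```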
disk_nomerges
The disk_nomerges tuner reduces CPU overhead by disabling the merging of adjacent I/O requests.
Configuration (yaml):
- rpk.tune_disk_nomerges: flag to enable the tuner (Default for production mode: true)
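The underlying kernel knob is the per-device nomerges attribute; reading it shows whether the tuner took effect (a sketch, assuming device sda):

```bash
# 0 = merging enabled, 1 = only simple merges allowed, 2 = merging disabled
cat /sys/block/sda/queue/nomerges
```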
disk_scheduler
The disk_scheduler tuner optimizes disk scheduler performance for the type of device (NVMe, non-NVMe). It provides a selectable set of schedulers:

- none: minimizes latency on modern NVMe devices by bypassing the operating system's I/O scheduler
- noop: preferred for non-NVMe devices (and used when none is unavailable); this scheduler uses a simple FIFO queue where all I/O operations are first stored and then handled by the driver
Configuration (yaml):
- rpk.tune_disk_scheduler: flag to enable the tuner (Default for production mode: true)
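The active scheduler is visible per device in sysfs, and the manual equivalent of the tuner is a one-line write (a sketch, assuming device nvme0n1; the bracketed entry is the one in use):

```bash
# Prints something like "[none] mq-deadline kyber bfq"
cat /sys/block/nvme0n1/queue/scheduler

# Select the none scheduler by hand
echo none | sudo tee /sys/block/nvme0n1/queue/scheduler
```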
disk_write_cache
The disk_write_cache tuner optimizes performance on Google Cloud Platform (GCP) by enabling write-through caching for its NVMe Local SSD drives.
Configuration (yaml):
- rpk.tune_disk_write_cache: flag to enable the tuner (Default for production mode: true)
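The cache mode is exposed per device in sysfs (a sketch, assuming device nvme0n1; per the note above, this tuner is only meaningful on GCP):

```bash
# Reports "write back" or "write through"
cat /sys/block/nvme0n1/queue/write_cache

# Switch to write-through caching by hand
echo "write through" | sudo tee /sys/block/nvme0n1/queue/write_cache
```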
fstrim
The fstrim tuner improves SSD performance by starting a background systemd service that periodically wipes memory blocks no longer used by the filesystem. This matters for SSDs because the space where new data will be written must be erased first: if unused blocks are not wiped outside of write cycles, performance eventually degrades as a lack of free space forces writes to trigger synchronous erasures.

If the fstrim systemd service is available, it is run. If it's unavailable but systemd is available, an equivalent service is installed and run. Otherwise, no service is run.
Configuration (yaml):
- rpk.tune_fstrim: flag to enable the tuner (Default for production mode: true)
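On systemd distributions the stock units are fstrim.service and fstrim.timer; standard systemctl commands show whether periodic trimming is already scheduled:

```bash
# Is a periodic trim already scheduled?
systemctl status fstrim.timer

# Enable periodic trimming by hand instead of via the tuner
sudo systemctl enable --now fstrim.timer
```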
swappiness
The swappiness tuner tunes the kernel to keep process data in memory for as long as possible instead of swapping it out to disk.
Configuration (yaml):
- rpk.tune_swappiness: flag to enable the tuner (Default for production mode: true)
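The knob behind this tuner is the vm.swappiness sysctl; a low value tells the kernel to avoid swapping (a sketch; the exact value rpk sets is version-dependent):

```bash
# Current swappiness; distro defaults are often 60
sysctl vm.swappiness

# Lower it by hand; 1 keeps swapping to a bare minimum without disabling it
sudo sysctl -w vm.swappiness=1
```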
transparent_hugepages
The transparent_hugepages tuner improves memory page caching by enabling Transparent Huge Pages (THP) for CPUs that support it. Its larger memory pages reduce the number of misses from Translation Lookaside Buffer (TLB) lookups.
Configuration (yaml):
- rpk.tune_transparent_hugepages: flag to enable the tuner (Default for production mode: false)
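The THP state is visible at a standard kernel path (the bracketed entry is the active setting):

```bash
# Prints something like "always [madvise] never"
cat /sys/kernel/mm/transparent_hugepage/enabled
```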