Designed for Performance
Redpanda comes with an autotuner that detects the optimal settings for your hardware. To get the best performance, set Redpanda to production mode: in production mode, Redpanda identifies your hardware configuration and tunes itself accordingly. This section describes other design considerations.
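For example, on a Linux node you would typically enable production mode and then apply the tuners with rpk. A minimal sketch (run as root; exact subcommands and output may vary between rpk versions):

```bash
# Switch the local node's configuration to production mode
# (enables the performance tuners in /etc/redpanda/redpanda.yaml).
sudo rpk redpanda mode production

# Apply every tuner supported on this machine (CPU, disk IRQs, network, and so on).
sudo rpk redpanda tune all

# List the available tuners and whether each is enabled and supported here.
sudo rpk redpanda tune list
```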
Disk
Redpanda uses DMA (Direct Memory Access) for all its disk I/O. To get the best I/O performance, place the data directory (/var/lib/redpanda/data) on an XFS partition on a local NVMe SSD. Redpanda can drive your SSD at maximum throughput at all times. Redpanda relies on XFS because of its sparse file support, which Redpanda uses to flush concurrent, non-overlapping pages. Although other file systems might work, they may have limitations that prevent you from getting the most value out of your hardware.
Redpanda recommends not using network block devices, because of their inherent performance limitations.
For multi-disk setups, use RAID 0 with XFS on top.
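As an illustration, a local NVMe drive could be prepared for the data directory as follows. The device names, mount options, and the redpanda user/group are examples; adapt them to your hardware and installation:

```bash
# Format a local NVMe drive with XFS and mount it as the Redpanda data directory.
sudo mkfs.xfs /dev/nvme0n1
sudo mkdir -p /var/lib/redpanda/data
sudo mount -o noatime /dev/nvme0n1 /var/lib/redpanda/data
sudo chown -R redpanda:redpanda /var/lib/redpanda/data

# For multiple drives, stripe them with RAID 0 first, then put XFS on top.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
sudo mkfs.xfs /dev/md0
```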
While monitoring, you might notice that the file system file sizes change. This is expected behavior, because Redpanda uses internal heuristics to expand the file system metadata when it improves performance for a sequence of operations, or to amortize the cost of synchronization events.
Network
Modern network interface controllers (NICs) can drive multi-gigabit traffic to hosts. Redpanda ships with rpk, a single tool for managing your entire Redpanda cluster. rpk probes the hardware (taking into account the number of CPUs) and automatically chooses the best setting for driving high-throughput traffic to the machine. The modes are: all cores but cpu0, cpu0 plus its hyper-thread sibling, or distributed across all cores. This is in addition to other settings, such as backlog and max sockets, regardless of whether the NIC is bonded. You never need to manage these low-level settings, and in most production scenarios interrupt handling is distributed across all cores, which spreads the cost of interrupt processing evenly among them.
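For reference, the network tuner can also be run on its own. A rough sketch (the tuner name follows current rpk releases, and eth0 is a placeholder interface name):

```bash
# Run only the network tuner; rpk probes the NIC and CPU topology
# and applies the corresponding IRQ-affinity mode and socket settings.
sudo rpk redpanda tune net

# Check how the NIC's interrupts are spread across cores afterwards.
grep eth0 /proc/interrupts
```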
Cgroups
To run at peak performance for extended periods, Redpanda leverages control groups (cgroups) to isolate the Redpanda processes. This shields Redpanda processes from other processes that could adversely affect performance.
Redpanda also leverages systemd slices, instructing the kernel to evict other processes before reclaiming Redpanda's process memory, and to reserve I/O quotas and CPU time for Redpanda. This way, even when other processes are competing for resources, Redpanda still delivers predictable latency and throughput to end users.
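As a rough illustration of what a slice can express, the hypothetical unit below reserves memory, CPU, and I/O weight for the Redpanda processes. This is not the slice shipped with Redpanda, and the directive values are placeholders:

```bash
# Hypothetical systemd slice prioritizing Redpanda's memory, CPU, and I/O.
sudo tee /etc/systemd/system/redpanda.slice >/dev/null <<'EOF'
[Unit]
Description=Slice for Redpanda processes

[Slice]
MemoryLow=80%
CPUWeight=1000
IOWeight=1000
EOF

# Reload systemd; the redpanda.service unit would reference this via Slice=.
sudo systemctl daemon-reload
```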
CPU
The default CPU configuration on most systems is geared toward typical end-user use cases, such as non-CPU-intensive desktop applications and power saving. Redpanda disables all power-saving modes and configures the CPU for predictable latency at all times. Redpanda drives machines at around 90% utilization and still delivers predictable, low-latency results.
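To make this concrete, the CPU tuner can be applied on its own, and a similar effect can be achieved manually with the standard cpupower utility. A sketch (the tuner name follows current rpk releases; the governor setting requires cpupower to be installed):

```bash
# Run only the CPU tuner, which trades power-saving CPU states for a
# consistent, high-performance configuration.
sudo rpk redpanda tune cpu

# Roughly equivalent manual step: pin the frequency governor to "performance".
sudo cpupower frequency-set -g performance
```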
Memory
Redpanda prevents swapping, so its processes are never swapped out of memory. By design, Redpanda allocates nearly all available memory upfront, partitions the allocation across all cores, and pins the memory to the specified NUMA domain (a specific CPU socket). This ensures predictable memory allocation and predictable latency.
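A hedged sketch of the related knobs is shown below. The sysctl and tuner lower the kernel's inclination to swap, and the flag names in the comments follow the Seastar conventions Redpanda builds on; the values are placeholders, and in production rpk and systemd normally set them for you:

```bash
# Reduce the kernel's tendency to swap; rpk also ships a swappiness tuner.
sudo sysctl -w vm.swappiness=1
sudo rpk redpanda tune swappiness

# Illustrative Seastar-style startup flags that control the upfront allocation:
#   --smp 8             use 8 cores; memory is partitioned per core
#   --memory 24G        allocate 24 GiB up front
#   --lock-memory=true  pin the allocation so it can never be swapped out
```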