Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

docs, nvme: introduce nvme-multipath document

This adds a document about nvme-multipath and the policies supported
by the Linux NVMe host driver, along with the scenario each policy suits best.

Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
Link: https://lore.kernel.org/r/20241209071127.22922-1-kanie@linux.alibaba.com

Authored by Guixin Liu, committed by Jonathan Corbet
80568f47 dfddf353

73 lines added

Documentation/admin-guide/index.rst (+1)

      vga-softcursor
      video-output
      xfs
    + nvme-multipath

      .. only:: subproject and html
Documentation/admin-guide/nvme-multipath.rst (new file, +72)

.. SPDX-License-Identifier: GPL-2.0

====================
Linux NVMe multipath
====================

This document describes NVMe multipath and the path selection policies
supported by the Linux NVMe host driver.


Introduction
============

The NVMe multipath feature in Linux integrates namespaces with the same
identifier into a single block device. Using multipath enhances the reliability
and stability of I/O access while improving bandwidth performance. When a user
sends I/O to this merged block device, the multipath mechanism selects one of
the underlying block devices (paths) according to the configured policy.
Different policies result in different path selections.


Policies
========

All policies follow the ANA (Asymmetric Namespace Access) mechanism, meaning
that when an optimized path is available, it will be chosen over a
non-optimized one. The currently supported NVMe multipath policies are numa
(default), round-robin, and queue-depth.

To set the desired policy (e.g., round-robin), use one of the following
methods:

1. echo -n "round-robin" > /sys/module/nvme_core/parameters/iopolicy
2. or add "nvme_core.iopolicy=round-robin" to the kernel command line.


NUMA
----

The NUMA policy selects the path closest to the NUMA node of the current CPU
for I/O distribution. This policy maintains the nearest paths to each NUMA
node based on network interface connections.

When to use the NUMA policy:

1. Multi-core systems: Optimizes memory access in multi-core and
   multi-processor systems, especially under NUMA architecture.
2. High-affinity workloads: Binds I/O processing to the CPU to reduce
   communication and data transfer delays across nodes.
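The two configuration methods above can be wrapped in a small script. This is a sketch only: the sysfs path and the three policy names come from the document, while the validation logic and the fallback message are illustrative additions; writing the sysfs file requires root and a loaded nvme_core module.

```shell
# Sketch: validate and apply an NVMe multipath I/O policy.
# Sysfs path and policy names are from the document above; the
# validation and fallback behavior are illustrative, not authoritative.
POLICY="${1:-round-robin}"
case "$POLICY" in
    numa|round-robin|queue-depth) ;;
    *) echo "invalid policy: $POLICY" >&2; exit 1 ;;
esac

SYSFS=/sys/module/nvme_core/parameters/iopolicy
if [ -w "$SYSFS" ]; then
    # The parameter expects no trailing newline, hence printf '%s'.
    printf '%s' "$POLICY" > "$SYSFS"
    echo "iopolicy set to $(cat "$SYSFS")"
else
    echo "cannot write $SYSFS (need root and nvme_core loaded); requested: $POLICY"
fi
```

The kernel command line method (method 2) takes effect at boot instead, which is preferable when the policy must apply before any I/O is issued.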

Round-Robin
-----------

The round-robin policy distributes I/O requests evenly across all paths to
enhance throughput and resource utilization. Each I/O operation is sent to the
next path in sequence.

When to use the round-robin policy:

1. Balanced workloads: Effective for balanced and predictable workloads with
   similar I/O size and type.
2. Homogeneous path performance: Utilizes all paths efficiently when
   performance characteristics (e.g., latency, bandwidth) are similar.


Queue-Depth
-----------

The queue-depth policy manages I/O requests based on the current queue depth
of each path, selecting the path with the least number of in-flight I/Os.

When to use the queue-depth policy:

1. High load with small I/Os: Effectively balances load across paths when
   the load is high and I/O operations consist of small, relatively
   fixed-sized requests.
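The two dynamic policies can be illustrated with a small user-space model of the selection logic as described above. This is purely a sketch, not kernel code; the path names and in-flight counts are invented for the example.

```shell
# Illustrative model of round-robin and queue-depth selection.
# Path names (nvme0c0n1, nvme0c1n1) and depths are hypothetical.

# Round-robin: I/Os cycle through the paths in order.
set -- nvme0c0n1 nvme0c1n1
NPATHS=$#
i=0
while [ $i -lt 4 ]; do
    idx=$(( i % NPATHS + 1 ))
    eval "p=\${$idx}"        # pick the idx-th positional parameter
    echo "I/O $i -> $p"
    i=$(( i + 1 ))
done

# Queue-depth: choose the path with the fewest in-flight I/Os.
best=""; best_depth=1000000
for entry in nvme0c0n1:8 nvme0c1n1:2; do
    path=${entry%:*}; depth=${entry#*:}
    if [ "$depth" -lt "$best_depth" ]; then
        best=$path; best_depth=$depth
    fi
done
echo "queue-depth selects $best (in-flight=$best_depth)"
```

Round-robin alternates regardless of load, while queue-depth adapts when one path backs up, which matches the "high load with small I/Os" guidance above.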