Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

RDMA/rw: Fix MR pool exhaustion in bvec RDMA READ path

When IOVA-based DMA mapping is unavailable (e.g., IOMMU
passthrough mode), rdma_rw_ctx_init_bvec() falls back to
checking rdma_rw_io_needs_mr() with the raw bvec count.
Unlike the scatterlist path in rdma_rw_ctx_init(), which
passes a post-DMA-mapping entry count that reflects
coalescing of physically contiguous pages, the bvec path
passes the pre-mapping page count. This overstates the
number of DMA entries, causing every multi-bvec RDMA READ
to consume an MR from the QP's pool.

Under NFS WRITE workloads the server performs RDMA READs
to pull data from the client. With the inflated MR demand,
the pool is rapidly exhausted, ib_mr_pool_get() returns
NULL, and rdma_rw_init_one_mr() returns -EAGAIN. svcrdma
treats this as a DMA mapping failure, closes the connection,
and the client reconnects -- producing a cycle of 71% RPC
retransmissions and ~100 reconnections per test run. RDMA
WRITEs (NFS READ direction) are unaffected because
DMA_TO_DEVICE never triggers the max_sgl_rd check.

Remove the rdma_rw_io_needs_mr() gate from the bvec path
entirely, so that bvec RDMA operations always use the
map_wrs path (direct WR posting without MR allocation).
The bvec caller has no post-DMA-coalescing segment count
available -- xdr_buf and svc_rqst hold pages as individual
pointers, and physical contiguity is discovered only during
DMA mapping -- so the raw page count cannot serve as a
reliable input to rdma_rw_io_needs_mr(). iWARP devices,
which require MRs unconditionally, are handled by an
earlier check in rdma_rw_ctx_init_bvec() and are unaffected.

Fixes: bea28ac14cab ("RDMA/core: add MR support for bvec-based RDMA operations")
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://patch.msgid.link/20260313194201.5818-3-cel@kernel.org
Signed-off-by: Leon Romanovsky <leon@kernel.org>

Authored by Chuck Lever, committed by Leon Romanovsky
f28599f3 00da250c

+9 -7
drivers/infiniband/core/rw.c
@@ -701,14 +701,16 @@
 		return ret;
 
 	/*
-	 * IOVA mapping not available. Check if MR registration provides
-	 * better performance than multiple SGE entries.
+	 * IOVA not available; fall back to the map_wrs path, which maps
+	 * each bvec as a direct SGE. This is always correct: the MR path
+	 * is a throughput optimization, not a correctness requirement.
+	 * (iWARP, which does require MRs, is handled by the check above.)
+	 *
+	 * The rdma_rw_io_needs_mr() gate is not used here because nr_bvec
+	 * is a raw page count that overstates DMA entry demand -- the bvec
+	 * caller has no post-DMA-coalescing segment count, and feeding the
+	 * inflated count into the MR path exhausts the pool on RDMA READs.
 	 */
-	if (rdma_rw_io_needs_mr(dev, port_num, dir, nr_bvec))
-		return rdma_rw_init_mr_wrs_bvec(ctx, qp, port_num, bvecs,
-						nr_bvec, &iter, remote_addr,
-						rkey, dir);
-
 	return rdma_rw_init_map_wrs_bvec(ctx, qp, bvecs, nr_bvec, &iter,
 					 remote_addr, rkey, dir);
 }