Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

nvmet-tcp: fix race between ICReq handling and queue teardown

nvmet_tcp_handle_icreq() updates queue->state after sending an
Initialization Connection Response (ICResp), but it does so without
serializing against target-side queue teardown.

If an NVMe/TCP host sends an Initialization Connection Request
(ICReq) and immediately closes the connection, target-side teardown
may start in softirq context before io_work drains the already
buffered ICReq. In that case, nvmet_tcp_schedule_release_queue()
sets queue->state to NVMET_TCP_Q_DISCONNECTING and drops the queue
reference under state_lock.

If io_work later processes that ICReq, nvmet_tcp_handle_icreq() can
still overwrite the state back to NVMET_TCP_Q_LIVE. That defeats the
DISCONNECTING-state guard in nvmet_tcp_schedule_release_queue() and
allows a later socket state change to re-enter teardown and issue a
second kref_put() on an already released queue.

The ICResp send failure path has the same problem. If teardown has
already moved the queue to DISCONNECTING, a send error can still
overwrite the state with NVMET_TCP_Q_FAILED, again reopening the
window for a second teardown path to drop the queue reference.

Fix this by serializing both post-send state transitions with
state_lock and bailing out if teardown has already started.

Use -ESHUTDOWN as an internal sentinel for that bail-out path rather
than propagating it as a transport error like -ECONNRESET. Keep
nvmet_tcp_socket_error() setting rcv_state to NVMET_TCP_RECV_ERR before
honoring that sentinel so receive-side parsing stays quiesced until the
existing release path completes.

Fixes: c46a6465bac2 ("nvmet-tcp: add NVMe over TCP target driver")
Cc: stable@vger.kernel.org
Reported-by: Shivam Kumar <skumar47@syr.edu>
Tested-by: Shivam Kumar <kumar.shivam43666@gmail.com>
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>

Authored by Chaitanya Kulkarni, committed by Keith Busch (5293a888, bad44c9c)

drivers/nvme/target/tcp.c: +26 lines
```diff
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ ... @@
 static void nvmet_tcp_socket_error(struct nvmet_tcp_queue *queue, int status)
 {
+	/*
+	 * Keep rcv_state at RECV_ERR even for the internal -ESHUTDOWN path.
+	 * nvmet_tcp_handle_icreq() can return -ESHUTDOWN after the ICReq has
+	 * already been consumed and queue teardown has started.
+	 *
+	 * If nvmet_tcp_data_ready() or nvmet_tcp_write_space() queues
+	 * nvmet_tcp_io_work() again before nvmet_tcp_release_queue_work()
+	 * cancels it, the queue must not keep that old receive state.
+	 * Otherwise the next nvmet_tcp_io_work() run can reach
+	 * nvmet_tcp_done_recv_pdu() and try to handle the same ICReq again.
+	 *
+	 * That is why queue->rcv_state needs to be updated before we return.
+	 */
 	queue->rcv_state = NVMET_TCP_RECV_ERR;
 	if (status == -EPIPE || status == -ECONNRESET || !queue->nvme_sq.ctrl)
 		kernel_sock_shutdown(queue->sock, SHUT_RDWR);
@@ ... @@
 	iov.iov_len = sizeof(*icresp);
 	ret = kernel_sendmsg(queue->sock, &msg, &iov, 1, iov.iov_len);
 	if (ret < 0) {
+		spin_lock_bh(&queue->state_lock);
+		if (queue->state == NVMET_TCP_Q_DISCONNECTING) {
+			spin_unlock_bh(&queue->state_lock);
+			return -ESHUTDOWN;
+		}
 		queue->state = NVMET_TCP_Q_FAILED;
+		spin_unlock_bh(&queue->state_lock);
 		return ret; /* queue removal will cleanup */
 	}
 
+	spin_lock_bh(&queue->state_lock);
+	if (queue->state == NVMET_TCP_Q_DISCONNECTING) {
+		spin_unlock_bh(&queue->state_lock);
+		/* Tell nvmet_tcp_socket_error() teardown is in progress. */
+		return -ESHUTDOWN;
+	}
 	queue->state = NVMET_TCP_Q_LIVE;
+	spin_unlock_bh(&queue->state_lock);
 	nvmet_prepare_receive_pdu(queue);
 	return 0;
 }
```