Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

io_uring: take page references for NOMMU pbuf_ring mmaps

Under !CONFIG_MMU, io_uring_get_unmapped_area() returns the kernel
virtual address of the io_mapped_region's backing pages directly;
the user's VMA aliases the kernel allocation. io_uring_mmap() then
just returns 0 -- it takes no page references.

The CONFIG_MMU path uses vm_insert_pages(), which takes a reference on
each inserted page. Those references are released when the VMA is torn
down (zap_pte_range -> put_page). io_free_region() -> release_pages()
drops the io_uring-side references, but the pages survive until munmap
drops the VMA-side references.

Under NOMMU there are no VMA-side references. io_unregister_pbuf_ring ->
io_put_bl -> io_free_region -> release_pages drops the only references
and the pages return to the buddy allocator while the user's VMA still
has vm_start pointing into them. The user can then write into whatever
the allocator hands out next.

Mirror the MMU lifetime: take get_page references in io_uring_mmap() and
release them via vm_ops->close. NOMMU's delete_vma() calls vma_close()
which runs ->close on munmap.

This also incidentally addresses the duplicate-vm_start case: two mmaps
of SQ_RING and CQ_RING resolve to the same ctx->ring_region pointer.
With page refs taken per mmap, the second mmap takes its own refs and
the pages survive until both mmaps are closed. The nommu rb-tree BUG_ON
on duplicate vm_start is a separate mm/nommu.c concern (it should share
the existing region rather than BUG), but the page lifetime is now
correct.

Cc: Jens Axboe <axboe@kernel.dk>
Reported-by: Anthropic
Assisted-by: gkh_clanker_t1000
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Link: https://patch.msgid.link/2026042115-body-attention-d15b@gregkh
[axboe: get rid of region lookup, just iterate pages in vma]
Signed-off-by: Jens Axboe <axboe@kernel.dk>

authored by Greg Kroah-Hartman and committed by Jens Axboe
d0be8884 1967f0b1

+45 -1
io_uring/memmap.c
@@ -366,9 +366,53 @@
 
 #else /* !CONFIG_MMU */
 
+/*
+ * Drop the pages that were initially referenced and added in
+ * io_uring_mmap(). We cannot have had a mremap() as that isn't supported,
+ * hence the vma should be identical to the one we initially referenced and
+ * mapped, and partial unmaps and splitting isn't possible on a file backed
+ * mapping.
+ */
+static void io_uring_nommu_vm_close(struct vm_area_struct *vma)
+{
+	unsigned long index;
+
+	for (index = vma->vm_start; index < vma->vm_end; index += PAGE_SIZE)
+		put_page(virt_to_page((void *) index));
+}
+
+static const struct vm_operations_struct io_uring_nommu_vm_ops = {
+	.close = io_uring_nommu_vm_close,
+};
+
 int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
 {
-	return is_nommu_shared_mapping(vma->vm_flags) ? 0 : -EINVAL;
+	struct io_ring_ctx *ctx = file->private_data;
+	struct io_mapped_region *region;
+	unsigned long i;
+
+	if (!is_nommu_shared_mapping(vma->vm_flags))
+		return -EINVAL;
+
+	guard(mutex)(&ctx->mmap_lock);
+	region = io_mmap_get_region(ctx, vma->vm_pgoff);
+	if (!region || !io_region_is_set(region))
+		return -EINVAL;
+
+	if ((vma->vm_end - vma->vm_start) !=
+	    (unsigned long) region->nr_pages << PAGE_SHIFT)
+		return -EINVAL;
+
+	/*
+	 * Pin the pages so io_free_region()'s release_pages() does not
+	 * drop the last reference while this VMA exists. delete_vma()
+	 * in mm/nommu.c calls vma_close() which runs ->close above.
+	 */
+	for (i = 0; i < region->nr_pages; i++)
+		get_page(region->pages[i]);
+
+	vma->vm_ops = &io_uring_nommu_vm_ops;
+	return 0;
 }
 
 unsigned int io_uring_nommu_mmap_capabilities(struct file *file)