Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

perf/core: Fix refcount bug and potential UAF in perf_mmap

Syzkaller reported a refcount_t: addition on 0; use-after-free warning
in perf_mmap.

The issue is caused by a race condition between a failing mmap() setup
and a concurrent mmap() on a dependent event (e.g., using output
redirection).

In perf_mmap(), the ring_buffer (rb) is allocated and assigned to
event->rb with the mmap_mutex held. The mutex is then released to
perform map_range().

If map_range() fails, perf_mmap_close() is called to clean up.
However, since the mutex was dropped, another thread attaching to
this event (via inherited events or output redirection) can acquire
the mutex, observe the valid event->rb pointer, and attempt to
increment its reference count. If the cleanup path has already
dropped the reference count to zero, this results in a
use-after-free or refcount saturation warning.

Fix this by extending the scope of mmap_mutex to cover the
map_range() call. This ensures that ring buffer initialization
and mapping (or cleanup on failure) happen effectively atomically,
preventing other threads from accessing a half-initialized or
dying ring buffer.
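The lock-scope pattern behind the fix can be illustrated with a minimal user-space sketch. This is not the actual kernel code: `perf_mmap_sketch`, `attach_sketch`, and `map_range_stub` are hypothetical stand-ins, a plain `int` models the refcount, and a pthread mutex stands in for mmap_mutex. The point is that the mutex is held from the moment event->rb is published until map_range() has either succeeded or been cleaned up, so an attacher can never see a ring buffer whose refcount has already hit zero.

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical stand-ins for the ring buffer and the perf event. */
struct ring_buffer { int refcount; };
struct event {
    pthread_mutex_t mmap_mutex;
    struct ring_buffer *rb;
};

/* Simulates map_range(); 'should_fail' models the error path. */
static int map_range_stub(struct ring_buffer *rb, int should_fail)
{
    (void)rb;
    return should_fail ? -1 : 0;
}

/*
 * Fixed pattern: the mutex stays held from the rb assignment through
 * the fallible mapping step, so a concurrent attacher can never
 * observe event->rb while the failure path is tearing it down.
 */
static int perf_mmap_sketch(struct event *ev, struct ring_buffer *rb,
                            int should_fail)
{
    int ret;

    pthread_mutex_lock(&ev->mmap_mutex);
    ev->rb = rb;
    rb->refcount = 1;

    ret = map_range_stub(rb, should_fail);
    if (ret) {
        /* Cleanup happens while the lock is still held. */
        rb->refcount = 0;
        ev->rb = NULL;
    }
    pthread_mutex_unlock(&ev->mmap_mutex);
    return ret;
}

/* A concurrent attacher: only takes a reference on a published rb. */
static int attach_sketch(struct event *ev)
{
    int ok = 0;

    pthread_mutex_lock(&ev->mmap_mutex);
    if (ev->rb) {
        ev->rb->refcount++;   /* never increments from zero here */
        ok = 1;
    }
    pthread_mutex_unlock(&ev->mmap_mutex);
    return ok;
}
```

In the buggy version, the unlock sat between the rb assignment and the mapping step, leaving a window in which `attach_sketch` could increment a refcount the failure path had already dropped to zero.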

Closes: https://lore.kernel.org/oe-kbuild-all/202602020208.m7KIjdzW-lkp@intel.com/
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Haocheng Yu <yuhaocheng035@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://patch.msgid.link/20260202162057.7237-1-yuhaocheng035@gmail.com

authored by Haocheng Yu, committed by Peter Zijlstra
77de62ad 6a8a4864

+21 -21
kernel/events/core.c
 		ret = perf_mmap_aux(vma, event, nr_pages);
 		if (ret)
 			return ret;
+
+		/*
+		 * Since pinned accounting is per vm we cannot allow fork() to copy our
+		 * vma.
+		 */
+		vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+		vma->vm_ops = &perf_mmap_vmops;
+
+		mapped = get_mapped(event, event_mapped);
+		if (mapped)
+			mapped(event, vma->vm_mm);
+
+		/*
+		 * Try to map it into the page table. On fail, invoke
+		 * perf_mmap_close() to undo the above, as the callsite expects
+		 * full cleanup in this case and therefore does not invoke
+		 * vmops::close().
+		 */
+		ret = map_range(event->rb, vma);
+		if (ret)
+			perf_mmap_close(vma);
 	}
-
-	/*
-	 * Since pinned accounting is per vm we cannot allow fork() to copy our
-	 * vma.
-	 */
-	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &perf_mmap_vmops;
-
-	mapped = get_mapped(event, event_mapped);
-	if (mapped)
-		mapped(event, vma->vm_mm);
-
-	/*
-	 * Try to map it into the page table. On fail, invoke
-	 * perf_mmap_close() to undo the above, as the callsite expects
-	 * full cleanup in this case and therefore does not invoke
-	 * vmops::close().
-	 */
-	ret = map_range(event->rb, vma);
-	if (ret)
-		perf_mmap_close(vma);
 
 	return ret;
 }