Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
drm/xe/uapi: Reject coh_none PAT index for CPU cached memory in madvise

Add validation in xe_vm_madvise_ioctl() to reject PAT indices with
XE_COH_NONE coherency mode when applied to CPU cached memory.

Using coh_none with CPU cached buffers is a security issue. When the
kernel clears pages before reallocation, the cleared data can remain
dirty in the CPU cache without being written back to DRAM. A GPU using
coh_none bypasses the CPU caches and reads directly from DRAM, so it
can observe stale sensitive data from previously freed pages of other
processes.

This aligns with the existing validation in vm_bind path
(xe_vm_bind_ioctl_validate_bo).

v2 (Matthew Brost)
- Add Fixes tag
- Move one debug print to a better place

v3 (Matthew Auld)
- Should be drm/xe/uapi
- Add more Cc entries

v4 (Shuicheng Lin)
- Also fix kmem leak issues

v5
- Drop the kmem leak fix since it was merged via another patch

v6
- Drop the fix that is unrelated to the current change

v7
- No change

v8
- Rebase

v9
- Limit the restrictions to iGPU

v10
- No change

Fixes: ada7486c5668 ("drm/xe: Implement madvise ioctl for xe")
Cc: <stable@vger.kernel.org> # v6.18+
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Cc: Mathew Alwin <alwin.mathew@intel.com>
Cc: Michal Mrozek <michal.mrozek@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Jia Yao <jia.yao@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Acked-by: Michal Mrozek <michal.mrozek@intel.com>
Acked-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patch.msgid.link/20260417055917.2027459-2-jia.yao@intel.com
(cherry picked from commit 016ccdb674b8c899940b3944952c96a6a490d10a)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

drivers/gpu/drm/xe/xe_vm_madvise.c
···
 	return 0;
 }
 
+static bool check_pat_args_are_sane(struct xe_device *xe,
+				    struct xe_vmas_in_madvise_range *madvise_range,
+				    u16 pat_index)
+{
+	u16 coh_mode = xe_pat_index_get_coh_mode(xe, pat_index);
+	int i;
+
+	/*
+	 * Using coh_none with CPU cached buffers is not allowed on iGPU.
+	 * On iGPU the GPU shares the LLC with the CPU, so with coh_none
+	 * the GPU bypasses CPU caches and reads directly from DRAM,
+	 * potentially seeing stale sensitive data from previously freed
+	 * pages. On dGPU this restriction does not apply, because the
+	 * platform does not provide a non-coherent system memory access
+	 * path that would violate the DMA coherency contract.
+	 */
+	if (coh_mode != XE_COH_NONE || IS_DGFX(xe))
+		return true;
+
+	for (i = 0; i < madvise_range->num_vmas; i++) {
+		struct xe_vma *vma = madvise_range->vmas[i];
+		struct xe_bo *bo = xe_vma_bo(vma);
+
+		if (bo) {
+			/* BO with WB caching + COH_NONE is not allowed */
+			if (XE_IOCTL_DBG(xe, bo->cpu_caching == DRM_XE_GEM_CPU_CACHING_WB))
+				return false;
+			/* Imported dma-buf without caching info, assume cached */
+			if (XE_IOCTL_DBG(xe, !bo->cpu_caching))
+				return false;
+		} else if (XE_IOCTL_DBG(xe, xe_vma_is_cpu_addr_mirror(vma) ||
+					    xe_vma_is_userptr(vma)))
+			/* System memory (userptr/SVM) is always CPU cached */
+			return false;
+	}
+
+	return true;
+}
+
 static bool check_bo_args_are_sane(struct xe_vm *vm, struct xe_vma **vmas,
 				   int num_vmas, u32 atomic_val)
 {
···
 			    (pat_index != 19 && coh_mode != XE_COH_2WAY))) {
 				err = -EINVAL;
 				goto madv_fini;
+			}
+		}
+
+		if (args->type == DRM_XE_MEM_RANGE_ATTR_PAT) {
+			if (!check_pat_args_are_sane(xe, &madvise_range,
+						     args->pat_index.val)) {
+				err = -EINVAL;
+				goto free_vmas;
 			}
 		}
 