Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

drm/shmem_helper: Make sure PMD entries get the writeable upgrade

Unlike PTEs which are automatically upgraded to writeable entries if
.pfn_mkwrite() returns 0, the PMD upgrades go through .huge_fault(),
and we currently pretend to have handled the make-writeable request
even though we only ever map things read-only. Make sure we pass the
proper "write" info to vmf_insert_pfn_pmd() in that case.

This also means we have to record the mkwrite event in the .huge_fault()
path now. Move the dirty tracking logic to a
drm_gem_shmem_record_mkwrite() helper so it can also be called from
drm_gem_shmem_pfn_mkwrite().

Note that this wasn't a problem before commit 28e3918179aa
("drm/gem-shmem: Track folio accessed/dirty status in mmap"), because
the pgprot was not lowered to read-only before this commit (see the
vma_wants_writenotify() call in vma_set_page_prot()).

Fixes: 28e3918179aa ("drm/gem-shmem: Track folio accessed/dirty status in mmap")
Cc: Biju Das <biju.das.jz@bp.renesas.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>
Reviewed-by: Loïc Molinari <loic.molinari@collabora.com>
Tested-by: Biju Das <biju.das.jz@bp.renesas.com>
Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Tested-by: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>
Link: https://patch.msgid.link/20260320151914.586945-1-boris.brezillon@collabora.com
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>

+32 -14
drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -554,6 +554,21 @@
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
 
+static void drm_gem_shmem_record_mkwrite(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct drm_gem_object *obj = vma->vm_private_data;
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+	loff_t num_pages = obj->size >> PAGE_SHIFT;
+	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
+
+	if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
+		return;
+
+	file_update_time(vma->vm_file);
+	folio_mark_dirty(page_folio(shmem->pages[page_offset]));
+}
+
 static vm_fault_t try_insert_pfn(struct vm_fault *vmf, unsigned int order,
 				 unsigned long pfn)
 {
@@ -581,8 +566,23 @@
 
 	if (aligned &&
 	    folio_test_pmd_mappable(page_folio(pfn_to_page(pfn)))) {
+		vm_fault_t ret;
+
 		pfn &= PMD_MASK >> PAGE_SHIFT;
-		return vmf_insert_pfn_pmd(vmf, pfn, false);
+
+		/* Unlike PTEs which are automatically upgraded to
+		 * writeable entries, the PMD upgrades go through
+		 * .huge_fault(). Make sure we pass the "write" info
+		 * along in that case.
+		 * This also means we have to record the write fault
+		 * here, instead of in .pfn_mkwrite().
+		 */
+		ret = vmf_insert_pfn_pmd(vmf, pfn,
+					 vmf->flags & FAULT_FLAG_WRITE);
+		if (ret == VM_FAULT_NOPAGE && (vmf->flags & FAULT_FLAG_WRITE))
+			drm_gem_shmem_record_mkwrite(vmf);
+
+		return ret;
 	}
 #endif
 }
@@ -685,19 +655,7 @@
 
 static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
-	struct drm_gem_object *obj = vma->vm_private_data;
-	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	loff_t num_pages = obj->size >> PAGE_SHIFT;
-	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
-
-	if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
-		return VM_FAULT_SIGBUS;
-
-	file_update_time(vma->vm_file);
-
-	folio_mark_dirty(page_folio(shmem->pages[page_offset]));
-
+	drm_gem_shmem_record_mkwrite(vmf);
 	return 0;
 }