
arm64: Enable permission change on arm64 kernel block mappings

This patch paves the way for enabling huge mappings in vmalloc space and
linear map space by default on arm64. To do so, we must ensure that we
can handle any permission change on the kernel (init_mm) pagetable.
Previously, __change_memory_common() used apply_to_page_range() which
does not support changing permissions for block mappings. We move away
from this by using the pagewalk API, similar to what riscv does right
now. It is the responsibility of the caller to ensure that the range
over which permissions are being changed falls on leaf mapping
boundaries. For systems with BBML2, this will be handled in future
patches by dynamically splitting the mappings when required.
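
In outline, the walk installs one callback per pagetable level through
struct mm_walk_ops. A condensed sketch, taken from the pageattr.c hunk
in the diff below:

    /*
     * Condensed from the pageattr.c hunk below: block (section) entries
     * are rewritten in place at the PUD/PMD level; everything else falls
     * through to the pte-level callback.
     */
    static const struct mm_walk_ops pageattr_ops = {
            .pud_entry = pageattr_pud_entry,  /* PUD block mappings */
            .pmd_entry = pageattr_pmd_entry,  /* PMD block mappings */
            .pte_entry = pageattr_pte_entry,  /* page mappings */
    };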

Unlike apply_to_page_range(), the pagewalk API currently requires
init_mm.mmap_lock to be held. To avoid making the mmap_lock an
unnecessary bottleneck for our use case, this patch extends the generic
API so it can be used locklessly, while retaining the existing behaviour
for changing permissions. Performance aside, it is noted at [1] that
KFENCE can
manipulate kernel pgtable entries during softirqs. It does this by
calling set_memory_valid() -> __change_memory_common(). This being a
non-sleepable context, we cannot take the init_mm mmap lock.
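
For reference, the softirq path in question looks roughly like the
arm64 kfence_protect_page() helper below; this is a sketch based on
arch/arm64/include/asm/kfence.h, shown only to illustrate the calling
context:

    /* Sketch: KFENCE toggles the valid bit on a guard page, funnelling
     * into __change_memory_common(); this may run in softirq context,
     * so the path below it must not sleep. */
    static inline bool kfence_protect_page(unsigned long addr, bool protect)
    {
            set_memory_valid(addr, 1, !protect);
            return true;
    }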

Add comments to highlight the conditions under which we can use the
lockless variant - no underlying VMA, and the user having exclusive
control over the range, thus guaranteeing no concurrent access.
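
As a purely hypothetical usage sketch (the helper name and masks are
illustrative, not part of this patch), a caller with exclusive control
over a vmalloc object could make it read-only without taking
init_mm.mmap_lock:

    /* Hypothetical caller: start/size cover a vmalloc object that this
     * code alone owns, so the lockless walk cannot race with anyone. */
    static int make_range_ro(unsigned long start, unsigned long size)
    {
            struct page_change_data data = {
                    .set_mask   = __pgprot(PTE_RDONLY),
                    .clear_mask = __pgprot(PTE_WRITE),
            };

            return walk_kernel_page_table_range_lockless(start, start + size,
                                                         &pageattr_ops, NULL,
                                                         &data);
    }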

We require that the start and end of a given range do not partially
overlap block mappings, or cont mappings. Return -EINVAL in case a
partial block mapping is detected in any of the PGD/P4D/PUD/PMD levels;
add a corresponding comment in update_range_prot() to warn that
eliminating such a condition is the responsibility of the caller.
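
For illustration, a caller that knows the leaf mapping size of a region
could rule the partial-overlap case out up front; the helper below is
hypothetical:

    /* Hypothetical pre-check: both endpoints of the range must sit on
     * leaf mapping boundaries, otherwise the walk returns -EINVAL. */
    static bool range_on_leaf_boundaries(unsigned long start,
                                         unsigned long size,
                                         unsigned long leaf_size)
    {
            return IS_ALIGNED(start, leaf_size) &&
                   IS_ALIGNED(size, leaf_size);
    }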

Note that the pte-level callback may change permissions for a whole
contpte block, and that this is done one pte at a time, as opposed to
the single atomic operation used for block mappings. This is fine, as
any access will decode either the old or the new permission until the
TLBI.
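
This per-pte behaviour is visible in the pte-level callback from the
diff below, reproduced here with explanatory comments:

    /* From the pageattr.c hunk below: each invocation rewrites exactly
     * one pte, so a contpte span is covered by repeated calls rather
     * than a single atomic update. */
    static int pageattr_pte_entry(pte_t *pte, unsigned long addr,
                                  unsigned long next, struct mm_walk *walk)
    {
            pte_t val = __ptep_get(pte);

            val = __pte(set_pageattr_masks(pte_val(val), walk));
            __set_pte(pte, val);

            return 0;
    }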

apply_to_page_range() currently performs all pte level callbacks while
in lazy mmu mode. Since arm64 can optimise performance by batching
barriers when modifying kernel pgtables in lazy mmu mode, we would like
to continue to benefit from this optimisation. Unfortunately
walk_kernel_page_table_range() does not use lazy mmu mode. However,
since the pagewalk framework is not allocating any memory, we can safely
bracket the whole operation inside lazy mmu mode ourselves. Therefore,
wrap the call to walk_kernel_page_table_range() with the lazy MMU
helpers.
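
The resulting shape of update_range_prot() (see the pageattr.c hunk
below) is simply the walk bracketed by the lazy MMU helpers:

    /* From update_range_prot() in the diff below: enter lazy MMU mode
     * around the whole walk so barriers are batched; the pagewalk
     * framework allocates no memory, so the bracketing is safe. */
    arch_enter_lazy_mmu_mode();
    ret = walk_kernel_page_table_range_lockless(start, start + size,
                                                &pageattr_ops, NULL, &data);
    arch_leave_lazy_mmu_mode();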

Link: https://lore.kernel.org/linux-arm-kernel/89d0ad18-4772-4d8f-ae8a-7c48d26a927e@arm.com/ [1]
Signed-off-by: Dev Jain <dev.jain@arm.com>
Signed-off-by: Yang Shi <yshi@os.amperecomputing.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

Authored by Dev Jain and committed by Will Deacon (a660194d, bfbbb0d3)

3 files changed: +117 -45

arch/arm64/mm/pageattr.c (+90 -33)
···
 #include <linux/mem_encrypt.h>
 #include <linux/sched.h>
 #include <linux/vmalloc.h>
+#include <linux/pagewalk.h>

 #include <asm/cacheflush.h>
 #include <asm/pgtable-prot.h>
···
 struct page_change_data {
        pgprot_t set_mask;
        pgprot_t clear_mask;
+};
+
+static ptdesc_t set_pageattr_masks(ptdesc_t val, struct mm_walk *walk)
+{
+       struct page_change_data *masks = walk->private;
+
+       val &= ~(pgprot_val(masks->clear_mask));
+       val |= (pgprot_val(masks->set_mask));
+
+       return val;
+}
+
+static int pageattr_pud_entry(pud_t *pud, unsigned long addr,
+                             unsigned long next, struct mm_walk *walk)
+{
+       pud_t val = pudp_get(pud);
+
+       if (pud_sect(val)) {
+               if (WARN_ON_ONCE((next - addr) != PUD_SIZE))
+                       return -EINVAL;
+               val = __pud(set_pageattr_masks(pud_val(val), walk));
+               set_pud(pud, val);
+               walk->action = ACTION_CONTINUE;
+       }
+
+       return 0;
+}
+
+static int pageattr_pmd_entry(pmd_t *pmd, unsigned long addr,
+                             unsigned long next, struct mm_walk *walk)
+{
+       pmd_t val = pmdp_get(pmd);
+
+       if (pmd_sect(val)) {
+               if (WARN_ON_ONCE((next - addr) != PMD_SIZE))
+                       return -EINVAL;
+               val = __pmd(set_pageattr_masks(pmd_val(val), walk));
+               set_pmd(pmd, val);
+               walk->action = ACTION_CONTINUE;
+       }
+
+       return 0;
+}
+
+static int pageattr_pte_entry(pte_t *pte, unsigned long addr,
+                             unsigned long next, struct mm_walk *walk)
+{
+       pte_t val = __ptep_get(pte);
+
+       val = __pte(set_pageattr_masks(pte_val(val), walk));
+       __set_pte(pte, val);
+
+       return 0;
+}
+
+static const struct mm_walk_ops pageattr_ops = {
+       .pud_entry      = pageattr_pud_entry,
+       .pmd_entry      = pageattr_pmd_entry,
+       .pte_entry      = pageattr_pte_entry,
 };

 bool rodata_full __ro_after_init = true;
···
        arm64_kfence_can_set_direct_map() || is_realm_world();
 }

-static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
-{
-       struct page_change_data *cdata = data;
-       pte_t pte = __ptep_get(ptep);
-
-       pte = clear_pte_bit(pte, cdata->clear_mask);
-       pte = set_pte_bit(pte, cdata->set_mask);
-
-       __set_pte(ptep, pte);
-       return 0;
-}
-
-/*
- * This function assumes that the range is mapped with PAGE_SIZE pages.
- */
-static int __change_memory_common(unsigned long start, unsigned long size,
-                                 pgprot_t set_mask, pgprot_t clear_mask)
+static int update_range_prot(unsigned long start, unsigned long size,
+                            pgprot_t set_mask, pgprot_t clear_mask)
 {
        struct page_change_data data;
        int ret;
···
        data.set_mask = set_mask;
        data.clear_mask = clear_mask;

-       ret = apply_to_page_range(&init_mm, start, size, change_page_range,
-                                 &data);
+       arch_enter_lazy_mmu_mode();
+
+       /*
+        * The caller must ensure that the range we are operating on does not
+        * partially overlap a block mapping, or a cont mapping. Any such case
+        * must be eliminated by splitting the mapping.
+        */
+       ret = walk_kernel_page_table_range_lockless(start, start + size,
+                                                   &pageattr_ops, NULL, &data);
+       arch_leave_lazy_mmu_mode();
+
+       return ret;
+}
+
+static int __change_memory_common(unsigned long start, unsigned long size,
+                                 pgprot_t set_mask, pgprot_t clear_mask)
+{
+       int ret;
+
+       ret = update_range_prot(start, size, set_mask, clear_mask);

        /*
         * If the memory is being made valid without changing any other bits
···
 int set_direct_map_invalid_noflush(struct page *page)
 {
-       struct page_change_data data = {
-               .set_mask = __pgprot(0),
-               .clear_mask = __pgprot(PTE_VALID),
-       };
+       pgprot_t clear_mask = __pgprot(PTE_VALID);
+       pgprot_t set_mask = __pgprot(0);

        if (!can_set_direct_map())
                return 0;

-       return apply_to_page_range(&init_mm,
-                                  (unsigned long)page_address(page),
-                                  PAGE_SIZE, change_page_range, &data);
+       return update_range_prot((unsigned long)page_address(page),
+                                PAGE_SIZE, set_mask, clear_mask);
 }

 int set_direct_map_default_noflush(struct page *page)
 {
-       struct page_change_data data = {
-               .set_mask = __pgprot(PTE_VALID | PTE_WRITE),
-               .clear_mask = __pgprot(PTE_RDONLY),
-       };
+       pgprot_t set_mask = __pgprot(PTE_VALID | PTE_WRITE);
+       pgprot_t clear_mask = __pgprot(PTE_RDONLY);

        if (!can_set_direct_map())
                return 0;

-       return apply_to_page_range(&init_mm,
-                                  (unsigned long)page_address(page),
-                                  PAGE_SIZE, change_page_range, &data);
+       return update_range_prot((unsigned long)page_address(page),
+                                PAGE_SIZE, set_mask, clear_mask);
 }

 static int __set_memory_enc_dec(unsigned long addr,
include/linux/pagewalk.h (+3 -0)
···
 int walk_kernel_page_table_range(unsigned long start,
                unsigned long end, const struct mm_walk_ops *ops,
                pgd_t *pgd, void *private);
+int walk_kernel_page_table_range_lockless(unsigned long start,
+               unsigned long end, const struct mm_walk_ops *ops,
+               pgd_t *pgd, void *private);
 int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
                unsigned long end, const struct mm_walk_ops *ops,
                void *private);
mm/pagewalk.c (+24 -12)
···
 int walk_kernel_page_table_range(unsigned long start, unsigned long end,
                const struct mm_walk_ops *ops, pgd_t *pgd, void *private)
 {
-       struct mm_struct *mm = &init_mm;
+       /*
+        * Kernel intermediate page tables are usually not freed, so the mmap
+        * read lock is sufficient. But there are some exceptions.
+        * E.g. memory hot-remove. In which case, the mmap lock is insufficient
+        * to prevent the intermediate kernel pages tables belonging to the
+        * specified address range from being freed. The caller should take
+        * other actions to prevent this race.
+        */
+       mmap_assert_locked(&init_mm);
+
+       return walk_kernel_page_table_range_lockless(start, end, ops, pgd,
+                                                    private);
+}
+
+/*
+ * Use this function to walk the kernel page tables locklessly. It should be
+ * guaranteed that the caller has exclusive access over the range they are
+ * operating on - that there should be no concurrent access, for example,
+ * changing permissions for vmalloc objects.
+ */
+int walk_kernel_page_table_range_lockless(unsigned long start, unsigned long end,
+               const struct mm_walk_ops *ops, pgd_t *pgd, void *private)
+{
        struct mm_walk walk = {
                .ops            = ops,
-               .mm             = mm,
+               .mm             = &init_mm,
                .pgd            = pgd,
                .private        = private,
                .no_vma         = true
···
                return -EINVAL;
        if (!check_ops_valid(ops))
                return -EINVAL;
-
-       /*
-        * Kernel intermediate page tables are usually not freed, so the mmap
-        * read lock is sufficient. But there are some exceptions.
-        * E.g. memory hot-remove. In which case, the mmap lock is insufficient
-        * to prevent the intermediate kernel pages tables belonging to the
-        * specified address range from being freed. The caller should take
-        * other actions to prevent this race.
-        */
-       mmap_assert_locked(mm);

        return walk_pgd_range(start, end, &walk);
 }