Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: vmalloc: group declarations depending on CONFIG_MMU together

Patch series "x86/module: use large ROX pages for text allocations", v7.

These patches add support for using large ROX pages for allocations of
executable memory on x86.

They address Andy's comments [1] about having executable mappings for code
that was not completely formed.

The approach taken is to allocate ROX memory along with writable but not
executable memory and use the writable copy to perform relocations and
alternatives patching. After the module text gets into its final shape,
the contents of the writable memory are copied into the actual ROX location
using text poking.

The allocations of the ROX memory use vmalloc() with VM_ALLOW_HUGE_VMAP to
allocate PMD aligned memory, fill that memory with invalid instructions
and in the end remap it as ROX. Portions of these large pages are handed
out to execmem_alloc() callers without any changes to the permissions.
When the memory is freed with execmem_free() it is invalidated again so
that it won't contain stale instructions.

The module memory allocation and the x86 code that deals with relocations
and alternatives patching take into account the existence of the two
copies: the writable memory and the ROX memory at the actual allocated
virtual address.

[1] https://lore.kernel.org/all/a17c65c6-863f-4026-9c6f-a04b659e9ab4@app.fastmail.com


This patch (of 8):

A couple of declarations in include/linux/vmalloc.h depend on CONFIG_MMU
but are spread all over the file.

Group them all together to improve code readability.

No functional changes.

Link: https://lkml.kernel.org/r/20241023162711.2579610-1-rppt@kernel.org
Link: https://lkml.kernel.org/r/20241023162711.2579610-2-rppt@kernel.org
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Tested-by: kdevops <kdevops@lists.linux.dev>
Cc: Andreas Larsson <andreas@gaisler.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov (AMD) <bp@alien8.de>
Cc: Brian Cain <bcain@quicinc.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Dinh Nguyen <dinguyen@kernel.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Guo Ren <guoren@kernel.org>
Cc: Helge Deller <deller@gmx.de>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Song Liu <song@kernel.org>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Steven Rostedt (Google) <rostedt@goodmis.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vineet Gupta <vgupta@kernel.org>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Mike Rapoport (Microsoft), committed by Andrew Morton
beeb9220 906c38ff

+24 -36
include/linux/vmalloc.h
···
 extern void *vm_map_ram(struct page **pages, unsigned int count, int node);
 extern void vm_unmap_aliases(void);

-#ifdef CONFIG_MMU
-extern unsigned long vmalloc_nr_pages(void);
-#else
-static inline unsigned long vmalloc_nr_pages(void) { return 0; }
-#endif
-
 extern void *vmalloc_noprof(unsigned long size) __alloc_size(1);
 #define vmalloc(...) alloc_hooks(vmalloc_noprof(__VA_ARGS__))
···
 #endif
 }

+/* for /proc/kcore */
+long vread_iter(struct iov_iter *iter, const char *addr, size_t count);
+
+/*
+ * Internals. Don't use..
+ */
+__init void vm_area_add_early(struct vm_struct *vm);
+__init void vm_area_register_early(struct vm_struct *vm, size_t align);
+
+int register_vmap_purge_notifier(struct notifier_block *nb);
+int unregister_vmap_purge_notifier(struct notifier_block *nb);
+
 #ifdef CONFIG_MMU
+#define VMALLOC_TOTAL (VMALLOC_END - VMALLOC_START)
+
+unsigned long vmalloc_nr_pages(void);
+
 int vm_area_map_pages(struct vm_struct *area, unsigned long start,
 		      unsigned long end, struct page **pages);
 void vm_area_unmap_pages(struct vm_struct *area, unsigned long start,
 			 unsigned long end);
 void vunmap_range(unsigned long addr, unsigned long end);
+
 static inline void set_vm_flush_reset_perms(void *addr)
 {
 	struct vm_struct *vm = find_vm_area(addr);
···
 	if (vm)
 		vm->flags |= VM_FLUSH_RESET_PERMS;
 }
+#else /* !CONFIG_MMU */
+#define VMALLOC_TOTAL 0UL

-#else
-static inline void set_vm_flush_reset_perms(void *addr)
-{
-}
-#endif
+static inline unsigned long vmalloc_nr_pages(void) { return 0; }
+static inline void set_vm_flush_reset_perms(void *addr) {}
+#endif /* CONFIG_MMU */

-/* for /proc/kcore */
-extern long vread_iter(struct iov_iter *iter, const char *addr, size_t count);
-
-/*
- * Internals. Don't use..
- */
-extern __init void vm_area_add_early(struct vm_struct *vm);
-extern __init void vm_area_register_early(struct vm_struct *vm, size_t align);
-
-#ifdef CONFIG_SMP
-# ifdef CONFIG_MMU
+#if defined(CONFIG_MMU) && defined(CONFIG_SMP)
 struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 				     const size_t *sizes, int nr_vms,
 				     size_t align);
···
 	return NULL;
 }

-static inline void
-pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
-{
-}
-# endif
+static inline void pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms) {}
 #endif
-
-#ifdef CONFIG_MMU
-#define VMALLOC_TOTAL (VMALLOC_END - VMALLOC_START)
-#else
-#define VMALLOC_TOTAL 0UL
-#endif
-
-int register_vmap_purge_notifier(struct notifier_block *nb);
-int unregister_vmap_purge_notifier(struct notifier_block *nb);

 #if defined(CONFIG_MMU) && defined(CONFIG_PRINTK)
 bool vmalloc_dump_obj(void *object);