Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

arch, mm: consolidate empty_zero_page

Reduce 22 declarations of empty_zero_page to 3 and 23 declarations of
ZERO_PAGE() to 4.

Every architecture defines empty_zero_page one way or another, but for most
of them it is a page-aligned page in BSS, and most definitions of
ZERO_PAGE() do virt_to_page(empty_zero_page).

Move the Linus-vetted x86 definitions of empty_zero_page and ZERO_PAGE() to
the core MM and drop them in every architecture that does not implement a
colored zero page; MIPS and s390, which do, keep their own.

ZERO_PAGE() remains a macro because turning it into a wrapper around a
static inline causes severe pain in header dependencies.

For the most part the change is mechanical, with these being noteworthy:

* alpha: aliased empty_zero_page with ZERO_PGE, which was also used for boot
parameters. Switching to the generic empty_zero_page removes the aliasing
and keeps ZERO_PGE for boot parameters only
* arm64: uses __pa_symbol() in ZERO_PAGE(), so its definition of
ZERO_PAGE() is kept intact.
* m68k/parisc/um: allocated empty_zero_page from memblock even though they
do not support zero page coloring; having it in BSS works fine.
* sparc64: can have empty_zero_page in BSS rather than allocating it, but it
can't use virt_to_page() for BSS. Keep its definition of ZERO_PAGE(),
but instead of allocating the page, make mem_map_zero point to
empty_zero_page.
* sh: used empty_zero_page for boot parameters very early in boot. Rename
the parameters page to boot_params_page and let sh use the generic
empty_zero_page.
* hexagon: had an amusing comment about empty_zero_page

/* A handy thing to have if one has the RAM. Declared in head.S */

that unfortunately had to go :)

Link: https://lkml.kernel.org/r/20260211103141.3215197-4-rppt@kernel.org
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Acked-by: Helge Deller <deller@gmx.de> [parisc]
Tested-by: Helge Deller <deller@gmx.de> [parisc]
Reviewed-by: Christophe Leroy (CS GROUP) <chleroy@kernel.org>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Magnus Lindholm <linmag7@gmail.com> [alpha]
Acked-by: Dinh Nguyen <dinguyen@kernel.org> [nios2]
Acked-by: Andreas Larsson <andreas@gaisler.com> [sparc]
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
Acked-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: "Borislav Petkov (AMD)" <bp@alien8.de>
Cc: David S. Miller <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Guo Ren <guoren@kernel.org>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vineet Gupta <vgupta@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Mike Rapoport (Microsoft), committed by Andrew Morton
6215d9f4 9a1d0c73

+23 -285
-6
arch/alpha/include/asm/pgtable.h
 #define pgprot_noncached(prot)	(prot)
 
 /*
- * ZERO_PAGE is a global shared page that is always zero: used
- * for zero-mapped memory areas etc..
- */
-#define ZERO_PAGE(vaddr)	(virt_to_page(ZERO_PGE))
-
-/*
  * On certain platforms whose physical address space can overlap KSEG,
  * namely EV6 and above, we must re-twiddle the physaddr to restore the
  * correct high-order bits.
-3
arch/arc/include/asm/pgtable.h
 
 #ifndef __ASSEMBLER__
 
-extern char empty_zero_page[PAGE_SIZE];
-#define ZERO_PAGE(vaddr)	(virt_to_page(empty_zero_page))
-
 extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE);
 
 /* to cope with aliasing VIPT cache */
-2
arch/arc/mm/init.c
 #include <asm/arcregs.h>
 
 pgd_t swapper_pg_dir[PTRS_PER_PGD] __aligned(PAGE_SIZE);
-char empty_zero_page[PAGE_SIZE] __aligned(PAGE_SIZE);
-EXPORT_SYMBOL(empty_zero_page);
 
 static const unsigned long low_mem_start = CONFIG_LINUX_RAM_BASE;
 static unsigned long low_mem_sz;
-9
arch/arm/include/asm/pgtable.h
 #include <linux/const.h>
 #include <asm/proc-fns.h>
 
-#ifndef __ASSEMBLY__
-/*
- * ZERO_PAGE is a global shared page that is always zero: used
- * for zero-mapped memory areas etc..
- */
-extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
-#define ZERO_PAGE(vaddr)	(virt_to_page(empty_zero_page))
-#endif
-
 #include <asm-generic/pgtable-nopud.h>
 
 #ifndef CONFIG_MMU
-7
arch/arm/mm/mmu.c
 extern unsigned long __atags_pointer;
 
 /*
- * empty_zero_page is a special page that is used for
- * zero-initialized data and COW.
- */
-unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
-EXPORT_SYMBOL(empty_zero_page);
-
-/*
  * The pmd table for the upper-most set of pages.
  */
 pmd_t *top_pmd;
-7
arch/arm/mm/nommu.c
 
 unsigned long vectors_base;
 
-/*
- * empty_zero_page is a special page that is used for
- * zero-initialized data and COW.
- */
-unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
-EXPORT_SYMBOL(empty_zero_page);
-
 #ifdef CONFIG_ARM_MPU
 struct mpu_rgn_info mpu_rgn_info;
 #endif
-1
arch/arm64/include/asm/pgtable.h
  * ZERO_PAGE is a global shared page that is always zero: used
  * for zero-mapped memory areas etc..
  */
-extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
 #define ZERO_PAGE(vaddr)	phys_to_page(__pa_symbol(empty_zero_page))
 
 #define pte_ERROR(e)	\
-7
arch/arm64/mm/mmu.c
  */
 long __section(".mmuoff.data.write") __early_cpu_boot_status;
 
-/*
- * Empty_zero_page is a special page that is used for zero-initialized data
- * and COW.
- */
-unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
-EXPORT_SYMBOL(empty_zero_page);
-
 static DEFINE_SPINLOCK(swapper_pgdir_lock);
 static DEFINE_MUTEX(fixmap_lock);
-3
arch/csky/include/asm/pgtable.h
 #define MAX_SWAPFILES_CHECK() \
 		BUILD_BUG_ON(MAX_SWAPFILES_SHIFT != 5)
 
-extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
-#define ZERO_PAGE(vaddr)	(virt_to_page(empty_zero_page))
-
 extern void load_pgd(unsigned long pg_dir);
 extern pte_t invalid_pte_table[PTRS_PER_PTE];
-3
arch/csky/mm/init.c
 pte_t kernel_pte_tables[PTRS_KERN_TABLE] __page_aligned_bss;
 
 EXPORT_SYMBOL(invalid_pte_table);
-unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
-						__page_aligned_bss;
-EXPORT_SYMBOL(empty_zero_page);
 
 void free_initmem(void)
 {
-6
arch/hexagon/include/asm/pgtable.h
 #include <asm/page.h>
 #include <asm-generic/pgtable-nopmd.h>
 
-/* A handy thing to have if one has the RAM. Declared in head.S */
-extern unsigned long empty_zero_page;
-
 /*
  * The PTE model described here is that of the Hexagon Virtual Machine,
  * which autonomously walks 2-level page tables. At a lower level, we
···
 {
 	return (unsigned long)__va(pmd_val(pmd) & PAGE_MASK);
 }
-
-/* ZERO_PAGE - returns the globally shared zero page */
-#define ZERO_PAGE(vaddr) (virt_to_page(&empty_zero_page))
 
 /*
  * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
-5
arch/hexagon/kernel/head.S
 	.p2align PAGE_SHIFT
 ENTRY(external_cmdline_buffer)
 	.fill _PAGE_SIZE,1,0
-
-	.data
-	.p2align PAGE_SHIFT
-ENTRY(empty_zero_page)
-	.fill _PAGE_SIZE,1,0
-1
arch/hexagon/kernel/hexagon_ksyms.c
 EXPORT_SYMBOL(__vmgetie);
 EXPORT_SYMBOL(__vmsetie);
 EXPORT_SYMBOL(__vmyield);
-EXPORT_SYMBOL(empty_zero_page);
 EXPORT_SYMBOL(memcpy);
 EXPORT_SYMBOL(memset);
-9
arch/loongarch/include/asm/pgtable.h
 struct mm_struct;
 struct vm_area_struct;
 
-/*
- * ZERO_PAGE is a global shared page that is always zero; used
- * for zero-mapped memory areas etc..
- */
-
-extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
-
-#define ZERO_PAGE(vaddr)	virt_to_page(empty_zero_page)
-
 #ifdef CONFIG_32BIT
 
 #define VMALLOC_START	(vm_map_base + PCI_IOSIZE + (2 * PAGE_SIZE))
-3
arch/loongarch/mm/init.c
 #include <asm/pgalloc.h>
 #include <asm/tlb.h>
 
-unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
-EXPORT_SYMBOL(empty_zero_page);
-
 void copy_user_highpage(struct page *to, struct page *from,
 	unsigned long vaddr, struct vm_area_struct *vma)
 {
-9
arch/m68k/include/asm/pgtable_mm.h
 #define VMALLOC_END KMAP_START
 #endif
 
-/* zero page used for uninitialized stuff */
-extern void *empty_zero_page;
-
-/*
- * ZERO_PAGE is a global shared page that is always zero: used
- * for zero-mapped memory areas etc..
- */
-#define ZERO_PAGE(vaddr)	(virt_to_page(empty_zero_page))
-
 extern void kernel_set_cachemode(void *addr, unsigned long size, int cmode);
 
 /*
-7
arch/m68k/include/asm/pgtable_no.h
 #define swapper_pg_dir ((pgd_t *) 0)
 
 /*
- * ZERO_PAGE is a global shared page that is always zero: used
- * for zero-mapped memory areas etc..
- */
-extern void *empty_zero_page;
-#define ZERO_PAGE(vaddr)	(virt_to_page(empty_zero_page))
-
-/*
  * All 32bit addresses are effectively valid for vmalloc...
  * Sort of meaningless for non-VM targets.
  */
-9
arch/m68k/mm/init.c
 #include <asm/sections.h>
 #include <asm/tlb.h>
 
-/*
- * ZERO_PAGE is a special page that is used for zero-initialized
- * data and COW.
- */
-void *empty_zero_page;
-EXPORT_SYMBOL(empty_zero_page);
-
 void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
 {
 	max_zone_pfns[ZONE_DMA] = PFN_DOWN(memblock_end_of_DRAM());
···
 	unsigned long end_mem = memory_end & PAGE_MASK;
 
 	high_memory = (void *) end_mem;
-
-	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
 }
 
 #endif /* CONFIG_MMU */
-2
arch/m68k/mm/mcfmmu.c
 	unsigned long next_pgtable;
 	int i;
 
-	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
-
 	pg_dir = swapper_pg_dir;
 	memset(swapper_pg_dir, 0, sizeof(swapper_pg_dir));
-6
arch/m68k/mm/motorola.c
 	early_memtest(min_addr, max_addr);
 
 	/*
-	 * initialize the bad page table and bad page to point
-	 * to a couple of allocated pages
-	 */
-	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
-
-	/*
 	 * Set up SFC/DFC registers
 	 */
 	set_fc(USER_DATA);
-2
arch/m68k/mm/sun3mmu.c
 	unsigned long bootmem_end;
 	unsigned long size;
 
-	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
-
 	address = PAGE_OFFSET;
 	pg_dir = swapper_pg_dir;
 	memset (swapper_pg_dir, 0, sizeof (swapper_pg_dir));
-10
arch/microblaze/include/asm/pgtable.h
  * Also, write permissions imply read permissions.
  */
 
-#ifndef __ASSEMBLER__
-/*
- * ZERO_PAGE is a global shared page that is always zero: used
- * for zero-mapped memory areas etc..
- */
-extern unsigned long empty_zero_page[1024];
-#define ZERO_PAGE(vaddr)	(virt_to_page(empty_zero_page))
-
-#endif /* __ASSEMBLER__ */
-
 #define pte_none(pte)		((pte_val(pte) & ~_PTE_NONE_MASK) == 0)
 #define pte_present(pte)	(pte_val(pte) & _PAGE_PRESENT)
 #define pte_clear(mm, addr, ptep) \
-4
arch/microblaze/kernel/head.S
 #include <asm/processor.h>
 
 	.section .data
-	.global	empty_zero_page
-	.align	12
-empty_zero_page:
-	.space	PAGE_SIZE
 	.global	swapper_pg_dir
 swapper_pg_dir:
 	.space	PAGE_SIZE
-2
arch/microblaze/kernel/microblaze_ksyms.c
 EXPORT_SYMBOL(memmove);
 #endif
 
-EXPORT_SYMBOL(empty_zero_page);
-
 EXPORT_SYMBOL(mbc);
 
 extern void __divsi3(void);
-7
arch/nios2/include/asm/pgtable.h
 #define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
 #define PGDIR_MASK	(~(PGDIR_SIZE-1))
 
-/*
- * ZERO_PAGE is a global shared page that is always zero: used
- * for zero-mapped memory areas etc..
- */
-extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
-#define ZERO_PAGE(vaddr)	(virt_to_page(empty_zero_page))
-
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern pte_t invalid_pte_table[PAGE_SIZE/sizeof(pte_t)];
-10
arch/nios2/kernel/head.S
 #include <asm/asm-macros.h>
 
 /*
- * ZERO_PAGE is a special page that is used for zero-initialized
- * data and COW.
- */
-	.data
-	.global	empty_zero_page
-	.align	12
-empty_zero_page:
-	.space	PAGE_SIZE
-
-/*
  * This global variable is used as an extension to the nios'
  * STATUS register to emulate a user/supervisor mode.
  */
-1
arch/nios2/kernel/nios2_ksyms.c
 
 /* memory management */
 
-EXPORT_SYMBOL(empty_zero_page);
 EXPORT_SYMBOL(flush_icache_range);
 
 /*
-4
arch/openrisc/include/asm/pgtable.h
 	__pgprot(_PAGE_ALL | _PAGE_SRE | _PAGE_SWE \
 		 | _PAGE_SHARED | _PAGE_DIRTY | _PAGE_EXEC | _PAGE_CI)
 
-/* zero page used for uninitialized stuff */
-extern unsigned long empty_zero_page[2048];
-#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
-
 #define pte_none(x)	(!pte_val(x))
 #define pte_present(x)	(pte_val(x) & _PAGE_PRESENT)
 #define pte_clear(mm, addr, xp)	do { pte_val(*(xp)) = 0; } while (0)
-3
arch/openrisc/kernel/head.S
  */
 	.section .data,"aw"
 	.align	8192
-	.global	empty_zero_page
-empty_zero_page:
-	.space	8192
 
 	.global	swapper_pg_dir
 swapper_pg_dir:
-1
arch/openrisc/kernel/or32_ksyms.c
 DECLARE_EXPORT(__lshrdi3);
 DECLARE_EXPORT(__ucmpdi2);
 
-EXPORT_SYMBOL(empty_zero_page);
 EXPORT_SYMBOL(__copy_tofrom_user);
 EXPORT_SYMBOL(__clear_user);
 EXPORT_SYMBOL(memset);
-3
arch/openrisc/mm/init.c
 {
 	BUG_ON(!mem_map);
 
-	/* clear the zero-page */
-	memset((void *)empty_zero_page, 0, PAGE_SIZE);
-
 	printk("mem_init_done ...........................................\n");
 	mem_init_done = 1;
 	return;
-11
arch/parisc/include/asm/pgtable.h
 
 extern pte_t pg0[];
 
-/* zero page used for uninitialized stuff */
-
-extern unsigned long *empty_zero_page;
-
-/*
- * ZERO_PAGE is a global shared page that is always zero: used
- * for zero-mapped memory areas etc..
- */
-
-#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
-
 #define pte_none(x)     (pte_val(x) == 0)
 #define pte_present(x)	(pte_val(x) & _PAGE_PRESENT)
 #define pte_user(x)	(pte_val(x) & _PAGE_USER)
-6
arch/parisc/mm/init.c
 #endif
 }
 
-unsigned long *empty_zero_page __ro_after_init;
-EXPORT_SYMBOL(empty_zero_page);
-
 /*
  * pagetable_init() sets up the page tables
  *
···
 			    initrd_end - initrd_start, PAGE_KERNEL, 0);
 	}
 #endif
-
-	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
-
 }
 
 static void __init gateway_init(void)
-6
arch/powerpc/include/asm/pgtable.h
 }
 #define pmd_page_vaddr pmd_page_vaddr
 #endif
-/*
- * ZERO_PAGE is a global shared page that is always zero: used
- * for zero-mapped memory areas etc..
- */
-extern unsigned long empty_zero_page[];
-#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
 
 extern pgd_t swapper_pg_dir[];
-3
arch/powerpc/mm/mem.c
 
 unsigned long long memory_limit __initdata;
 
-unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
-EXPORT_SYMBOL(empty_zero_page);
-
 pgprot_t __phys_mem_access_prot(unsigned long pfn, unsigned long size,
 				pgprot_t vma_prot)
 {
-7
arch/riscv/include/asm/pgtable.h
 void misc_mem_init(void);
 
 /*
- * ZERO_PAGE is a global shared page that is always zero,
- * used for zero-mapped memory areas, etc.
- */
-extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
-#define ZERO_PAGE(vaddr)	(virt_to_page(empty_zero_page))
-
-/*
  * Use set_p*_safe(), and elide TLB flushing, when confident that *no*
  * TLB flush will be required as a result of the "set". For example, use
  * in scenarios where it is known ahead of time that the routine is
-4
arch/riscv/mm/init.c
 EXPORT_SYMBOL(vmemmap_start_pfn);
 #endif
 
-unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
-							__page_aligned_bss;
-EXPORT_SYMBOL(empty_zero_page);
-
 extern char _start[];
 void *_dtb_early_va __initdata;
 uintptr_t _dtb_early_pa __initdata;
-8
arch/sh/include/asm/pgtable.h
 #ifndef __ASSEMBLER__
 #include <asm/addrspace.h>
 #include <asm/fixmap.h>
-
-/*
- * ZERO_PAGE is a global shared page that is always zero: used
- * for zero-mapped memory areas etc..
- */
-extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
-#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
-
 #endif /* !__ASSEMBLER__ */
 
 /*
+2 -1
arch/sh/include/asm/setup.h
 /*
  * This is set up by the setup-routine at boot-time
  */
-#define PARAM	((unsigned char *)empty_zero_page)
+extern unsigned char *boot_params_page;
+#define PARAM	boot_params_page
 
 #define MOUNT_ROOT_RDONLY (*(unsigned long *) (PARAM+0x000))
 #define RAMDISK_FLAGS (*(unsigned long *) (PARAM+0x004))
+2 -2
arch/sh/kernel/head_32.S
 #endif
 
 	.section	.empty_zero_page, "aw"
-ENTRY(empty_zero_page)
+ENTRY(boot_params_page)
 	.long	1		/* MOUNT_ROOT_RDONLY */
 	.long	0		/* RAMDISK_FLAGS */
 	.long	0x0200		/* ORIG_ROOT_DEV */
···
 	.long	0x53453f00 + 29	/* "SE?" = 29 bit */
 #endif
 1:
-	.skip	PAGE_SIZE - empty_zero_page - 1b
+	.skip	PAGE_SIZE - boot_params_page - 1b
 
 	__HEAD
-1
arch/sh/kernel/sh_ksyms_32.c
 EXPORT_SYMBOL(csum_partial_copy_generic);
 EXPORT_SYMBOL(copy_page);
 EXPORT_SYMBOL(__clear_user);
-EXPORT_SYMBOL(empty_zero_page);
 #ifdef CONFIG_FLATMEM
 /* need in pfn_valid macro */
 EXPORT_SYMBOL(min_low_pfn);
-1
arch/sh/mm/init.c
 	cpu_cache_init();
 
 	/* clear the zero-page */
-	memset(empty_zero_page, 0, PAGE_SIZE);
 	__flush_wback_region(empty_zero_page, PAGE_SIZE);
 
 	vsyscall_init();
-8
arch/sparc/include/asm/pgtable_32.h
 extern unsigned long pfn_base;
 
 /*
- * ZERO_PAGE is a global shared page that is always zero: used
- * for zero-mapped memory areas etc..
- */
-extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
-
-#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
-
-/*
  * In general all page table modifications should use the V8 atomic
  * swap instruction.  This insures the mmu and the cpu are in sync
  * with respect to ref/mod bits in the page tables.
-2
arch/sparc/include/asm/setup.h
  */
 extern unsigned char boot_cpu_id;
 
-extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
-
 extern int serial_console;
 static inline int con_is_present(void)
 {
-7
arch/sparc/kernel/head_32.S
 
 	.align PAGE_SIZE
 
-/* This was the only reasonable way I could think of to properly align
- * these page-table data structures.
- */
-	.globl empty_zero_page
-empty_zero_page:	.skip PAGE_SIZE
-EXPORT_SYMBOL(empty_zero_page)
-
 	.global root_flags
 	.global ram_flags
 	.global root_dev
-4
arch/sparc/mm/init_32.c
 		prom_halt();
 	}
 
-
-	/* Saves us work later. */
-	memset((void *)empty_zero_page, 0, PAGE_SIZE);
-
 	i = last_valid_pfn >> ((20 - PAGE_SHIFT) + 5);
 	i += 1;
 	sparc_valid_addr_bitmap = (unsigned long *)
+4 -7
arch/sparc/mm/init_64.c
 }
 void __init mem_init(void)
 {
+	phys_addr_t zero_page_pa = kern_base +
+		((unsigned long)&empty_zero_page[0] - KERNBASE);
+
 	/*
 	 * Must be done after boot memory is put on freelist, because here we
 	 * might set fields in deferred struct pages that have not yet been
···
 	 * Set up the zero page, mark it reserved, so that page count
 	 * is not manipulated when freeing the page from user ptes.
 	 */
-	mem_map_zero = alloc_pages(GFP_KERNEL|__GFP_ZERO, 0);
-	if (mem_map_zero == NULL) {
-		prom_printf("paging_init: Cannot alloc zero page.\n");
-		prom_halt();
-	}
-	mark_page_reserved(mem_map_zero);
-
+	mem_map_zero = pfn_to_page(PHYS_PFN(zero_page_pa));
 
 	if (tlb_type == cheetah || tlb_type == cheetah_plus)
 		cheetah_ecache_flush_init();
-9
arch/um/include/asm/pgtable.h
 
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 
-/* zero page used for uninitialized stuff */
-extern unsigned long *empty_zero_page;
-
 /* Just any arbitrary offset to the start of the vmalloc VM area: the
  * current 8MB value just means that there will be a 8MB "hole" after the
  * physical memory until the kernel virtual memory starts.  That means that
···
  * Also, write permissions imply read permissions. This is the closest we can
  * get..
  */
-
-/*
- * ZERO_PAGE is a global shared page that is always zero: used
- * for zero-mapped memory areas etc..
- */
-#define ZERO_PAGE(vaddr) virt_to_page(empty_zero_page)
 
 #define pte_clear(mm, addr, xp) pte_set_val(*(xp), (phys_t) 0, __pgprot(_PAGE_NEEDSYNC))
-1
arch/um/include/shared/kern_util.h
 extern void uml_pm_wake(void);
 
 extern int start_uml(void);
-extern void paging_init(void);
 
 extern void uml_cleanup(void);
 extern void do_uml_exitcalls(void);
-16
arch/um/kernel/mem.c
 	= kasan_init;
 #endif
 
-/* allocated in paging_init, zeroed in mem_init, and unchanged thereafter */
-unsigned long *empty_zero_page = NULL;
-EXPORT_SYMBOL(empty_zero_page);
-
 /*
  * Initialized during boot, and readonly for initializing page tables
  * afterwards
···
 {
 	/* Safe to call after jump_label_init(). Enables KASAN. */
 	kasan_init_generic();
-
-	/* clear the zero-page */
-	memset(empty_zero_page, 0, PAGE_SIZE);
 
 	/* Map in the area just after the brk now that kmalloc is about
 	 * to be turned on.
···
 void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
 {
 	max_zone_pfns[ZONE_NORMAL] = high_physmem >> PAGE_SHIFT;
-}
-
-void __init paging_init(void)
-{
-	empty_zero_page = (unsigned long *) memblock_alloc_low(PAGE_SIZE,
-							       PAGE_SIZE);
-	if (!empty_zero_page)
-		panic("%s: Failed to allocate %lu bytes align=%lx\n",
-		      __func__, PAGE_SIZE, PAGE_SIZE);
 }
 
 /*
-1
arch/um/kernel/um_arch.c
 	uml_dtb_init();
 	read_initrd();
 
-	paging_init();
 	strscpy(boot_command_line, command_line, COMMAND_LINE_SIZE);
 	*cmdline_p = command_line;
 	setup_hostinfo(host_info, sizeof host_info);
-8
arch/x86/include/asm/pgtable.h
 #define debug_checkwx_user()	do { } while (0)
 #endif
 
-/*
- * ZERO_PAGE is a global shared page that is always zero: used
- * for zero-mapped memory areas etc..
- */
-extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
-	__visible;
-#define ZERO_PAGE(vaddr) ((void)(vaddr),virt_to_page(empty_zero_page))
-
 extern spinlock_t pgd_lock;
 extern struct list_head pgd_list;
-4
arch/x86/kernel/head_32.S
 swapper_pg_dir:
 	.fill 1024,4,0
 	.fill PTI_USER_PGD_FILL,4,0
-.globl empty_zero_page
-empty_zero_page:
-	.fill 4096,1,0
-EXPORT_SYMBOL(empty_zero_page)
 
 /*
  * This starts the data section.
-7
arch/x86/kernel/head_64.S
 EXPORT_SYMBOL(phys_base)
 
 #include "../xen/xen-head.S"
-
-	__PAGE_ALIGNED_BSS
-SYM_DATA_START_PAGE_ALIGNED(empty_zero_page)
-	.skip PAGE_SIZE
-SYM_DATA_END(empty_zero_page)
-EXPORT_SYMBOL(empty_zero_page)
-
-4
arch/xtensa/include/asm/pgtable.h
 #define pgd_ERROR(e) \
 	printk("%s:%d: bad pgd entry %08lx.\n", __FILE__, __LINE__, pgd_val(e))
 
-extern unsigned long empty_zero_page[1024];
-
-#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
-
 #ifdef CONFIG_MMU
 extern pgd_t swapper_pg_dir[PAGE_SIZE/sizeof(pgd_t)];
 extern void paging_init(void);
-3
arch/xtensa/kernel/head.S
 	.fill	PAGE_SIZE, 1, 0
 END(swapper_pg_dir)
 #endif
-ENTRY(empty_zero_page)
-	.fill	PAGE_SIZE, 1, 0
-END(empty_zero_page)
-2
arch/xtensa/kernel/xtensa_ksyms.c
 #include <linux/module.h>
 #include <asm/pgtable.h>
 
-EXPORT_SYMBOL(empty_zero_page);
-
 unsigned int __sync_fetch_and_and_4(volatile void *p, unsigned int v)
 {
 	BUG();
+10
include/linux/pgtable.h
  * for different ranges in the virtual address space.
  *
  * zero_page_pfn identifies the first (or the only) pfn for these pages.
+ *
+ * For architectures that don't __HAVE_COLOR_ZERO_PAGE the zero page lives in
+ * empty_zero_page in BSS.
  */
 #ifdef __HAVE_COLOR_ZERO_PAGE
 static inline int is_zero_pfn(unsigned long pfn)
···
 
 	return zero_page_pfn;
 }
+
+extern uint8_t empty_zero_page[PAGE_SIZE];
+
+#ifndef ZERO_PAGE
+#define ZERO_PAGE(vaddr) ((void)(vaddr),virt_to_page(empty_zero_page))
+#endif
+
 #endif /* __HAVE_COLOR_ZERO_PAGE */
 
 #ifdef CONFIG_MMU
+5
mm/mm_init.c
 unsigned long zero_page_pfn __ro_after_init;
 EXPORT_SYMBOL(zero_page_pfn);
 
+#ifndef __HAVE_COLOR_ZERO_PAGE
+uint8_t empty_zero_page[PAGE_SIZE] __page_aligned_bss;
+EXPORT_SYMBOL(empty_zero_page);
+#endif
+
 #ifdef CONFIG_DEBUG_MEMORY_INIT
 int __meminitdata mminit_loglevel;