Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

powerpc/mem: Move CMA reservations to arch_mm_preinit

Commit 4267739cabb8 ("arch, mm: consolidate initialization of SPARSE memory model")
changed the initialization order of pageblock_order from...
start_kernel()
    setup_arch()
        initmem_init()
            sparse_init()
                set_pageblock_order();  // this sets pageblock_order
        xxx_cma_reserve();

to...

start_kernel()
    setup_arch()
        xxx_cma_reserve();
    mm_core_init_early()
        free_area_init()
            sparse_init()
                set_pageblock_order();  // this sets pageblock_order

This means pageblock_order is not initialized before these CMA reservation
calls run, hence we see CMA failures like...

[ 0.000000] kvm_cma_reserve: reserving 3276 MiB for global area
[ 0.000000] cma: pageblock_order not yet initialized. Called during early boot?
[ 0.000000] cma: Failed to reserve 3276 MiB
....
[ 0.000000][ T0] cma: pageblock_order not yet initialized. Called during early boot?
[ 0.000000][ T0] cma: Failed to reserve 1024 MiB

This patch moves these CMA reservations to arch_mm_preinit(), which runs
from mm_core_init() (after pageblock_order is initialized) but before
memblock releases the free memory to the buddy allocator.

Fixes: 4267739cabb8 ("arch, mm: consolidate initialization of SPARSE memory model")
Suggested-by: Mike Rapoport <rppt@kernel.org>
Reported-and-tested-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Closes: https://lore.kernel.org/linuxppc-dev/4c338a29-d190-44f3-8874-6cfa0a031f0b@linux.ibm.com/
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Tested-by: Dan Horák <dan@danny.cz>
Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com>
Link: https://patch.msgid.link/6e532cf0db5be99afbe20eed699163d5e86cd71f.1772303986.git.ritesh.list@gmail.com

Authored by Ritesh Harjani (IBM), committed by Madhavan Srinivasan
0a8321dd 35e4f2a1

+14 -10

arch/powerpc/kernel/setup-common.c (-10)

···
 #include <linux/of_irq.h>
 #include <linux/hugetlb.h>
 #include <linux/pgtable.h>
-#include <asm/kexec.h>
 #include <asm/io.h>
 #include <asm/paca.h>
 #include <asm/processor.h>
···
 	smp_release_cpus();
 
 	initmem_init();
-
-	/*
-	 * Reserve large chunks of memory for use by CMA for kdump, fadump, KVM and
-	 * hugetlb. These must be called after initmem_init(), so that
-	 * pageblock_order is initialised.
-	 */
-	fadump_cma_init();
-	kdump_cma_reserve();
-	kvm_cma_reserve();
 
 	early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);
 
arch/powerpc/mm/mem.c (+14)

···
 #include <asm/setup.h>
 #include <asm/fixmap.h>
 
+#include <asm/fadump.h>
+#include <asm/kexec.h>
+#include <asm/kvm_ppc.h>
+
 #include <mm/mmu_decl.h>
 
 unsigned long long memory_limit __initdata;
···
 
 void __init arch_mm_preinit(void)
 {
+
+	/*
+	 * Reserve large chunks of memory for use by CMA for kdump, fadump, KVM
+	 * and hugetlb. These must be called after pageblock_order is
+	 * initialised.
+	 */
+	fadump_cma_init();
+	kdump_cma_reserve();
+	kvm_cma_reserve();
+
 	/*
 	 * book3s is limited to 16 page sizes due to encoding this in
 	 * a 4-bit field for slices.
··· 30 30 #include <asm/setup.h> 31 31 #include <asm/fixmap.h> 32 32 33 + #include <asm/fadump.h> 34 + #include <asm/kexec.h> 35 + #include <asm/kvm_ppc.h> 36 + 33 37 #include <mm/mmu_decl.h> 34 38 35 39 unsigned long long memory_limit __initdata; ··· 272 268 273 269 void __init arch_mm_preinit(void) 274 270 { 271 + 272 + /* 273 + * Reserve large chunks of memory for use by CMA for kdump, fadump, KVM 274 + * and hugetlb. These must be called after pageblock_order is 275 + * initialised. 276 + */ 277 + fadump_cma_init(); 278 + kdump_cma_reserve(); 279 + kvm_cma_reserve(); 280 + 275 281 /* 276 282 * book3s is limited to 16 page sizes due to encoding this in 277 283 * a 4-bit field for slices.