Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: fix pageblock bitmap allocation

Commit c060f943d092 ("mm: use aligned zone start for pfn_to_bitidx
calculation") fixed our calculation of the index into the pageblock
bitmap when a !SPARSEMEM zone was not aligned to pageblock_nr_pages.

However, the _allocation_ of that bitmap had never taken this alignment
requirement into account, so depending on the exact size and alignment of
the zone, the use of that index could then access past the allocation,
resulting in some very subtle memory corruption.

This was reported (and bisected) by Ingo Molnar: one of his random
config builds would hang with certain very specific kernel command line
options.

In the meantime, commit c060f943d092 has been marked for stable, so this
fix needs to be backported to any stable kernels that picked up that
commit, so that the allocation matches the index calculation's alignment.

Bisected-and-tested-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

+9 -6
mm/page_alloc.c
@@ -4420,10 +4420,11 @@
  * round what is now in bits to nearest long in bits, then return it in
  * bytes.
  */
-static unsigned long __init usemap_size(unsigned long zonesize)
+static unsigned long __init usemap_size(unsigned long zone_start_pfn, unsigned long zonesize)
 {
 	unsigned long usemapsize;
 
+	zonesize += zone_start_pfn & (pageblock_nr_pages-1);
 	usemapsize = roundup(zonesize, pageblock_nr_pages);
 	usemapsize = usemapsize >> pageblock_order;
 	usemapsize *= NR_PAGEBLOCK_BITS;
@@ -4434,17 +4433,19 @@
 }
 
 static void __init setup_usemap(struct pglist_data *pgdat,
-				struct zone *zone, unsigned long zonesize)
+				struct zone *zone,
+				unsigned long zone_start_pfn,
+				unsigned long zonesize)
 {
-	unsigned long usemapsize = usemap_size(zonesize);
+	unsigned long usemapsize = usemap_size(zone_start_pfn, zonesize);
 	zone->pageblock_flags = NULL;
 	if (usemapsize)
 		zone->pageblock_flags = alloc_bootmem_node_nopanic(pgdat,
 								   usemapsize);
 }
 #else
-static inline void setup_usemap(struct pglist_data *pgdat,
-				struct zone *zone, unsigned long zonesize) {}
+static inline void setup_usemap(struct pglist_data *pgdat, struct zone *zone,
+				unsigned long zone_start_pfn, unsigned long zonesize) {}
 #endif /* CONFIG_SPARSEMEM */
 
 #ifdef CONFIG_HUGETLB_PAGE_SIZE_VARIABLE
@@ -4597,7 +4594,7 @@
 		continue;
 
 	set_pageblock_order();
-	setup_usemap(pgdat, zone, size);
+	setup_usemap(pgdat, zone, zone_start_pfn, size);
 	ret = init_currently_empty_zone(zone, zone_start_pfn,
 					size, MEMMAP_EARLY);
 	BUG_ON(ret);