Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

SLUB: minimum alignment fixes

If ARCH_KMALLOC_MINALIGN is set to a value greater than 8 (the size of
SLUB's smallest kmalloc cache), then SLUB may generate duplicate slabs in
sysfs (yes, again) because each object size is padded up to
ARCH_KMALLOC_MINALIGN. The small slabs then all end up with the same
object size.
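
To make the effect concrete: a minimal userspace sketch (not kernel code;
ALIGN() is open-coded here, and the 128 byte value is an assumption
mirroring the mips case mentioned below) of how every cache below the
minimum alignment collapses to the same padded object size:

#include <stdio.h>

#define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
        unsigned int minalign = 128;    /* assumed: mips-style minimum alignment */
        unsigned int sizes[] = { 8, 16, 32, 64, 96, 128 };
        unsigned int i;

        for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                printf("kmalloc-%-3u -> padded object size %u\n",
                        sizes[i], ALIGN(sizes[i], minalign));
        /* All six caches report a 128 byte object size, which is what
         * produced the duplicate sysfs entries. */
        return 0;
}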

No arch sets ARCH_KMALLOC_MINALIGN larger than 8, though, except mips,
which for some reason wants a 128 byte alignment.

This patch increases the size of the smallest cache when
ARCH_KMALLOC_MINALIGN is greater than 8. The larger the minimum
alignment, the more of the smallest caches are disabled.
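
To illustrate the cutoff, a userspace sketch (assuming
ARCH_KMALLOC_MINALIGN is 128 as on mips; ilog2() is a hand-rolled
stand-in for the kernel helper):

#include <stdio.h>

#define ARCH_KMALLOC_MINALIGN 128       /* assumption for this example */

#if defined(ARCH_KMALLOC_MINALIGN) && ARCH_KMALLOC_MINALIGN > 8
#define KMALLOC_MIN_SIZE ARCH_KMALLOC_MINALIGN
#else
#define KMALLOC_MIN_SIZE 8
#endif

/* stand-in for the kernel's ilog2() */
static int ilog2(unsigned int n)
{
        int log = -1;

        while (n) {
                n >>= 1;
                log++;
        }
        return log;
}

int main(void)
{
        /* With a 128 byte minimum, the smallest general cache becomes
         * kmalloc-128; the 8..64 byte caches (shifts 3..6) and
         * kmalloc-96 are never created. */
        printf("KMALLOC_MIN_SIZE  = %d\n", KMALLOC_MIN_SIZE);
        printf("KMALLOC_SHIFT_LOW = %d\n", ilog2(KMALLOC_MIN_SIZE));
        return 0;
}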

If we do that, then the count of active general caches displayed at boot
is no longer correct, since we may skip elements of the kmalloc array. So
count the created caches separately.
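
A compact sketch of the separate count (same 128 byte assumption as
above; KMALLOC_SHIFT_HIGH is an illustrative 12 here, not the kernel's
actual value):

#include <stdio.h>

#define KMALLOC_MIN_SIZE   128  /* assumed, as above */
#define KMALLOC_SHIFT_LOW  7    /* ilog2(KMALLOC_MIN_SIZE) */
#define KMALLOC_SHIFT_HIGH 12   /* illustrative upper bound */

int main(void)
{
        int i;
        int caches = 0;

        if (KMALLOC_MIN_SIZE <= 64)     /* kmalloc-96 */
                caches++;
        if (KMALLOC_MIN_SIZE <= 128)    /* kmalloc-192 */
                caches++;
        for (i = KMALLOC_SHIFT_LOW; i <= KMALLOC_SHIFT_HIGH; i++)
                caches++;               /* the power-of-two caches */

        /* Prints Genslabs=7, where the old code's
         * Genslabs=KMALLOC_SHIFT_HIGH would have claimed 12. */
        printf("SLUB: Genslabs=%d\n", caches);
        return 0;
}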

This approach was tested by Haavard yesterday.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

authored by Christoph Lameter, committed by Linus Torvalds
4b356be0 8dab5241

+26 -7
+11 -2
include/linux/slub_def.h
@@ -28,7 +28,7 @@
        int size;               /* The size of an object including meta data */
        int objsize;            /* The size of an object without meta data */
        int offset;             /* Free pointer offset. */
-       unsigned int order;
+       int order;

        /*
         * Avoid an extra cache line for UP, SMP and for the node local to
@@ -56,7 +56,13 @@
 /*
  * Kmalloc subsystem.
  */
-#define KMALLOC_SHIFT_LOW 3
+#if defined(ARCH_KMALLOC_MINALIGN) && ARCH_KMALLOC_MINALIGN > 8
+#define KMALLOC_MIN_SIZE ARCH_KMALLOC_MINALIGN
+#else
+#define KMALLOC_MIN_SIZE 8
+#endif
+
+#define KMALLOC_SHIFT_LOW ilog2(KMALLOC_MIN_SIZE)

 /*
  * We keep the general caches in an array of slab caches that are used for
@@ -75,6 +81,9 @@

        if (size > KMALLOC_MAX_SIZE)
                return -1;
+
+       if (size <= KMALLOC_MIN_SIZE)
+               return KMALLOC_SHIFT_LOW;

        if (size > 64 && size <= 96)
                return 1;
+15 -5
mm/slub.c
@@ -2436,6 +2436,7 @@
 void __init kmem_cache_init(void)
 {
        int i;
+       int caches = 0;

 #ifdef CONFIG_NUMA
        /*
@@ -2446,20 +2447,29 @@
        create_kmalloc_cache(&kmalloc_caches[0], "kmem_cache_node",
                sizeof(struct kmem_cache_node), GFP_KERNEL);
        kmalloc_caches[0].refcount = -1;
+       caches++;
 #endif

        /* Able to allocate the per node structures */
        slab_state = PARTIAL;

        /* Caches that are not of the two-to-the-power-of size */
-       create_kmalloc_cache(&kmalloc_caches[1],
+       if (KMALLOC_MIN_SIZE <= 64) {
+               create_kmalloc_cache(&kmalloc_caches[1],
                                "kmalloc-96", 96, GFP_KERNEL);
-       create_kmalloc_cache(&kmalloc_caches[2],
+               caches++;
+       }
+       if (KMALLOC_MIN_SIZE <= 128) {
+               create_kmalloc_cache(&kmalloc_caches[2],
                                "kmalloc-192", 192, GFP_KERNEL);
+               caches++;
+       }

-       for (i = KMALLOC_SHIFT_LOW; i <= KMALLOC_SHIFT_HIGH; i++)
+       for (i = KMALLOC_SHIFT_LOW; i <= KMALLOC_SHIFT_HIGH; i++) {
                create_kmalloc_cache(&kmalloc_caches[i],
                        "kmalloc", 1 << i, GFP_KERNEL);
+               caches++;
+       }

        slab_state = UP;

@@ -2476,8 +2486,8 @@
                nr_cpu_ids * sizeof(struct page *);

        printk(KERN_INFO "SLUB: Genslabs=%d, HWalign=%d, Order=%d-%d, MinObjects=%d,"
-               " Processors=%d, Nodes=%d\n",
-               KMALLOC_SHIFT_HIGH, cache_line_size(),
+               " CPUs=%d, Nodes=%d\n",
+               caches, cache_line_size(),
                slub_min_order, slub_max_order, slub_min_objects,
                nr_cpu_ids, nr_node_ids);
 }