Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mm-hotfixes-stable-2026-02-26-14-14' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull misc fixes from Andrew Morton:
"12 hotfixes. 7 are cc:stable. 8 are for MM.

All are singletons - please see the changelogs for details"

* tag 'mm-hotfixes-stable-2026-02-26-14-14' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
MAINTAINERS: update Yosry Ahmed's email address
mailmap: add entry for Daniele Alessandrelli
mm: fix NULL NODE_DATA dereference for memoryless nodes on boot
mm/tracing: rss_stat: ensure curr is false from kthread context
mm/kfence: fix KASAN hardware tag faults during late enablement
mm/damon/core: disallow non-power of two min_region_sz
Squashfs: check metadata block offset is within range
MAINTAINERS, mailmap: update e-mail address for Vlastimil Babka
liveupdate: luo_file: remember retrieve() status
mm: thp: deny THP for files on anonymous inodes
mm: change vma_alloc_folio_noprof() macro to inline function
mm/kfence: disable KFENCE upon KASAN HW tags enablement

+102 -42
+4 -1
.mailmap
···
  Daniel Lezcano <daniel.lezcano@kernel.org> <daniel.lezcano@linexp.org>
  Daniel Lezcano <daniel.lezcano@kernel.org> <dlezcano@fr.ibm.com>
  Daniel Thompson <danielt@kernel.org> <daniel.thompson@linaro.org>
+ Daniele Alessandrelli <daniele.alessandrelli@gmail.com> <daniele.alessandrelli@intel.com>
  Danilo Krummrich <dakr@kernel.org> <dakr@redhat.com>
  David Brownell <david-b@pacbell.net>
  David Collins <quic_collinsd@quicinc.com> <collinsd@codeaurora.org>
···
  Vlad Dogaru <ddvlad@gmail.com> <vlad.dogaru@intel.com>
  Vladimir Davydov <vdavydov.dev@gmail.com> <vdavydov@parallels.com>
  Vladimir Davydov <vdavydov.dev@gmail.com> <vdavydov@virtuozzo.com>
+ Vlastimil Babka <vbabka@kernel.org> <vbabka@suse.cz>
  WangYuli <wangyuli@aosc.io> <wangyl5933@chinaunicom.cn>
  WangYuli <wangyuli@aosc.io> <wangyuli@deepin.org>
  Weiwen Hu <huweiwen@linux.alibaba.com> <sehuww@mail.scut.edu.cn>
···
  Ying Huang <huang.ying.caritas@gmail.com> <ying.huang@intel.com>
  Yixun Lan <dlan@kernel.org> <dlan@gentoo.org>
  Yixun Lan <dlan@kernel.org> <yixun.lan@amlogic.com>
- Yosry Ahmed <yosry.ahmed@linux.dev> <yosryahmed@google.com>
+ Yosry Ahmed <yosry@kernel.org> <yosryahmed@google.com>
+ Yosry Ahmed <yosry@kernel.org> <yosry.ahmed@linux.dev>
  Yu-Chun Lin <eleanor.lin@realtek.com> <eleanor15x@gmail.com>
  Yusuke Goda <goda.yusuke@renesas.com>
  Zack Rusin <zack.rusin@broadcom.com> <zackr@vmware.com>
+10 -10
MAINTAINERS
···
  M: David Hildenbrand <david@kernel.org>
  R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
  R: Liam R. Howlett <Liam.Howlett@oracle.com>
- R: Vlastimil Babka <vbabka@suse.cz>
+ R: Vlastimil Babka <vbabka@kernel.org>
  R: Mike Rapoport <rppt@kernel.org>
  R: Suren Baghdasaryan <surenb@google.com>
  R: Michal Hocko <mhocko@suse.com>
···
  M: David Hildenbrand <david@kernel.org>
  R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
  R: Liam R. Howlett <Liam.Howlett@oracle.com>
- R: Vlastimil Babka <vbabka@suse.cz>
+ R: Vlastimil Babka <vbabka@kernel.org>
  R: Mike Rapoport <rppt@kernel.org>
  R: Suren Baghdasaryan <surenb@google.com>
  R: Michal Hocko <mhocko@suse.com>
···
 
  MEMORY MANAGEMENT - PAGE ALLOCATOR
  M: Andrew Morton <akpm@linux-foundation.org>
- M: Vlastimil Babka <vbabka@suse.cz>
+ M: Vlastimil Babka <vbabka@kernel.org>
  R: Suren Baghdasaryan <surenb@google.com>
  R: Michal Hocko <mhocko@suse.com>
  R: Brendan Jackman <jackmanb@google.com>
···
  M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
  R: Rik van Riel <riel@surriel.com>
  R: Liam R. Howlett <Liam.Howlett@oracle.com>
- R: Vlastimil Babka <vbabka@suse.cz>
+ R: Vlastimil Babka <vbabka@kernel.org>
  R: Harry Yoo <harry.yoo@oracle.com>
  R: Jann Horn <jannh@google.com>
  L: linux-mm@kvack.org
···
  M: Andrew Morton <akpm@linux-foundation.org>
  M: Liam R. Howlett <Liam.Howlett@oracle.com>
  M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
- R: Vlastimil Babka <vbabka@suse.cz>
+ R: Vlastimil Babka <vbabka@kernel.org>
  R: Jann Horn <jannh@google.com>
  R: Pedro Falcato <pfalcato@suse.de>
  L: linux-mm@kvack.org
···
  M: Suren Baghdasaryan <surenb@google.com>
  M: Liam R. Howlett <Liam.Howlett@oracle.com>
  M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
- R: Vlastimil Babka <vbabka@suse.cz>
+ R: Vlastimil Babka <vbabka@kernel.org>
  R: Shakeel Butt <shakeel.butt@linux.dev>
  L: linux-mm@kvack.org
  S: Maintained
···
  M: Liam R. Howlett <Liam.Howlett@oracle.com>
  M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
  M: David Hildenbrand <david@kernel.org>
- R: Vlastimil Babka <vbabka@suse.cz>
+ R: Vlastimil Babka <vbabka@kernel.org>
  R: Jann Horn <jannh@google.com>
  L: linux-mm@kvack.org
  S: Maintained
···
  RUST [ALLOC]
  M: Danilo Krummrich <dakr@kernel.org>
  R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
- R: Vlastimil Babka <vbabka@suse.cz>
+ R: Vlastimil Babka <vbabka@kernel.org>
  R: Liam R. Howlett <Liam.Howlett@oracle.com>
  R: Uladzislau Rezki <urezki@gmail.com>
  L: rust-for-linux@vger.kernel.org
···
  F: drivers/nvmem/layouts/sl28vpd.c
 
  SLAB ALLOCATOR
- M: Vlastimil Babka <vbabka@suse.cz>
+ M: Vlastimil Babka <vbabka@kernel.org>
  M: Andrew Morton <akpm@linux-foundation.org>
  R: Christoph Lameter <cl@gentwo.org>
  R: David Rientjes <rientjes@google.com>
···
 
  ZSWAP COMPRESSED SWAP CACHING
  M: Johannes Weiner <hannes@cmpxchg.org>
- M: Yosry Ahmed <yosry.ahmed@linux.dev>
+ M: Yosry Ahmed <yosry@kernel.org>
  M: Nhat Pham <nphamcs@gmail.com>
  R: Chengming Zhou <chengming.zhou@linux.dev>
  L: linux-mm@kvack.org
+3
fs/squashfs/cache.c
···
  	if (unlikely(length < 0))
  		return -EIO;
 
+ 	if (unlikely(*offset < 0 || *offset >= SQUASHFS_METADATA_SIZE))
+ 		return -EIO;
+ 
  	while (length) {
  		entry = squashfs_cache_get(sb, msblk->block_cache, *block, 0);
  		if (entry->error) {
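The added guard can be modeled outside the kernel as a standalone check. This is a sketch, not kernel code: `SQUASHFS_METADATA_SIZE` is 8192 in the kernel, and the helper name below is illustrative.

```c
#include <assert.h>
#include <errno.h>

/* Standalone model of the added Squashfs guard. A crafted image can
 * carry an out-of-range metadata block offset, which must be rejected
 * before the metadata copy loop uses it. */
#define SQUASHFS_METADATA_SIZE 8192

static int check_metadata_offset(long long offset)
{
	if (offset < 0 || offset >= SQUASHFS_METADATA_SIZE)
		return -EIO;	/* treat as a corrupted filesystem image */
	return 0;
}
```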
+5 -2
include/linux/gfp.h
···
  {
  	return folio_alloc_noprof(gfp, order);
  }
- #define vma_alloc_folio_noprof(gfp, order, vma, addr) \
- 	folio_alloc_noprof(gfp, order)
+ static inline struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order,
+ 		struct vm_area_struct *vma, unsigned long addr)
+ {
+ 	return folio_alloc_noprof(gfp, order);
+ }
  #endif
 
  #define alloc_pages(...) alloc_hooks(alloc_pages_noprof(__VA_ARGS__))
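The motivation for the macro-to-inline change shows up in a userspace sketch (the names below are suffixed to avoid clashing with the kernel's, and `folio_alloc_stub` is a stand-in): a function-like macro silently drops its `vma`/`addr` arguments, so side effects in them are never evaluated and no type checking occurs, while the inline function evaluates and type-checks every argument.

```c
#include <assert.h>
#include <stddef.h>

struct vm_area_struct;			/* opaque, for type checking only */
struct folio { int unused; };

static struct folio the_folio;
static struct folio *folio_alloc_stub(unsigned gfp, int order)
{
	(void)gfp; (void)order;
	return &the_folio;
}

/* Old shape: a macro that discards vma and addr at preprocessing time. */
#define vma_alloc_folio_macro(gfp, order, vma, addr) \
	folio_alloc_stub(gfp, order)

/* New shape: an inline function; all four arguments are evaluated. */
static inline struct folio *vma_alloc_folio_inline(unsigned gfp, int order,
		struct vm_area_struct *vma, unsigned long addr)
{
	(void)vma; (void)addr;
	return folio_alloc_stub(gfp, order);
}

static int side_effects;
static unsigned long bump(void) { side_effects++; return 0; }
```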
+6 -3
include/linux/liveupdate.h
···
  /**
   * struct liveupdate_file_op_args - Arguments for file operation callbacks.
   * @handler: The file handler being called.
-  * @retrieved: The retrieve status for the 'can_finish / finish'
-  *             operation.
+  * @retrieve_status: The retrieve status for the 'can_finish / finish'
+  *                   operation. A value of 0 means the retrieve has not been
+  *                   attempted, a positive value means the retrieve was
+  *                   successful, and a negative value means the retrieve failed,
+  *                   and the value is the error code of the call.
   * @file: The file object. For retrieve: [OUT] The callback sets
   *        this to the new file. For other ops: [IN] The caller sets
   *        this to the file being operated on.
···
   */
  struct liveupdate_file_op_args {
  	struct liveupdate_file_handler *handler;
- 	bool retrieved;
+ 	int retrieve_status;
  	struct file *file;
  	u64 serialized_data;
  	void *private_data;
+7 -1
include/trace/events/kmem.h
···
 
  	TP_fast_assign(
  		__entry->mm_id = mm_ptr_to_hash(mm);
- 		__entry->curr = !!(current->mm == mm);
+ 		/*
+ 		 * curr is true if the mm matches the current task's mm_struct.
+ 		 * Since kthreads (PF_KTHREAD) have no mm_struct of their own
+ 		 * but can borrow one via kthread_use_mm(), we must filter them
+ 		 * out to avoid incorrectly attributing the RSS update to them.
+ 		 */
+ 		__entry->curr = current->mm == mm && !(current->flags & PF_KTHREAD);
  		__entry->member = member;
  		__entry->size = (percpu_counter_sum_positive(&mm->rss_stat[member])
  				 << PAGE_SHIFT);
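The new predicate can be modeled in userspace. This is an illustrative sketch, not the kernel's code; `PF_KTHREAD` is 0x00200000 in include/linux/sched.h. A kthread that borrowed an mm via kthread_use_mm() satisfies `current->mm == mm`, so it is the flag test that filters it out.

```c
#include <assert.h>

/* Userspace model of the corrected curr computation in the rss_stat
 * tracepoint. The function name and parameters are illustrative. */
#define PF_KTHREAD 0x00200000

static int rss_stat_curr(unsigned int task_flags, int mm_matches_current)
{
	return mm_matches_current && !(task_flags & PF_KTHREAD);
}
```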
+25 -16
kernel/liveupdate/luo_file.c
···
   * state that is not preserved. Set by the handler's .preserve()
   * callback, and must be freed in the handler's .unpreserve()
   * callback.
-  * @retrieved: A flag indicating whether a user/kernel in the new kernel has
+  * @retrieve_status: Status code indicating whether a user/kernel in the new kernel has
   *             successfully called retrieve() on this file. This prevents
-  *             multiple retrieval attempts.
+  *             multiple retrieval attempts. A value of 0 means a retrieve()
+  *             has not been attempted, a positive value means the retrieve()
+  *             was successful, and a negative value means the retrieve()
+  *             failed, and the value is the error code of the call.
   * @mutex: A mutex that protects the fields of this specific instance
   *        (e.g., @retrieved, @file), ensuring that operations like
   *        retrieving or finishing a file are atomic.
···
  	struct file *file;
  	u64 serialized_data;
  	void *private_data;
- 	bool retrieved;
+ 	int retrieve_status;
  	struct mutex mutex;
  	struct list_head list;
  	u64 token;
···
  	luo_file->file = file;
  	luo_file->fh = fh;
  	luo_file->token = token;
- 	luo_file->retrieved = false;
  	mutex_init(&luo_file->mutex);
 
  	args.handler = fh;
···
  		return -ENOENT;
 
  	guard(mutex)(&luo_file->mutex);
- 	if (luo_file->retrieved) {
+ 	if (luo_file->retrieve_status < 0) {
+ 		/* Retrieve was attempted and it failed. Return the error code. */
+ 		return luo_file->retrieve_status;
+ 	}
+ 
+ 	if (luo_file->retrieve_status > 0) {
  		/*
  		 * Someone is asking for this file again, so get a reference
  		 * for them.
···
  	args.handler = luo_file->fh;
  	args.serialized_data = luo_file->serialized_data;
  	err = luo_file->fh->ops->retrieve(&args);
- 	if (!err) {
- 		luo_file->file = args.file;
- 
- 		/* Get reference so we can keep this file in LUO until finish */
- 		get_file(luo_file->file);
- 		*filep = luo_file->file;
- 		luo_file->retrieved = true;
+ 	if (err) {
+ 		/* Keep the error code for later use. */
+ 		luo_file->retrieve_status = err;
+ 		return err;
  	}
 
- 	return err;
+ 	luo_file->file = args.file;
+ 	/* Get reference so we can keep this file in LUO until finish */
+ 	get_file(luo_file->file);
+ 	*filep = luo_file->file;
+ 	luo_file->retrieve_status = 1;
+ 
+ 	return 0;
  }
 
  static int luo_file_can_finish_one(struct luo_file_set *file_set,
···
  		args.handler = luo_file->fh;
  		args.file = luo_file->file;
  		args.serialized_data = luo_file->serialized_data;
- 		args.retrieved = luo_file->retrieved;
+ 		args.retrieve_status = luo_file->retrieve_status;
  		can_finish = luo_file->fh->ops->can_finish(&args);
  	}
 
···
  	args.handler = luo_file->fh;
  	args.file = luo_file->file;
  	args.serialized_data = luo_file->serialized_data;
- 	args.retrieved = luo_file->retrieved;
+ 	args.retrieve_status = luo_file->retrieve_status;
 
  	luo_file->fh->ops->finish(&args);
  	luo_flb_file_finish(luo_file->fh);
···
  		luo_file->file = NULL;
  		luo_file->serialized_data = file_ser[i].data;
  		luo_file->token = file_ser[i].token;
- 		luo_file->retrieved = false;
  		mutex_init(&luo_file->mutex);
  		list_add_tail(&luo_file->list, &file_set->files_list);
  	}
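The tri-state encoding this patch adopts is a common kernel idiom and can be sketched in isolation (the names below are illustrative, not the LUO API): caching the errno lets repeat callers get the original failure back instead of re-driving a retrieve that already failed.

```c
#include <assert.h>
#include <errno.h>

/* Sketch of the retrieve_status convention:
 *   0   -> retrieve() has not been attempted
 *   > 0 -> retrieve() succeeded
 *   < 0 -> retrieve() failed; the value is the negative errno
 */
enum { RETRIEVE_NOT_ATTEMPTED = 0, RETRIEVE_OK = 1 };

/* Turn a callback result into the cached status. */
static int record_retrieve_result(int err)
{
	return err ? err : RETRIEVE_OK;
}

/* A retrieve may only be driven if one was never attempted. */
static int may_attempt_retrieve(int status)
{
	return status == RETRIEVE_NOT_ATTEMPTED;
}
```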
+3
mm/damon/core.c
···
  {
  	int err;
 
+ 	if (!is_power_of_2(src->min_region_sz))
+ 		return -EINVAL;
+ 
  	err = damon_commit_schemes(dst, src);
  	if (err)
  		return err;
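The kernel's `is_power_of_2()` (include/linux/log2.h) boils down to a single bit trick: a power of two has exactly one bit set, so `n & (n - 1)` clears it to zero. Reimplemented here for illustration:

```c
#include <assert.h>

/* Illustrative reimplementation of the power-of-two test used by the
 * new min_region_sz validation. */
static int is_power_of_2_sketch(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}
```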
+3
mm/huge_memory.c
···
 
  	inode = file_inode(vma->vm_file);
 
+ 	if (IS_ANON_FILE(inode))
+ 		return false;
+ 
  	return !inode_is_open_for_write(inode) && S_ISREG(inode->i_mode);
  }
 
+23 -6
mm/kfence/core.c
···
  #include <linux/hash.h>
  #include <linux/irq_work.h>
  #include <linux/jhash.h>
+ #include <linux/kasan-enabled.h>
  #include <linux/kcsan-checks.h>
  #include <linux/kfence.h>
  #include <linux/kmemleak.h>
···
  		return;
 
  	/*
+ 	 * If KASAN hardware tags are enabled, disable KFENCE, because it
+ 	 * does not support MTE yet.
+ 	 */
+ 	if (kasan_hw_tags_enabled()) {
+ 		pr_info("disabled as KASAN HW tags are enabled\n");
+ 		if (__kfence_pool) {
+ 			memblock_free(__kfence_pool, KFENCE_POOL_SIZE);
+ 			__kfence_pool = NULL;
+ 		}
+ 		kfence_sample_interval = 0;
+ 		return;
+ 	}
+ 
+ 	/*
  	 * If the pool has already been initialized by arch, there is no need to
  	 * re-allocate the memory pool.
  	 */
···
  #ifdef CONFIG_CONTIG_ALLOC
  	struct page *pages;
 
- 	pages = alloc_contig_pages(nr_pages_pool, GFP_KERNEL, first_online_node,
- 				   NULL);
+ 	pages = alloc_contig_pages(nr_pages_pool, GFP_KERNEL | __GFP_SKIP_KASAN,
+ 				   first_online_node, NULL);
  	if (!pages)
  		return -ENOMEM;
 
  	__kfence_pool = page_to_virt(pages);
- 	pages = alloc_contig_pages(nr_pages_meta, GFP_KERNEL, first_online_node,
- 				   NULL);
+ 	pages = alloc_contig_pages(nr_pages_meta, GFP_KERNEL | __GFP_SKIP_KASAN,
+ 				   first_online_node, NULL);
  	if (pages)
  		kfence_metadata_init = page_to_virt(pages);
  #else
···
  		return -EINVAL;
  	}
 
- 	__kfence_pool = alloc_pages_exact(KFENCE_POOL_SIZE, GFP_KERNEL);
+ 	__kfence_pool = alloc_pages_exact(KFENCE_POOL_SIZE,
+ 					  GFP_KERNEL | __GFP_SKIP_KASAN);
  	if (!__kfence_pool)
  		return -ENOMEM;
 
- 	kfence_metadata_init = alloc_pages_exact(KFENCE_METADATA_SIZE, GFP_KERNEL);
+ 	kfence_metadata_init = alloc_pages_exact(KFENCE_METADATA_SIZE,
+ 						 GFP_KERNEL | __GFP_SKIP_KASAN);
  #endif
 
  	if (!kfence_metadata_init)
+6 -1
mm/memfd_luo.c
···
  	struct memfd_luo_folio_ser *folios_ser;
  	struct memfd_luo_ser *ser;
 
- 	if (args->retrieved)
+ 	/*
+ 	 * If retrieve was successful, nothing to do. If it failed, retrieve()
+ 	 * already cleaned up everything it could. So nothing to do there
+ 	 * either. Only need to clean up when retrieve was not called.
+ 	 */
+ 	if (args->retrieve_status)
  		return;
 
  	ser = phys_to_virt(args->serialized_data);
+5 -1
mm/mm_init.c
···
  	for_each_node(nid) {
  		pg_data_t *pgdat;
 
- 		if (!node_online(nid))
+ 		/*
+ 		 * If an architecture has not allocated node data for
+ 		 * this node, presume the node is memoryless or offline.
+ 		 */
+ 		if (!NODE_DATA(nid))
  			alloc_offline_node_data(nid);
 
  		pgdat = NODE_DATA(nid);
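The condition change can be modeled in userspace. This is an illustrative sketch with invented names: an architecture may leave a memoryless node marked online without ever allocating its `pg_data_t`, so keying the allocation off the online bit misses the NULL `NODE_DATA` case that caused the boot-time dereference.

```c
#include <assert.h>
#include <stddef.h>

/* Model of the fixed decision: allocate fallback node data whenever the
 * node's pg_data_t pointer is NULL, regardless of the online bit. */
static int needs_node_data_alloc(const void *node_data, int node_is_online)
{
	(void)node_is_online;	/* no longer consulted after the fix */
	return node_data == NULL;
}
```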
+2 -1
mm/page_alloc.c
···
  {
  	const gfp_t reclaim_mask = __GFP_IO | __GFP_FS | __GFP_RECLAIM;
  	const gfp_t action_mask = __GFP_COMP | __GFP_RETRY_MAYFAIL | __GFP_NOWARN |
- 				  __GFP_ZERO | __GFP_ZEROTAGS | __GFP_SKIP_ZERO;
+ 				  __GFP_ZERO | __GFP_ZEROTAGS | __GFP_SKIP_ZERO |
+ 				  __GFP_SKIP_KASAN;
  	const gfp_t cc_action_mask = __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
 
  	/*