Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mm-hotfixes-stable-2026-04-30-15-39' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull MM fixes from Andrew Morton:
"20 hotfixes. All are for MM (and for MMish maintainers). 9 are
cc:stable and the remainder are for post-7.0 issues or aren't deemed
suitable for backporting.

There are two DAMON series from SeongJae Park which address races
which could lead to use-after-free errors, and avoid the possibility
of presenting stale parameter values to users"

* tag 'mm-hotfixes-stable-2026-04-30-15-39' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
mm: memcontrol: fix rcu unbalance in get_non_dying_memcg_end()
mm/userfaultfd: detect VMA type change after copy retry in mfill_copy_folio_retry()
MAINTAINERS: remove stale kdump project URL
mm/damon/stat: detect and use fresh enabled value
mm/damon/lru_sort: detect and use fresh enabled and kdamond_pid values
mm/damon/reclaim: detect and use fresh enabled and kdamond_pid values
selftests/mm: specify requirement for PROC_MEM_ALWAYS_FORCE=y
mm/damon/sysfs-schemes: protect path kfree() with damon_sysfs_lock
mm/damon/sysfs-schemes: protect memcg_path kfree() with damon_sysfs_lock
MAINTAINERS: update Li Wang's email address
MAINTAINERS, mailmap: update email address for Qi Zheng
MAINTAINERS: update Liam's email address
mm/hugetlb_cma: round up per_node before logging it
MAINTAINERS: fix regex pattern in CORE MM category
mm/vma: do not try to unmap a VMA if mmap_prepare() invoked from mmap()
mm: start background writeback based on per-wb threshold for strictlimit BDIs
kho: fix error handling in kho_add_subtree()
liveupdate: fix return value on session allocation failure
mailmap: update entry for Dan Carpenter
vmalloc: fix buffer overflow in vrealloc_node_align()

+262 -140
+5
.mailmap
···
  Colin Ian King <colin.i.king@gmail.com> <colin.king@canonical.com>
  Corey Minyard <minyard@acm.org>
  Damian Hobson-Garcia <dhobsong@igel.co.jp>
+ Dan Carpenter <error27@gmail.com> <dan.carpenter@linaro.org>
  Dan Carpenter <error27@gmail.com> <dan.carpenter@oracle.com>
  Dan Williams <djbw@kernel.org> <dan.j.williams@intel.com>
  Daniel Borkmann <daniel@iogearbox.net> <danborkmann@googlemail.com>
···
  Leon Romanovsky <leon@kernel.org> <leonro@mellanox.com>
  Leon Romanovsky <leon@kernel.org> <leonro@nvidia.com>
  Leo Yan <leo.yan@linux.dev> <leo.yan@linaro.org>
+ Liam R. Howlett <liam@infradead.org> <Liam.Howlett@oracle.com>
  Liam Mark <quic_lmark@quicinc.com> <lmark@codeaurora.org>
  Linas Vepstas <linas@austin.ibm.com>
  Linus Lüssing <linus.luessing@c0d3.blue> <linus.luessing@ascom.ch>
···
  Linus Walleij <linusw@kernel.org> <linus.walleij@linaro.org>
  Linus Walleij <linusw@kernel.org> <triad@df.lth.se>
  <linux-hardening@vger.kernel.org> <kernel-hardening@lists.openwall.com>
+ Li Wang <li.wang@linux.dev> <liwang@redhat.com>
+ Li Wang <li.wang@linux.dev> <wangli.ahau@gmail.com>
  Li Yang <leoyang.li@nxp.com> <leoli@freescale.com>
  Li Yang <leoyang.li@nxp.com> <leo@zh-kernel.org>
  Lior David <quic_liord@quicinc.com> <liord@codeaurora.org>
···
  Puranjay Mohan <puranjay@kernel.org> <puranjay12@gmail.com>
  Qais Yousef <qyousef@layalina.io> <qais.yousef@imgtec.com>
  Qais Yousef <qyousef@layalina.io> <qais.yousef@arm.com>
+ Qi Zheng <qi.zheng@linux.dev> <zhengqi.arch@bytedance.com>
  Quentin Monnet <qmo@kernel.org> <quentin.monnet@netronome.com>
  Quentin Monnet <qmo@kernel.org> <quentin@isovalent.com>
  Quentin Perret <qperret@qperret.net> <quentin.perret@arm.com>
+15 -15
MAINTAINERS
···
  R: Dave Young <ruirui.yang@linux.dev>
  L: kexec@lists.infradead.org
  S: Maintained
- W: http://lse.sourceforge.net/kdump/
  F: Documentation/admin-guide/kdump/
  F: fs/proc/vmcore.c
  F: include/linux/crash_core.h
···
  M: Cyril Hrubis <chrubis@suse.cz>
  M: Jan Stancek <jstancek@redhat.com>
  M: Petr Vorel <pvorel@suse.cz>
- M: Li Wang <liwang@redhat.com>
+ M: Li Wang <li.wang@linux.dev>
  M: Yang Xu <xuyang2018.jy@fujitsu.com>
  M: Xiao Yang <yangx.jy@fujitsu.com>
  L: ltp@lists.linux.it (subscribers-only)
···
  F: net/mctp/
  
  MAPLE TREE
- M: Liam R. Howlett <Liam.Howlett@oracle.com>
+ M: Liam R. Howlett <liam@infradead.org>
  R: Alice Ryhl <aliceryhl@google.com>
  R: Andrew Ballance <andrewjballance@gmail.com>
  L: maple-tree@lists.infradead.org
···
  M: Andrew Morton <akpm@linux-foundation.org>
  M: David Hildenbrand <david@kernel.org>
  R: Lorenzo Stoakes <ljs@kernel.org>
- R: Liam R. Howlett <Liam.Howlett@oracle.com>
+ R: Liam R. Howlett <liam@infradead.org>
  R: Vlastimil Babka <vbabka@kernel.org>
  R: Mike Rapoport <rppt@kernel.org>
  R: Suren Baghdasaryan <surenb@google.com>
···
  F: mm/util.c
  F: mm/vmpressure.c
  F: mm/vmstat.c
- N: include/linux/page[-_]*
+ N: include\/linux\/page[-_][a-zA-Z]*
  
  MEMORY MANAGEMENT - EXECMEM
  M: Andrew Morton <akpm@linux-foundation.org>
···
  M: Andrew Morton <akpm@linux-foundation.org>
  M: David Hildenbrand <david@kernel.org>
  R: Lorenzo Stoakes <ljs@kernel.org>
- R: Liam R. Howlett <Liam.Howlett@oracle.com>
+ R: Liam R. Howlett <liam@infradead.org>
  R: Vlastimil Babka <vbabka@kernel.org>
  R: Mike Rapoport <rppt@kernel.org>
  R: Suren Baghdasaryan <surenb@google.com>
···
  F: include/linux/compaction.h
  F: include/linux/gfp.h
  F: include/linux/page-isolation.h
+ F: include/linux/pageblock-flags.h
  F: mm/compaction.c
  F: mm/debug_page_alloc.c
  F: mm/debug_page_ref.c
···
  M: Johannes Weiner <hannes@cmpxchg.org>
  R: David Hildenbrand <david@kernel.org>
  R: Michal Hocko <mhocko@kernel.org>
- R: Qi Zheng <zhengqi.arch@bytedance.com>
+ R: Qi Zheng <qi.zheng@linux.dev>
  R: Shakeel Butt <shakeel.butt@linux.dev>
  R: Lorenzo Stoakes <ljs@kernel.org>
  L: linux-mm@kvack.org
···
  M: David Hildenbrand <david@kernel.org>
  M: Lorenzo Stoakes <ljs@kernel.org>
  R: Rik van Riel <riel@surriel.com>
- R: Liam R. Howlett <Liam.Howlett@oracle.com>
+ R: Liam R. Howlett <liam@infradead.org>
  R: Vlastimil Babka <vbabka@kernel.org>
  R: Harry Yoo <harry@kernel.org>
  R: Jann Horn <jannh@google.com>
···
  M: Lorenzo Stoakes <ljs@kernel.org>
  R: Zi Yan <ziy@nvidia.com>
  R: Baolin Wang <baolin.wang@linux.alibaba.com>
- R: Liam R. Howlett <Liam.Howlett@oracle.com>
+ R: Liam R. Howlett <liam@infradead.org>
  R: Nico Pache <npache@redhat.com>
  R: Ryan Roberts <ryan.roberts@arm.com>
  R: Dev Jain <dev.jain@arm.com>
···
  MEMORY MANAGEMENT - RUST
  M: Alice Ryhl <aliceryhl@google.com>
  R: Lorenzo Stoakes <ljs@kernel.org>
- R: Liam R. Howlett <Liam.Howlett@oracle.com>
+ R: Liam R. Howlett <liam@infradead.org>
  L: linux-mm@kvack.org
  L: rust-for-linux@vger.kernel.org
  S: Maintained
···
  
  MEMORY MAPPING
  M: Andrew Morton <akpm@linux-foundation.org>
- M: Liam R. Howlett <Liam.Howlett@oracle.com>
+ M: Liam R. Howlett <liam@infradead.org>
  M: Lorenzo Stoakes <ljs@kernel.org>
  R: Vlastimil Babka <vbabka@kernel.org>
  R: Jann Horn <jannh@google.com>
···
  MEMORY MAPPING - LOCKING
  M: Andrew Morton <akpm@linux-foundation.org>
  M: Suren Baghdasaryan <surenb@google.com>
- M: Liam R. Howlett <Liam.Howlett@oracle.com>
+ M: Liam R. Howlett <liam@infradead.org>
  M: Lorenzo Stoakes <ljs@kernel.org>
  R: Vlastimil Babka <vbabka@kernel.org>
  R: Shakeel Butt <shakeel.butt@linux.dev>
···
  
  MEMORY MAPPING - MADVISE (MEMORY ADVICE)
  M: Andrew Morton <akpm@linux-foundation.org>
- M: Liam R. Howlett <Liam.Howlett@oracle.com>
+ M: Liam R. Howlett <liam@infradead.org>
  M: Lorenzo Stoakes <ljs@kernel.org>
  M: David Hildenbrand <david@kernel.org>
  R: Vlastimil Babka <vbabka@kernel.org>
···
  M: Danilo Krummrich <dakr@kernel.org>
  R: Lorenzo Stoakes <ljs@kernel.org>
  R: Vlastimil Babka <vbabka@kernel.org>
- R: Liam R. Howlett <Liam.Howlett@oracle.com>
+ R: Liam R. Howlett <liam@infradead.org>
  R: Uladzislau Rezki <urezki@gmail.com>
  L: rust-for-linux@vger.kernel.org
  S: Maintained
···
  SHRINKER
  M: Andrew Morton <akpm@linux-foundation.org>
  M: Dave Chinner <david@fromorbit.com>
- R: Qi Zheng <zhengqi.arch@bytedance.com>
+ R: Qi Zheng <qi.zheng@linux.dev>
  R: Roman Gushchin <roman.gushchin@linux.dev>
  R: Muchun Song <muchun.song@linux.dev>
  L: linux-mm@kvack.org
+1 -1
include/linux/maple_tree.h
···
  /*
   * Maple Tree - An RCU-safe adaptive tree for storing ranges
   * Copyright (c) 2018-2022 Oracle
-  * Authors: Liam R. Howlett <Liam.Howlett@Oracle.com>
+  * Authors: Liam R. Howlett <liam@infradead.org>
   * Matthew Wilcox <willy@infradead.org>
   */
  
+1 -1
include/linux/mm.h
···
  
  int mmap_action_prepare(struct vm_area_desc *desc);
  int mmap_action_complete(struct vm_area_struct *vma,
-         struct mmap_action *action);
+         struct mmap_action *action, bool is_compat);
  
  /* Look up the first VMA which exactly match the interval vm_start ... vm_end */
  static inline struct vm_area_struct *find_exact_vma(struct mm_struct *mm,
+13 -8
kernel/liveupdate/kexec_handover.c
···
          goto out_pack;
      }
  
-     err = fdt_setprop(root_fdt, off, KHO_SUB_TREE_PROP_NAME,
-             &phys, sizeof(phys));
-     if (err < 0)
-         goto out_pack;
+     fdt_err = fdt_setprop(root_fdt, off, KHO_SUB_TREE_PROP_NAME,
+             &phys, sizeof(phys));
+     if (fdt_err < 0)
+         goto out_del_node;
  
-     err = fdt_setprop(root_fdt, off, KHO_SUB_TREE_SIZE_PROP_NAME,
-             &size_u64, sizeof(size_u64));
-     if (err < 0)
-         goto out_pack;
+     fdt_err = fdt_setprop(root_fdt, off, KHO_SUB_TREE_SIZE_PROP_NAME,
+             &size_u64, sizeof(size_u64));
+     if (fdt_err < 0)
+         goto out_del_node;
  
      WARN_ON_ONCE(kho_debugfs_blob_add(&kho_out.dbg, name, blob,
              size, false));
  
+     err = 0;
+     goto out_pack;
+ 
+ out_del_node:
+     fdt_del_node(root_fdt, off);
  out_pack:
      fdt_pack(root_fdt);
  
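The kho_add_subtree() fix above is the usual label-per-resource unwind pattern: once the subtree node has been added, any later fdt_setprop() failure must delete that node before falling through to the shared out_pack path, while earlier failures still jump straight to out_pack. A minimal userspace sketch of the same control flow; add_node(), add_prop(), del_node() and pack() are hypothetical stand-ins for the libfdt calls, not kernel APIs.

#include <stdio.h>

/* Hypothetical stand-ins for fdt_add_subnode()/fdt_setprop()/fdt_del_node()/fdt_pack(). */
static int add_node(const char *name)           { printf("add node %s\n", name); return 0; }
static int add_prop(const char *name, int fail) { printf("set prop %s\n", name); return fail ? -1 : 0; }
static void del_node(const char *name)          { printf("del node %s\n", name); }
static void pack(void)                          { printf("pack\n"); }

/* Add a node plus two properties; on any property failure, remove the
 * half-built node before packing, mirroring the new out_del_node label. */
static int add_subtree(const char *name, int fail_second_prop)
{
	int err;

	err = add_node(name);
	if (err)
		goto out_pack;      /* node never existed: nothing to undo */

	err = add_prop("phys", 0);
	if (err)
		goto out_del_node;

	err = add_prop("size", fail_second_prop);
	if (err)
		goto out_del_node;

	goto out_pack;

out_del_node:
	del_node(name);             /* undo the partially built node */
out_pack:
	pack();
	return err;
}

int main(void)
{
	add_subtree("memmap", 0);   /* success path */
	add_subtree("memmap", 1);   /* failure path: node is deleted */
	return 0;
}

The separate out_del_node label matters because the delete is only valid once the node exists; the original code reused out_pack for every failure and left the partially built node behind.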
+10 -5
kernel/liveupdate/luo_session.c
···
  {
      struct luo_session_header *sh = &luo_session_global.incoming;
      static bool is_deserialized;
-     static int err;
+     static int saved_err;
+     int err;
  
      /* If has been deserialized, always return the same error code */
      if (is_deserialized)
-         return err;
+         return saved_err;
  
      is_deserialized = true;
      if (!sh->active)
···
          pr_warn("Failed to allocate session [%.*s] during deserialization %pe\n",
                  (int)sizeof(sh->ser[i].name),
                  sh->ser[i].name, session);
-         return PTR_ERR(session);
+         err = PTR_ERR(session);
+         goto save_err;
      }
  
      err = luo_session_insert(sh, session);
···
          pr_warn("Failed to insert session [%s] %pe\n",
                  session->name, ERR_PTR(err));
          luo_session_free(session);
-         return err;
+         goto save_err;
      }
  
      scoped_guard(mutex, &session->mutex) {
···
          if (err) {
              pr_warn("Failed to deserialize files for session [%s] %pe\n",
                      session->name, ERR_PTR(err));
-             return err;
+             goto save_err;
          }
      }
  
···
      sh->ser = NULL;
  
      return 0;
+ save_err:
+     saved_err = err;
+     return err;
  }
  
  int luo_session_serialize(void)
+1 -1
lib/maple_tree.c
···
  /*
   * Maple Tree implementation
   * Copyright (c) 2018-2022 Oracle Corporation
-  * Authors: Liam R. Howlett <Liam.Howlett@oracle.com>
+  * Authors: Liam R. Howlett <liam@infradead.org>
   * Matthew Wilcox <willy@infradead.org>
   * Copyright (c) 2023 ByteDance
   * Author: Peng Zhang <zhangpeng.00@bytedance.com>
+2 -2
lib/test_maple_tree.c
···
  /*
   * test_maple_tree.c: Test the maple tree API
   * Copyright (c) 2018-2022 Oracle Corporation
-  * Author: Liam R. Howlett <Liam.Howlett@Oracle.com>
+  * Author: Liam R. Howlett <liam@infradead.org>
   *
   * Any tests that only require the interface of the tree.
   */
···
  
  module_init(maple_tree_seed);
  module_exit(maple_tree_harvest);
- MODULE_AUTHOR("Liam R. Howlett <Liam.Howlett@Oracle.com>");
+ MODULE_AUTHOR("Liam R. Howlett <liam@infradead.org>");
  MODULE_DESCRIPTION("maple tree API test module");
  MODULE_LICENSE("GPL");
+55 -30
mm/damon/lru_sort.c
···
   */
  static unsigned long addr_unit __read_mostly = 1;
  
- /*
-  * PID of the DAMON thread
-  *
-  * If DAMON_LRU_SORT is enabled, this becomes the PID of the worker thread.
-  * Else, -1.
-  */
- static int kdamond_pid __read_mostly = -1;
- module_param(kdamond_pid, int, 0400);
- 
  static struct damos_stat damon_lru_sort_hot_stat;
  DEFINE_DAMON_MODULES_DAMOS_STATS_PARAMS(damon_lru_sort_hot_stat,
          lru_sort_tried_hot_regions, lru_sorted_hot_regions,
···
  {
      int err;
  
-     if (!on) {
-         err = damon_stop(&ctx, 1);
-         if (!err)
-             kdamond_pid = -1;
-         return err;
-     }
+     if (!on)
+         return damon_stop(&ctx, 1);
  
      err = damon_lru_sort_apply_parameters();
      if (err)
···
      err = damon_start(&ctx, 1, true);
      if (err)
          return err;
-     kdamond_pid = damon_kdamond_pid(ctx);
-     if (kdamond_pid < 0)
-         return kdamond_pid;
      return damon_call(ctx, &call_control);
  }
···
  MODULE_PARM_DESC(addr_unit,
      "Scale factor for DAMON_LRU_SORT to ops address conversion (default: 1)");
  
+ static bool damon_lru_sort_enabled(void)
+ {
+     if (!ctx)
+         return false;
+     return damon_is_running(ctx);
+ }
+ 
  static int damon_lru_sort_enabled_store(const char *val,
          const struct kernel_param *kp)
  {
-     bool is_enabled = enabled;
-     bool enable;
      int err;
  
-     err = kstrtobool(val, &enable);
+     err = kstrtobool(val, &enabled);
      if (err)
          return err;
  
-     if (is_enabled == enable)
+     if (damon_lru_sort_enabled() == enabled)
          return 0;
  
      /* Called before init function. The function will handle this. */
      if (!damon_initialized())
-         goto set_param_out;
+         return 0;
  
-     err = damon_lru_sort_turn(enable);
-     if (err)
-         return err;
+     return damon_lru_sort_turn(enabled);
+ }
  
- set_param_out:
-     enabled = enable;
-     return err;
+ static int damon_lru_sort_enabled_load(char *buffer,
+         const struct kernel_param *kp)
+ {
+     return sprintf(buffer, "%c\n", damon_lru_sort_enabled() ? 'Y' : 'N');
  }
  
  static const struct kernel_param_ops enabled_param_ops = {
      .set = damon_lru_sort_enabled_store,
-     .get = param_get_bool,
+     .get = damon_lru_sort_enabled_load,
  };
  
  module_param_cb(enabled, &enabled_param_ops, &enabled, 0600);
  MODULE_PARM_DESC(enabled,
      "Enable or disable DAMON_LRU_SORT (default: disabled)");
+ 
+ static int damon_lru_sort_kdamond_pid_store(const char *val,
+         const struct kernel_param *kp)
+ {
+     /*
+      * kdamond_pid is read-only, but kernel command line could write it.
+      * Do nothing here.
+      */
+     return 0;
+ }
+ 
+ static int damon_lru_sort_kdamond_pid_load(char *buffer,
+         const struct kernel_param *kp)
+ {
+     int kdamond_pid = -1;
+ 
+     if (ctx) {
+         kdamond_pid = damon_kdamond_pid(ctx);
+         if (kdamond_pid < 0)
+             kdamond_pid = -1;
+     }
+     return sprintf(buffer, "%d\n", kdamond_pid);
+ }
+ 
+ static const struct kernel_param_ops kdamond_pid_param_ops = {
+     .set = damon_lru_sort_kdamond_pid_store,
+     .get = damon_lru_sort_kdamond_pid_load,
+ };
+ 
+ /*
+  * PID of the DAMON thread
+  *
+  * If DAMON_LRU_SORT is enabled, this becomes the PID of the worker thread.
+  * Else, -1.
+  */
+ module_param_cb(kdamond_pid, &kdamond_pid_param_ops, NULL, 0400);
  
  static int __init damon_lru_sort_init(void)
  {
+55 -30
mm/damon/reclaim.c
···
  static bool skip_anon __read_mostly;
  module_param(skip_anon, bool, 0600);
  
- /*
-  * PID of the DAMON thread
-  *
-  * If DAMON_RECLAIM is enabled, this becomes the PID of the worker thread.
-  * Else, -1.
-  */
- static int kdamond_pid __read_mostly = -1;
- module_param(kdamond_pid, int, 0400);
- 
  static struct damos_stat damon_reclaim_stat;
  DEFINE_DAMON_MODULES_DAMOS_STATS_PARAMS(damon_reclaim_stat,
          reclaim_tried_regions, reclaimed_regions, quota_exceeds);
···
  {
      int err;
  
-     if (!on) {
-         err = damon_stop(&ctx, 1);
-         if (!err)
-             kdamond_pid = -1;
-         return err;
-     }
+     if (!on)
+         return damon_stop(&ctx, 1);
  
      err = damon_reclaim_apply_parameters();
      if (err)
···
      err = damon_start(&ctx, 1, true);
      if (err)
          return err;
-     kdamond_pid = damon_kdamond_pid(ctx);
-     if (kdamond_pid < 0)
-         return kdamond_pid;
      return damon_call(ctx, &call_control);
  }
···
  MODULE_PARM_DESC(addr_unit,
      "Scale factor for DAMON_RECLAIM to ops address conversion (default: 1)");
  
+ static bool damon_reclaim_enabled(void)
+ {
+     if (!ctx)
+         return false;
+     return damon_is_running(ctx);
+ }
+ 
  static int damon_reclaim_enabled_store(const char *val,
          const struct kernel_param *kp)
  {
-     bool is_enabled = enabled;
-     bool enable;
      int err;
  
-     err = kstrtobool(val, &enable);
+     err = kstrtobool(val, &enabled);
      if (err)
          return err;
  
-     if (is_enabled == enable)
+     if (damon_reclaim_enabled() == enabled)
          return 0;
  
      /* Called before init function. The function will handle this. */
      if (!damon_initialized())
-         goto set_param_out;
+         return 0;
  
-     err = damon_reclaim_turn(enable);
-     if (err)
-         return err;
+     return damon_reclaim_turn(enabled);
+ }
  
- set_param_out:
-     enabled = enable;
-     return err;
+ static int damon_reclaim_enabled_load(char *buffer,
+         const struct kernel_param *kp)
+ {
+     return sprintf(buffer, "%c\n", damon_reclaim_enabled() ? 'Y' : 'N');
  }
  
  static const struct kernel_param_ops enabled_param_ops = {
      .set = damon_reclaim_enabled_store,
-     .get = param_get_bool,
+     .get = damon_reclaim_enabled_load,
  };
  
  module_param_cb(enabled, &enabled_param_ops, &enabled, 0600);
  MODULE_PARM_DESC(enabled,
      "Enable or disable DAMON_RECLAIM (default: disabled)");
+ 
+ static int damon_reclaim_kdamond_pid_store(const char *val,
+         const struct kernel_param *kp)
+ {
+     /*
+      * kdamond_pid is read-only, but kernel command line could write it.
+      * Do nothing here.
+      */
+     return 0;
+ }
+ 
+ static int damon_reclaim_kdamond_pid_load(char *buffer,
+         const struct kernel_param *kp)
+ {
+     int kdamond_pid = -1;
+ 
+     if (ctx) {
+         kdamond_pid = damon_kdamond_pid(ctx);
+         if (kdamond_pid < 0)
+             kdamond_pid = -1;
+     }
+     return sprintf(buffer, "%d\n", kdamond_pid);
+ }
+ 
+ static const struct kernel_param_ops kdamond_pid_param_ops = {
+     .set = damon_reclaim_kdamond_pid_store,
+     .get = damon_reclaim_kdamond_pid_load,
+ };
+ 
+ /*
+  * PID of the DAMON thread
+  *
+  * If DAMON_RECLAIM is enabled, this becomes the PID of the worker thread.
+  * Else, -1.
+  */
+ module_param_cb(kdamond_pid, &kdamond_pid_param_ops, NULL, 0400);
  
  static int __init damon_reclaim_init(void)
  {
+20 -10
mm/damon/stat.c
···
  static int damon_stat_enabled_store(
          const char *val, const struct kernel_param *kp);
  
+ static int damon_stat_enabled_load(char *buffer,
+         const struct kernel_param *kp);
+ 
  static const struct kernel_param_ops enabled_param_ops = {
      .set = damon_stat_enabled_store,
-     .get = param_get_bool,
+     .get = damon_stat_enabled_load,
  };
  
  static bool enabled __read_mostly = IS_ENABLED(
      CONFIG_DAMON_STAT_ENABLED_DEFAULT);
- module_param_cb(enabled, &enabled_param_ops, &enabled, 0600);
+ module_param_cb(enabled, &enabled_param_ops, NULL, 0600);
  MODULE_PARM_DESC(enabled, "Enable of disable DAMON_STAT");
  
  static unsigned long estimated_memory_bandwidth __read_mostly;
···
      damon_stat_context = NULL;
  }
  
+ static bool damon_stat_enabled(void)
+ {
+     if (!damon_stat_context)
+         return false;
+     return damon_is_running(damon_stat_context);
+ }
+ 
  static int damon_stat_enabled_store(
          const char *val, const struct kernel_param *kp)
  {
-     bool is_enabled = enabled;
      int err;
  
      err = kstrtobool(val, &enabled);
      if (err)
          return err;
  
-     if (is_enabled == enabled)
+     if (damon_stat_enabled() == enabled)
          return 0;
  
      if (!damon_initialized())
···
           */
          return 0;
  
-     if (enabled) {
-         err = damon_stat_start();
-         if (err)
-             enabled = false;
-         return err;
-     }
+     if (enabled)
+         return damon_stat_start();
      damon_stat_stop();
      return 0;
+ }
+ 
+ static int damon_stat_enabled_load(char *buffer, const struct kernel_param *kp)
+ {
+     return sprintf(buffer, "%c\n", damon_stat_enabled() ? 'Y' : 'N');
  }
  
  static int __init damon_stat_init(void)
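The three DAMON changes share one idea: stop echoing a cached module-parameter value and derive the answer from the live context instead, since the kdamond worker can stop on its own and leave the cached flag stale. A rough userspace analogue of that "fresh enabled value" pattern; struct ctx, running_ctx and enabled_param are illustrative names, not kernel symbols.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct ctx { int pid; };          /* stand-in for the DAMON context */
static struct ctx *running_ctx;   /* NULL when the worker is not running */
static bool enabled_param;        /* what a plain module_param would cache */

/* Derive the answer from live state, as damon_stat_enabled() now does,
 * rather than returning the cached parameter, which the worker cannot
 * update when it stops by itself. */
static bool stat_enabled(void)
{
	return running_ctx != NULL;
}

int main(void)
{
	enabled_param = true;
	running_ctx = malloc(sizeof(*running_ctx));   /* "started" */

	/* The worker stops on its own; the cached parameter is now stale. */
	free(running_ctx);
	running_ctx = NULL;

	printf("cached param : %c\n", enabled_param ? 'Y' : 'N');  /* Y, stale */
	printf("derived state: %c\n", stat_enabled() ? 'Y' : 'N'); /* N, fresh */
	return 0;
}

The same reasoning drives the kdamond_pid changes in lru_sort and reclaim above: the getter now asks the context for the PID on every read instead of trusting a value captured at start time.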
+22 -2
mm/damon/sysfs-schemes.c
···
  {
      struct damon_sysfs_scheme_filter *filter = container_of(kobj,
              struct damon_sysfs_scheme_filter, kobj);
+     int len;
  
-     return sysfs_emit(buf, "%s\n",
+     if (!mutex_trylock(&damon_sysfs_lock))
+         return -EBUSY;
+     len = sysfs_emit(buf, "%s\n",
              filter->memcg_path ? filter->memcg_path : "");
+     mutex_unlock(&damon_sysfs_lock);
+     return len;
  }
  
  static ssize_t memcg_path_store(struct kobject *kobj,
···
          return -ENOMEM;
  
      strscpy(path, buf, count + 1);
+     if (!mutex_trylock(&damon_sysfs_lock)) {
+         kfree(path);
+         return -EBUSY;
+     }
      kfree(filter->memcg_path);
      filter->memcg_path = path;
+     mutex_unlock(&damon_sysfs_lock);
      return count;
  }
···
  {
      struct damos_sysfs_quota_goal *goal = container_of(kobj,
              struct damos_sysfs_quota_goal, kobj);
+     int len;
  
-     return sysfs_emit(buf, "%s\n", goal->path ? goal->path : "");
+     if (!mutex_trylock(&damon_sysfs_lock))
+         return -EBUSY;
+     len = sysfs_emit(buf, "%s\n", goal->path ? goal->path : "");
+     mutex_unlock(&damon_sysfs_lock);
+     return len;
  }
  
  static ssize_t path_store(struct kobject *kobj,
···
          return -ENOMEM;
  
      strscpy(path, buf, count + 1);
+     if (!mutex_trylock(&damon_sysfs_lock)) {
+         kfree(path);
+         return -EBUSY;
+     }
      kfree(goal->path);
      goal->path = path;
+     mutex_unlock(&damon_sysfs_lock);
      return count;
  }
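Both sysfs paths now free and replace the shared string only while holding damon_sysfs_lock, taken with mutex_trylock() so a contended show or store returns -EBUSY instead of sleeping, and a concurrent reader can no longer observe a string that the store side has just freed. A small pthread sketch of the same reader/writer discipline; memcg_path, path_show() and path_store() here are illustrative stand-ins for the sysfs handlers, not kernel code.

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static char *memcg_path;   /* shared string, read and replaced concurrently */

/* Reader: snapshot the string only while holding the lock, so the writer
 * cannot free it underneath us. */
static int path_show(char *buf, size_t len)
{
	if (pthread_mutex_trylock(&lock))
		return -EBUSY;
	snprintf(buf, len, "%s", memcg_path ? memcg_path : "");
	pthread_mutex_unlock(&lock);
	return 0;
}

/* Writer: allocate outside the lock, swap and free the old string inside it. */
static int path_store(const char *val)
{
	char *path = strdup(val);

	if (!path)
		return -ENOMEM;
	if (pthread_mutex_trylock(&lock)) {
		free(path);
		return -EBUSY;
	}
	free(memcg_path);
	memcg_path = path;
	pthread_mutex_unlock(&lock);
	return 0;
}

int main(void)
{
	char buf[64];

	path_store("/workloads/web");
	if (!path_show(buf, sizeof(buf)))
		printf("%s\n", buf);
	return 0;
}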
+1
mm/hugetlb_cma.c
···
       */
      per_node = DIV_ROUND_UP(hugetlb_cma_size,
              nodes_weight(hugetlb_bootmem_nodes));
+     per_node = round_up(per_node, PAGE_SIZE << order);
      pr_info("hugetlb_cma: reserve %lu MiB, up to %lu MiB per node\n",
          hugetlb_cma_size / SZ_1M, per_node / SZ_1M);
  }
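The one-liner rounds the logged per-node figure up to the allocation granule (PAGE_SIZE << order) so the message matches what the CMA reservation will actually consume. A standalone calculation showing the effect; the 1 GiB request, 3-node split and 2 MiB granule are illustrative values chosen here, not taken from the patch.

#include <stdio.h>

#define PAGE_SIZE 4096UL
/* Round x up to the next multiple of the power-of-two a, like round_up(). */
#define ROUND_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned long cma_size = 1UL << 30;              /* e.g. hugetlb_cma=1G */
	unsigned long nodes = 3;                         /* bootmem nodes, illustrative */
	unsigned long order = 9;                         /* illustrative order: 2 MiB granule */
	unsigned long granule = PAGE_SIZE << order;
	unsigned long per_node = (cma_size + nodes - 1) / nodes;   /* DIV_ROUND_UP */

	/* The raw per-node share need not be granule-aligned; rounding it up
	 * before logging makes the message match the real reservation size. */
	printf("raw per_node    : %lu MiB\n", per_node >> 20);
	printf("rounded per_node: %lu MiB\n", ROUND_UP(per_node, granule) >> 20);
	return 0;
}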
+19 -10
mm/memcontrol.c
···
   * Used in mod_memcg_state() and mod_memcg_lruvec_state() to avoid race with
   * reparenting of non-hierarchical state_locals.
   */
- static inline struct mem_cgroup *get_non_dying_memcg_start(struct mem_cgroup *memcg)
+ static inline struct mem_cgroup *get_non_dying_memcg_start(struct mem_cgroup *memcg,
+         bool *rcu_locked)
  {
-     if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
+     /* Rebinding can cause this value to be changed at runtime */
+     if (cgroup_subsys_on_dfl(memory_cgrp_subsys)) {
+         *rcu_locked = false;
          return memcg;
+     }
  
      rcu_read_lock();
+     *rcu_locked = true;
  
      while (memcg_is_dying(memcg))
          memcg = parent_mem_cgroup(memcg);
···
      return memcg;
  }
  
- static inline void get_non_dying_memcg_end(void)
+ static inline void get_non_dying_memcg_end(bool rcu_locked)
  {
-     if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
+     if (!rcu_locked)
          return;
  
      rcu_read_unlock();
  }
  #else
- static inline struct mem_cgroup *get_non_dying_memcg_start(struct mem_cgroup *memcg)
+ static inline struct mem_cgroup *get_non_dying_memcg_start(struct mem_cgroup *memcg,
+         bool *rcu_locked)
  {
      return memcg;
  }
  
- static inline void get_non_dying_memcg_end(void)
+ static inline void get_non_dying_memcg_end(bool rcu_locked)
  {
  }
  #endif
···
  void mod_memcg_state(struct mem_cgroup *memcg, enum memcg_stat_item idx,
          int val)
  {
+     bool rcu_locked = false;
+ 
      if (mem_cgroup_disabled())
          return;
  
-     memcg = get_non_dying_memcg_start(memcg);
+     memcg = get_non_dying_memcg_start(memcg, &rcu_locked);
      __mod_memcg_state(memcg, idx, val);
-     get_non_dying_memcg_end();
+     get_non_dying_memcg_end(rcu_locked);
  }
  
  #ifdef CONFIG_MEMCG_V1
···
      struct pglist_data *pgdat = lruvec_pgdat(lruvec);
      struct mem_cgroup_per_node *pn;
      struct mem_cgroup *memcg;
+     bool rcu_locked = false;
  
      pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
-     memcg = get_non_dying_memcg_start(pn->memcg);
+     memcg = get_non_dying_memcg_start(pn->memcg, &rcu_locked);
      pn = memcg->nodeinfo[pgdat->node_id];
  
      __mod_memcg_lruvec_state(pn, idx, val);
  
-     get_non_dying_memcg_end();
+     get_non_dying_memcg_end(rcu_locked);
  }
  
  /**
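The memcontrol fix makes the lock and unlock sides agree by recording, at lock time, whether rcu_read_lock() was actually taken, instead of having get_non_dying_memcg_end() re-evaluate cgroup_subsys_on_dfl(), which can change under it during rebinding and unbalance the RCU read section. A minimal sketch of the "carry the decision, don't recompute it" pattern, using a pthread mutex in place of RCU; fast_path, begin() and end() are illustrative names, not kernel helpers.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool fast_path;   /* stand-in for cgroup_subsys_on_dfl(): may flip at runtime */

/* Record in *locked whether the lock was really taken, so the end helper
 * does not have to re-check a condition that may have changed meanwhile. */
static void begin(bool *locked)
{
	if (fast_path) {
		*locked = false;
		return;
	}
	pthread_mutex_lock(&lock);
	*locked = true;
}

static void end(bool locked)
{
	if (!locked)
		return;
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	bool locked = false;

	begin(&locked);
	fast_path = true;   /* condition flips mid-section (the rebind case) */
	end(locked);        /* still balanced: we release because we acquired */
	printf("%s\n", locked ? "took and released the lock" : "lock-free path");
	return 0;
}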
+6 -10
mm/page-writeback.c
···
          balance_domain_limits(mdtc, strictlimit);
      }
  
-     if (nr_dirty > gdtc->bg_thresh && !writeback_in_progress(wb))
+     if (!writeback_in_progress(wb) &&
+         (nr_dirty > gdtc->bg_thresh ||
+          (strictlimit && gdtc->wb_dirty > gdtc->wb_bg_thresh)))
          wb_start_background_writeback(wb);
  
      /*
···
       * Unconditionally start background writeback if it's not
       * already in progress. We need to do this because the global
       * dirty threshold check above (nr_dirty > gdtc->bg_thresh)
-      * doesn't account for these cases:
-      *
-      * a) strictlimit BDIs: throttling is calculated using per-wb
-      * thresholds. The per-wb threshold can be exceeded even when
-      * nr_dirty < gdtc->bg_thresh
-      *
-      * b) memcg-based throttling: memcg uses its own dirty count and
-      * thresholds and can trigger throttling even when global
-      * nr_dirty < gdtc->bg_thresh
+      * doesn't account for the memcg-based throttling case. memcg
+      * uses its own dirty count and thresholds and can trigger
+      * throttling even when global nr_dirty < gdtc->bg_thresh
       *
       * Writeback needs to be started else the writer stalls in the
       * throttle loop waiting for dirty pages to be written back
+11 -1
mm/userfaultfd.c
···
      return ret;
  }
  
- static int mfill_copy_folio_retry(struct mfill_state *state, struct folio *folio)
+ static int mfill_copy_folio_retry(struct mfill_state *state,
+         struct folio *folio)
  {
+     const struct vm_uffd_ops *orig_ops = vma_uffd_ops(state->vma);
      unsigned long src_addr = state->src_addr;
      void *kaddr;
      int err;
···
      err = mfill_get_vma(state);
      if (err)
          return err;
+ 
+     /*
+      * The VMA type may have changed while the lock was dropped
+      * (e.g. replaced with a hugetlb mapping), making the caller's
+      * ops pointer stale.
+      */
+     if (vma_uffd_ops(state->vma) != orig_ops)
+         return -EAGAIN;
  
      err = mfill_establish_pmd(state);
      if (err)
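The userfaultfd change snapshots the VMA's uffd ops pointer before the retry path drops the lock, then compares it after the VMA has been looked up again: a mismatch means the mapping was replaced with a different type, so the operation bails with -EAGAIN rather than continuing on stale assumptions. A compact sketch of that stale-pointer check; struct vma, struct ops and copy_retry() are illustrative types, not the kernel's.

#include <errno.h>
#include <stdio.h>

struct ops { const char *name; };

static const struct ops anon_ops    = { "anon"    };
static const struct ops hugetlb_ops = { "hugetlb" };

/* Stand-in for the VMA: its ops pointer identifies the mapping type. */
struct vma { const struct ops *ops; };

/* Retry step: after re-acquiring the lock and re-finding the VMA, verify it
 * is still the same kind of mapping the caller started with; otherwise
 * return -EAGAIN so the caller restarts from scratch. */
static int copy_retry(struct vma *vma, const struct ops *orig_ops)
{
	/* ...lock dropped here, page copied, lock re-taken, VMA re-looked-up... */
	if (vma->ops != orig_ops)
		return -EAGAIN;
	return 0;   /* safe to continue with the original assumptions */
}

int main(void)
{
	struct vma vma = { .ops = &anon_ops };
	const struct ops *orig = vma.ops;

	vma.ops = &hugetlb_ops;   /* mapping replaced while unlocked */
	printf("retry -> %d (expect %d)\n", copy_retry(&vma, orig), -EAGAIN);
	return 0;
}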
+17 -9
mm/util.c
···
      /* Update the VMA from the descriptor. */
      compat_set_vma_from_desc(vma, desc);
      /* Complete any specified mmap actions. */
-     return mmap_action_complete(vma, &desc->action);
+     return mmap_action_complete(vma, &desc->action, /*is_compat=*/true);
  }
  EXPORT_SYMBOL(__compat_vma_mmap);
···
  }
  
  static int mmap_action_finish(struct vm_area_struct *vma,
-         struct mmap_action *action, int err)
+         struct mmap_action *action, int err,
+         bool is_compat)
  {
      size_t len;
···
  
      /* do_munmap() might take rmap lock, so release if held. */
      maybe_rmap_unlock_action(vma, action);
-     if (!err)
-         return 0;
+     /*
+      * If this is invoked from the compatibility layer, post-mmap() hook
+      * logic will handle cleanup for us.
+      */
+     if (!err || is_compat)
+         return err;
  
      /*
       * If an error occurs, unmap the VMA altogether and return an error. We
···
   * mmap_action_complete - Execute VMA descriptor action.
   * @vma: The VMA to perform the action upon.
   * @action: The action to perform.
+  * @is_compat: Is this being invoked from the compatibility layer?
   *
   * Similar to mmap_action_prepare().
   *
-  * Return: 0 on success, or error, at which point the VMA will be unmapped.
+  * Return: 0 on success, or error, at which point the VMA will be unmapped if
+  * !@is_compat.
   */
  int mmap_action_complete(struct vm_area_struct *vma,
-         struct mmap_action *action)
+         struct mmap_action *action, bool is_compat)
  {
      int err = 0;
···
          break;
      }
  
-     return mmap_action_finish(vma, action, err);
+     return mmap_action_finish(vma, action, err, is_compat);
  }
  EXPORT_SYMBOL(mmap_action_complete);
  #else
···
  EXPORT_SYMBOL(mmap_action_prepare);
  
  int mmap_action_complete(struct vm_area_struct *vma,
-         struct mmap_action *action)
+         struct mmap_action *action,
+         bool is_compat)
  {
      int err = 0;
···
          break;
      }
  
-     return mmap_action_finish(vma, action, err);
+     return mmap_action_finish(vma, action, err, is_compat);
  }
  EXPORT_SYMBOL(mmap_action_complete);
  #endif
+2 -1
mm/vma.c
···
      __mmap_complete(&map, vma);
  
      if (have_mmap_prepare && allocated_new) {
-         error = mmap_action_complete(vma, &desc.action);
+         error = mmap_action_complete(vma, &desc.action,
+                 /*is_compat=*/false);
          if (error)
              return error;
      }
+1 -1
mm/vmalloc.c
···
          return NULL;
  
      if (p) {
-         memcpy(n, p, old_size);
+         memcpy(n, p, min(size, old_size));
          vfree(p);
      }
  
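The vrealloc_node_align() fix clamps the copy to the smaller of the old and new sizes, since copying old_size bytes into a shrunken allocation writes past its end. A toy userspace realloc that always moves the buffer shows the same clamp; toy_realloc() is an illustrative helper, not the kernel function.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Toy realloc: always moves the allocation, like the vrealloc() path that
 * hit the overflow. Copy only the smaller of the two sizes, as the fix does. */
static void *toy_realloc(void *p, size_t old_size, size_t size)
{
	void *n = malloc(size);

	if (!n)
		return NULL;
	if (p) {
		memcpy(n, p, MIN(size, old_size));   /* was: memcpy(n, p, old_size) */
		free(p);
	}
	return n;
}

int main(void)
{
	char *p = malloc(64);

	memset(p, 'x', 64);
	p = toy_realloc(p, 64, 16);   /* shrink: copying 64 bytes would overflow */
	printf("%.*s\n", 16, p);
	free(p);
	return 0;
}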
+1 -1
tools/testing/radix-tree/maple.c
···
  /*
   * maple_tree.c: Userspace testing for maple tree test-suite
   * Copyright (c) 2018-2022 Oracle Corporation
-  * Author: Liam R. Howlett <Liam.Howlett@Oracle.com>
+  * Author: Liam R. Howlett <liam@infradead.org>
   *
   * Any tests that require internal knowledge of the tree or threads and other
   * difficult to handle in kernel tests.
+1
tools/testing/selftests/mm/config
···
  CONFIG_UPROBES=y
  CONFIG_MEMORY_FAILURE=y
  CONFIG_HWPOISON_INJECT=m
+ CONFIG_PROC_MEM_ALWAYS_FORCE=y
+1 -1
tools/testing/vma/include/dup.h
···
      /* Update the VMA from the descriptor. */
      compat_set_vma_from_desc(vma, desc);
      /* Complete any specified mmap actions. */
-     return mmap_action_complete(vma, &desc->action);
+     return mmap_action_complete(vma, &desc->action, /*is_compat=*/true);
  }
  
  static inline int compat_vma_mmap(struct file *file, struct vm_area_struct *vma)
+2 -1
tools/testing/vma/include/stubs.h
···
  }
  
  static inline int mmap_action_complete(struct vm_area_struct *vma,
-         struct mmap_action *action)
+         struct mmap_action *action,
+         bool is_compat)
  {
      return 0;
  }