Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

kho: adopt radix tree for preserved memory tracking

Patch series "Make KHO Stateless", v9.

This series transitions KHO from an xarray-based metadata tracking system
with serialization to a radix tree data structure that can be passed
directly to the next kernel.

The key motivations for this change are to:
- Eliminate the need for data serialization before kexec.
- Remove the KHO finalize state.
- Pass preservation metadata more directly to the next kernel via the FDT.

The new approach uses a radix tree to mark preserved pages. A page's
physical address and its order are encoded into a single value. The tree
is composed of multiple levels of page-sized tables, with leaf nodes being
bitmaps where each set bit represents a preserved page. The physical
address of the radix tree's root is passed in the FDT, allowing the next
kernel to reconstruct the preserved memory map.

This series is broken down into the following patches:

1. kho: Adopt radix tree for preserved memory tracking:
Replaces the xarray-based tracker with the new radix tree
implementation and increments the ABI version.

2. kho: Remove finalize state and clients:
Removes the now-obsolete kho_finalize() function and its usage
from client code and debugfs.


This patch (of 2):

Introduce a radix tree implementation for tracking preserved memory pages
and switch the KHO memory tracking mechanism to use it. This lays the
groundwork for a stateless KHO implementation that eliminates the need for
serialization and the associated "finalize" state.

This patch introduces the core radix tree data structures and constants to
the KHO ABI. It adds the radix tree node and leaf structures, along with
documentation for the radix tree key encoding scheme that combines a
page's physical address and order.

To support broader use by other kernel subsystems, such as hugetlb
preservation, the core radix tree manipulation functions are exported as a
public API.

The xarray-based memory tracking is replaced with this new radix tree
implementation. The core KHO preservation and unpreservation functions
are wired up to use the radix tree helpers. On boot, the second kernel
restores the preserved memory map by walking the radix tree whose root
physical address is passed via the FDT.

The ABI `compatible` version is bumped to "kho-v2" to reflect the
structural changes in the preserved memory map and sub-FDT property names.
This includes renaming "fdt" to "preserved-data" to better reflect that
preserved state may use formats other than FDT.

[ran.xiaokai@zte.com.cn: fix child node parsing for debugfs in/sub_fdts]
Link: https://lkml.kernel.org/r/20260309033530.244508-1-ranxiaokai627@163.com
Link: https://lkml.kernel.org/r/20260206021428.3386442-1-jasonmiu@google.com
Link: https://lkml.kernel.org/r/20260206021428.3386442-2-jasonmiu@google.com
Signed-off-by: Jason Miu <jasonmiu@google.com>
Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Cc: Alexander Graf <graf@amazon.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Changyuan Lyu <changyuanl@google.com>
Cc: David Matlack <dmatlack@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Pratyush Yadav <pratyush@kernel.org>
Cc: Ran Xiaokai <ran.xiaokai@zte.com.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Jason Miu, committed by Andrew Morton
3f2ad900 63de231e

+563 -323
Documentation/core-api/kho/abi.rst (+6)

···
 .. kernel-doc:: include/linux/kho/abi/memblock.h
    :doc: memblock kexec handover ABI
 
+KHO persistent memory tracker ABI
+=================================
+
+.. kernel-doc:: include/linux/kho/abi/kexec_handover.h
+   :doc: KHO persistent memory tracker
+
 See Also
 ========
 
Documentation/core-api/kho/index.rst (+12)

···
 of the system may become immutable because they are already written down
 in the FDT. That state is called the KHO finalization phase.
 
+Kexec Handover Radix Tree
+=========================
+
+.. kernel-doc:: include/linux/kho_radix_tree.h
+   :doc: Kexec Handover Radix Tree
+
+Public API
+==========
+
+.. kernel-doc:: kernel/liveupdate/kexec_handover.c
+   :export:
+
 See Also
 ========
 
include/linux/kho/abi/kexec_handover.h (+129 -15)

···
 #ifndef _LINUX_KHO_ABI_KEXEC_HANDOVER_H
 #define _LINUX_KHO_ABI_KEXEC_HANDOVER_H
 
+#include <linux/bits.h>
+#include <linux/log2.h>
+#include <linux/math.h>
 #include <linux/types.h>
+
+#include <asm/page.h>
 
 /**
  * DOC: Kexec Handover ABI
···
  * compatibility is only guaranteed for kernels supporting the same ABI version.
  *
  * FDT Structure Overview:
- * The FDT serves as a central registry for physical
- * addresses of preserved data structures and sub-FDTs. The first kernel
- * populates this FDT with references to memory regions and other FDTs that
- * need to persist across the kexec transition. The subsequent kernel then
- * parses this FDT to locate and restore the preserved data.::
+ * The FDT serves as a central registry for physical addresses of preserved
+ * data structures. The first kernel populates this FDT with references to
+ * memory regions and other metadata that need to persist across the kexec
+ * transition. The subsequent kernel then parses this FDT to locate and
+ * restore the preserved data.::
  *
  *	/ {
- *		compatible = "kho-v1";
+ *		compatible = "kho-v2";
  *
  *		preserved-memory-map = <0x...>;
  *
  *		<subnode-name-1> {
- *			fdt = <0x...>;
+ *			preserved-data = <0x...>;
  *		};
  *
  *		<subnode-name-2> {
- *			fdt = <0x...>;
+ *			preserved-data = <0x...>;
  *		};
  *		...
  *		<subnode-name-N> {
- *			fdt = <0x...>;
+ *			preserved-data = <0x...>;
  *		};
  *	};
  *
  * Root KHO Node (/):
- * - compatible: "kho-v1"
+ * - compatible: "kho-v2"
  *
  *   Indentifies the overall KHO ABI version.
  *
···
  *   is provided by the subsystem that uses KHO for preserving its
  *   data.
  *
- * - fdt: u64
+ * - preserved-data: u64
  *
- *   Physical address pointing to a subnode FDT blob that is also
+ *   Physical address pointing to a subnode data blob that is also
  *   being preserved.
  */
 
 /* The compatible string for the KHO FDT root node. */
-#define KHO_FDT_COMPATIBLE "kho-v1"
+#define KHO_FDT_COMPATIBLE "kho-v2"
 
 /* The FDT property for the preserved memory map. */
 #define KHO_FDT_MEMORY_MAP_PROP_NAME "preserved-memory-map"
 
-/* The FDT property for sub-FDTs. */
-#define KHO_FDT_SUB_TREE_PROP_NAME "fdt"
+/* The FDT property for preserved data blobs. */
+#define KHO_FDT_SUB_TREE_PROP_NAME "preserved-data"
 
 /**
  * DOC: Kexec Handover ABI for vmalloc Preservation
···
 	unsigned int total_pages;
 	unsigned short flags;
 	unsigned short order;
+};
+
+/**
+ * DOC: KHO persistent memory tracker
+ *
+ * KHO tracks preserved memory using a radix tree data structure. Each node of
+ * the tree is exactly a single page. The leaf nodes are bitmaps where each set
+ * bit is a preserved page of any order. The intermediate nodes are tables of
+ * physical addresses that point to a lower level node.
+ *
+ * The tree hierarchy is shown below::
+ *
+ *                 root
+ *       +-------------------+
+ *       |      Level 5      |  (struct kho_radix_node)
+ *       +-------------------+
+ *                 |
+ *                 v
+ *       +-------------------+
+ *       |      Level 4      |  (struct kho_radix_node)
+ *       +-------------------+
+ *                 |
+ *                 |  ... (intermediate levels)
+ *                 |
+ *                 v
+ *       +-------------------+
+ *       |      Level 0      |  (struct kho_radix_leaf)
+ *       +-------------------+
+ *
+ * The tree is traversed using a key that encodes the page's physical address
+ * (pa) and its order into a single unsigned long value. The encoded key value
+ * is composed of two parts: the 'order bit' in the upper part and the
+ * 'shifted physical address' in the lower part.::
+ *
+ * +------------+-----------------------------+--------------------------+
+ * | Page Order | Order Bit                   | Shifted Physical Address |
+ * +------------+-----------------------------+--------------------------+
+ * | 0          | ...000100 ... (at bit 52)   | pa >> (PAGE_SHIFT + 0)   |
+ * | 1          | ...000010 ... (at bit 51)   | pa >> (PAGE_SHIFT + 1)   |
+ * | 2          | ...000001 ... (at bit 50)   | pa >> (PAGE_SHIFT + 2)   |
+ * | ...        | ...                         | ...                      |
+ * +------------+-----------------------------+--------------------------+
+ *
+ * Shifted Physical Address:
+ *   The 'shifted physical address' is the physical address normalized for its
+ *   order. It effectively represents the PFN shifted right by the order.
+ *
+ * Order Bit:
+ *   The 'order bit' encodes the page order by setting a single bit at a
+ *   specific position. The position of this bit itself represents the order.
+ *
+ * For instance, on a 64-bit system with 4KB pages (PAGE_SHIFT = 12), the
+ * maximum range for the shifted physical address (for order 0) is 52 bits
+ * (64 - 12). This address occupies bits [0-51]. For order 0, the order bit is
+ * set at position 52.
+ *
+ * The following diagram illustrates how the encoded key value is split into
+ * indices for the tree levels, with PAGE_SIZE of 4KB::
+ *
+ *  63:60     59:51    50:42    41:33    32:24    23:15    14:0
+ * +---------+--------+--------+--------+--------+--------+-----------------+
+ * |    0    |  Lv 5  |  Lv 4  |  Lv 3  |  Lv 2  |  Lv 1  | Lv 0 (bitmap)   |
+ * +---------+--------+--------+--------+--------+--------+-----------------+
+ *
+ * The radix tree stores pages of all orders in a single 6-level hierarchy. It
+ * efficiently shares higher tree levels, especially due to common zero top
+ * address bits, allowing a single, efficient algorithm to manage all
+ * pages. This bitmap approach also offers memory efficiency; for example, a
+ * 512KB bitmap can cover a 16GB memory range for 0-order pages with PAGE_SIZE =
+ * 4KB.
+ *
+ * The data structures defined here are part of the KHO ABI. Any modification
+ * to these structures that breaks backward compatibility must be accompanied by
+ * an update to the "compatible" string. This ensures that a newer kernel can
+ * correctly interpret the data passed by an older kernel.
+ */
+
+/*
+ * Defines constants for the KHO radix tree structure, used to track preserved
+ * memory. These constants govern the indexing, sizing, and depth of the tree.
+ */
+enum kho_radix_consts {
+	/*
+	 * The bit position of the order bit (and also the length of the
+	 * shifted physical address) for an order-0 page.
+	 */
+	KHO_ORDER_0_LOG2 = 64 - PAGE_SHIFT,
+
+	/* Size of the table in kho_radix_node, in log2 */
+	KHO_TABLE_SIZE_LOG2 = const_ilog2(PAGE_SIZE / sizeof(phys_addr_t)),
+
+	/* Number of bits in the kho_radix_leaf bitmap, in log2 */
+	KHO_BITMAP_SIZE_LOG2 = PAGE_SHIFT + const_ilog2(BITS_PER_BYTE),
+
+	/*
+	 * The total tree depth is the number of intermediate levels
+	 * and 1 bitmap level.
+	 */
+	KHO_TREE_MAX_DEPTH =
+		DIV_ROUND_UP(KHO_ORDER_0_LOG2 - KHO_BITMAP_SIZE_LOG2,
+			     KHO_TABLE_SIZE_LOG2) + 1,
+};
+
+struct kho_radix_node {
+	u64 table[1 << KHO_TABLE_SIZE_LOG2];
+};
+
+struct kho_radix_leaf {
+	DECLARE_BITMAP(bitmap, 1 << KHO_BITMAP_SIZE_LOG2);
 };
 
 #endif /* _LINUX_KHO_ABI_KEXEC_HANDOVER_H */
include/linux/kho_radix_tree.h (+70, new file; the `!CONFIG_KEXEC_HANDOVER` stub below takes `unsigned long pfn` to match the real prototype)

+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _LINUX_KHO_RADIX_TREE_H
+#define _LINUX_KHO_RADIX_TREE_H
+
+#include <linux/err.h>
+#include <linux/errno.h>
+#include <linux/mutex_types.h>
+#include <linux/types.h>
+
+/**
+ * DOC: Kexec Handover Radix Tree
+ *
+ * This is a radix tree implementation for tracking physical memory pages
+ * across kexec transitions. It was developed for the KHO mechanism but is
+ * designed for broader use by any subsystem that needs to preserve pages.
+ *
+ * The radix tree is a multi-level tree where leaf nodes are bitmaps
+ * representing individual pages. To allow pages of different sizes (orders)
+ * to be stored efficiently in a single tree, it uses a unique key encoding
+ * scheme. Each key is an unsigned long that combines a page's physical
+ * address and its order.
+ *
+ * Client code is responsible for allocating the root node of the tree,
+ * initializing the mutex lock, and managing its lifecycle. It must use the
+ * tree data structures defined in the KHO ABI,
+ * `include/linux/kho/abi/kexec_handover.h`.
+ */
+
+struct kho_radix_node;
+
+struct kho_radix_tree {
+	struct kho_radix_node *root;
+	struct mutex lock; /* protects the tree's structure and root pointer */
+};
+
+typedef int (*kho_radix_tree_walk_callback_t)(phys_addr_t phys,
+					      unsigned int order);
+
+#ifdef CONFIG_KEXEC_HANDOVER
+
+int kho_radix_add_page(struct kho_radix_tree *tree, unsigned long pfn,
+		       unsigned int order);
+
+void kho_radix_del_page(struct kho_radix_tree *tree, unsigned long pfn,
+			unsigned int order);
+
+int kho_radix_walk_tree(struct kho_radix_tree *tree,
+			kho_radix_tree_walk_callback_t cb);
+
+#else /* #ifdef CONFIG_KEXEC_HANDOVER */
+
+static inline int kho_radix_add_page(struct kho_radix_tree *tree,
+				     unsigned long pfn, unsigned int order)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline void kho_radix_del_page(struct kho_radix_tree *tree,
+				      unsigned long pfn, unsigned int order) { }
+
+static inline int kho_radix_walk_tree(struct kho_radix_tree *tree,
+				      kho_radix_tree_walk_callback_t cb)
+{
+	return -EOPNOTSUPP;
+}
+
+#endif /* #ifdef CONFIG_KEXEC_HANDOVER */
+
+#endif /* #ifndef _LINUX_KHO_RADIX_TREE_H */
kernel/liveupdate/kexec_handover.c (+344 -307; the view is truncated mid-hunk)

···
  * Copyright (C) 2025 Microsoft Corporation, Mike Rapoport <rppt@kernel.org>
  * Copyright (C) 2025 Google LLC, Changyuan Lyu <changyuanl@google.com>
  * Copyright (C) 2025 Pasha Tatashin <pasha.tatashin@soleen.com>
+ * Copyright (C) 2026 Google LLC, Jason Miu <jasonmiu@google.com>
  */
 
 #define pr_fmt(fmt) "KHO: " fmt
···
 #include <linux/count_zeros.h>
 #include <linux/kexec.h>
 #include <linux/kexec_handover.h>
+#include <linux/kho_radix_tree.h>
 #include <linux/kho/abi/kexec_handover.h>
 #include <linux/libfdt.h>
 #include <linux/list.h>
···
 }
 early_param("kho", kho_parse_enable);
 
-/*
- * Keep track of memory that is to be preserved across KHO.
- *
- * The serializing side uses two levels of xarrays to manage chunks of per-order
- * PAGE_SIZE byte bitmaps. For instance if PAGE_SIZE = 4096, the entire 1G order
- * of a 8TB system would fit inside a single 4096 byte bitmap. For order 0
- * allocations each bitmap will cover 128M of address space. Thus, for 16G of
- * memory at most 512K of bitmap memory will be needed for order 0.
- *
- * This approach is fully incremental, as the serialization progresses folios
- * can continue be aggregated to the tracker. The final step, immediately prior
- * to kexec would serialize the xarray information into a linked list for the
- * successor kernel to parse.
- */
-
-#define PRESERVE_BITS (PAGE_SIZE * 8)
-
-struct kho_mem_phys_bits {
-	DECLARE_BITMAP(preserve, PRESERVE_BITS);
-};
-
-static_assert(sizeof(struct kho_mem_phys_bits) == PAGE_SIZE);
-
-struct kho_mem_phys {
-	/*
-	 * Points to kho_mem_phys_bits, a sparse bitmap array. Each bit is sized
-	 * to order.
-	 */
-	struct xarray phys_bits;
-};
-
-struct kho_mem_track {
-	/* Points to kho_mem_phys, each order gets its own bitmap tree */
-	struct xarray orders;
-};
-
-struct khoser_mem_chunk;
-
 struct kho_out {
 	void *fdt;
 	bool finalized;
 	struct mutex lock; /* protects KHO FDT finalization */
 
-	struct kho_mem_track track;
+	struct kho_radix_tree radix_tree;
 	struct kho_debugfs dbg;
 };
 
 static struct kho_out kho_out = {
 	.lock = __MUTEX_INITIALIZER(kho_out.lock),
-	.track = {
-		.orders = XARRAY_INIT(kho_out.track.orders, 0),
+	.radix_tree = {
+		.lock = __MUTEX_INITIALIZER(kho_out.radix_tree.lock),
 	},
 	.finalized = false,
 };
 
-static void *xa_load_or_alloc(struct xarray *xa, unsigned long index)
-{
-	void *res = xa_load(xa, index);
-
-	if (res)
-		return res;
-
-	void *elm __free(free_page) = (void *)get_zeroed_page(GFP_KERNEL);
-
-	if (!elm)
-		return ERR_PTR(-ENOMEM);
-
-	if (WARN_ON(kho_scratch_overlap(virt_to_phys(elm), PAGE_SIZE)))
-		return ERR_PTR(-EINVAL);
-
-	res = xa_cmpxchg(xa, index, NULL, elm, GFP_KERNEL);
-	if (xa_is_err(res))
-		return ERR_PTR(xa_err(res));
-	else if (res)
-		return res;
-
-	return no_free_ptr(elm);
-}
-
-static void __kho_unpreserve_order(struct kho_mem_track *track, unsigned long pfn,
-				   unsigned int order)
-{
-	struct kho_mem_phys_bits *bits;
-	struct kho_mem_phys *physxa;
-	const unsigned long pfn_high = pfn >> order;
-
-	physxa = xa_load(&track->orders, order);
-	if (WARN_ON_ONCE(!physxa))
-		return;
-
-	bits = xa_load(&physxa->phys_bits, pfn_high / PRESERVE_BITS);
-	if (WARN_ON_ONCE(!bits))
-		return;
-
-	clear_bit(pfn_high % PRESERVE_BITS, bits->preserve);
-}
-
+/**
+ * kho_radix_encode_key - Encodes a physical address and order into a radix key.
+ * @phys: The physical address of the page.
+ * @order: The order of the page.
+ *
+ * This function combines a page's physical address and its order into a
+ * single unsigned long, which is used as a key for all radix tree
+ * operations.
+ *
+ * Return: The encoded unsigned long radix key.
+ */
+static unsigned long kho_radix_encode_key(phys_addr_t phys, unsigned int order)
+{
+	/* Order bits part */
+	unsigned long h = 1UL << (KHO_ORDER_0_LOG2 - order);
+	/* Shifted physical address part */
+	unsigned long l = phys >> (PAGE_SHIFT + order);
+
+	return h | l;
+}
+
+/**
+ * kho_radix_decode_key - Decodes a radix key back into a physical address and order.
+ * @key: The unsigned long key to decode.
+ * @order: An output parameter, a pointer to an unsigned int where the decoded
+ *         page order will be stored.
+ *
+ * This function reverses the encoding performed by kho_radix_encode_key(),
+ * extracting the original physical address and page order from a given key.
+ *
+ * Return: The decoded physical address.
+ */
+static phys_addr_t kho_radix_decode_key(unsigned long key, unsigned int *order)
+{
+	unsigned int order_bit = fls64(key);
+	phys_addr_t phys;
+
+	/* order_bit is numbered starting at 1 from fls64 */
+	*order = KHO_ORDER_0_LOG2 - order_bit + 1;
+	/* The order is discarded by the shift */
+	phys = key << (PAGE_SHIFT + *order);
+
+	return phys;
+}
+
+static unsigned long kho_radix_get_bitmap_index(unsigned long key)
+{
+	return key % (1 << KHO_BITMAP_SIZE_LOG2);
+}
+
+static unsigned long kho_radix_get_table_index(unsigned long key,
+					       unsigned int level)
+{
+	int s;
+
+	s = ((level - 1) * KHO_TABLE_SIZE_LOG2) + KHO_BITMAP_SIZE_LOG2;
+	return (key >> s) % (1 << KHO_TABLE_SIZE_LOG2);
+}
+
+/**
+ * kho_radix_add_page - Marks a page as preserved in the radix tree.
+ * @tree: The KHO radix tree.
+ * @pfn: The page frame number of the page to preserve.
+ * @order: The order of the page.
+ *
+ * This function traverses the radix tree based on the key derived from @pfn
+ * and @order. It sets the corresponding bit in the leaf bitmap to mark the
+ * page for preservation. If intermediate nodes do not exist along the path,
+ * they are allocated and added to the tree.
+ *
+ * Return: 0 on success, or a negative error code on failure.
+ */
+int kho_radix_add_page(struct kho_radix_tree *tree,
+		       unsigned long pfn, unsigned int order)
+{
+	/* Newly allocated nodes for error cleanup */
+	struct kho_radix_node *intermediate_nodes[KHO_TREE_MAX_DEPTH] = { 0 };
+	unsigned long key = kho_radix_encode_key(PFN_PHYS(pfn), order);
+	struct kho_radix_node *anchor_node = NULL;
+	struct kho_radix_node *node = tree->root;
+	struct kho_radix_node *new_node;
+	unsigned int i, idx, anchor_idx;
+	struct kho_radix_leaf *leaf;
+	int err = 0;
+
+	if (WARN_ON_ONCE(!tree->root))
+		return -EINVAL;
+
+	might_sleep();
+
+	guard(mutex)(&tree->lock);
+
+	/* Go from high levels to low levels */
+	for (i = KHO_TREE_MAX_DEPTH - 1; i > 0; i--) {
+		idx = kho_radix_get_table_index(key, i);
+
+		if (node->table[idx]) {
+			node = phys_to_virt(node->table[idx]);
+			continue;
+		}
+
+		/* Next node is empty, create a new node for it */
+		new_node = (struct kho_radix_node *)get_zeroed_page(GFP_KERNEL);
+		if (!new_node) {
+			err = -ENOMEM;
+			goto err_free_nodes;
+		}
+
+		node->table[idx] = virt_to_phys(new_node);
+
+		/*
+		 * Capture the node where the new branch starts for cleanup
+		 * if allocation fails.
+		 */
+		if (!anchor_node) {
+			anchor_node = node;
+			anchor_idx = idx;
+		}
+		intermediate_nodes[i] = new_node;
+
+		node = new_node;
+	}
+
+	/* Handle the leaf level bitmap (level 0) */
+	idx = kho_radix_get_bitmap_index(key);
+	leaf = (struct kho_radix_leaf *)node;
+	__set_bit(idx, leaf->bitmap);
+
+	return 0;
+
+err_free_nodes:
+	for (i = KHO_TREE_MAX_DEPTH - 1; i > 0; i--) {
+		if (intermediate_nodes[i])
+			free_page((unsigned long)intermediate_nodes[i]);
+	}
+	if (anchor_node)
+		anchor_node->table[anchor_idx] = 0;
+
+	return err;
+}
+EXPORT_SYMBOL_GPL(kho_radix_add_page);
+
+/**
+ * kho_radix_del_page - Removes a page's preservation status from the radix tree.
+ * @tree: The KHO radix tree.
+ * @pfn: The page frame number of the page to unpreserve.
+ * @order: The order of the page.
+ *
+ * This function traverses the radix tree and clears the bit corresponding to
+ * the page, effectively removing its "preserved" status. It does not free
+ * the tree's intermediate nodes, even if they become empty.
+ */
+void kho_radix_del_page(struct kho_radix_tree *tree, unsigned long pfn,
+			unsigned int order)
+{
+	unsigned long key = kho_radix_encode_key(PFN_PHYS(pfn), order);
+	struct kho_radix_node *node = tree->root;
+	struct kho_radix_leaf *leaf;
+	unsigned int i, idx;
+
+	if (WARN_ON_ONCE(!tree->root))
+		return;
+
+	might_sleep();
+
+	guard(mutex)(&tree->lock);
+
+	/* Go from high levels to low levels */
+	for (i = KHO_TREE_MAX_DEPTH - 1; i > 0; i--) {
+		idx = kho_radix_get_table_index(key, i);
+
+		/*
+		 * Attempting to delete a page that has not been preserved,
+		 * return with a warning.
+		 */
+		if (WARN_ON(!node->table[idx]))
+			return;
+
+		node = phys_to_virt(node->table[idx]);
+	}
+
+	/* Handle the leaf level bitmap (level 0) */
+	leaf = (struct kho_radix_leaf *)node;
+	idx = kho_radix_get_bitmap_index(key);
+	__clear_bit(idx, leaf->bitmap);
+}
+EXPORT_SYMBOL_GPL(kho_radix_del_page);
+
+static int kho_radix_walk_leaf(struct kho_radix_leaf *leaf,
+			       unsigned long key,
+			       kho_radix_tree_walk_callback_t cb)
+{
+	unsigned long *bitmap = (unsigned long *)leaf;
+	unsigned int order;
+	phys_addr_t phys;
+	unsigned int i;
+	int err;
+
+	for_each_set_bit(i, bitmap, PAGE_SIZE * BITS_PER_BYTE) {
+		phys = kho_radix_decode_key(key | i, &order);
+		err = cb(phys, order);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static int __kho_radix_walk_tree(struct kho_radix_node *root,
+				 unsigned int level, unsigned long start,
+				 kho_radix_tree_walk_callback_t cb)
+{
+	struct kho_radix_node *node;
+	struct kho_radix_leaf *leaf;
+	unsigned long key, i;
+	unsigned int shift;
+	int err;
+
+	for (i = 0; i < PAGE_SIZE / sizeof(phys_addr_t); i++) {
+		if (!root->table[i])
+			continue;
+
+		shift = ((level - 1) * KHO_TABLE_SIZE_LOG2) +
+			KHO_BITMAP_SIZE_LOG2;
+		key = start | (i << shift);
+
+		node = phys_to_virt(root->table[i]);
+
+		if (level == 1) {
+			/*
+			 * we are at level 1,
+			 * node is pointing to the level 0 bitmap.
+			 */
+			leaf = (struct kho_radix_leaf *)node;
+			err = kho_radix_walk_leaf(leaf, key, cb);
+		} else {
+			err = __kho_radix_walk_tree(node, level - 1,
+						    key, cb);
+		}
+
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+/**
+ * kho_radix_walk_tree - Traverses the radix tree and calls a callback for each preserved page.
+ * @tree: A pointer to the KHO radix tree to walk.
+ * @cb: A callback function of type kho_radix_tree_walk_callback_t that will be
+ *      invoked for each preserved page found in the tree. The callback receives
+ *      the physical address and order of the preserved page.
+ *
+ * This function walks the radix tree, searching from the specified top level
+ * down to the lowest level (level 0). For each preserved page found, it invokes
+ * the provided callback, passing the page's physical address and order.
+ *
+ * Return: 0 if the walk completed the specified tree, or the non-zero return
+ * value from the callback that stopped the walk.
+ */
+int kho_radix_walk_tree(struct kho_radix_tree *tree,
+			kho_radix_tree_walk_callback_t cb)
+{
+	if (WARN_ON_ONCE(!tree->root))
+		return -EINVAL;
+
+	guard(mutex)(&tree->lock);
+
+	return __kho_radix_walk_tree(tree->root, KHO_TREE_MAX_DEPTH - 1, 0, cb);
+}
+EXPORT_SYMBOL_GPL(kho_radix_walk_tree);
+
+static void __kho_unpreserve(struct kho_radix_tree *tree,
+			     unsigned long pfn, unsigned long end_pfn)
 {
 	unsigned int order;
 
 	while (pfn < end_pfn) {
 		order = min(count_trailing_zeros(pfn), ilog2(end_pfn - pfn));
 
-		__kho_unpreserve_order(track, pfn, order);
+		kho_radix_del_page(tree, pfn, order);
 
 		pfn += 1 << order;
 	}
-}
-
-static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn,
-				unsigned int order)
-{
-	struct kho_mem_phys_bits *bits;
-	struct kho_mem_phys *physxa, *new_physxa;
-	const unsigned long pfn_high = pfn >> order;
-
-	might_sleep();
-	physxa = xa_load(&track->orders, order);
-	if (!physxa) {
-		int err;
-
-		new_physxa = kzalloc_obj(*physxa);
-		if (!new_physxa)
-			return -ENOMEM;
-
-		xa_init(&new_physxa->phys_bits);
-		physxa = xa_cmpxchg(&track->orders, order, NULL, new_physxa,
-				    GFP_KERNEL);
-
-		err = xa_err(physxa);
-		if (err || physxa) {
-			xa_destroy(&new_physxa->phys_bits);
-			kfree(new_physxa);
-
-			if (err)
-				return err;
-		} else {
-			physxa = new_physxa;
-		}
-	}
-
-	bits = xa_load_or_alloc(&physxa->phys_bits, pfn_high / PRESERVE_BITS);
-	if (IS_ERR(bits))
-		return PTR_ERR(bits);
-
-	set_bit(pfn_high % PRESERVE_BITS, bits->preserve);
-
-	return 0;
 }
 
 /* For physically contiguous 0-order pages. */
···
 }
 EXPORT_SYMBOL_GPL(kho_restore_pages);
 
-/* Serialize and deserialize struct kho_mem_phys across kexec
- *
- * Record all the bitmaps in a linked list of pages for the next kernel to
- * process. Each chunk holds bitmaps of the same order and each block of bitmaps
- * starts at a given physical address. This allows the bitmaps to be sparse. The
- * xarray is used to store them in a tree while building up the data structure,
- * but the KHO successor kernel only needs to process them once in order.
- *
- * All of this memory is normal kmalloc() memory and is not marked for
- * preservation. The successor kernel will remain isolated to the scratch space
- * until it completes processing this list. Once processed all the memory
- * storing these ranges will be marked as free.
- */
-
-struct khoser_mem_bitmap_ptr {
-	phys_addr_t phys_start;
-	DECLARE_KHOSER_PTR(bitmap, struct kho_mem_phys_bits *);
-};
-
-struct khoser_mem_chunk_hdr {
-	DECLARE_KHOSER_PTR(next, struct khoser_mem_chunk *);
-	unsigned int order;
-	unsigned int num_elms;
-};
-
-#define KHOSER_BITMAP_SIZE                                   \
-	((PAGE_SIZE - sizeof(struct khoser_mem_chunk_hdr)) / \
-	 sizeof(struct khoser_mem_bitmap_ptr))
-
-struct khoser_mem_chunk {
-	struct khoser_mem_chunk_hdr hdr;
-	struct khoser_mem_bitmap_ptr bitmaps[KHOSER_BITMAP_SIZE];
-};
-
-static_assert(sizeof(struct khoser_mem_chunk) == PAGE_SIZE);
-
-static struct khoser_mem_chunk *new_chunk(struct khoser_mem_chunk *cur_chunk,
-					  unsigned long order)
+static int __init kho_preserved_memory_reserve(phys_addr_t phys,
+					       unsigned int order)
 {
-	struct khoser_mem_chunk *chunk __free(free_page) = NULL;
+	union kho_page_info info;
+	struct page *page;
+	u64 sz;
 
-	chunk = (void *)get_zeroed_page(GFP_KERNEL);
-	if (!chunk)
-		return ERR_PTR(-ENOMEM);
+	sz = 1 << (order + PAGE_SHIFT);
+	page = phys_to_page(phys);
 
-	if (WARN_ON(kho_scratch_overlap(virt_to_phys(chunk), PAGE_SIZE)))
-		return ERR_PTR(-EINVAL);
-
-	chunk->hdr.order = order;
-	if (cur_chunk)
-		KHOSER_STORE_PTR(cur_chunk->hdr.next, chunk);
-	return no_free_ptr(chunk);
-}
-
-static void kho_mem_ser_free(struct khoser_mem_chunk *first_chunk)
-{
-	struct khoser_mem_chunk *chunk = first_chunk;
-
-	while (chunk) {
-		struct khoser_mem_chunk *tmp = chunk;
-
-		chunk = KHOSER_LOAD_PTR(chunk->hdr.next);
-		free_page((unsigned long)tmp);
-	}
-}
-
-/*
- * Update memory map property, if old one is found discard it via
- * kho_mem_ser_free().
- */
-static void kho_update_memory_map(struct khoser_mem_chunk *first_chunk)
-{
-	void *ptr;
-	u64 phys;
-
-	ptr = fdt_getprop_w(kho_out.fdt, 0, KHO_FDT_MEMORY_MAP_PROP_NAME, NULL);
-
-	/* Check and discard previous memory map */
-	phys = get_unaligned((u64 *)ptr);
-	if (phys)
-		kho_mem_ser_free((struct khoser_mem_chunk *)phys_to_virt(phys));
-
-	/* Update with the new value */
-	phys = first_chunk ? (u64)virt_to_phys(first_chunk) : 0;
-	put_unaligned(phys, (u64 *)ptr);
-}
-
-static int kho_mem_serialize(struct kho_out *kho_out)
-{
-	struct khoser_mem_chunk *first_chunk = NULL;
-	struct khoser_mem_chunk *chunk = NULL;
-	struct kho_mem_phys *physxa;
-	unsigned long order;
-	int err = -ENOMEM;
-
-	xa_for_each(&kho_out->track.orders, order, physxa) {
-		struct kho_mem_phys_bits *bits;
-		unsigned long phys;
-
-		chunk = new_chunk(chunk, order);
-		if (IS_ERR(chunk)) {
-			err = PTR_ERR(chunk);
-			goto err_free;
-		}
-
-		if (!first_chunk)
-			first_chunk = chunk;
-
-		xa_for_each(&physxa->phys_bits, phys, bits) {
-			struct khoser_mem_bitmap_ptr *elm;
-
-			if (chunk->hdr.num_elms == ARRAY_SIZE(chunk->bitmaps)) {
-				chunk = new_chunk(chunk, order);
-				if (IS_ERR(chunk)) {
-					err = PTR_ERR(chunk);
-					goto err_free;
-				}
-			}
-
-			elm = &chunk->bitmaps[chunk->hdr.num_elms];
-			chunk->hdr.num_elms++;
-			elm->phys_start = (phys * PRESERVE_BITS)
-					  << (order + PAGE_SHIFT);
-			KHOSER_STORE_PTR(elm->bitmap, bits);
-		}
-	}
-
-	kho_update_memory_map(first_chunk);
+	/* Reserve the memory preserved in KHO in memblock */
+	memblock_reserve(phys, sz);
+	memblock_reserved_mark_noinit(phys, sz);
+	info.magic = KHO_PAGE_MAGIC;
+	info.order = order;
+	page->private = info.page_private;
 
 	return 0;
-
-err_free:
-	kho_mem_ser_free(first_chunk);
-	return err;
-}
-
-static void __init deserialize_bitmap(unsigned int order,
-				      struct khoser_mem_bitmap_ptr *elm)
-{
-	struct kho_mem_phys_bits *bitmap = KHOSER_LOAD_PTR(elm->bitmap);
-	unsigned long bit;
-
-	for_each_set_bit(bit, bitmap->preserve, PRESERVE_BITS) {
-		int sz = 1 << (order + PAGE_SHIFT);
-		phys_addr_t phys =
-			elm->phys_start + (bit << (order + PAGE_SHIFT));
-		struct page *page = phys_to_page(phys);
-		union kho_page_info info;
-
-		memblock_reserve(phys, sz);
-		memblock_reserved_mark_noinit(phys, sz);
-		info.magic = KHO_PAGE_MAGIC;
-		info.order = order;
-		page->private = info.page_private;
-	}
 }
 
 /* Returns physical address of the preserved memory map from FDT */
···
 
 	mem_ptr = fdt_getprop(fdt, 0, KHO_FDT_MEMORY_MAP_PROP_NAME, &len);
 	if (!mem_ptr || len != sizeof(u64)) {
-		pr_err("failed to get preserved memory bitmaps\n");
+		pr_err("failed to get preserved memory map\n");
 		return 0;
 	}
 
 	return get_unaligned((const u64 *)mem_ptr);
-}
-
-static void __init kho_mem_deserialize(struct khoser_mem_chunk *chunk)
-{
-	while (chunk) {
-		unsigned int i;
-
-		for (i = 0; i != chunk->hdr.num_elms; i++)
-			deserialize_bitmap(chunk->hdr.order,
-					   &chunk->bitmaps[i]);
-		chunk = KHOSER_LOAD_PTR(chunk->hdr.next);
-	}
 }
 
 /*
···
  */
 int kho_preserve_folio(struct folio *folio)
 {
+	struct kho_radix_tree *tree = &kho_out.radix_tree;
 	const unsigned long pfn = folio_pfn(folio);
 	const unsigned int order = folio_order(folio);
-	struct kho_mem_track *track = &kho_out.track;
 
 	if (WARN_ON(kho_scratch_overlap(pfn << PAGE_SHIFT, PAGE_SIZE << order)))
 		return -EINVAL;
 
-	return __kho_preserve_order(track, pfn, order);
+	return
kho_radix_add_page(tree, pfn, order); 828 823 } 829 824 EXPORT_SYMBOL_GPL(kho_preserve_folio); 830 825 ··· 838 833 */ 839 834 void kho_unpreserve_folio(struct folio *folio) 840 835 { 836 + struct kho_radix_tree *tree = &kho_out.radix_tree; 841 837 const unsigned long pfn = folio_pfn(folio); 842 838 const unsigned int order = folio_order(folio); 843 - struct kho_mem_track *track = &kho_out.track; 844 839 845 - __kho_unpreserve_order(track, pfn, order); 840 + kho_radix_del_page(tree, pfn, order); 846 841 } 847 842 EXPORT_SYMBOL_GPL(kho_unpreserve_folio); 848 843 ··· 858 853 */ 859 854 int kho_preserve_pages(struct page *page, unsigned long nr_pages) 860 855 { 861 - struct kho_mem_track *track = &kho_out.track; 856 + struct kho_radix_tree *tree = &kho_out.radix_tree; 862 857 const unsigned long start_pfn = page_to_pfn(page); 863 858 const unsigned long end_pfn = start_pfn + nr_pages; 864 859 unsigned long pfn = start_pfn; ··· 874 869 const unsigned int order = 875 870 min(count_trailing_zeros(pfn), ilog2(end_pfn - pfn)); 876 871 877 - err = __kho_preserve_order(track, pfn, order); 872 + err = kho_radix_add_page(tree, pfn, order); 878 873 if (err) { 879 874 failed_pfn = pfn; 880 875 break; ··· 884 879 } 885 880 886 881 if (err) 887 - __kho_unpreserve(track, start_pfn, failed_pfn); 882 + __kho_unpreserve(tree, start_pfn, failed_pfn); 888 883 889 884 return err; 890 885 } ··· 902 897 */ 903 898 void kho_unpreserve_pages(struct page *page, unsigned long nr_pages) 904 899 { 905 - struct kho_mem_track *track = &kho_out.track; 900 + struct kho_radix_tree *tree = &kho_out.radix_tree; 906 901 const unsigned long start_pfn = page_to_pfn(page); 907 902 const unsigned long end_pfn = start_pfn + nr_pages; 908 903 909 - __kho_unpreserve(track, start_pfn, end_pfn); 904 + __kho_unpreserve(tree, start_pfn, end_pfn); 910 905 } 911 906 EXPORT_SYMBOL_GPL(kho_unpreserve_pages); 912 907 ··· 965 960 static void kho_vmalloc_unpreserve_chunk(struct kho_vmalloc_chunk *chunk, 966 961 unsigned 
short order) 967 962 { 968 - struct kho_mem_track *track = &kho_out.track; 963 + struct kho_radix_tree *tree = &kho_out.radix_tree; 969 964 unsigned long pfn = PHYS_PFN(virt_to_phys(chunk)); 970 965 971 - __kho_unpreserve(track, pfn, pfn + 1); 966 + __kho_unpreserve(tree, pfn, pfn + 1); 972 967 973 968 for (int i = 0; i < ARRAY_SIZE(chunk->phys) && chunk->phys[i]; i++) { 974 969 pfn = PHYS_PFN(chunk->phys[i]); 975 - __kho_unpreserve(track, pfn, pfn + (1 << order)); 970 + __kho_unpreserve(tree, pfn, pfn + (1 << order)); 976 971 } 977 972 } 978 973 ··· 1243 1238 1244 1239 int kho_finalize(void) 1245 1240 { 1246 - int ret; 1247 - 1248 1241 if (!kho_enable) 1249 1242 return -EOPNOTSUPP; 1250 1243 1251 1244 guard(mutex)(&kho_out.lock); 1252 - ret = kho_mem_serialize(&kho_out); 1253 - if (ret) 1254 - return ret; 1255 - 1256 1245 kho_out.finalized = true; 1257 1246 1258 1247 return 0; ··· 1261 1262 struct kho_in { 1262 1263 phys_addr_t fdt_phys; 1263 1264 phys_addr_t scratch_phys; 1264 - phys_addr_t mem_map_phys; 1265 1265 struct kho_debugfs dbg; 1266 1266 }; 1267 1267 ··· 1328 1330 } 1329 1331 EXPORT_SYMBOL_GPL(kho_retrieve_subtree); 1330 1332 1333 + static int __init kho_mem_retrieve(const void *fdt) 1334 + { 1335 + struct kho_radix_tree tree; 1336 + const phys_addr_t *mem; 1337 + int len; 1338 + 1339 + /* Retrieve the KHO radix tree from passed-in FDT. 
*/ 1340 + mem = fdt_getprop(fdt, 0, KHO_FDT_MEMORY_MAP_PROP_NAME, &len); 1341 + 1342 + if (!mem || len != sizeof(*mem)) { 1343 + pr_err("failed to get preserved KHO memory tree\n"); 1344 + return -ENOENT; 1345 + } 1346 + 1347 + if (!*mem) 1348 + return -EINVAL; 1349 + 1350 + tree.root = phys_to_virt(*mem); 1351 + mutex_init(&tree.lock); 1352 + return kho_radix_walk_tree(&tree, kho_preserved_memory_reserve); 1353 + } 1354 + 1331 1355 static __init int kho_out_fdt_setup(void) 1332 1356 { 1357 + struct kho_radix_tree *tree = &kho_out.radix_tree; 1333 1358 void *root = kho_out.fdt; 1334 - u64 empty_mem_map = 0; 1359 + u64 preserved_mem_tree_pa; 1335 1360 int err; 1336 1361 1337 1362 err = fdt_create(root, PAGE_SIZE); 1338 1363 err |= fdt_finish_reservemap(root); 1339 1364 err |= fdt_begin_node(root, ""); 1340 1365 err |= fdt_property_string(root, "compatible", KHO_FDT_COMPATIBLE); 1341 - err |= fdt_property(root, KHO_FDT_MEMORY_MAP_PROP_NAME, &empty_mem_map, 1342 - sizeof(empty_mem_map)); 1366 + 1367 + preserved_mem_tree_pa = virt_to_phys(tree->root); 1368 + 1369 + err |= fdt_property(root, KHO_FDT_MEMORY_MAP_PROP_NAME, 1370 + &preserved_mem_tree_pa, 1371 + sizeof(preserved_mem_tree_pa)); 1372 + 1343 1373 err |= fdt_end_node(root); 1344 1374 err |= fdt_finish(root); 1345 1375 ··· 1376 1350 1377 1351 static __init int kho_init(void) 1378 1352 { 1353 + struct kho_radix_tree *tree = &kho_out.radix_tree; 1379 1354 const void *fdt = kho_get_fdt(); 1380 1355 int err = 0; 1381 1356 1382 1357 if (!kho_enable) 1383 1358 return 0; 1384 1359 1360 + tree->root = kzalloc(PAGE_SIZE, GFP_KERNEL); 1361 + if (!tree->root) { 1362 + err = -ENOMEM; 1363 + goto err_free_scratch; 1364 + } 1365 + 1385 1366 kho_out.fdt = kho_alloc_preserve(PAGE_SIZE); 1386 1367 if (IS_ERR(kho_out.fdt)) { 1387 1368 err = PTR_ERR(kho_out.fdt); 1388 - goto err_free_scratch; 1369 + goto err_free_kho_radix_tree_root; 1389 1370 } 1390 1371 1391 1372 err = kho_debugfs_init(); ··· 1438 1405 1439 1406 err_free_fdt: 
1440 1407 kho_unpreserve_free(kho_out.fdt); 1408 + err_free_kho_radix_tree_root: 1409 + kfree(tree->root); 1410 + tree->root = NULL; 1441 1411 err_free_scratch: 1442 1412 kho_out.fdt = NULL; 1443 1413 for (int i = 0; i < kho_scratch_cnt; i++) { ··· 1480 1444 1481 1445 void __init kho_memory_init(void) 1482 1446 { 1483 - if (kho_in.mem_map_phys) { 1447 + if (kho_in.scratch_phys) { 1484 1448 kho_scratch = phys_to_virt(kho_in.scratch_phys); 1485 1449 kho_release_scratch(); 1486 - kho_mem_deserialize(phys_to_virt(kho_in.mem_map_phys)); 1450 + 1451 + if (kho_mem_retrieve(kho_get_fdt())) 1452 + kho_in.fdt_phys = 0; 1487 1453 } else { 1488 1454 kho_reserve_scratch(); 1489 1455 } ··· 1563 1525 1564 1526 kho_in.fdt_phys = fdt_phys; 1565 1527 kho_in.scratch_phys = scratch_phys; 1566 - kho_in.mem_map_phys = mem_map_phys; 1567 1528 kho_scratch_cnt = scratch_cnt; 1568 1529 1569 1530 populated = true;
+2 -1
kernel/liveupdate/kexec_handover_debugfs.c
··· 13 13 #include <linux/io.h> 14 14 #include <linux/libfdt.h> 15 15 #include <linux/mm.h> 16 + #include <linux/kho/abi/kexec_handover.h> 16 17 #include "kexec_handover_internal.h" 17 18 18 19 static struct dentry *debugfs_root; ··· 140 139 const char *name = fdt_get_name(fdt, child, NULL); 141 140 const u64 *fdt_phys; 142 141 143 - fdt_phys = fdt_getprop(fdt, child, "fdt", &len); 142 + fdt_phys = fdt_getprop(fdt, child, KHO_FDT_SUB_TREE_PROP_NAME, &len); 144 143 if (!fdt_phys) 145 144 continue; 146 145 if (len != sizeof(*fdt_phys)) {