Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'driver-core-7.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/driver-core/driver-core

Pull driver core updates from Danilo Krummrich:
"debugfs:
- Fix NULL pointer dereference in debugfs_create_str()
- Fix misplaced EXPORT_SYMBOL_GPL for debugfs_create_str()
- Fix soundwire debugfs NULL pointer dereference from uninitialized
firmware_file

device property:
- Make fwnode flags modifications thread safe; widen the field to
unsigned long and use set_bit() / clear_bit() based accessors
- Document how to check for the property presence

devres:
- Separate struct devres_node from its "subclasses" (struct devres,
struct devres_group); give struct devres_node its own release and
free callbacks for per-type dispatch
- Introduce struct devres_action for devres actions, avoiding the
ARCH_DMA_MINALIGN alignment overhead of struct devres
- Export struct devres_node and its init/add/remove/dbginfo
primitives for use by Rust Devres<T>
- Fix missing node debug info in devm_krealloc()
- Use guard(spinlock_irqsave) where applicable; consolidate unlock
paths in devres_release_group()

driver_override:
- Convert PCI, WMI, vdpa, s390/cio, s390/ap, and fsl-mc to the
generic driver_override infrastructure, replacing per-bus
driver_override strings, sysfs attributes, and match logic; fixes a
potential UAF from unsynchronized access to driver_override in bus
match() callbacks
- Simplify __device_set_driver_override() logic

kernfs:
- Send IN_DELETE_SELF and IN_IGNORED inotify events on kernfs file
and directory removal
- Add corresponding selftests for memcg

platform:
- Allow attaching software nodes when creating platform devices via a
new 'swnode' field in struct platform_device_info
- Add kerneldoc for struct platform_device_info

software node:
- Move software node initialization from postcore_initcall() to
driver_init(), making it available early in the boot process
- Move kernel_kobj initialization (ksysfs_init) earlier to support
the above
- Remove software_node_exit(); dead code in a built-in unit

SoC:
- Introduce of_machine_read_compatible() and of_machine_read_model()
OF helpers and export soc_attr_read_machine() to replace direct
accesses to of_root from SoC drivers; also enables
CONFIG_COMPILE_TEST coverage for these drivers

sysfs:
- Constify attribute group array pointers to
'const struct attribute_group *const *' in sysfs functions,
device_add_groups() / device_remove_groups(), and struct class

Rust:
- Devres:
   - Embed struct devres_node directly in Devres<T> instead of going
     through devm_add_action(), avoiding the extra allocation and the
     unnecessary ARCH_DMA_MINALIGN alignment

- I/O:
   - Turn IoCapable from a marker trait into a functional trait
     carrying the raw I/O accessor implementation (io_read /
     io_write), providing working defaults for the per-type Io
     methods
   - Add RelaxedMmio wrapper type, making relaxed accessors usable in
     code generic over the Io trait
   - Remove overloaded per-type Io methods and per-backend macros
     from Mmio and PCI ConfigSpace

- I/O (Register):
   - Add IoLoc trait and generic read/write/update methods to the Io
     trait, making I/O operations parameterizable by typed locations
   - Add register! macro for defining hardware register types with
     typed bitfield accessors backed by Bounded values; supports
     direct, relative, and array register addressing
   - Add write_reg() / try_write_reg() and LocatedRegister trait
   - Update PCI sample driver to demonstrate the register! macro
Example:

```rust
register! {
    /// UART control register.
    CTRL(u32) @ 0x18 {
        /// Receiver enable.
        19:19 rx_enable => bool;
        /// Parity configuration.
        14:13 parity ?=> Parity;
    }

    /// FIFO watermark and counter register.
    WATER(u32) @ 0x2c {
        /// Number of datawords in the receive FIFO.
        26:24 rx_count;
        /// RX interrupt threshold.
        17:16 rx_water;
    }
}

impl WATER {
    fn rx_above_watermark(&self) -> bool {
        self.rx_count() > self.rx_water()
    }
}

fn init(bar: &pci::Bar<BAR0_SIZE>) {
    let water = WATER::zeroed()
        .with_const_rx_water::<1>(); // > 3 would not compile
    bar.write_reg(water);

    let ctrl = CTRL::zeroed()
        .with_parity(Parity::Even)
        .with_rx_enable(true);
    bar.write_reg(ctrl);
}

fn handle_rx(bar: &pci::Bar<BAR0_SIZE>) {
    if bar.read(WATER).rx_above_watermark() {
        // drain the FIFO
    }
}

fn set_parity(bar: &pci::Bar<BAR0_SIZE>, parity: Parity) {
    bar.update(CTRL, |r| r.with_parity(parity));
}
```

- IRQ:
   - Move 'static bounds from where clauses to trait declarations for
     IRQ handler traits

- Misc:
   - Enable the generic_arg_infer Rust feature
   - Extend Bounded with shift operations, single-bit bool
     conversion, and const get()

Misc:
- Make deferred_probe_timeout default a Kconfig option
- Drop auxiliary_dev_pm_ops; the PM core falls back to driver PM
callbacks when no bus type PM ops are set
- Add conditional guard support for device_lock()
- Add ksysfs.c to the DRIVER CORE MAINTAINERS entry
- Fix kernel-doc warnings in base.h
- Fix stale reference to memory_block_add_nid() in documentation"

* tag 'driver-core-7.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/driver-core/driver-core: (67 commits)
bus: fsl-mc: use generic driver_override infrastructure
s390/ap: use generic driver_override infrastructure
s390/cio: use generic driver_override infrastructure
vdpa: use generic driver_override infrastructure
platform/wmi: use generic driver_override infrastructure
PCI: use generic driver_override infrastructure
driver core: make software nodes available earlier
software node: remove software_node_exit()
kernel: ksysfs: initialize kernel_kobj earlier
MAINTAINERS: add ksysfs.c to the DRIVER CORE entry
drivers/base/memory: fix stale reference to memory_block_add_nid()
device property: Document how to check for the property presence
soundwire: debugfs: initialize firmware_file to empty string
debugfs: fix placement of EXPORT_SYMBOL_GPL for debugfs_create_str()
debugfs: check for NULL pointer in debugfs_create_str()
driver core: Make deferred_probe_timeout default a Kconfig option
driver core: simplify __device_set_driver_override() clearing logic
driver core: auxiliary bus: Drop auxiliary_dev_pm_ops
device property: Make modifications of fwnode "flags" thread safe
rust: devres: embed struct devres_node directly
...

+2847 -1022
MAINTAINERS (+2)

```diff
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -7807,8 +7807,10 @@
 F:	include/linux/device.h
 F:	include/linux/fwnode.h
 F:	include/linux/kobj*
+F:	include/linux/ksysfs.h
 F:	include/linux/property.h
 F:	include/linux/sysfs.h
+F:	kernel/ksysfs.c
 F:	lib/kobj*
 F:	rust/kernel/debugfs.rs
 F:	rust/kernel/debugfs/
```
drivers/base/Kconfig (+9)

```diff
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -73,6 +73,15 @@
 	  with the PROT_EXEC flag. This can break, for example, non-KMS
 	  video drivers.
 
+config DRIVER_DEFERRED_PROBE_TIMEOUT
+	int "Default value for deferred_probe_timeout"
+	default 0 if !MODULES
+	default 10 if MODULES
+	help
+	  Set the default value for the deferred_probe_timeout kernel parameter.
+	  See Documentation/admin-guide/kernel-parameters.txt for a description
+	  of the deferred_probe_timeout kernel parameter.
+
 config STANDALONE
 	bool "Select only drivers that don't need compile-time external firmware"
 	default y
```
drivers/base/auxiliary.c (-6)

```diff
--- a/drivers/base/auxiliary.c
+++ b/drivers/base/auxiliary.c
@@
 		(int)(p - name), name);
 }
 
-static const struct dev_pm_ops auxiliary_dev_pm_ops = {
-	SET_RUNTIME_PM_OPS(pm_generic_runtime_suspend, pm_generic_runtime_resume, NULL)
-	SET_SYSTEM_SLEEP_PM_OPS(pm_generic_suspend, pm_generic_resume)
-};
-
 static int auxiliary_bus_probe(struct device *dev)
 {
 	const struct auxiliary_driver *auxdrv = to_auxiliary_drv(dev->driver);
@@
 	.shutdown = auxiliary_bus_shutdown,
 	.match = auxiliary_match,
 	.uevent = auxiliary_uevent,
-	.pm = &auxiliary_dev_pm_ops,
 };
 
 /**
```
drivers/base/base.h (+60 -38)

```diff
--- a/drivers/base/base.h
+++ b/drivers/base/base.h
@@
 #include <linux/notifier.h>
 
 /**
- * struct subsys_private - structure to hold the private to the driver core portions of the bus_type/class structure.
- *
- * @subsys - the struct kset that defines this subsystem
- * @devices_kset - the subsystem's 'devices' directory
- * @interfaces - list of subsystem interfaces associated
- * @mutex - protect the devices, and interfaces lists.
- *
- * @drivers_kset - the list of drivers associated
- * @klist_devices - the klist to iterate over the @devices_kset
- * @klist_drivers - the klist to iterate over the @drivers_kset
- * @bus_notifier - the bus notifier list for anything that cares about things
- *                 on this bus.
- * @bus - pointer back to the struct bus_type that this structure is associated
- *        with.
+ * struct subsys_private - structure to hold the private to the driver core
+ *			   portions of the bus_type/class structure.
+ * @subsys: the struct kset that defines this subsystem
+ * @devices_kset: the subsystem's 'devices' directory
+ * @interfaces: list of subsystem interfaces associated
+ * @mutex: protect the devices, and interfaces lists.
+ * @drivers_kset: the list of drivers associated
+ * @klist_devices: the klist to iterate over the @devices_kset
+ * @klist_drivers: the klist to iterate over the @drivers_kset
+ * @bus_notifier: the bus notifier list for anything that cares about things
+ *		  on this bus.
+ * @drivers_autoprobe: gate whether new devices are automatically attached to
+ *		       registered drivers, or new drivers automatically attach
+ *		       to existing devices.
+ * @bus: pointer back to the struct bus_type that this structure is associated
+ *	 with.
  * @dev_root: Default device to use as the parent.
- *
- * @glue_dirs - "glue" directory to put in-between the parent device to
- *              avoid namespace conflicts
- * @class - pointer back to the struct class that this structure is associated
- *          with.
- * @lock_key:	Lock class key for use by the lock validator
+ * @glue_dirs: "glue" directory to put in-between the parent device to
+ *	       avoid namespace conflicts
+ * @class: pointer back to the struct class that this structure is associated
+ *	   with.
+ * @lock_key: Lock class key for use by the lock validator
  *
  * This structure is the one that is the actual kobject allowing struct
  * bus_type/class to be statically allocated safely. Nothing outside of the
@@
 #endif
 
 /**
- * struct device_private - structure to hold the private to the driver core portions of the device structure.
- *
- * @klist_children - klist containing all children of this device
- * @knode_parent - node in sibling list
- * @knode_driver - node in driver list
- * @knode_bus - node in bus list
- * @knode_class - node in class list
- * @deferred_probe - entry in deferred_probe_list which is used to retry the
- *	binding of drivers which were unable to get all the resources needed by
- *	the device; typically because it depends on another driver getting
- *	probed first.
- * @async_driver - pointer to device driver awaiting probe via async_probe
- * @device - pointer back to the struct device that this structure is
- *	associated with.
- * @driver_type - The type of the bound Rust driver.
- * @dead - This device is currently either in the process of or has been
- *	removed from the system. Any asynchronous events scheduled for this
- *	device should exit without taking any action.
+ * struct device_private - structure to hold the private to the driver core
+ *			   portions of the device structure.
+ * @klist_children: klist containing all children of this device
+ * @knode_parent: node in sibling list
+ * @knode_driver: node in driver list
+ * @knode_bus: node in bus list
+ * @knode_class: node in class list
+ * @deferred_probe: entry in deferred_probe_list which is used to retry the
+ *		    binding of drivers which were unable to get all the
+ *		    resources needed by the device; typically because it depends
+ *		    on another driver getting probed first.
+ * @async_driver: pointer to device driver awaiting probe via async_probe
+ * @deferred_probe_reason: capture the -EPROBE_DEFER message emitted with
+ *			   dev_err_probe() for later retrieval via debugfs
+ * @device: pointer back to the struct device that this structure is
+ *	    associated with.
+ * @driver_type: The type of the bound Rust driver.
+ * @dead: This device is currently either in the process of or has been
+ *	  removed from the system. Any asynchronous events scheduled for this
+ *	  device should exit without taking any action.
  *
  * Nothing outside of the driver core should ever touch these fields.
  */
@@
 	WRITE_ONCE(dev->driver, (struct device_driver *)drv);
 }
 
+struct devres_node;
+typedef void (*dr_node_release_t)(struct device *dev, struct devres_node *node);
+typedef void (*dr_node_free_t)(struct devres_node *node);
+
+struct devres_node {
+	struct list_head entry;
+	dr_node_release_t release;
+	dr_node_free_t free_node;
+	const char *name;
+	size_t size;
+};
+
+void devres_node_init(struct devres_node *node, dr_node_release_t release,
+		      dr_node_free_t free_node);
+void devres_node_add(struct device *dev, struct devres_node *node);
+bool devres_node_remove(struct device *dev, struct devres_node *node);
+void devres_set_node_dbginfo(struct devres_node *node, const char *name,
+			     size_t size);
 void devres_for_each_res(struct device *dev, dr_release_t release,
 			 dr_match_t match, void *match_data,
 			 void (*fn)(struct device *, void *, void *),
@@
 static inline int devtmpfs_delete_node(struct device *dev) { return 0; }
 #endif
 
+void software_node_init(void);
 void software_node_notify(struct device *dev);
 void software_node_notify_remove(struct device *dev);
```
drivers/base/core.c (+15 -14)

```diff
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@
 	if (fwnode->dev)
 		return;
 
-	fwnode->flags |= FWNODE_FLAG_NOT_DEVICE;
+	fwnode_set_flag(fwnode, FWNODE_FLAG_NOT_DEVICE);
 	fwnode_links_purge_consumers(fwnode);
 
 	fwnode_for_each_available_child_node(fwnode, child)
@@
 	if (fwnode->dev && fwnode->dev->bus)
 		return;
 
-	fwnode->flags |= FWNODE_FLAG_NOT_DEVICE;
+	fwnode_set_flag(fwnode, FWNODE_FLAG_NOT_DEVICE);
 	__fwnode_links_move_consumers(fwnode, new_sup);
 
 	fwnode_for_each_available_child_node(fwnode, child)
@@
 static bool dev_is_best_effort(struct device *dev)
 {
 	return (fw_devlink_best_effort && dev->can_match) ||
-		(dev->fwnode && (dev->fwnode->flags & FWNODE_FLAG_BEST_EFFORT));
+		(dev->fwnode && fwnode_test_flag(dev->fwnode, FWNODE_FLAG_BEST_EFFORT));
 }
 
 static struct fwnode_handle *fwnode_links_check_suppliers(
@@
 static void fw_devlink_parse_fwnode(struct fwnode_handle *fwnode)
 {
-	if (fwnode->flags & FWNODE_FLAG_LINKS_ADDED)
+	if (fwnode_test_flag(fwnode, FWNODE_FLAG_LINKS_ADDED))
 		return;
 
 	fwnode_call_int_op(fwnode, add_links);
-	fwnode->flags |= FWNODE_FLAG_LINKS_ADDED;
+	fwnode_set_flag(fwnode, FWNODE_FLAG_LINKS_ADDED);
 }
 
 static void fw_devlink_parse_fwtree(struct fwnode_handle *fwnode)
@@
 	struct device *dev;
 	bool ret;
 
-	if (!(fwnode->flags & FWNODE_FLAG_INITIALIZED))
+	if (!fwnode_test_flag(fwnode, FWNODE_FLAG_INITIALIZED))
 		return false;
 
 	dev = get_dev_from_fwnode(fwnode);
@@
 	 * We aren't trying to find all cycles. Just a cycle between con and
 	 * sup_handle.
 	 */
-	if (sup_handle->flags & FWNODE_FLAG_VISITED)
+	if (fwnode_test_flag(sup_handle, FWNODE_FLAG_VISITED))
 		return false;
 
-	sup_handle->flags |= FWNODE_FLAG_VISITED;
+	fwnode_set_flag(sup_handle, FWNODE_FLAG_VISITED);
 
 	/* Termination condition. */
 	if (sup_handle == con_handle) {
@@
 	}
 
 out:
-	sup_handle->flags &= ~FWNODE_FLAG_VISITED;
+	fwnode_clear_flag(sup_handle, FWNODE_FLAG_VISITED);
 	put_device(sup_dev);
 	put_device(con_dev);
 	put_device(par_dev);
@@
 	 * When such a flag is set, we can't create device links where P is the
 	 * supplier of C as that would delay the probe of C.
 	 */
-	if (sup_handle->flags & FWNODE_FLAG_NEEDS_CHILD_BOUND_ON_ADD &&
+	if (fwnode_test_flag(sup_handle, FWNODE_FLAG_NEEDS_CHILD_BOUND_ON_ADD) &&
 	    fwnode_is_ancestor_of(sup_handle, con->fwnode))
 		return -EINVAL;
@@
 	else
 		flags = FW_DEVLINK_FLAGS_PERMISSIVE;
 
-	if (sup_handle->flags & FWNODE_FLAG_NOT_DEVICE)
+	if (fwnode_test_flag(sup_handle, FWNODE_FLAG_NOT_DEVICE))
 		sup_dev = fwnode_get_next_parent_dev(sup_handle);
 	else
 		sup_dev = get_dev_from_fwnode(sup_handle);
@@
 	 * supplier device indefinitely.
 	 */
 	if (sup_dev->links.status == DL_DEV_NO_DRIVER &&
-	    sup_handle->flags & FWNODE_FLAG_INITIALIZED) {
+	    fwnode_test_flag(sup_handle, FWNODE_FLAG_INITIALIZED)) {
 		dev_dbg(con,
 			"Not linking %pfwf - dev might never probe\n",
 			sup_handle);
@@
 }
 static DEVICE_ATTR_RO(removable);
 
-int device_add_groups(struct device *dev, const struct attribute_group **groups)
+int device_add_groups(struct device *dev,
+		      const struct attribute_group *const *groups)
 {
 	return sysfs_create_groups(&dev->kobj, groups);
 }
 EXPORT_SYMBOL_GPL(device_add_groups);
 
 void device_remove_groups(struct device *dev,
-			  const struct attribute_group **groups)
+			  const struct attribute_group *const *groups)
 {
 	sysfs_remove_groups(&dev->kobj, groups);
 }
```
drivers/base/dd.c (+23 -35)

```diff
--- a/drivers/base/dd.c
+++ b/drivers/base/dd.c
@@
 }
 DEFINE_SHOW_ATTRIBUTE(deferred_devs);
 
-#ifdef CONFIG_MODULES
-static int driver_deferred_probe_timeout = 10;
-#else
-static int driver_deferred_probe_timeout;
-#endif
+static int driver_deferred_probe_timeout = CONFIG_DRIVER_DEFERRED_PROBE_TIMEOUT;
 
 static int __init deferred_probe_timeout_setup(char *str)
 {
@@
 int __device_set_driver_override(struct device *dev, const char *s, size_t len)
 {
-	const char *new, *old;
-	char *cp;
+	const char *new = NULL, *old;
 
 	if (!s)
 		return -EINVAL;
@@
 	 */
 	len = strlen(s);
 
-	if (!len) {
-		/* Empty string passed - clear override */
-		spin_lock(&dev->driver_override.lock);
+	/* Handle trailing newline */
+	if (len) {
+		char *cp;
+
+		cp = strnchr(s, len, '\n');
+		if (cp)
+			len = cp - s;
+	}
+
+	/*
+	 * If empty string or "\n" passed, new remains NULL, clearing
+	 * the driver_override.name.
+	 */
+	if (len) {
+		new = kstrndup(s, len, GFP_KERNEL);
+		if (!new)
+			return -ENOMEM;
+	}
+
+	scoped_guard(spinlock, &dev->driver_override.lock) {
 		old = dev->driver_override.name;
-		dev->driver_override.name = NULL;
-		spin_unlock(&dev->driver_override.lock);
-		kfree(old);
-
-		return 0;
-	}
-
-	cp = strnchr(s, len, '\n');
-	if (cp)
-		len = cp - s;
-
-	new = kstrndup(s, len, GFP_KERNEL);
-	if (!new)
-		return -ENOMEM;
-
-	spin_lock(&dev->driver_override.lock);
-	old = dev->driver_override.name;
-	if (cp != s) {
 		dev->driver_override.name = new;
-		spin_unlock(&dev->driver_override.lock);
-	} else {
-		/* "\n" passed - clear override */
-		dev->driver_override.name = NULL;
-		spin_unlock(&dev->driver_override.lock);
-
-		kfree(new);
 	}
+
 	kfree(old);
 
 	return 0;
```
drivers/base/devres.c (+185 -96)

```diff
--- a/drivers/base/devres.c
+++ b/drivers/base/devres.c
@@
 #include "base.h"
 #include "trace.h"
 
-struct devres_node {
-	struct list_head entry;
-	dr_release_t release;
-	const char *name;
-	size_t size;
-};
-
 struct devres {
 	struct devres_node node;
+	dr_release_t release;
 	/*
 	 * Some archs want to perform DMA into kmalloc caches
 	 * and need a guaranteed alignment larger than
@@
 	/* -- 8 pointers */
 };
 
-static void set_node_dbginfo(struct devres_node *node, const char *name,
+void devres_node_init(struct devres_node *node,
+		      dr_node_release_t release,
+		      dr_node_free_t free_node)
+{
+	INIT_LIST_HEAD(&node->entry);
+	node->release = release;
+	node->free_node = free_node;
+}
+
+static inline void free_node(struct devres_node *node)
+{
+	node->free_node(node);
+}
+
+void devres_set_node_dbginfo(struct devres_node *node, const char *name,
 			     size_t size)
 {
 	node->name = name;
@@
 /*
  * Release functions for devres group. These callbacks are used only
  * for identification.
  */
-static void group_open_release(struct device *dev, void *res)
+static void group_open_release(struct device *dev, struct devres_node *node)
 {
 	/* noop */
 }
 
-static void group_close_release(struct device *dev, void *res)
+static void group_close_release(struct device *dev, struct devres_node *node)
 {
 	/* noop */
 }
@@
 	return true;
 }
 
+static void dr_node_release(struct device *dev, struct devres_node *node)
+{
+	struct devres *dr = container_of(node, struct devres, node);
+
+	dr->release(dev, dr->data);
+}
+
+static void dr_node_free(struct devres_node *node)
+{
+	struct devres *dr = container_of(node, struct devres, node);
+
+	kfree(dr);
+}
+
 static __always_inline struct devres *alloc_dr(dr_release_t release,
 					       size_t size, gfp_t gfp, int nid)
 {
@@
 	if (!(gfp & __GFP_ZERO))
 		memset(dr, 0, offsetof(struct devres, data));
 
-	INIT_LIST_HEAD(&dr->node.entry);
-	dr->node.release = release;
+	devres_node_init(&dr->node, dr_node_release, dr_node_free);
+	dr->release = release;
 	return dr;
 }
@@
 	dr = alloc_dr(release, size, gfp | __GFP_ZERO, nid);
 	if (unlikely(!dr))
 		return NULL;
-	set_node_dbginfo(&dr->node, name, size);
+	devres_set_node_dbginfo(&dr->node, name, size);
 	return dr->data;
 }
 EXPORT_SYMBOL_GPL(__devres_alloc_node);
@@
 {
 	struct devres_node *node;
 	struct devres_node *tmp;
-	unsigned long flags;
 
 	if (!fn)
 		return;
 
-	spin_lock_irqsave(&dev->devres_lock, flags);
+	guard(spinlock_irqsave)(&dev->devres_lock);
 	list_for_each_entry_safe_reverse(node, tmp,
 			&dev->devres_head, entry) {
 		struct devres *dr = container_of(node, struct devres, node);
 
-		if (node->release != release)
+		if (node->release != dr_node_release)
+			continue;
+		if (dr->release != release)
 			continue;
 		if (match && !match(dev, dr->data, match_data))
 			continue;
 		fn(dev, dr->data, data);
 	}
-	spin_unlock_irqrestore(&dev->devres_lock, flags);
 }
 EXPORT_SYMBOL_GPL(devres_for_each_res);
+
+static inline void free_dr(struct devres *dr)
+{
+	free_node(&dr->node);
+}
 
 /**
  * devres_free - Free device resource data
@@
 		struct devres *dr = container_of(res, struct devres, data);
 
 		BUG_ON(!list_empty(&dr->node.entry));
-		kfree(dr);
+		free_dr(dr);
 	}
 }
 EXPORT_SYMBOL_GPL(devres_free);
+
+void devres_node_add(struct device *dev, struct devres_node *node)
+{
+	guard(spinlock_irqsave)(&dev->devres_lock);
+
+	add_dr(dev, node);
+}
 
 /**
  * devres_add - Register device resource
@@
 void devres_add(struct device *dev, void *res)
 {
 	struct devres *dr = container_of(res, struct devres, data);
-	unsigned long flags;
 
-	spin_lock_irqsave(&dev->devres_lock, flags);
-	add_dr(dev, &dr->node);
-	spin_unlock_irqrestore(&dev->devres_lock, flags);
+	devres_node_add(dev, &dr->node);
 }
 EXPORT_SYMBOL_GPL(devres_add);
@@
 	list_for_each_entry_reverse(node, &dev->devres_head, entry) {
 		struct devres *dr = container_of(node, struct devres, node);
 
-		if (node->release != release)
+		if (node->release != dr_node_release)
+			continue;
+		if (dr->release != release)
 			continue;
 		if (match && !match(dev, dr->data, match_data))
 			continue;
@@
 		 dr_match_t match, void *match_data)
 {
 	struct devres *dr;
-	unsigned long flags;
 
-	spin_lock_irqsave(&dev->devres_lock, flags);
+	guard(spinlock_irqsave)(&dev->devres_lock);
 	dr = find_dr(dev, release, match, match_data);
-	spin_unlock_irqrestore(&dev->devres_lock, flags);
-
 	if (dr)
 		return dr->data;
+
 	return NULL;
 }
 EXPORT_SYMBOL_GPL(devres_find);
@@
 	unsigned long flags;
 
 	spin_lock_irqsave(&dev->devres_lock, flags);
-	dr = find_dr(dev, new_dr->node.release, match, match_data);
+	dr = find_dr(dev, new_dr->release, match, match_data);
 	if (!dr) {
 		add_dr(dev, &new_dr->node);
 		dr = new_dr;
@@
 	return dr->data;
 }
 EXPORT_SYMBOL_GPL(devres_get);
+
+bool devres_node_remove(struct device *dev, struct devres_node *node)
+{
+	struct devres_node *__node;
+
+	guard(spinlock_irqsave)(&dev->devres_lock);
+	list_for_each_entry_reverse(__node, &dev->devres_head, entry) {
+		if (__node == node) {
+			list_del_init(&node->entry);
+			devres_log(dev, node, "REM");
+			return true;
+		}
+	}
+
+	return false;
+}
 
 /**
  * devres_remove - Find a device resource and remove it
@@
 		    dr_match_t match, void *match_data)
 {
 	struct devres *dr;
-	unsigned long flags;
 
-	spin_lock_irqsave(&dev->devres_lock, flags);
+	guard(spinlock_irqsave)(&dev->devres_lock);
 	dr = find_dr(dev, release, match, match_data);
 	if (dr) {
 		list_del_init(&dr->node.entry);
 		devres_log(dev, &dr->node, "REM");
-	}
-	spin_unlock_irqrestore(&dev->devres_lock, flags);
-
-	if (dr)
 		return dr->data;
+	}
+
 	return NULL;
 }
 EXPORT_SYMBOL_GPL(devres_remove);
@@
 static void release_nodes(struct device *dev, struct list_head *todo)
 {
-	struct devres *dr, *tmp;
+	struct devres_node *node, *tmp;
 
-	/* Release. Note that both devres and devres_group are
-	 * handled as devres in the following loop. This is safe.
-	 */
-	list_for_each_entry_safe_reverse(dr, tmp, todo, node.entry) {
-		devres_log(dev, &dr->node, "REL");
-		dr->node.release(dev, dr->data);
-		kfree(dr);
+	list_for_each_entry_safe_reverse(node, tmp, todo, entry) {
+		devres_log(dev, node, "REL");
+		node->release(dev, node);
+		free_node(node);
 	}
 }
@@
 	return cnt;
 }
 
+static void devres_group_free(struct devres_node *node)
+{
+	struct devres_group *grp = container_of(node, struct devres_group, node[0]);
+
+	kfree(grp);
+}
+
 /**
  * devres_open_group - Open a new devres group
  * @dev: Device to open devres group for
@@
 void *devres_open_group(struct device *dev, void *id, gfp_t gfp)
 {
 	struct devres_group *grp;
-	unsigned long flags;
 
 	grp = kmalloc_obj(*grp, gfp);
 	if (unlikely(!grp))
 		return NULL;
 
-	grp->node[0].release = &group_open_release;
-	grp->node[1].release = &group_close_release;
-	INIT_LIST_HEAD(&grp->node[0].entry);
-	INIT_LIST_HEAD(&grp->node[1].entry);
-	set_node_dbginfo(&grp->node[0], "grp<", 0);
-	set_node_dbginfo(&grp->node[1], "grp>", 0);
+	devres_node_init(&grp->node[0], &group_open_release, devres_group_free);
+	devres_node_init(&grp->node[1], &group_close_release, NULL);
+	devres_set_node_dbginfo(&grp->node[0], "grp<", 0);
+	devres_set_node_dbginfo(&grp->node[1], "grp>", 0);
 	grp->id = grp;
 	if (id)
 		grp->id = id;
 	grp->color = 0;
 
-	spin_lock_irqsave(&dev->devres_lock, flags);
-	add_dr(dev, &grp->node[0]);
-	spin_unlock_irqrestore(&dev->devres_lock, flags);
+	devres_node_add(dev, &grp->node[0]);
 	return grp->id;
 }
 EXPORT_SYMBOL_GPL(devres_open_group);
@@
 void devres_close_group(struct device *dev, void *id)
 {
 	struct devres_group *grp;
-	unsigned long flags;
 
-	spin_lock_irqsave(&dev->devres_lock, flags);
-
+	guard(spinlock_irqsave)(&dev->devres_lock);
 	grp = find_group(dev, id);
 	if (grp)
 		add_dr(dev, &grp->node[1]);
 	else
 		WARN_ON(1);
-
-	spin_unlock_irqrestore(&dev->devres_lock, flags);
 }
 EXPORT_SYMBOL_GPL(devres_close_group);
@@
 	int cnt = 0;
 
 	spin_lock_irqsave(&dev->devres_lock, flags);
-
 	grp = find_group(dev, id);
 	if (grp) {
 		struct list_head *first = &grp->node[0].entry;
@@
 			end = grp->node[1].entry.next;
 
 		cnt = remove_nodes(dev, first, end, &todo);
-		spin_unlock_irqrestore(&dev->devres_lock, flags);
-
-		release_nodes(dev, &todo);
 	} else if (list_empty(&dev->devres_head)) {
 		/*
 		 * dev is probably dying via devres_release_all(): groups
 		 * have already been removed and are on the process of
 		 * being released - don't touch and don't warn.
 		 */
-		spin_unlock_irqrestore(&dev->devres_lock, flags);
 	} else {
 		WARN_ON(1);
-		spin_unlock_irqrestore(&dev->devres_lock, flags);
 	}
+	spin_unlock_irqrestore(&dev->devres_lock, flags);
+
+	release_nodes(dev, &todo);
 
 	return cnt;
 }
@@
 	void (*action)(void *);
 };
 
-static int devm_action_match(struct device *dev, void *res, void *p)
-{
-	struct action_devres *devres = res;
-	struct action_devres *target = p;
+struct devres_action {
+	struct devres_node node;
+	struct action_devres action;
+};
 
-	return devres->action == target->action &&
-	       devres->data == target->data;
+static int devm_action_match(struct devres_action *devres, struct action_devres *target)
+{
+	return devres->action.action == target->action &&
+	       devres->action.data == target->data;
 }
 
-static void devm_action_release(struct device *dev, void *res)
+static void devm_action_release(struct device *dev, struct devres_node *node)
 {
-	struct action_devres *devres = res;
+	struct devres_action *devres = container_of(node, struct devres_action, node);
 
-	devres->action(devres->data);
+	devres->action.action(devres->action.data);
+}
+
+static void devm_action_free(struct devres_node *node)
+{
+	struct devres_action *action = container_of(node, struct devres_action, node);
+
+	kfree(action);
 }
 
 /**
@@
 int __devm_add_action(struct device *dev, void (*action)(void *), void *data, const char *name)
 {
-	struct action_devres *devres;
+	struct devres_action *devres;
 
-	devres = __devres_alloc_node(devm_action_release, sizeof(struct action_devres),
-				     GFP_KERNEL, NUMA_NO_NODE, name);
+	devres = kzalloc_obj(*devres);
 	if (!devres)
 		return -ENOMEM;
 
-	devres->data = data;
-	devres->action = action;
+	devres_node_init(&devres->node, devm_action_release, devm_action_free);
+	devres_set_node_dbginfo(&devres->node, name, sizeof(*devres));
 
-	devres_add(dev, devres);
+	devres->action.data = data;
+	devres->action.action = action;
+
+	devres_node_add(dev, &devres->node);
 	return 0;
 }
 EXPORT_SYMBOL_GPL(__devm_add_action);
 
-bool devm_is_action_added(struct device *dev, void (*action)(void *), void *data)
+static struct devres_action *devres_action_find(struct device *dev,
+						void (*action)(void *),
+						void *data)
 {
-	struct action_devres devres = {
+	struct devres_node *node;
+	struct action_devres target = {
 		.data = data,
 		.action = action,
 	};
 
-	return devres_find(dev, devm_action_release, devm_action_match, &devres);
+	list_for_each_entry_reverse(node, &dev->devres_head, entry) {
+		struct devres_action *dr = container_of(node, struct devres_action, node);
+
+		if (node->release != devm_action_release)
+			continue;
+		if (devm_action_match(dr, &target))
+			return dr;
+	}
+
+	return NULL;
+}
+
+bool devm_is_action_added(struct device *dev, void (*action)(void *), void *data)
+{
+	guard(spinlock_irqsave)(&dev->devres_lock);
+
+	return !!devres_action_find(dev, action, data);
 }
 EXPORT_SYMBOL_GPL(devm_is_action_added);
+
+static struct devres_action *remove_action(struct device *dev,
+					   void (*action)(void *),
+					   void *data)
+{
+	struct devres_action *dr;
+
+	guard(spinlock_irqsave)(&dev->devres_lock);
+
+	dr = devres_action_find(dev, action, data);
+	if (!dr)
+		return ERR_PTR(-ENOENT);
+
+	list_del_init(&dr->node.entry);
+	devres_log(dev, &dr->node, "REM");
+
+	return dr;
+}
 
 /**
  * devm_remove_action_nowarn() - removes previously added custom action
@@
 			      void (*action)(void *),
 			      void *data)
 {
-	struct action_devres devres = {
-		.data = data,
-		.action = action,
-	};
+	struct devres_action *dr;
 
-	return devres_destroy(dev, devm_action_release, devm_action_match,
-			      &devres);
+	dr = remove_action(dev, action, data);
+	if (IS_ERR(dr))
+		return PTR_ERR(dr);
+
+	kfree(dr);
+
+	return 0;
 }
 EXPORT_SYMBOL_GPL(devm_remove_action_nowarn);
@@
 void devm_release_action(struct device *dev, void (*action)(void *), void *data)
 {
-	struct action_devres devres = {
-		.data = data,
-		.action = action,
-	};
+	struct devres_action *dr;
 
-	WARN_ON(devres_release(dev, devm_action_release, devm_action_match,
-			       &devres));
+	dr = remove_action(dev, action, data);
+	if (WARN_ON(IS_ERR(dr)))
+		return;
 
+	dr->action.action(dr->action.data);
+
+	kfree(dr);
 }
 EXPORT_SYMBOL_GPL(devm_release_action);
@@
 	/*
 	 * This is named devm_kzalloc_release for historical reasons
 	 * The initial implementation did not support kmalloc, only kzalloc
 	 */
-	set_node_dbginfo(&dr->node, "devm_kzalloc_release", size);
+	devres_set_node_dbginfo(&dr->node, "devm_kzalloc_release", size);
 	devres_add(dev, dr->data);
 	return dr->data;
 }
@@
 	if (!new_dr)
 		return NULL;
 
+	devres_set_node_dbginfo(&new_dr->node, "devm_krealloc_release", new_size);
+
 	/*
 	 * The spinlock protects the linked list against concurrent
 	 * modifications but not the resource itself.
```
··· 1038 949 old_dr = find_dr(dev, devm_kmalloc_release, devm_kmalloc_match, ptr); 1039 950 if (!old_dr) { 1040 951 spin_unlock_irqrestore(&dev->devres_lock, flags); 1041 - kfree(new_dr); 952 + free_dr(new_dr); 1042 953 WARN(1, "Memory chunk not managed or managed by a different device."); 1043 954 return NULL; 1044 955 } ··· 1058 969 * list. This is also the reason why we must not use devm_kfree() - the 1059 970 * links are no longer valid. 1060 971 */ 1061 - kfree(old_dr); 972 + free_dr(old_dr); 1062 973 1063 974 return new_dr->data; 1064 975 }
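The devres rework above makes `struct devres_action` a "subclass" of `struct devres_node`: the node is embedded in the outer struct, and the release/free callbacks recover the outer object with `container_of()`. A minimal userspace sketch of that embedding idiom — the struct names mirror the kernel's, but the macro and helpers below are standalone stand-ins, not kernel code:

```c
#include <stddef.h>

/* Userspace stand-in for the kernel's container_of() */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct devres_node {		/* stand-in: the generic base type */
	int id;
};

struct devres_action {		/* "subclass": base embedded as first member */
	struct devres_node node;
	void (*func)(void *);
	void *data;
};

/* Recover the enclosing devres_action from its embedded node. */
static struct devres_action *to_action(struct devres_node *node)
{
	return container_of(node, struct devres_action, node);
}

/* Round-trip check: outer object -> embedded node -> outer object. */
static int roundtrip_ok(void)
{
	struct devres_action act = { .node = { .id = 1 } };

	return to_action(&act.node) == &act;
}
```

Because `node` happens to sit at offset zero here the pointer arithmetic is trivial, but `container_of()` works for a member at any offset, which is what lets a single `struct devres_node *` callback signature serve every "subclass".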
drivers/base/init.c (+1)
···
  	 */
  	faux_bus_init();
  	of_core_init();
+ 	software_node_init();
  	platform_bus_init();
  	auxiliary_bus_init();
  	memory_dev_init();
drivers/base/memory.c (+1 -1)
···
  	/*
  	 * MEM_ONLINE at this point implies early memory. With NUMA,
  	 * we'll determine the zone when setting the node id via
- 	 * memory_block_add_nid(). Memory hotplug updated the zone
+ 	 * memory_block_add_nid_early(). Memory hotplug updated the zone
  	 * manually when memory onlining/offlining succeeds.
  	 */
  	mem->zone = early_node_zone_for_memory_block(mem, NUMA_NO_NODE);
drivers/base/platform.c (+29 -29)
···
  	for (i = 0; i < dev->num_resources; i++) {
  		struct resource *r = &dev->resource[i];

- 		if ((resource_type(r) & (IORESOURCE_MEM|IORESOURCE_IO)) && num-- == 0)
+ 		if ((resource_type(r) & (IORESOURCE_MEM | IORESOURCE_IO)) && num-- == 0)
  			return r;
  	}
  	return NULL;
···
   */
  void __iomem *
  devm_platform_get_and_ioremap_resource(struct platform_device *pdev,
- 				unsigned int index, struct resource **res)
+ 				       unsigned int index, struct resource **res)
  {
  	struct resource *r;

···
   * @num: interrupt number index
   * @affinity: optional cpumask pointer to get the affinity of a per-cpu interrupt
   *
-  * Gets an interupt for a platform device. Device drivers should check the
+  * Gets an interrupt for a platform device. Device drivers should check the
   * return value for errors so as to not pass a negative integer value to
   * the request_irq() APIs. Optional affinity information is provided in the
   * affinity pointer if available, and NULL otherwise.
···
   *
   * Returns &struct platform_device pointer on success, or ERR_PTR() on error.
   */
- struct platform_device *platform_device_register_full(
- 		const struct platform_device_info *pdevinfo)
+ struct platform_device *platform_device_register_full(const struct platform_device_info *pdevinfo)
  {
  	int ret;
  	struct platform_device *pdev;
+
+ 	if (pdevinfo->swnode && pdevinfo->properties)
+ 		return ERR_PTR(-EINVAL);

  	pdev = platform_device_alloc(pdevinfo->name, pdevinfo->id);
  	if (!pdev)
···
  		pdev->dev.coherent_dma_mask = pdevinfo->dma_mask;
  	}

- 	ret = platform_device_add_resources(pdev,
- 			pdevinfo->res, pdevinfo->num_res);
+ 	ret = platform_device_add_resources(pdev, pdevinfo->res, pdevinfo->num_res);
  	if (ret)
  		goto err;

- 	ret = platform_device_add_data(pdev,
- 			pdevinfo->data, pdevinfo->size_data);
+ 	ret = platform_device_add_data(pdev, pdevinfo->data, pdevinfo->size_data);
  	if (ret)
  		goto err;

- 	if (pdevinfo->properties) {
+ 	if (pdevinfo->swnode) {
+ 		ret = device_add_software_node(&pdev->dev, pdevinfo->swnode);
+ 		if (ret)
+ 			goto err;
+ 	} else if (pdevinfo->properties) {
  		ret = device_create_managed_software_node(&pdev->dev,
  							  pdevinfo->properties, NULL);
  		if (ret)
···
   * @drv: platform driver structure
   * @owner: owning module/driver
   */
- int __platform_driver_register(struct platform_driver *drv,
- 			       struct module *owner)
+ int __platform_driver_register(struct platform_driver *drv, struct module *owner)
  {
  	drv->driver.owner = owner;
  	drv->driver.bus = &platform_bus_type;
···
   * a negative error code and with the driver not registered.
   */
  int __init_or_module __platform_driver_probe(struct platform_driver *drv,
- 		int (*probe)(struct platform_device *), struct module *module)
+ 					     int (*probe)(struct platform_device *),
+ 					     struct module *module)
  {
  	int retval;

  	if (drv->driver.probe_type == PROBE_PREFER_ASYNCHRONOUS) {
  		pr_err("%s: drivers registered with %s can not be probed asynchronously\n",
- 			 drv->driver.name, __func__);
+ 		       drv->driver.name, __func__);
  		return -EINVAL;
  	}

···
   *
   * Returns &struct platform_device pointer on success, or ERR_PTR() on error.
   */
- struct platform_device * __init_or_module __platform_create_bundle(
- 			struct platform_driver *driver,
- 			int (*probe)(struct platform_device *),
- 			struct resource *res, unsigned int n_res,
- 			const void *data, size_t size, struct module *module)
+ struct platform_device * __init_or_module
+ __platform_create_bundle(struct platform_driver *driver,
+ 			 int (*probe)(struct platform_device *),
+ 			 struct resource *res, unsigned int n_res,
+ 			 const void *data, size_t size, struct module *module)
  {
  	struct platform_device *pdev;
  	int error;
···
  }
  EXPORT_SYMBOL_GPL(platform_unregister_drivers);

- static const struct platform_device_id *platform_match_id(
- 			const struct platform_device_id *id,
- 			struct platform_device *pdev)
+ static const struct platform_device_id *
+ platform_match_id(const struct platform_device_id *id, struct platform_device *pdev)
  {
  	while (id->name[0]) {
  		if (strcmp(pdev->name, id->name) == 0) {
···
  	NULL,
  };

- static umode_t platform_dev_attrs_visible(struct kobject *kobj, struct attribute *a,
- 					  int n)
+ static umode_t platform_dev_attrs_visible(struct kobject *kobj,
+ 					  struct attribute *a, int n)
  {
  	struct device *dev = container_of(kobj, typeof(*dev), kobj);

- 	if (a == &dev_attr_numa_node.attr &&
- 	    dev_to_node(dev) == NUMA_NO_NODE)
+ 	if (a == &dev_attr_numa_node.attr && dev_to_node(dev) == NUMA_NO_NODE)
  		return 0;

  	return a->mode;
···
  	.is_visible = platform_dev_attrs_visible,
  };
  __ATTRIBUTE_GROUPS(platform_dev);
-

  /**
   * platform_match - bind platform device to platform driver.
···
  	if (rc != -ENODEV)
  		return rc;

- 	add_uevent_var(env, "MODALIAS=%s%s", PLATFORM_MODULE_PREFIX,
- 		       pdev->name);
+ 	add_uevent_var(env, "MODALIAS=%s%s", PLATFORM_MODULE_PREFIX, pdev->name);
  	return 0;
  }

drivers/base/property.c (+11 -3)
···
   * @propname: Name of the property
   *
   * Check if property @propname is present in the device firmware description.
+  * This function is the unambiguous way to check that given property is present
+  * in the device firmware description.
   *
   * Return: true if property @propname is present. Otherwise, returns false.
   */
···
   * fwnode_property_present - check if a property of a firmware node is present
   * @fwnode: Firmware node whose property to check
   * @propname: Name of the property
+  *
+  * Check if property @propname is present in the firmware node description.
+  * This function is the unambiguous way to check that given property is present
+  * in the firmware node description.
   *
   * Return: true if property @propname is present. Otherwise, returns false.
   */
···
   * @dev: Device whose property is being checked
   * @propname: Name of the property
   *
-  * Return if property @propname is true or false in the device firmware description.
+  * Use device_property_present() to check for the property presence.
   *
-  * Return: true if property @propname is present. Otherwise, returns false.
+  * Return: if property @propname is true or false in the device firmware description.
   */
  bool device_property_read_bool(const struct device *dev, const char *propname)
  {
···
   * @fwnode: Firmware node whose property to check
   * @propname: Name of the property
   *
-  * Return if property @propname is true or false in the firmware description.
+  * Use fwnode_property_present() to check for the property presence.
+  *
+  * Return: if property @propname is true or false in the firmware node description.
   */
  bool fwnode_property_read_bool(const struct fwnode_handle *fwnode,
  			       const char *propname)
drivers/base/soc.c (+13 -16)
···
   * Author: Lee Jones <lee.jones@linaro.org> for ST-Ericsson.
   */

- #include <linux/sysfs.h>
- #include <linux/init.h>
- #include <linux/of.h>
- #include <linux/stat.h>
- #include <linux/slab.h>
- #include <linux/idr.h>
- #include <linux/spinlock.h>
- #include <linux/sys_soc.h>
  #include <linux/err.h>
  #include <linux/glob.h>
+ #include <linux/idr.h>
+ #include <linux/init.h>
+ #include <linux/of.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
+ #include <linux/stat.h>
+ #include <linux/sysfs.h>
+ #include <linux/sys_soc.h>

  static DEFINE_IDA(soc_ida);

···
  	kfree(soc_dev);
  }

- static void soc_device_get_machine(struct soc_device_attribute *soc_dev_attr)
+ int soc_attr_read_machine(struct soc_device_attribute *soc_dev_attr)
  {
- 	struct device_node *np;
-
  	if (soc_dev_attr->machine)
- 		return;
+ 		return -EBUSY;

- 	np = of_find_node_by_path("/");
- 	of_property_read_string(np, "model", &soc_dev_attr->machine);
- 	of_node_put(np);
+ 	return of_machine_read_model(&soc_dev_attr->machine);
  }
+ EXPORT_SYMBOL_GPL(soc_attr_read_machine);

  static struct soc_device_attribute *early_soc_dev_attr;

···
  	const struct attribute_group **soc_attr_groups;
  	int ret;

- 	soc_device_get_machine(soc_dev_attr);
+ 	soc_attr_read_machine(soc_dev_attr);

  	if (!soc_bus_registered) {
  		if (early_soc_dev_attr)
drivers/base/swnode.c (+2 -11)
···
  	}
  }

- static int __init software_node_init(void)
+ void __init software_node_init(void)
  {
  	swnode_kset = kset_create_and_add("software_nodes", NULL, kernel_kobj);
  	if (!swnode_kset)
- 		return -ENOMEM;
- 	return 0;
+ 		pr_err("failed to register software nodes\n");
  }
- postcore_initcall(software_node_init);
-
- static void __exit software_node_exit(void)
- {
- 	ida_destroy(&swnode_root_ids);
- 	kset_unregister(swnode_kset);
- }
- __exitcall(software_node_exit);
drivers/bus/fsl-mc/fsl-mc-bus.c (+7 -36)
···
  	struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev);
  	const struct fsl_mc_driver *mc_drv = to_fsl_mc_driver(drv);
  	bool found = false;
+ 	int ret;

  	/* When driver_override is set, only bind to the matching driver */
- 	if (mc_dev->driver_override) {
- 		found = !strcmp(mc_dev->driver_override, mc_drv->driver.name);
+ 	ret = device_match_driver_override(dev, drv);
+ 	if (ret > 0) {
+ 		found = true;
  		goto out;
  	}
+ 	if (ret == 0)
+ 		goto out;

  	if (!mc_drv->match_id_table)
  		goto out;
···
  }
  static DEVICE_ATTR_RO(modalias);

- static ssize_t driver_override_store(struct device *dev,
- 				     struct device_attribute *attr,
- 				     const char *buf, size_t count)
- {
- 	struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev);
- 	int ret;
-
- 	if (WARN_ON(dev->bus != &fsl_mc_bus_type))
- 		return -EINVAL;
-
- 	ret = driver_set_override(dev, &mc_dev->driver_override, buf, count);
- 	if (ret)
- 		return ret;
-
- 	return count;
- }
-
- static ssize_t driver_override_show(struct device *dev,
- 				    struct device_attribute *attr, char *buf)
- {
- 	struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev);
- 	ssize_t len;
-
- 	device_lock(dev);
- 	len = sysfs_emit(buf, "%s\n", mc_dev->driver_override);
- 	device_unlock(dev);
- 	return len;
- }
- static DEVICE_ATTR_RW(driver_override);
-
  static struct attribute *fsl_mc_dev_attrs[] = {
  	&dev_attr_modalias.attr,
- 	&dev_attr_driver_override.attr,
  	NULL,
  };
···

  const struct bus_type fsl_mc_bus_type = {
  	.name = "fsl-mc",
+ 	.driver_override = true,
  	.match = fsl_mc_bus_match,
  	.uevent = fsl_mc_bus_uevent,
  	.probe = fsl_mc_probe,
···
   */
  void fsl_mc_device_remove(struct fsl_mc_device *mc_dev)
  {
- 	kfree(mc_dev->driver_override);
- 	mc_dev->driver_override = NULL;
-
  	/*
  	 * The device-specific remove callback will get invoked by device_del()
  	 */
drivers/bus/imx-weim.c (+1 -1)
···
  		 * fw_devlink doesn't skip adding consumers to this
  		 * device.
  		 */
- 		rd->dn->fwnode.flags &= ~FWNODE_FLAG_NOT_DEVICE;
+ 		fwnode_clear_flag(&rd->dn->fwnode, FWNODE_FLAG_NOT_DEVICE);
  		if (!of_platform_device_create(rd->dn, NULL, &pdev->dev)) {
  			dev_err(&pdev->dev,
  				"Failed to create child device '%pOF'\n",
drivers/i2c/i2c-core-of.c (+1 -1)
···
  		 * Clear the flag before adding the device so that fw_devlink
  		 * doesn't skip adding consumers to this device.
  		 */
- 		rd->dn->fwnode.flags &= ~FWNODE_FLAG_NOT_DEVICE;
+ 		fwnode_clear_flag(&rd->dn->fwnode, FWNODE_FLAG_NOT_DEVICE);
  		client = of_i2c_register_device(adap, rd->dn);
  		if (IS_ERR(client)) {
  			dev_err(&adap->dev, "failed to create client for '%pOF'\n",
drivers/net/phy/mdio_bus_provider.c (+2 -2)
···
  		return -EINVAL;

  	if (bus->parent && bus->parent->of_node)
- 		bus->parent->of_node->fwnode.flags |=
- 			FWNODE_FLAG_NEEDS_CHILD_BOUND_ON_ADD;
+ 		fwnode_set_flag(&bus->parent->of_node->fwnode,
+ 				FWNODE_FLAG_NEEDS_CHILD_BOUND_ON_ADD);

  	WARN(bus->state != MDIOBUS_ALLOCATED &&
  	     bus->state != MDIOBUS_UNREGISTERED,
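The hunks above (and the of/base.c, of/dynamic.c, and of/platform.c ones below) replace open-coded `|=` / `&= ~` read-modify-write sequences on `fwnode.flags` with `fwnode_set_flag()` / `fwnode_clear_flag()` accessors, which the pull description says are built on the atomic `set_bit()` / `clear_bit()` primitives. A standalone sketch of the accessor shape, using C11 atomics as stand-ins for the kernel bit ops (names and flag values below are illustrative, not the kernel's):

```c
#include <stdatomic.h>
#include <stdbool.h>

#define FLAG_NOT_DEVICE  (1UL << 0)	/* stand-ins for FWNODE_FLAG_* */
#define FLAG_BEST_EFFORT (1UL << 1)

struct fwnode {
	_Atomic unsigned long flags;
};

/* Atomic per-bit set: two CPUs touching different bits cannot lose updates,
 * which a plain "flags |= bit" read-modify-write can. */
static void fwnode_set_flag(struct fwnode *fw, unsigned long flag)
{
	atomic_fetch_or(&fw->flags, flag);
}

static void fwnode_clear_flag(struct fwnode *fw, unsigned long flag)
{
	atomic_fetch_and(&fw->flags, ~flag);
}

static bool fwnode_test_flag(struct fwnode *fw, unsigned long flag)
{
	return atomic_load(&fw->flags) & flag;
}

/* Exercise the accessors; returns 1 when the final state is as expected. */
static int flags_demo(void)
{
	struct fwnode fw = { .flags = 0 };

	fwnode_set_flag(&fw, FLAG_NOT_DEVICE);
	fwnode_set_flag(&fw, FLAG_BEST_EFFORT);
	fwnode_clear_flag(&fw, FLAG_NOT_DEVICE);

	return !fwnode_test_flag(&fw, FLAG_NOT_DEVICE) &&
	       fwnode_test_flag(&fw, FLAG_BEST_EFFORT);
}
```

This is also why the pull description mentions widening the field to `unsigned long`: the kernel's `set_bit()`/`clear_bit()` operate on `unsigned long` words.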
drivers/of/base.c (+29 -1)
···
  EXPORT_SYMBOL(of_machine_compatible_match);

  /**
+  * of_machine_read_compatible - Get the compatible string of this machine
+  * @compatible: address at which the address of the compatible string will be
+  *	stored
+  * @index: index of the compatible entry in the list
+  *
+  * Returns:
+  * 0 on success, negative error number on failure.
+  */
+ int of_machine_read_compatible(const char **compatible, unsigned int index)
+ {
+ 	return of_property_read_string_index(of_root, "compatible", index, compatible);
+ }
+ EXPORT_SYMBOL_GPL(of_machine_read_compatible);
+
+ /**
+  * of_machine_read_model - Get the model string of this machine
+  * @model: address at which the address of the model string will be stored
+  *
+  * Returns:
+  * 0 on success, negative error number on failure.
+  */
+ int of_machine_read_model(const char **model)
+ {
+ 	return of_property_read_string(of_root, "model", model);
+ }
+ EXPORT_SYMBOL_GPL(of_machine_read_model);
+
+ /**
   * of_machine_device_match - Test root of device tree against a of_device_id array
   * @matches: NULL terminated array of of_device_id match structures to search in
   *
···
  		if (name)
  			of_stdout = of_find_node_opts_by_path(name, &of_stdout_options);
  		if (of_stdout)
- 			of_stdout->fwnode.flags |= FWNODE_FLAG_BEST_EFFORT;
+ 			fwnode_set_flag(&of_stdout->fwnode, FWNODE_FLAG_BEST_EFFORT);
  	}

  	if (!of_aliases)
drivers/of/dynamic.c (+1 -1)
···
  	np->sibling = np->parent->child;
  	np->parent->child = np;
  	of_node_clear_flag(np, OF_DETACHED);
- 	np->fwnode.flags |= FWNODE_FLAG_NOT_DEVICE;
+ 	fwnode_set_flag(&np->fwnode, FWNODE_FLAG_NOT_DEVICE);

  	raw_spin_unlock_irqrestore(&devtree_lock, flags);

drivers/of/platform.c (+1 -1)
···
  		 * Clear the flag before adding the device so that fw_devlink
  		 * doesn't skip adding consumers to this device.
  		 */
- 		rd->dn->fwnode.flags &= ~FWNODE_FLAG_NOT_DEVICE;
+ 		fwnode_clear_flag(&rd->dn->fwnode, FWNODE_FLAG_NOT_DEVICE);
  		/* pdev_parent may be NULL when no bus platform device */
  		pdev_parent = of_find_device_by_node(parent);
  		pdev = of_platform_device_create(rd->dn, NULL,
drivers/pci/pci-driver.c (+7 -4)
···
  {
  	struct pci_dynid *dynid;
  	const struct pci_device_id *found_id = NULL, *ids;
+ 	int ret;

  	/* When driver_override is set, only bind to the matching driver */
- 	if (dev->driver_override && strcmp(dev->driver_override, drv->name))
+ 	ret = device_match_driver_override(&dev->dev, &drv->driver);
+ 	if (ret == 0)
  		return NULL;

  	/* Look at the dynamic ids first, before the static ones */
···
  	 * matching.
  	 */
  	if (found_id->override_only) {
- 		if (dev->driver_override)
+ 		if (ret > 0)
  			return found_id;
  	} else {
  		return found_id;
···
  	}

  	/* driver_override will always match, send a dummy id */
- 	if (dev->driver_override)
+ 	if (ret > 0)
  		return &pci_device_id_any;
  	return NULL;
  }
···
  static inline bool pci_device_can_probe(struct pci_dev *pdev)
  {
  	return (!pdev->is_virtfn || pdev->physfn->sriov->drivers_autoprobe ||
- 		pdev->driver_override);
+ 		device_has_driver_override(&pdev->dev));
  }
  #else
  static inline bool pci_device_can_probe(struct pci_dev *pdev)
···

  const struct bus_type pci_bus_type = {
  	.name = "pci",
+ 	.driver_override = true,
  	.match = pci_bus_match,
  	.uevent = pci_uevent,
  	.probe = pci_device_probe,
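Across the converted buses, callers treat `device_match_driver_override()` as tri-state: positive when an override is set and names this driver, zero when an override is set but names a different driver, and negative when no override is set at all (so normal ID-table matching proceeds). A hypothetical userspace model of that contract — the real helper takes `struct device` / `struct device_driver` and a different error value may be used for the "not set" case:

```c
#include <string.h>

/*
 * Tri-state model of device_match_driver_override():
 *   > 0   override set and matches the driver  -> bind unconditionally
 *   == 0  override set but for another driver  -> never bind
 *   < 0   no override set                      -> fall through to ID tables
 */
static int match_driver_override(const char *override, const char *drv_name)
{
	if (!override)
		return -1;	/* stand-in for a -ENODEV-style result */
	return strcmp(override, drv_name) == 0 ? 1 : 0;
}
```

A bus `match()` callback then binds on `> 0`, rejects on `== 0`, and continues with its usual ID matching on `< 0` — the pattern visible in the pci, wmi, css, and ap_bus hunks in this pull.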
drivers/pci/pci-sysfs.c (-28)
···
  static DEVICE_ATTR_RO(devspec);
  #endif

- static ssize_t driver_override_store(struct device *dev,
- 				     struct device_attribute *attr,
- 				     const char *buf, size_t count)
- {
- 	struct pci_dev *pdev = to_pci_dev(dev);
- 	int ret;
-
- 	ret = driver_set_override(dev, &pdev->driver_override, buf, count);
- 	if (ret)
- 		return ret;
-
- 	return count;
- }
-
- static ssize_t driver_override_show(struct device *dev,
- 				    struct device_attribute *attr, char *buf)
- {
- 	struct pci_dev *pdev = to_pci_dev(dev);
- 	ssize_t len;
-
- 	device_lock(dev);
- 	len = sysfs_emit(buf, "%s\n", pdev->driver_override);
- 	device_unlock(dev);
- 	return len;
- }
- static DEVICE_ATTR_RW(driver_override);
-
  static struct attribute *pci_dev_attrs[] = {
  	&dev_attr_power_state.attr,
  	&dev_attr_resource.attr,
···
  #ifdef CONFIG_OF
  	&dev_attr_devspec.attr,
  #endif
- 	&dev_attr_driver_override.attr,
  	&dev_attr_ari_enabled.attr,
  	NULL,
  };
drivers/pci/probe.c (-1)
···
  	pci_release_of_node(pci_dev);
  	pcibios_release_device(pci_dev);
  	pci_bus_put(pci_dev->bus);
- 	kfree(pci_dev->driver_override);
  	bitmap_free(pci_dev->dma_alias_mask);
  	dev_dbg(dev, "device released\n");
  	kfree(pci_dev);
drivers/platform/wmi/core.c (+5 -31)
···
  }
  static DEVICE_ATTR_RO(expensive);

- static ssize_t driver_override_show(struct device *dev, struct device_attribute *attr,
- 				    char *buf)
- {
- 	struct wmi_device *wdev = to_wmi_device(dev);
- 	ssize_t ret;
-
- 	device_lock(dev);
- 	ret = sysfs_emit(buf, "%s\n", wdev->driver_override);
- 	device_unlock(dev);
-
- 	return ret;
- }
-
- static ssize_t driver_override_store(struct device *dev, struct device_attribute *attr,
- 				     const char *buf, size_t count)
- {
- 	struct wmi_device *wdev = to_wmi_device(dev);
- 	int ret;
-
- 	ret = driver_set_override(dev, &wdev->driver_override, buf, count);
- 	if (ret < 0)
- 		return ret;
-
- 	return count;
- }
- static DEVICE_ATTR_RW(driver_override);
-
  static struct attribute *wmi_attrs[] = {
  	&dev_attr_modalias.attr,
  	&dev_attr_guid.attr,
  	&dev_attr_instance_count.attr,
  	&dev_attr_expensive.attr,
- 	&dev_attr_driver_override.attr,
  	NULL
  };
  ATTRIBUTE_GROUPS(wmi);
···
  {
  	struct wmi_block *wblock = dev_to_wblock(dev);

- 	kfree(wblock->dev.driver_override);
  	kfree(wblock);
  }

···
  	const struct wmi_driver *wmi_driver = to_wmi_driver(driver);
  	struct wmi_block *wblock = dev_to_wblock(dev);
  	const struct wmi_device_id *id = wmi_driver->id_table;
+ 	int ret;

  	/* When driver_override is set, only bind to the matching driver */
- 	if (wblock->dev.driver_override)
- 		return !strcmp(wblock->dev.driver_override, driver->name);
+ 	ret = device_match_driver_override(dev, driver);
+ 	if (ret >= 0)
+ 		return ret;

  	if (id == NULL)
  		return 0;
···
  static const struct bus_type wmi_bus_type = {
  	.name = "wmi",
  	.dev_groups = wmi_groups,
+ 	.driver_override = true,
  	.match = wmi_dev_match,
  	.uevent = wmi_dev_uevent,
  	.probe = wmi_dev_probe,
drivers/s390/cio/cio.h (-5)
···
  	struct work_struct todo_work;
  	struct schib_config config;
  	u64 dma_mask;
- 	/*
- 	 * Driver name to force a match. Do not set directly, because core
- 	 * frees it. Use driver_set_override() to set or clear it.
- 	 */
- 	const char *driver_override;
  } __attribute__ ((aligned(8)));

  DECLARE_PER_CPU_ALIGNED(struct irb, cio_irb);
drivers/s390/cio/css.c (+4 -30)
···
  	sch->config.intparm = 0;
  	cio_commit_config(sch);
- 	kfree(sch->driver_override);
  	kfree(sch);
  }
···

  static DEVICE_ATTR_RO(modalias);

- static ssize_t driver_override_store(struct device *dev,
- 				     struct device_attribute *attr,
- 				     const char *buf, size_t count)
- {
- 	struct subchannel *sch = to_subchannel(dev);
- 	int ret;
-
- 	ret = driver_set_override(dev, &sch->driver_override, buf, count);
- 	if (ret)
- 		return ret;
-
- 	return count;
- }
-
- static ssize_t driver_override_show(struct device *dev,
- 				    struct device_attribute *attr, char *buf)
- {
- 	struct subchannel *sch = to_subchannel(dev);
- 	ssize_t len;
-
- 	device_lock(dev);
- 	len = sysfs_emit(buf, "%s\n", sch->driver_override);
- 	device_unlock(dev);
- 	return len;
- }
- static DEVICE_ATTR_RW(driver_override);
-
  static struct attribute *subch_attrs[] = {
  	&dev_attr_type.attr,
  	&dev_attr_modalias.attr,
- 	&dev_attr_driver_override.attr,
  	NULL,
  };
···
  	struct subchannel *sch = to_subchannel(dev);
  	const struct css_driver *driver = to_cssdriver(drv);
  	struct css_device_id *id;
+ 	int ret;

  	/* When driver_override is set, only bind to the matching driver */
- 	if (sch->driver_override && strcmp(sch->driver_override, drv->name))
+ 	ret = device_match_driver_override(dev, drv);
+ 	if (ret == 0)
  		return 0;

  	for (id = driver->subchannel_type; id->match_flags; id++) {
···

  static const struct bus_type css_bus_type = {
  	.name = "css",
+ 	.driver_override = true,
  	.match = css_bus_match,
  	.probe = css_probe,
  	.remove = css_remove,
drivers/s390/crypto/ap_bus.c (+17 -17)
···
  static int __ap_revise_reserved(struct device *dev, void *dummy)
  {
- 	int rc, card, queue, devres, drvres;
+ 	int rc, card, queue, devres, drvres, ovrd;

  	if (is_queue_dev(dev)) {
  		struct ap_driver *ap_drv = to_ap_drv(dev->driver);
  		struct ap_queue *aq = to_ap_queue(dev);
- 		struct ap_device *ap_dev = &aq->ap_dev;

  		card = AP_QID_CARD(aq->qid);
  		queue = AP_QID_QUEUE(aq->qid);

- 		if (ap_dev->driver_override) {
- 			if (strcmp(ap_dev->driver_override,
- 				   ap_drv->driver.name)) {
- 				pr_debug("reprobing queue=%02x.%04x\n", card, queue);
- 				rc = device_reprobe(dev);
- 				if (rc) {
- 					AP_DBF_WARN("%s reprobing queue=%02x.%04x failed\n",
- 						    __func__, card, queue);
- 				}
+ 		ovrd = device_match_driver_override(dev, &ap_drv->driver);
+ 		if (ovrd > 0) {
+ 			/* override set and matches, nothing to do */
+ 		} else if (ovrd == 0) {
+ 			pr_debug("reprobing queue=%02x.%04x\n", card, queue);
+ 			rc = device_reprobe(dev);
+ 			if (rc) {
+ 				AP_DBF_WARN("%s reprobing queue=%02x.%04x failed\n",
+ 					    __func__, card, queue);
  			}
  		} else {
  			mutex_lock(&ap_attr_mutex);
···
  	if (aq) {
  		const struct device_driver *drv = aq->ap_dev.device.driver;
  		const struct ap_driver *ap_drv = to_ap_drv(drv);
- 		bool override = !!aq->ap_dev.driver_override;
+ 		bool override = device_has_driver_override(&aq->ap_dev.device);

  		if (override && drv && ap_drv->flags & AP_DRIVER_FLAG_DEFAULT)
  			rc = 1;
···
  {
  	struct ap_device *ap_dev = to_ap_dev(dev);
  	struct ap_driver *ap_drv = to_ap_drv(dev->driver);
- 	int card, queue, devres, drvres, rc = -ENODEV;
+ 	int card, queue, devres, drvres, rc = -ENODEV, ovrd;

  	if (!get_device(dev))
  		return rc;
···
  	 */
  	card = AP_QID_CARD(to_ap_queue(dev)->qid);
  	queue = AP_QID_QUEUE(to_ap_queue(dev)->qid);
- 	if (ap_dev->driver_override) {
- 		if (strcmp(ap_dev->driver_override,
- 			   ap_drv->driver.name))
- 			goto out;
+ 	ovrd = device_match_driver_override(dev, &ap_drv->driver);
+ 	if (ovrd > 0) {
+ 		/* override set and matches, nothing to do */
+ 	} else if (ovrd == 0) {
+ 		goto out;
  	} else {
  		mutex_lock(&ap_attr_mutex);
  		devres = test_bit_inv(card, ap_perms.apm) &&
drivers/s390/crypto/ap_bus.h (-1)
···
  struct ap_device {
  	struct device device;
  	int device_type;		/* AP device type. */
- 	const char *driver_override;
  };

  #define to_ap_dev(x) container_of((x), struct ap_device, device)
drivers/s390/crypto/ap_queue.c (+6 -18)
···
  				    struct device_attribute *attr,
  				    char *buf)
  {
- 	struct ap_queue *aq = to_ap_queue(dev);
- 	struct ap_device *ap_dev = &aq->ap_dev;
- 	int rc;
-
- 	device_lock(dev);
- 	if (ap_dev->driver_override)
- 		rc = sysfs_emit(buf, "%s\n", ap_dev->driver_override);
- 	else
- 		rc = sysfs_emit(buf, "\n");
- 	device_unlock(dev);
-
- 	return rc;
+ 	guard(spinlock)(&dev->driver_override.lock);
+ 	return sysfs_emit(buf, "%s\n", dev->driver_override.name ?: "");
  }

  static ssize_t driver_override_store(struct device *dev,
  				     struct device_attribute *attr,
  				     const char *buf, size_t count)
  {
- 	struct ap_queue *aq = to_ap_queue(dev);
- 	struct ap_device *ap_dev = &aq->ap_dev;
  	int rc = -EINVAL;
  	bool old_value;

···
  	if (ap_apmask_aqmask_in_use)
  		goto out;

- 	old_value = ap_dev->driver_override ? true : false;
- 	rc = driver_set_override(dev, &ap_dev->driver_override, buf, count);
+ 	old_value = device_has_driver_override(dev);
+ 	rc = __device_set_driver_override(dev, buf, count);
  	if (rc)
  		goto out;
- 	if (old_value && !ap_dev->driver_override)
+ 	if (old_value && !device_has_driver_override(dev))
  		--ap_driver_override_ctr;
- 	else if (!old_value && ap_dev->driver_override)
+ 	else if (!old_value && device_has_driver_override(dev))
  		++ap_driver_override_ctr;

  	rc = count;
+3 -9
drivers/soc/fsl/guts.c
··· 186 186 const struct fsl_soc_data *soc_data; 187 187 const struct of_device_id *match; 188 188 struct ccsr_guts __iomem *regs; 189 - const char *machine = NULL; 190 189 struct device_node *np; 191 190 bool little_endian; 192 191 u64 soc_uid = 0; ··· 216 217 if (!soc_dev_attr) 217 218 return -ENOMEM; 218 219 219 - if (of_property_read_string(of_root, "model", &machine)) 220 - of_property_read_string_index(of_root, "compatible", 0, &machine); 221 - if (machine) { 222 - soc_dev_attr->machine = kstrdup(machine, GFP_KERNEL); 223 - if (!soc_dev_attr->machine) 224 - goto err_nomem; 225 - } 220 + ret = soc_attr_read_machine(soc_dev_attr); 221 + if (ret) 222 + of_machine_read_compatible(&soc_dev_attr->machine, 0); 226 223 227 224 soc_die = fsl_soc_die_match(svr, fsl_soc_die); 228 225 if (soc_die) { ··· 262 267 err_nomem: 263 268 ret = -ENOMEM; 264 269 err: 265 - kfree(soc_dev_attr->machine); 266 270 kfree(soc_dev_attr->family); 267 271 kfree(soc_dev_attr->soc_id); 268 272 kfree(soc_dev_attr->revision);
+3 -8
drivers/soc/imx/soc-imx8m.c
··· 226 226 const struct imx8_soc_data *data; 227 227 struct imx8_soc_drvdata *drvdata; 228 228 struct device *dev = &pdev->dev; 229 - const struct of_device_id *id; 230 229 struct soc_device *soc_dev; 231 230 u32 soc_rev = 0; 232 231 u64 soc_uid[2] = {0, 0}; ··· 243 244 244 245 soc_dev_attr->family = "Freescale i.MX"; 245 246 246 - ret = of_property_read_string(of_root, "model", &soc_dev_attr->machine); 247 + ret = soc_attr_read_machine(soc_dev_attr); 247 248 if (ret) 248 249 return ret; 249 250 250 - id = of_match_node(imx8_soc_match, of_root); 251 - if (!id) 252 - return -ENODEV; 253 - 254 - data = id->data; 251 + data = device_get_match_data(dev); 255 252 if (data) { 256 253 soc_dev_attr->soc_id = data->name; 257 254 ret = imx8m_soc_prepare(pdev, data->ocotp_compatible); ··· 321 326 int ret; 322 327 323 328 /* No match means this is non-i.MX8M hardware, do nothing. */ 324 - if (!of_match_node(imx8_soc_match, of_root)) 329 + if (!of_machine_device_match(imx8_soc_match)) 325 330 return 0; 326 331 327 332 ret = platform_driver_register(&imx8m_soc_driver);
+2 -2
drivers/soc/imx/soc-imx9.c
··· 30 30 if (!attr) 31 31 return -ENOMEM; 32 32 33 - err = of_property_read_string(of_root, "model", &attr->machine); 33 + err = soc_attr_read_machine(attr); 34 34 if (err) 35 35 return dev_err_probe(dev, err, "%s: missing model property\n", __func__); 36 36 ··· 89 89 struct platform_device *pdev; 90 90 91 91 /* No match means it is not an i.MX 9 series SoC, do nothing. */ 92 - if (!of_match_node(imx9_soc_match, of_root)) 92 + if (!of_machine_device_match(imx9_soc_match)) 93 93 return 0; 94 94 95 95 ret = platform_driver_register(&imx9_soc_driver);
+1 -1
drivers/soc/sunxi/sunxi_mbus.c
··· 118 118 119 119 static int __init sunxi_mbus_init(void) 120 120 { 121 - if (!of_device_compatible_match(of_root, sunxi_mbus_platforms)) 121 + if (!of_machine_compatible_match(sunxi_mbus_platforms)) 122 122 return 0; 123 123 124 124 bus_register_notifier(&platform_bus_type, &sunxi_mbus_nb);
+7 -2
drivers/soundwire/debugfs.c
··· 358 358 debugfs_create_file("go", 0200, d, slave, &cmd_go_fops); 359 359 360 360 debugfs_create_file("read_buffer", 0400, d, slave, &read_buffer_fops); 361 - firmware_file = NULL; 362 - debugfs_create_str("firmware_file", 0200, d, &firmware_file); 361 + if (firmware_file) 362 + debugfs_create_str("firmware_file", 0200, d, &firmware_file); 363 363 364 364 slave->debugfs = d; 365 365 } ··· 371 371 372 372 void sdw_debugfs_init(void) 373 373 { 374 + if (!firmware_file) 375 + firmware_file = kstrdup("", GFP_KERNEL); 376 + 374 377 sdw_debugfs_root = debugfs_create_dir("soundwire", NULL); 375 378 } 376 379 377 380 void sdw_debugfs_exit(void) 378 381 { 379 382 debugfs_remove_recursive(sdw_debugfs_root); 383 + kfree(firmware_file); 384 + firmware_file = NULL; 380 385 }
+1 -1
drivers/spi/spi.c
··· 4943 4943 * Clear the flag before adding the device so that fw_devlink 4944 4944 * doesn't skip adding consumers to this device. 4945 4945 */ 4946 - rd->dn->fwnode.flags &= ~FWNODE_FLAG_NOT_DEVICE; 4946 + fwnode_clear_flag(&rd->dn->fwnode, FWNODE_FLAG_NOT_DEVICE); 4947 4947 spi = of_register_spi_device(ctlr, rd->dn); 4948 4948 put_device(&ctlr->dev); 4949 4949
+5 -43
drivers/vdpa/vdpa.c
··· 67 67 68 68 static int vdpa_dev_match(struct device *dev, const struct device_driver *drv) 69 69 { 70 - struct vdpa_device *vdev = dev_to_vdpa(dev); 70 + int ret; 71 71 72 72 /* Check override first, and if set, only use the named driver */ 73 - if (vdev->driver_override) 74 - return strcmp(vdev->driver_override, drv->name) == 0; 73 + ret = device_match_driver_override(dev, drv); 74 + if (ret >= 0) 75 + return ret; 75 76 76 77 /* Currently devices must be supported by all vDPA bus drivers */ 77 78 return 1; 78 79 } 79 80 80 - static ssize_t driver_override_store(struct device *dev, 81 - struct device_attribute *attr, 82 - const char *buf, size_t count) 83 - { 84 - struct vdpa_device *vdev = dev_to_vdpa(dev); 85 - int ret; 86 - 87 - ret = driver_set_override(dev, &vdev->driver_override, buf, count); 88 - if (ret) 89 - return ret; 90 - 91 - return count; 92 - } 93 - 94 - static ssize_t driver_override_show(struct device *dev, 95 - struct device_attribute *attr, char *buf) 96 - { 97 - struct vdpa_device *vdev = dev_to_vdpa(dev); 98 - ssize_t len; 99 - 100 - device_lock(dev); 101 - len = sysfs_emit(buf, "%s\n", vdev->driver_override); 102 - device_unlock(dev); 103 - 104 - return len; 105 - } 106 - static DEVICE_ATTR_RW(driver_override); 107 - 108 - static struct attribute *vdpa_dev_attrs[] = { 109 - &dev_attr_driver_override.attr, 110 - NULL, 111 - }; 112 - 113 - static const struct attribute_group vdpa_dev_group = { 114 - .attrs = vdpa_dev_attrs, 115 - }; 116 - __ATTRIBUTE_GROUPS(vdpa_dev); 117 - 118 81 static const struct bus_type vdpa_bus = { 119 82 .name = "vdpa", 120 - .dev_groups = vdpa_dev_groups, 83 + .driver_override = true, 121 84 .match = vdpa_dev_match, 122 85 .probe = vdpa_dev_probe, 123 86 .remove = vdpa_dev_remove, ··· 95 132 ops->free(vdev); 96 133 97 134 ida_free(&vdpa_index_ida, vdev->index); 98 - kfree(vdev->driver_override); 99 135 kfree(vdev); 100 136 } 101 137
+1 -3
drivers/vfio/fsl-mc/vfio_fsl_mc.c
··· 424 424 425 425 if (action == BUS_NOTIFY_ADD_DEVICE && 426 426 vdev->mc_dev == mc_cont) { 427 - mc_dev->driver_override = kasprintf(GFP_KERNEL, "%s", 428 - vfio_fsl_mc_ops.name); 429 - if (!mc_dev->driver_override) 427 + if (device_set_driver_override(dev, vfio_fsl_mc_ops.name)) 430 428 dev_warn(dev, "VFIO_FSL_MC: Setting driver override for device in dprc %s failed\n", 431 429 dev_name(&mc_cont->dev)); 432 430 else
+2 -3
drivers/vfio/pci/vfio_pci_core.c
··· 1987 1987 pdev->is_virtfn && physfn == vdev->pdev) { 1988 1988 pci_info(vdev->pdev, "Captured SR-IOV VF %s driver_override\n", 1989 1989 pci_name(pdev)); 1990 - pdev->driver_override = kasprintf(GFP_KERNEL, "%s", 1991 - vdev->vdev.ops->name); 1992 - WARN_ON(!pdev->driver_override); 1990 + WARN_ON(device_set_driver_override(&pdev->dev, 1991 + vdev->vdev.ops->name)); 1993 1992 } else if (action == BUS_NOTIFY_BOUND_DRIVER && 1994 1993 pdev->is_virtfn && physfn == vdev->pdev) { 1995 1994 struct pci_driver *drv = pci_dev_driver(pdev);
+4 -2
drivers/xen/xen-pciback/pci_stub.c
··· 598 598 return err; 599 599 } 600 600 601 + static struct pci_driver xen_pcibk_pci_driver; 602 + 601 603 /* Called when 'bind'. This means we must _NOT_ call pci_reset_function or 602 604 * other functions that take the sysfs lock. */ 603 605 static int pcistub_probe(struct pci_dev *dev, const struct pci_device_id *id) ··· 611 609 612 610 match = pcistub_match(dev); 613 611 614 - if ((dev->driver_override && 615 - !strcmp(dev->driver_override, PCISTUB_DRIVER_NAME)) || 612 + if (device_match_driver_override(&dev->dev, 613 + &xen_pcibk_pci_driver.driver) > 0 || 616 614 match) { 617 615 618 616 if (dev->hdr_type != PCI_HEADER_TYPE_NORMAL
+5 -2
fs/debugfs/file.c
··· 1047 1047 1048 1048 return ret; 1049 1049 } 1050 - EXPORT_SYMBOL_GPL(debugfs_create_str); 1051 1050 1052 1051 static ssize_t debugfs_write_file_str(struct file *file, const char __user *user_buf, 1053 1052 size_t count, loff_t *ppos) ··· 1126 1127 * directory dentry if set. If this parameter is %NULL, then the 1127 1128 * file will be created in the root of the debugfs filesystem. 1128 1129 * @value: a pointer to the variable that the file should read to and write 1129 - * from. 1130 + * from. This pointer and the string it points to must not be %NULL. 1130 1131 * 1131 1132 * This function creates a file in debugfs with the given name that 1132 1133 * contains the value of the variable @value. If the @mode variable is so ··· 1135 1136 void debugfs_create_str(const char *name, umode_t mode, 1136 1137 struct dentry *parent, char **value) 1137 1138 { 1139 + if (WARN_ON(!value || !*value)) 1140 + return; 1141 + 1138 1142 debugfs_create_mode_unsafe(name, mode, parent, value, &fops_str, 1139 1143 &fops_str_ro, &fops_str_wo); 1140 1144 } 1145 + EXPORT_SYMBOL_GPL(debugfs_create_str); 1141 1146 1142 1147 static ssize_t read_file_blob(struct file *file, char __user *user_buf, 1143 1148 size_t count, loff_t *ppos)
+54 -4
fs/kernfs/dir.c
··· 498 498 /** 499 499 * kernfs_drain - drain kernfs_node 500 500 * @kn: kernfs_node to drain 501 + * @drop_supers: Set to true if this function is called with the 502 + * kernfs_supers_rwsem locked. 501 503 * 502 504 * Drain existing usages and nuke all existing mmaps of @kn. Multiple 503 505 * removers may invoke this function concurrently on @kn and all will 504 506 * return after draining is complete. 505 507 */ 506 - static void kernfs_drain(struct kernfs_node *kn) 508 + static void kernfs_drain(struct kernfs_node *kn, bool drop_supers) 507 509 __releases(&kernfs_root(kn)->kernfs_rwsem) 508 510 __acquires(&kernfs_root(kn)->kernfs_rwsem) 509 511 { ··· 525 523 return; 526 524 527 525 up_write(&root->kernfs_rwsem); 526 + if (drop_supers) 527 + up_read(&root->kernfs_supers_rwsem); 528 528 529 529 if (kernfs_lockdep(kn)) { 530 530 rwsem_acquire(&kn->dep_map, 0, 0, _RET_IP_); ··· 545 541 if (kernfs_should_drain_open_files(kn)) 546 542 kernfs_drain_open_files(kn); 547 543 544 + if (drop_supers) 545 + down_read(&root->kernfs_supers_rwsem); 548 546 down_write(&root->kernfs_rwsem); 549 547 } 550 548 ··· 1498 1492 kn->flags |= KERNFS_HIDDEN; 1499 1493 if (kernfs_active(kn)) 1500 1494 atomic_add(KN_DEACTIVATED_BIAS, &kn->active); 1501 - kernfs_drain(kn); 1495 + kernfs_drain(kn, false); 1502 1496 } 1503 1497 1504 1498 up_write(&root->kernfs_rwsem); 1499 + } 1500 + 1501 + /* 1502 + * This function enables VFS to send fsnotify events for deletions. 1503 + * There is a gap in this implementation for certain file removals due to their 1504 + * unique nature in kernfs. Directory removals that trigger file removals occur 1505 + * through vfs_rmdir, which shrinks the dcache and emits fsnotify events after 1506 + * the rmdir operation; there is no issue here. However, kernfs writes to 1507 + * particular files (e.g. cgroup.subtree_control) can also cause file removal, 1508 + * but vfs_write does not attempt to emit fsnotify events after the write 1509 + * operation, even if i_nlink counts are 0. As no use case for monitoring this 1510 + * category of file removals is known, they are left without having 1511 + * IN_DELETE or IN_DELETE_SELF events generated. 1512 + * Fanotify recursive monitoring also does not work for kernfs nodes that do not 1513 + * have inodes attached, as they are created on-demand in kernfs. 1514 + */ 1515 + static void kernfs_clear_inode_nlink(struct kernfs_node *kn) 1516 + { 1517 + struct kernfs_root *root = kernfs_root(kn); 1518 + struct kernfs_super_info *info; 1519 + 1520 + lockdep_assert_held_read(&root->kernfs_supers_rwsem); 1521 + 1522 + list_for_each_entry(info, &root->supers, node) { 1523 + struct inode *inode = ilookup(info->sb, kernfs_ino(kn)); 1524 + 1525 + if (inode) { 1526 + clear_nlink(inode); 1527 + iput(inode); 1528 + } 1529 + } 1505 1530 } 1506 1531 1507 1532 static void __kernfs_remove(struct kernfs_node *kn) ··· 1543 1506 if (!kn) 1544 1507 return; 1545 1508 1509 + lockdep_assert_held_read(&kernfs_root(kn)->kernfs_supers_rwsem); 1546 1510 lockdep_assert_held_write(&kernfs_root(kn)->kernfs_rwsem); 1547 1511 1548 1512 /* ··· 1556 1518 pr_debug("kernfs %s: removing\n", kernfs_rcu_name(kn)); 1557 1519 1558 1520 /* prevent new usage by marking all nodes removing and deactivating */ 1521 + down_write(&kernfs_root(kn)->kernfs_iattr_rwsem); 1559 1522 pos = NULL; 1560 1523 while ((pos = kernfs_next_descendant_post(pos, kn))) { 1561 1524 pos->flags |= KERNFS_REMOVING; 1562 1525 if (kernfs_active(pos)) 1563 1526 atomic_add(KN_DEACTIVATED_BIAS, &pos->active); 1564 1527 } 1528 + up_write(&kernfs_root(kn)->kernfs_iattr_rwsem); 1565 1529 1566 1530 /* deactivate and unlink the subtree node-by-node */ 1567 1531 do { ··· 1577 1537 */ 1578 1538 kernfs_get(pos); 1579 1539 1580 - kernfs_drain(pos); 1540 + kernfs_drain(pos, true); 1581 1541 parent = kernfs_parent(pos); 1582 1542 /* 1583 1543 * kernfs_unlink_sibling() succeeds once per node. Use it ··· 1587 1547 struct kernfs_iattrs *ps_iattr = 1588 1548 parent ? parent->iattr : NULL; 1589 1549 1590 - /* update timestamps on the parent */ 1591 1550 down_write(&kernfs_root(kn)->kernfs_iattr_rwsem); 1592 1551 1552 + kernfs_clear_inode_nlink(pos); 1553 + 1554 + /* update timestamps on the parent */ 1593 1555 if (ps_iattr) { 1594 1556 ktime_get_real_ts64(&ps_iattr->ia_ctime); 1595 1557 ps_iattr->ia_mtime = ps_iattr->ia_ctime; ··· 1620 1578 1621 1579 root = kernfs_root(kn); 1622 1580 1581 + down_read(&root->kernfs_supers_rwsem); 1623 1582 down_write(&root->kernfs_rwsem); 1624 1583 __kernfs_remove(kn); 1625 1584 up_write(&root->kernfs_rwsem); 1585 + up_read(&root->kernfs_supers_rwsem); 1626 1586 } 1627 1587 1628 1588 /** ··· 1715 1671 bool ret; 1716 1672 struct kernfs_root *root = kernfs_root(kn); 1717 1673 1674 + down_read(&root->kernfs_supers_rwsem); 1718 1675 down_write(&root->kernfs_rwsem); 1719 1676 kernfs_break_active_protection(kn); 1720 1677 ··· 1745 1700 break; 1746 1701 1747 1702 up_write(&root->kernfs_rwsem); 1703 + up_read(&root->kernfs_supers_rwsem); 1748 1704 schedule(); 1705 + down_read(&root->kernfs_supers_rwsem); 1749 1706 down_write(&root->kernfs_rwsem); 1750 1707 } 1751 1708 finish_wait(waitq, &wait); ··· 1762 1715 kernfs_unbreak_active_protection(kn); 1763 1716 1764 1717 up_write(&root->kernfs_rwsem); 1718 + up_read(&root->kernfs_supers_rwsem); 1765 1719 return ret; 1766 1720 } 1767 1721 ··· 1789 1741 } 1790 1742 1791 1743 root = kernfs_root(parent); 1744 + down_read(&root->kernfs_supers_rwsem); 1792 1745 down_write(&root->kernfs_rwsem); 1793 1746 1794 1747 kn = kernfs_find_ns(parent, name, ns); ··· 1800 1751 } 1801 1752 1802 1753 up_write(&root->kernfs_rwsem); 1754 + up_read(&root->kernfs_supers_rwsem); 1803 1755 1804 1756 if (kn) 1805 1757 return 0;
+1 -1
fs/kernfs/inode.c
··· 177 177 */ 178 178 set_inode_attr(inode, attrs); 179 179 180 - if (kernfs_type(kn) == KERNFS_DIR) 180 + if (kernfs_type(kn) == KERNFS_DIR && !(kn->flags & KERNFS_REMOVING)) 181 181 set_nlink(inode, kn->dir.subdirs + 2); 182 182 } 183 183
+5 -5
fs/sysfs/group.c
··· 217 217 EXPORT_SYMBOL_GPL(sysfs_create_group); 218 218 219 219 static int internal_create_groups(struct kobject *kobj, int update, 220 - const struct attribute_group **groups) 220 + const struct attribute_group *const *groups) 221 221 { 222 222 int error = 0; 223 223 int i; ··· 250 250 * Returns 0 on success or error code from sysfs_create_group on failure. 251 251 */ 252 252 int sysfs_create_groups(struct kobject *kobj, 253 - const struct attribute_group **groups) 253 + const struct attribute_group *const *groups) 254 254 { 255 255 return internal_create_groups(kobj, 0, groups); 256 256 } ··· 268 268 * Returns 0 on success or error code from sysfs_update_group on failure. 269 269 */ 270 270 int sysfs_update_groups(struct kobject *kobj, 271 - const struct attribute_group **groups) 271 + const struct attribute_group *const *groups) 272 272 { 273 273 return internal_create_groups(kobj, 1, groups); 274 274 } ··· 342 342 * If groups is not NULL, remove the specified groups from the kobject. 343 343 */ 344 344 void sysfs_remove_groups(struct kobject *kobj, 345 - const struct attribute_group **groups) 345 + const struct attribute_group *const *groups) 346 346 { 347 347 int i; 348 348 ··· 613 613 * Returns 0 on success or error code on failure. 614 614 */ 615 615 int sysfs_groups_change_owner(struct kobject *kobj, 616 - const struct attribute_group **groups, 616 + const struct attribute_group *const *groups, 617 617 kuid_t kuid, kgid_t kgid) 618 618 { 619 619 int error = 0, i;
+3 -2
include/linux/device.h
··· 965 965 } 966 966 967 967 DEFINE_GUARD(device, struct device *, device_lock(_T), device_unlock(_T)) 968 + DEFINE_GUARD_COND(device, _intr, device_lock_interruptible(_T), _RET == 0) 968 969 969 970 static inline void device_lock_assert(struct device *dev) 970 971 { ··· 1186 1185 void device_destroy(const struct class *cls, dev_t devt); 1187 1186 1188 1187 int __must_check device_add_groups(struct device *dev, 1189 - const struct attribute_group **groups); 1188 + const struct attribute_group *const *groups); 1190 1189 void device_remove_groups(struct device *dev, 1191 - const struct attribute_group **groups); 1190 + const struct attribute_group *const *groups); 1192 1191 1193 1192 static inline int __must_check device_add_group(struct device *dev, 1194 1193 const struct attribute_group *grp)
+2 -2
include/linux/device/class.h
··· 50 50 struct class { 51 51 const char *name; 52 52 53 - const struct attribute_group **class_groups; 54 - const struct attribute_group **dev_groups; 53 + const struct attribute_group *const *class_groups; 54 + const struct attribute_group *const *dev_groups; 55 55 56 56 int (*dev_uevent)(const struct device *dev, struct kobj_uevent_env *env); 57 57 char *(*devnode)(const struct device *dev, umode_t *mode);
-4
include/linux/fsl/mc.h
··· 178 178 * @regions: pointer to array of MMIO region entries 179 179 * @irqs: pointer to array of pointers to interrupts allocated to this device 180 180 * @resource: generic resource associated with this MC object device, if any. 181 - * @driver_override: driver name to force a match; do not set directly, 182 - * because core frees it; use driver_set_override() to 183 - * set or clear it. 184 181 * 185 182 * Generic device object for MC object devices that are "attached" to a 186 183 * MC bus. ··· 211 214 struct fsl_mc_device_irq **irqs; 212 215 struct fsl_mc_resource *resource; 213 216 struct device_link *consumer_link; 214 - const char *driver_override; 215 217 }; 216 218 217 219 #define to_fsl_mc_device(_dev) \
+33 -11
include/linux/fwnode.h
··· 15 15 #define _LINUX_FWNODE_H_ 16 16 17 17 #include <linux/bits.h> 18 + #include <linux/bitops.h> 18 19 #include <linux/err.h> 19 20 #include <linux/list.h> 20 21 #include <linux/types.h> ··· 43 42 * suppliers. Only enforce ordering with suppliers that have 44 43 * drivers. 45 44 */ 46 - #define FWNODE_FLAG_LINKS_ADDED BIT(0) 47 - #define FWNODE_FLAG_NOT_DEVICE BIT(1) 48 - #define FWNODE_FLAG_INITIALIZED BIT(2) 49 - #define FWNODE_FLAG_NEEDS_CHILD_BOUND_ON_ADD BIT(3) 50 - #define FWNODE_FLAG_BEST_EFFORT BIT(4) 51 - #define FWNODE_FLAG_VISITED BIT(5) 45 + #define FWNODE_FLAG_LINKS_ADDED 0 46 + #define FWNODE_FLAG_NOT_DEVICE 1 47 + #define FWNODE_FLAG_INITIALIZED 2 48 + #define FWNODE_FLAG_NEEDS_CHILD_BOUND_ON_ADD 3 49 + #define FWNODE_FLAG_BEST_EFFORT 4 50 + #define FWNODE_FLAG_VISITED 5 52 51 53 52 struct fwnode_handle { 54 53 struct fwnode_handle *secondary; ··· 58 57 struct device *dev; 59 58 struct list_head suppliers; 60 59 struct list_head consumers; 61 - u8 flags; 60 + unsigned long flags; 62 61 }; 63 62 64 63 /* ··· 213 212 INIT_LIST_HEAD(&fwnode->suppliers); 214 213 } 215 214 215 + static inline void fwnode_set_flag(struct fwnode_handle *fwnode, 216 + unsigned int bit) 217 + { 218 + set_bit(bit, &fwnode->flags); 219 + } 220 + 221 + static inline void fwnode_clear_flag(struct fwnode_handle *fwnode, 222 + unsigned int bit) 223 + { 224 + clear_bit(bit, &fwnode->flags); 225 + } 226 + 227 + static inline void fwnode_assign_flag(struct fwnode_handle *fwnode, 228 + unsigned int bit, bool value) 229 + { 230 + assign_bit(bit, &fwnode->flags, value); 231 + } 232 + 233 + static inline bool fwnode_test_flag(struct fwnode_handle *fwnode, 234 + unsigned int bit) 235 + { 236 + return test_bit(bit, &fwnode->flags); 237 + } 238 + 216 239 static inline void fwnode_dev_initialized(struct fwnode_handle *fwnode, 217 240 bool initialized) 218 241 { 219 242 if (IS_ERR_OR_NULL(fwnode)) 220 243 return; 221 244 222 - if (initialized) 223 - fwnode->flags |= FWNODE_FLAG_INITIALIZED; 224 - else 225 - fwnode->flags &= ~FWNODE_FLAG_INITIALIZED; 245 + fwnode_assign_flag(fwnode, FWNODE_FLAG_INITIALIZED, initialized); 226 246 } 227 247 228 248 int fwnode_link_add(struct fwnode_handle *con, struct fwnode_handle *sup,
+8
include/linux/ksysfs.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + #ifndef _KSYSFS_H_ 4 + #define _KSYSFS_H_ 5 + 6 + void ksysfs_init(void); 7 + 8 + #endif /* _KSYSFS_H_ */
+14
include/linux/of.h
··· 426 426 return of_machine_compatible_match(compats); 427 427 } 428 428 429 + int of_machine_read_compatible(const char **compatible, unsigned int index); 430 + int of_machine_read_model(const char **model); 431 + 429 432 extern int of_add_property(struct device_node *np, struct property *prop); 430 433 extern int of_remove_property(struct device_node *np, struct property *prop); 431 434 extern int of_update_property(struct device_node *np, struct property *newprop); ··· 852 849 static inline int of_machine_is_compatible(const char *compat) 853 850 { 854 851 return 0; 852 + } 853 + 854 + static inline int of_machine_read_compatible(const char **compatible, 855 + unsigned int index) 856 + { 857 + return -ENOSYS; 858 + } 859 + 860 + static inline int of_machine_read_model(const char **model) 861 + { 862 + return -ENOSYS; 855 863 } 856 864 857 865 static inline int of_add_property(struct device_node *np, struct property *prop)
-6
include/linux/pci.h
··· 575 575 u8 supported_speeds; /* Supported Link Speeds Vector */ 576 576 phys_addr_t rom; /* Physical address if not from BAR */ 577 577 size_t romlen; /* Length if not from BAR */ 578 - /* 579 - * Driver name to force a match. Do not set directly, because core 580 - * frees it. Use driver_set_override() to set or clear it. 581 - */ 582 - const char *driver_override; 583 - 584 578 unsigned long priv_flags; /* Private flags for the PCI driver */ 585 579 586 580 /* These methods index pci_reset_fn_methods[] */
+47 -11
include/linux/platform_device.h
··· 113 113 const char *name); 114 114 extern int platform_add_devices(struct platform_device **, int); 115 115 116 + /** 117 + * struct platform_device_info - set of parameters for creating a platform device 118 + * @parent: parent device for the new platform device. 119 + * @fwnode: firmware node associated with the device. 120 + * @of_node_reused: indicates that device tree node associated with the device 121 + * is shared with another device, typically its ancestor. Setting this to 122 + * %true prevents the device from being matched via the OF match table, 123 + * and stops the device core from automatically binding pinctrl 124 + * configuration to avoid disrupting the other device. 125 + * @name: name of the device. 126 + * @id: instance ID of the device. Use %PLATFORM_DEVID_NONE if there is only 127 + * one instance of the device, or %PLATFORM_DEVID_AUTO to let the 128 + * kernel automatically assign a unique instance ID. 129 + * @res: set of resources to attach to the device. 130 + * @num_res: number of entries in @res. 131 + * @data: device-specific data for this platform device. 132 + * @size_data: size of device-specific data. 133 + * @dma_mask: DMA mask for the device. 134 + * @swnode: a secondary software node to be attached to the device. The node 135 + * will be automatically registered and its lifetime tied to the platform 136 + * device if it is not registered yet. 137 + * @properties: a set of software properties for the device. If provided, 138 + * a managed software node will be automatically created and 139 + * assigned to the device. The properties array must be terminated 140 + * with a sentinel entry. Specifying both @properties and @swnode is not 141 + * allowed. 142 + * 143 + * This structure is used to hold information needed to create and register 144 + * a platform device using platform_device_register_full(). 
145 + * 146 + * platform_device_register_full() makes deep copies of @name, @res, @data and 147 + * @properties, so the caller does not need to keep them after registration. 148 + * If the registration is performed during initialization, these can be marked 149 + * as __initconst. 150 + */ 116 151 struct platform_device_info { 117 - struct device *parent; 118 - struct fwnode_handle *fwnode; 119 - bool of_node_reused; 152 + struct device *parent; 153 + struct fwnode_handle *fwnode; 154 + bool of_node_reused; 120 155 121 - const char *name; 122 - int id; 156 + const char *name; 157 + int id; 123 158 124 - const struct resource *res; 125 - unsigned int num_res; 159 + const struct resource *res; 160 + unsigned int num_res; 126 161 127 - const void *data; 128 - size_t size_data; 129 - u64 dma_mask; 162 + const void *data; 163 + size_t size_data; 164 + u64 dma_mask; 130 165 131 - const struct property_entry *properties; 166 + const struct software_node *swnode; 167 + const struct property_entry *properties; 132 168 }; 133 169 extern struct platform_device *platform_device_register_full( 134 170 const struct platform_device_info *pdevinfo);
+10
include/linux/sys_soc.h
··· 37 37 */ 38 38 struct device *soc_device_to_device(struct soc_device *soc); 39 39 40 + /** 41 + * soc_attr_read_machine - retrieve the machine model and store it in 42 + * the soc_device_attribute structure 43 + * @soc_dev_attr: SoC attribute structure to store the model in 44 + * 45 + * Returns: 46 + * 0 on success, negative error number on failure. 47 + */ 48 + int soc_attr_read_machine(struct soc_device_attribute *soc_dev_attr); 49 + 40 50 #ifdef CONFIG_SOC_BUS 41 51 const struct soc_device_attribute *soc_device_match( 42 52 const struct soc_device_attribute *matches);
+8 -8
include/linux/sysfs.h
··· 445 445 int __must_check sysfs_create_group(struct kobject *kobj, 446 446 const struct attribute_group *grp); 447 447 int __must_check sysfs_create_groups(struct kobject *kobj, 448 - const struct attribute_group **groups); 448 + const struct attribute_group *const *groups); 449 449 int __must_check sysfs_update_groups(struct kobject *kobj, 450 - const struct attribute_group **groups); 450 + const struct attribute_group *const *groups); 451 451 int sysfs_update_group(struct kobject *kobj, 452 452 const struct attribute_group *grp); 453 453 void sysfs_remove_group(struct kobject *kobj, 454 454 const struct attribute_group *grp); 455 455 void sysfs_remove_groups(struct kobject *kobj, 456 - const struct attribute_group **groups); 456 + const struct attribute_group *const *groups); 457 457 int sysfs_add_file_to_group(struct kobject *kobj, 458 458 const struct attribute *attr, const char *group); 459 459 void sysfs_remove_file_from_group(struct kobject *kobj, ··· 486 486 int sysfs_link_change_owner(struct kobject *kobj, struct kobject *targ, 487 487 const char *name, kuid_t kuid, kgid_t kgid); 488 488 int sysfs_groups_change_owner(struct kobject *kobj, 489 - const struct attribute_group **groups, 489 + const struct attribute_group *const *groups, 490 490 kuid_t kuid, kgid_t kgid); 491 491 int sysfs_group_change_owner(struct kobject *kobj, 492 492 const struct attribute_group *groups, kuid_t kuid, ··· 629 629 } 630 630 631 631 static inline int sysfs_create_groups(struct kobject *kobj, 632 - const struct attribute_group **groups) 632 + const struct attribute_group *const *groups) 633 633 { 634 634 return 0; 635 635 } 636 636 637 637 static inline int sysfs_update_groups(struct kobject *kobj, 638 - const struct attribute_group **groups) 638 + const struct attribute_group *const *groups) 639 639 { 640 640 return 0; 641 641 } ··· 652 652 } 653 653 654 654 static inline void sysfs_remove_groups(struct kobject *kobj, 655 - const struct attribute_group **groups) 655 + const struct attribute_group *const *groups) 656 656 { 657 657 } 658 658 ··· 733 733 } 734 734 735 735 static inline int sysfs_groups_change_owner(struct kobject *kobj, 736 - const struct attribute_group **groups, 736 + const struct attribute_group *const *groups, 737 737 kuid_t kuid, kgid_t kgid) 738 738 { 739 739 return 0;
-4
include/linux/vdpa.h
··· 72 72 * struct vdpa_device - representation of a vDPA device 73 73 * @dev: underlying device 74 74 * @vmap: the metadata passed to upper layer to be used for mapping 75 - * @driver_override: driver name to force a match; do not set directly, 76 - * because core frees it; use driver_set_override() to 77 - * set or clear it. 78 75 * @config: the configuration ops for this device. 79 76 * @map: the map ops for this device 80 77 * @cf_lock: Protects get and set access to configuration layout. ··· 87 90 struct vdpa_device { 88 91 struct device dev; 89 92 union virtio_map vmap; 90 - const char *driver_override; 91 93 const struct vdpa_config_ops *config; 92 94 const struct virtio_map_ops *map; 93 95 struct rw_semaphore cf_lock; /* Protects get/set config */
-4
include/linux/wmi.h
··· 18 18 * struct wmi_device - WMI device structure 19 19 * @dev: Device associated with this WMI device 20 20 * @setable: True for devices implementing the Set Control Method 21 - * @driver_override: Driver name to force a match; do not set directly, 22 - * because core frees it; use driver_set_override() to 23 - * set or clear it. 24 21 * 25 22 * This represents WMI devices discovered by the WMI driver core. 26 23 */ 27 24 struct wmi_device { 28 25 struct device dev; 29 26 bool setable; 30 - const char *driver_override; 31 27 }; 32 28 33 29 /**
+2
init/main.c
··· 36 36 #include <linux/kmod.h> 37 37 #include <linux/kprobes.h> 38 38 #include <linux/kmsan.h> 39 + #include <linux/ksysfs.h> 39 40 #include <linux/vmalloc.h> 40 41 #include <linux/kernel_stat.h> 41 42 #include <linux/start_kernel.h> ··· 1481 1480 static void __init do_basic_setup(void) 1482 1481 { 1483 1482 cpuset_init_smp(); 1483 + ksysfs_init(); 1484 1484 driver_init(); 1485 1485 init_irq_proc(); 1486 1486 do_ctors();
+4 -5
kernel/ksysfs.c
··· 8 8 9 9 #include <asm/byteorder.h> 10 10 #include <linux/kobject.h> 11 + #include <linux/ksysfs.h> 11 12 #include <linux/string.h> 12 13 #include <linux/sysfs.h> 13 14 #include <linux/export.h> ··· 214 213 .attrs = kernel_attrs, 215 214 }; 216 215 217 - static int __init ksysfs_init(void) 216 + void __init ksysfs_init(void) 218 217 { 219 218 int error; 220 219 ··· 235 234 goto group_exit; 236 235 } 237 236 238 - return 0; 237 + return; 239 238 240 239 group_exit: 241 240 sysfs_remove_group(kernel_kobj, &kernel_attr_group); 242 241 kset_exit: 243 242 kobject_put(kernel_kobj); 244 243 exit: 245 - return error; 244 + pr_err("failed to initialize the kernel kobject: %d\n", error); 246 245 } 247 - 248 - core_initcall(ksysfs_init);
+139 -46
rust/kernel/devres.rs
··· 23 23 rcu, 24 24 Arc, // 25 25 }, 26 - types::ForeignOwnable, 26 + types::{ 27 + ForeignOwnable, 28 + Opaque, // 29 + }, 27 30 }; 31 + 32 + /// Inner type that embeds a `struct devres_node` and the `Revocable<T>`. 33 + #[repr(C)] 34 + #[pin_data] 35 + struct Inner<T> { 36 + #[pin] 37 + node: Opaque<bindings::devres_node>, 38 + #[pin] 39 + data: Revocable<T>, 40 + } 28 41 29 42 /// This abstraction is meant to be used by subsystems to containerize [`Device`] bound resources to 30 43 /// manage their lifetime. ··· 124 111 /// ``` 125 112 pub struct Devres<T: Send> { 126 113 dev: ARef<Device>, 127 - /// Pointer to [`Self::devres_callback`]. 128 - /// 129 - /// Has to be stored, since Rust does not guarantee to always return the same address for a 130 - /// function. However, the C API uses the address as a key. 131 - callback: unsafe extern "C" fn(*mut c_void), 132 - data: Arc<Revocable<T>>, 114 + inner: Arc<Inner<T>>, 115 + } 116 + 117 + // Calling the FFI functions from the `base` module directly from the `Devres<T>` impl may result in 118 + // them being called directly from driver modules. This happens since the Rust compiler will use 119 + // monomorphisation, so it might happen that functions are instantiated within the calling driver 120 + // module. For now, work around this with `#[inline(never)]` helpers. 121 + // 122 + // TODO: Remove once a more generic solution has been implemented. For instance, we may be able to 123 + // leverage `bindgen` to take care of this depending on whether a symbol is (already) exported. 124 + mod base { 125 + use kernel::{ 126 + bindings, 127 + prelude::*, // 128 + }; 129 + 130 + #[inline(never)] 131 + #[allow(clippy::missing_safety_doc)] 132 + pub(super) unsafe fn devres_node_init( 133 + node: *mut bindings::devres_node, 134 + release: bindings::dr_node_release_t, 135 + free: bindings::dr_node_free_t, 136 + ) { 137 + // SAFETY: Safety requirements are the same as `bindings::devres_node_init`. 
138 + unsafe { bindings::devres_node_init(node, release, free) } 139 + } 140 + 141 + #[inline(never)] 142 + #[allow(clippy::missing_safety_doc)] 143 + pub(super) unsafe fn devres_set_node_dbginfo( 144 + node: *mut bindings::devres_node, 145 + name: *const c_char, 146 + size: usize, 147 + ) { 148 + // SAFETY: Safety requirements are the same as `bindings::devres_set_node_dbginfo`. 149 + unsafe { bindings::devres_set_node_dbginfo(node, name, size) } 150 + } 151 + 152 + #[inline(never)] 153 + #[allow(clippy::missing_safety_doc)] 154 + pub(super) unsafe fn devres_node_add( 155 + dev: *mut bindings::device, 156 + node: *mut bindings::devres_node, 157 + ) { 158 + // SAFETY: Safety requirements are the same as `bindings::devres_node_add`. 159 + unsafe { bindings::devres_node_add(dev, node) } 160 + } 161 + 162 + #[must_use] 163 + #[inline(never)] 164 + #[allow(clippy::missing_safety_doc)] 165 + pub(super) unsafe fn devres_node_remove( 166 + dev: *mut bindings::device, 167 + node: *mut bindings::devres_node, 168 + ) -> bool { 169 + // SAFETY: Safety requirements are the same as `bindings::devres_node_remove`. 170 + unsafe { bindings::devres_node_remove(dev, node) } 171 + } 133 172 } 134 173 135 174 impl<T: Send> Devres<T> { ··· 193 128 where 194 129 Error: From<E>, 195 130 { 196 - let callback = Self::devres_callback; 197 - let data = Arc::pin_init(Revocable::new(data), GFP_KERNEL)?; 198 - let devres_data = data.clone(); 131 + let inner = Arc::pin_init::<Error>( 132 + try_pin_init!(Inner { 133 + node <- Opaque::ffi_init(|node: *mut bindings::devres_node| { 134 + // SAFETY: `node` is a valid pointer to an uninitialized `struct devres_node`. 135 + unsafe { 136 + base::devres_node_init( 137 + node, 138 + Some(Self::devres_node_release), 139 + Some(Self::devres_node_free_node), 140 + ) 141 + }; 142 + 143 + // SAFETY: `node` is a valid pointer to an uninitialized `struct devres_node`. 
144 + unsafe { 145 + base::devres_set_node_dbginfo( 146 + node, 147 + // TODO: Use `core::any::type_name::<T>()` once it is a `const fn`, 148 + // such that we can convert the `&str` to a `&CStr` at compile-time. 149 + c"Devres<T>".as_char_ptr(), 150 + core::mem::size_of::<Revocable<T>>(), 151 + ) 152 + }; 153 + }), 154 + data <- Revocable::new(data), 155 + }), 156 + GFP_KERNEL, 157 + )?; 199 158 200 159 // SAFETY: 201 - // - `dev.as_raw()` is a pointer to a valid bound device. 202 - // - `data` is guaranteed to be a valid for the duration of the lifetime of `Self`. 203 - // - `devm_add_action()` is guaranteed not to call `callback` for the entire lifetime of 204 - // `dev`. 205 - to_result(unsafe { 206 - bindings::devm_add_action( 207 - dev.as_raw(), 208 - Some(callback), 209 - Arc::as_ptr(&data).cast_mut().cast(), 210 - ) 211 - })?; 160 + // - `dev` is a valid pointer to a bound `struct device`. 161 + // - `node` is a valid pointer to a `struct devres_node`. 162 + // - `devres_node_add()` is guaranteed not to call `devres_node_release()` for the entire 163 + // lifetime of `dev`. 164 + unsafe { base::devres_node_add(dev.as_raw(), inner.node.get()) }; 212 165 213 - // `devm_add_action()` was successful and has consumed the reference count. 214 - core::mem::forget(devres_data); 166 + // Take additional reference count for `devres_node_add()`. 167 + core::mem::forget(inner.clone()); 215 168 216 169 Ok(Self { 217 170 dev: dev.into(), 218 - callback, 219 - data, 171 + inner, 220 172 }) 221 173 } 222 174 223 175 fn data(&self) -> &Revocable<T> { 224 - &self.data 176 + &self.inner.data 225 177 } 226 178 227 179 #[allow(clippy::missing_safety_doc)] 228 - unsafe extern "C" fn devres_callback(ptr: *mut kernel::ffi::c_void) { 229 - // SAFETY: In `Self::new` we've passed a valid pointer of `Revocable<T>` to 230 - // `devm_add_action()`, hence `ptr` must be a valid pointer to `Revocable<T>`. 
231 - let data = unsafe { Arc::from_raw(ptr.cast::<Revocable<T>>()) }; 180 + unsafe extern "C" fn devres_node_release( 181 + _dev: *mut bindings::device, 182 + node: *mut bindings::devres_node, 183 + ) { 184 + let node = Opaque::cast_from(node); 232 185 233 - data.revoke(); 186 + // SAFETY: `node` is in the same allocation as its container. 187 + let inner = unsafe { kernel::container_of!(node, Inner<T>, node) }; 188 + 189 + // SAFETY: `inner` is a valid `Inner<T>` pointer. 190 + let inner = unsafe { &*inner }; 191 + 192 + inner.data.revoke(); 234 193 } 235 194 236 - fn remove_action(&self) -> bool { 195 + #[allow(clippy::missing_safety_doc)] 196 + unsafe extern "C" fn devres_node_free_node(node: *mut bindings::devres_node) { 197 + let node = Opaque::cast_from(node); 198 + 199 + // SAFETY: `node` is in the same allocation as its container. 200 + let inner = unsafe { kernel::container_of!(node, Inner<T>, node) }; 201 + 202 + // SAFETY: `inner` points to the entire `Inner<T>` allocation. 203 + drop(unsafe { Arc::from_raw(inner) }); 204 + } 205 + 206 + fn remove_node(&self) -> bool { 237 207 // SAFETY: 238 - // - `self.dev` is a valid `Device`, 239 - // - the `action` and `data` pointers are the exact same ones as given to 240 - // `devm_add_action()` previously, 241 - (unsafe { 242 - bindings::devm_remove_action_nowarn( 243 - self.dev.as_raw(), 244 - Some(self.callback), 245 - core::ptr::from_ref(self.data()).cast_mut().cast(), 246 - ) 247 - } == 0) 208 + // - `self.device().as_raw()` is a valid pointer to a bound `struct device`. 209 + // - `self.inner.node.get()` is a valid pointer to a `struct devres_node`. 210 + unsafe { base::devres_node_remove(self.device().as_raw(), self.inner.node.get()) } 248 211 } 249 212 250 213 /// Return a reference of the [`Device`] this [`Devres`] instance has been created with. 
··· 354 261 // SAFETY: When `drop` runs, it is guaranteed that nobody is accessing the revocable data 355 262 // anymore, hence it is safe not to wait for the grace period to finish. 356 263 if unsafe { self.data().revoke_nosync() } { 357 - // We revoked `self.data` before the devres action did, hence try to remove it. 358 - if self.remove_action() { 264 + // We revoked `self.data` before devres did, hence try to remove it. 265 + if self.remove_node() { 359 266 // SAFETY: In `Self::new` we have taken an additional reference count of `self.data` 360 - // for `devm_add_action()`. Since `remove_action()` was successful, we have to drop 267 + // for `devres_node_add()`. Since `remove_node()` was successful, we have to drop 361 268 // this additional reference count. 362 - drop(unsafe { Arc::from_raw(Arc::as_ptr(&self.data)) }); 269 + drop(unsafe { Arc::from_raw(Arc::as_ptr(&self.inner)) }); 363 270 } 364 271 } 365 272 }
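The reference-counting contract in this devres diff — `Devres::new` forgets a clone of the `Arc` to hand one reference count to the C side, and `devres_node_free_node` (or the `Drop` path after a successful `remove_node()`) reclaims it with `Arc::from_raw` — can be sketched outside the kernel with plain `std::sync::Arc`. All names below (`FakeDevres`, `register`, `release`) are illustrative stand-ins, not kernel API:

```rust
use std::sync::Arc;

// Stand-in for the C side: it only keeps the raw pointer handed over,
// the way `devres_node_add()` keeps the embedded `devres_node`.
struct FakeDevres(*const u32);

fn register(data: &Arc<u32>) -> FakeDevres {
    let ptr = Arc::as_ptr(data);
    // Hand one reference count to the "C side": clone, then forget the
    // clone so its count is never dropped by Rust.
    std::mem::forget(data.clone());
    FakeDevres(ptr)
}

fn release(devres: FakeDevres) -> u32 {
    // The free callback reclaims the forgotten count with `from_raw`;
    // dropping `arc` balances the clone forgotten in `register`.
    let arc = unsafe { Arc::from_raw(devres.0) };
    *arc
}

fn main() {
    let data = Arc::new(42u32);
    let devres = register(&data);
    assert_eq!(Arc::strong_count(&data), 2); // ours + the "C-owned" one
    assert_eq!(release(devres), 42);
    assert_eq!(Arc::strong_count(&data), 1); // back in balance
}
```

The invariant, as in the kernel code, is that exactly one of the two paths (the devres free callback, or `Drop` after a successful remove) runs `from_raw` for the forgotten count.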
+491 -291
rust/kernel/io.rs
··· 11 11 12 12 pub mod mem; 13 13 pub mod poll; 14 + pub mod register; 14 15 pub mod resource; 15 16 17 + pub use crate::register; 16 18 pub use resource::Resource; 19 + 20 + use register::LocatedRegister; 17 21 18 22 /// Physical address type. 19 23 /// ··· 141 137 #[repr(transparent)] 142 138 pub struct Mmio<const SIZE: usize = 0>(MmioRaw<SIZE>); 143 139 144 - /// Internal helper macros used to invoke C MMIO read functions. 145 - /// 146 - /// This macro is intended to be used by higher-level MMIO access macros (io_define_read) and 147 - /// provides a unified expansion for infallible vs. fallible read semantics. It emits a direct call 148 - /// into the corresponding C helper and performs the required cast to the Rust return type. 149 - /// 150 - /// # Parameters 151 - /// 152 - /// * `$c_fn` – The C function performing the MMIO read. 153 - /// * `$self` – The I/O backend object. 154 - /// * `$ty` – The type of the value to be read. 155 - /// * `$addr` – The MMIO address to read. 156 - /// 157 - /// This macro does not perform any validation; all invariants must be upheld by the higher-level 158 - /// abstraction invoking it. 159 - macro_rules! call_mmio_read { 160 - (infallible, $c_fn:ident, $self:ident, $type:ty, $addr:expr) => { 161 - // SAFETY: By the type invariant `addr` is a valid address for MMIO operations. 162 - unsafe { bindings::$c_fn($addr as *const c_void) as $type } 163 - }; 164 - 165 - (fallible, $c_fn:ident, $self:ident, $type:ty, $addr:expr) => {{ 166 - // SAFETY: By the type invariant `addr` is a valid address for MMIO operations. 167 - Ok(unsafe { bindings::$c_fn($addr as *const c_void) as $type }) 168 - }}; 169 - } 170 - 171 - /// Internal helper macros used to invoke C MMIO write functions. 172 - /// 173 - /// This macro is intended to be used by higher-level MMIO access macros (io_define_write) and 174 - /// provides a unified expansion for infallible vs. fallible write semantics. It emits a direct call 175 - /// into the corresponding C helper and performs the required cast to the Rust return type. 176 - /// 177 - /// # Parameters 178 - /// 179 - /// * `$c_fn` – The C function performing the MMIO write. 180 - /// * `$self` – The I/O backend object. 181 - /// * `$ty` – The type of the written value. 182 - /// * `$addr` – The MMIO address to write. 183 - /// * `$value` – The value to write. 184 - /// 185 - /// This macro does not perform any validation; all invariants must be upheld by the higher-level 186 - /// abstraction invoking it. 187 - macro_rules! call_mmio_write { 188 - (infallible, $c_fn:ident, $self:ident, $ty:ty, $addr:expr, $value:expr) => { 189 - // SAFETY: By the type invariant `addr` is a valid address for MMIO operations. 190 - unsafe { bindings::$c_fn($value, $addr as *mut c_void) } 191 - }; 192 - 193 - (fallible, $c_fn:ident, $self:ident, $ty:ty, $addr:expr, $value:expr) => {{ 194 - // SAFETY: By the type invariant `addr` is a valid address for MMIO operations. 195 - unsafe { bindings::$c_fn($value, $addr as *mut c_void) }; 196 - Ok(()) 197 - }}; 198 - } 199 - 200 - /// Generates an accessor method for reading from an I/O backend. 201 - /// 202 - /// This macro reduces boilerplate by automatically generating either compile-time bounds-checked 203 - /// (infallible) or runtime bounds-checked (fallible) read methods. It abstracts the address 204 - /// calculation and bounds checking, and delegates the actual I/O read operation to a specified 205 - /// helper macro, making it generic over different I/O backends. 206 - /// 207 - /// # Parameters 208 - /// 209 - /// * `infallible` / `fallible` - Determines the bounds-checking strategy. `infallible` relies on 210 - /// `IoKnownSize` for compile-time checks and returns the value directly. `fallible` performs 211 - /// runtime checks against `maxsize()` and returns a `Result<T>`. 
212 - /// * `$(#[$attr:meta])*` - Optional attributes to apply to the generated method (e.g., 213 - /// `#[cfg(CONFIG_64BIT)]` or inline directives). 214 - /// * `$vis:vis` - The visibility of the generated method (e.g., `pub`). 215 - /// * `$name:ident` / `$try_name:ident` - The name of the generated method (e.g., `read32`, 216 - /// `try_read8`). 217 - /// * `$call_macro:ident` - The backend-specific helper macro used to emit the actual I/O call 218 - /// (e.g., `call_mmio_read`). 219 - /// * `$c_fn:ident` - The backend-specific C function or identifier to be passed into the 220 - /// `$call_macro`. 221 - /// * `$type_name:ty` - The Rust type of the value being read (e.g., `u8`, `u32`). 222 - #[macro_export] 223 - macro_rules! io_define_read { 224 - (infallible, $(#[$attr:meta])* $vis:vis $name:ident, $call_macro:ident($c_fn:ident) -> 225 - $type_name:ty) => { 226 - /// Read IO data from a given offset known at compile time. 227 - /// 228 - /// Bound checks are performed on compile time, hence if the offset is not known at compile 229 - /// time, the build will fail. 230 - $(#[$attr])* 231 - // Always inline to optimize out error path of `io_addr_assert`. 232 - #[inline(always)] 233 - $vis fn $name(&self, offset: usize) -> $type_name { 234 - let addr = self.io_addr_assert::<$type_name>(offset); 235 - 236 - // SAFETY: By the type invariant `addr` is a valid address for IO operations. 237 - $call_macro!(infallible, $c_fn, self, $type_name, addr) 238 - } 239 - }; 240 - 241 - (fallible, $(#[$attr:meta])* $vis:vis $try_name:ident, $call_macro:ident($c_fn:ident) -> 242 - $type_name:ty) => { 243 - /// Read IO data from a given offset. 244 - /// 245 - /// Bound checks are performed on runtime, it fails if the offset (plus the type size) is 246 - /// out of bounds. 
247 - $(#[$attr])* 248 - $vis fn $try_name(&self, offset: usize) -> Result<$type_name> { 249 - let addr = self.io_addr::<$type_name>(offset)?; 250 - 251 - // SAFETY: By the type invariant `addr` is a valid address for IO operations. 252 - $call_macro!(fallible, $c_fn, self, $type_name, addr) 253 - } 254 - }; 255 - } 256 - pub use io_define_read; 257 - 258 - /// Generates an accessor method for writing to an I/O backend. 259 - /// 260 - /// This macro reduces boilerplate by automatically generating either compile-time bounds-checked 261 - /// (infallible) or runtime bounds-checked (fallible) write methods. It abstracts the address 262 - /// calculation and bounds checking, and delegates the actual I/O write operation to a specified 263 - /// helper macro, making it generic over different I/O backends. 264 - /// 265 - /// # Parameters 266 - /// 267 - /// * `infallible` / `fallible` - Determines the bounds-checking strategy. `infallible` relies on 268 - /// `IoKnownSize` for compile-time checks and returns `()`. `fallible` performs runtime checks 269 - /// against `maxsize()` and returns a `Result`. 270 - /// * `$(#[$attr:meta])*` - Optional attributes to apply to the generated method (e.g., 271 - /// `#[cfg(CONFIG_64BIT)]` or inline directives). 272 - /// * `$vis:vis` - The visibility of the generated method (e.g., `pub`). 273 - /// * `$name:ident` / `$try_name:ident` - The name of the generated method (e.g., `write32`, 274 - /// `try_write8`). 275 - /// * `$call_macro:ident` - The backend-specific helper macro used to emit the actual I/O call 276 - /// (e.g., `call_mmio_write`). 277 - /// * `$c_fn:ident` - The backend-specific C function or identifier to be passed into the 278 - /// `$call_macro`. 279 - /// * `$type_name:ty` - The Rust type of the value being written (e.g., `u8`, `u32`). Note the use 280 - /// of `<-` before the type to denote a write operation. 281 - #[macro_export] 282 - macro_rules! io_define_write { 283 - (infallible, $(#[$attr:meta])* $vis:vis $name:ident, $call_macro:ident($c_fn:ident) <- 284 - $type_name:ty) => { 285 - /// Write IO data from a given offset known at compile time. 286 - /// 287 - /// Bound checks are performed on compile time, hence if the offset is not known at compile 288 - /// time, the build will fail. 289 - $(#[$attr])* 290 - // Always inline to optimize out error path of `io_addr_assert`. 291 - #[inline(always)] 292 - $vis fn $name(&self, value: $type_name, offset: usize) { 293 - let addr = self.io_addr_assert::<$type_name>(offset); 294 - 295 - $call_macro!(infallible, $c_fn, self, $type_name, addr, value); 296 - } 297 - }; 298 - 299 - (fallible, $(#[$attr:meta])* $vis:vis $try_name:ident, $call_macro:ident($c_fn:ident) <- 300 - $type_name:ty) => { 301 - /// Write IO data from a given offset. 302 - /// 303 - /// Bound checks are performed on runtime, it fails if the offset (plus the type size) is 304 - /// out of bounds. 305 - $(#[$attr])* 306 - $vis fn $try_name(&self, value: $type_name, offset: usize) -> Result { 307 - let addr = self.io_addr::<$type_name>(offset)?; 308 - 309 - $call_macro!(fallible, $c_fn, self, $type_name, addr, value) 310 - } 311 - }; 312 - } 313 - pub use io_define_write; 314 - 315 140 /// Checks whether an access of type `U` at the given `offset` 316 141 /// is valid within this region. 317 142 #[inline] ··· 153 320 } 154 321 } 155 322 156 - /// Marker trait indicating that an I/O backend supports operations of a certain type. 323 + /// Trait indicating that an I/O backend supports operations of a certain type and providing an 324 + /// implementation for these operations. 157 325 /// 158 326 /// Different I/O backends can implement this trait to expose only the operations they support. 
159 327 /// 160 328 /// For example, a PCI configuration space may implement `IoCapable<u8>`, `IoCapable<u16>`, 161 329 /// and `IoCapable<u32>`, but not `IoCapable<u64>`, while an MMIO region on a 64-bit 162 330 /// system might implement all four. 163 - pub trait IoCapable<T> {} 331 + pub trait IoCapable<T> { 332 + /// Performs an I/O read of type `T` at `address` and returns the result. 333 + /// 334 + /// # Safety 335 + /// 336 + /// The range `[address..address + size_of::<T>()]` must be within the bounds of `Self`. 337 + unsafe fn io_read(&self, address: usize) -> T; 338 + 339 + /// Performs an I/O write of `value` at `address`. 340 + /// 341 + /// # Safety 342 + /// 343 + /// The range `[address..address + size_of::<T>()]` must be within the bounds of `Self`. 344 + unsafe fn io_write(&self, value: T, address: usize); 345 + } 346 + 347 + /// Describes a given I/O location: its offset, width, and type to convert the raw value from and 348 + /// into. 349 + /// 350 + /// This trait is the key abstraction allowing [`Io::read`], [`Io::write`], and [`Io::update`] (and 351 + /// their fallible [`try_read`](Io::try_read), [`try_write`](Io::try_write) and 352 + /// [`try_update`](Io::try_update) counterparts) to work uniformly with both raw [`usize`] offsets 353 + /// (for primitive types like [`u32`]) and typed ones (like those generated by the [`register!`] 354 + /// macro). 355 + /// 356 + /// An `IoLoc<T>` carries three pieces of information: 357 + /// 358 + /// - The offset to access (returned by [`IoLoc::offset`]), 359 + /// - The width of the access (determined by [`IoLoc::IoType`]), 360 + /// - The type `T` in which the raw data is returned or provided. 361 + /// 362 + /// `T` and `IoLoc::IoType` may differ: for instance, a typed register has `T` = the register type 363 + /// with its bitfields, and `IoType` = its backing primitive (e.g. `u32`). 
364 + pub trait IoLoc<T> { 365 + /// Size ([`u8`], [`u16`], etc) of the I/O performed on the returned [`offset`](IoLoc::offset). 366 + type IoType: Into<T> + From<T>; 367 + 368 + /// Consumes `self` and returns the offset of this location. 369 + fn offset(self) -> usize; 370 + } 371 + 372 + /// Implements [`IoLoc<$ty>`] for [`usize`], allowing [`usize`] to be used as a parameter of 373 + /// [`Io::read`] and [`Io::write`]. 374 + macro_rules! impl_usize_ioloc { 375 + ($($ty:ty),*) => { 376 + $( 377 + impl IoLoc<$ty> for usize { 378 + type IoType = $ty; 379 + 380 + #[inline(always)] 381 + fn offset(self) -> usize { 382 + self 383 + } 384 + } 385 + )* 386 + } 387 + } 388 + 389 + // Provide the ability to read any primitive type from a [`usize`]. 390 + impl_usize_ioloc!(u8, u16, u32, u64); 164 391 165 392 /// Types implementing this trait (e.g. MMIO BARs or PCI config regions) 166 393 /// can perform I/O operations on regions of memory. ··· 262 369 263 370 /// Fallible 8-bit read with runtime bounds check. 264 371 #[inline(always)] 265 - fn try_read8(&self, _offset: usize) -> Result<u8> 372 + fn try_read8(&self, offset: usize) -> Result<u8> 266 373 where 267 374 Self: IoCapable<u8>, 268 375 { 269 - build_error!("Backend does not support fallible 8-bit read") 376 + self.try_read(offset) 270 377 } 271 378 272 379 /// Fallible 16-bit read with runtime bounds check. 273 380 #[inline(always)] 274 - fn try_read16(&self, _offset: usize) -> Result<u16> 381 + fn try_read16(&self, offset: usize) -> Result<u16> 275 382 where 276 383 Self: IoCapable<u16>, 277 384 { 278 - build_error!("Backend does not support fallible 16-bit read") 385 + self.try_read(offset) 279 386 } 280 387 281 388 /// Fallible 32-bit read with runtime bounds check. 
282 389 #[inline(always)] 283 - fn try_read32(&self, _offset: usize) -> Result<u32> 390 + fn try_read32(&self, offset: usize) -> Result<u32> 284 391 where 285 392 Self: IoCapable<u32>, 286 393 { 287 - build_error!("Backend does not support fallible 32-bit read") 394 + self.try_read(offset) 288 395 } 289 396 290 397 /// Fallible 64-bit read with runtime bounds check. 291 398 #[inline(always)] 292 - fn try_read64(&self, _offset: usize) -> Result<u64> 399 + fn try_read64(&self, offset: usize) -> Result<u64> 293 400 where 294 401 Self: IoCapable<u64>, 295 402 { 296 - build_error!("Backend does not support fallible 64-bit read") 403 + self.try_read(offset) 297 404 } 298 405 299 406 /// Fallible 8-bit write with runtime bounds check. 300 407 #[inline(always)] 301 - fn try_write8(&self, _value: u8, _offset: usize) -> Result 408 + fn try_write8(&self, value: u8, offset: usize) -> Result 302 409 where 303 410 Self: IoCapable<u8>, 304 411 { 305 - build_error!("Backend does not support fallible 8-bit write") 412 + self.try_write(offset, value) 306 413 } 307 414 308 415 /// Fallible 16-bit write with runtime bounds check. 309 416 #[inline(always)] 310 - fn try_write16(&self, _value: u16, _offset: usize) -> Result 417 + fn try_write16(&self, value: u16, offset: usize) -> Result 311 418 where 312 419 Self: IoCapable<u16>, 313 420 { 314 - build_error!("Backend does not support fallible 16-bit write") 421 + self.try_write(offset, value) 315 422 } 316 423 317 424 /// Fallible 32-bit write with runtime bounds check. 318 425 #[inline(always)] 319 - fn try_write32(&self, _value: u32, _offset: usize) -> Result 426 + fn try_write32(&self, value: u32, offset: usize) -> Result 320 427 where 321 428 Self: IoCapable<u32>, 322 429 { 323 - build_error!("Backend does not support fallible 32-bit write") 430 + self.try_write(offset, value) 324 431 } 325 432 326 433 /// Fallible 64-bit write with runtime bounds check. 
327 434 #[inline(always)] 328 - fn try_write64(&self, _value: u64, _offset: usize) -> Result 435 + fn try_write64(&self, value: u64, offset: usize) -> Result 329 436 where 330 437 Self: IoCapable<u64>, 331 438 { 332 - build_error!("Backend does not support fallible 64-bit write") 439 + self.try_write(offset, value) 333 440 } 334 441 335 442 /// Infallible 8-bit read with compile-time bounds check. 336 443 #[inline(always)] 337 - fn read8(&self, _offset: usize) -> u8 444 + fn read8(&self, offset: usize) -> u8 338 445 where 339 446 Self: IoKnownSize + IoCapable<u8>, 340 447 { 341 - build_error!("Backend does not support infallible 8-bit read") 448 + self.read(offset) 342 449 } 343 450 344 451 /// Infallible 16-bit read with compile-time bounds check. 345 452 #[inline(always)] 346 - fn read16(&self, _offset: usize) -> u16 453 + fn read16(&self, offset: usize) -> u16 347 454 where 348 455 Self: IoKnownSize + IoCapable<u16>, 349 456 { 350 - build_error!("Backend does not support infallible 16-bit read") 457 + self.read(offset) 351 458 } 352 459 353 460 /// Infallible 32-bit read with compile-time bounds check. 354 461 #[inline(always)] 355 - fn read32(&self, _offset: usize) -> u32 462 + fn read32(&self, offset: usize) -> u32 356 463 where 357 464 Self: IoKnownSize + IoCapable<u32>, 358 465 { 359 - build_error!("Backend does not support infallible 32-bit read") 466 + self.read(offset) 360 467 } 361 468 362 469 /// Infallible 64-bit read with compile-time bounds check. 363 470 #[inline(always)] 364 - fn read64(&self, _offset: usize) -> u64 471 + fn read64(&self, offset: usize) -> u64 365 472 where 366 473 Self: IoKnownSize + IoCapable<u64>, 367 474 { 368 - build_error!("Backend does not support infallible 64-bit read") 475 + self.read(offset) 369 476 } 370 477 371 478 /// Infallible 8-bit write with compile-time bounds check. 
372 479 #[inline(always)] 373 - fn write8(&self, _value: u8, _offset: usize) 480 + fn write8(&self, value: u8, offset: usize) 374 481 where 375 482 Self: IoKnownSize + IoCapable<u8>, 376 483 { 377 - build_error!("Backend does not support infallible 8-bit write") 484 + self.write(offset, value) 378 485 } 379 486 380 487 /// Infallible 16-bit write with compile-time bounds check. 381 488 #[inline(always)] 382 - fn write16(&self, _value: u16, _offset: usize) 489 + fn write16(&self, value: u16, offset: usize) 383 490 where 384 491 Self: IoKnownSize + IoCapable<u16>, 385 492 { 386 - build_error!("Backend does not support infallible 16-bit write") 493 + self.write(offset, value) 387 494 } 388 495 389 496 /// Infallible 32-bit write with compile-time bounds check. 390 497 #[inline(always)] 391 - fn write32(&self, _value: u32, _offset: usize) 498 + fn write32(&self, value: u32, offset: usize) 392 499 where 393 500 Self: IoKnownSize + IoCapable<u32>, 394 501 { 395 - build_error!("Backend does not support infallible 32-bit write") 502 + self.write(offset, value) 396 503 } 397 504 398 505 /// Infallible 64-bit write with compile-time bounds check. 399 506 #[inline(always)] 400 - fn write64(&self, _value: u64, _offset: usize) 507 + fn write64(&self, value: u64, offset: usize) 401 508 where 402 509 Self: IoKnownSize + IoCapable<u64>, 403 510 { 404 - build_error!("Backend does not support infallible 64-bit write") 511 + self.write(offset, value) 512 + } 513 + 514 + /// Generic fallible read with runtime bounds check. 515 + /// 516 + /// # Examples 517 + /// 518 + /// Read a primitive type from an I/O address: 519 + /// 520 + /// ```no_run 521 + /// use kernel::io::{ 522 + /// Io, 523 + /// Mmio, 524 + /// }; 525 + /// 526 + /// fn do_reads(io: &Mmio) -> Result { 527 + /// // 32-bit read from address `0x10`. 528 + /// let v: u32 = io.try_read(0x10)?; 529 + /// 530 + /// // 8-bit read from address `0xfff`. 
531 + /// let v: u8 = io.try_read(0xfff)?; 532 + /// 533 + /// Ok(()) 534 + /// } 535 + /// ``` 536 + #[inline(always)] 537 + fn try_read<T, L>(&self, location: L) -> Result<T> 538 + where 539 + L: IoLoc<T>, 540 + Self: IoCapable<L::IoType>, 541 + { 542 + let address = self.io_addr::<L::IoType>(location.offset())?; 543 + 544 + // SAFETY: `address` has been validated by `io_addr`. 545 + Ok(unsafe { self.io_read(address) }.into()) 546 + } 547 + 548 + /// Generic fallible write with runtime bounds check. 549 + /// 550 + /// # Examples 551 + /// 552 + /// Write a primitive type to an I/O address: 553 + /// 554 + /// ```no_run 555 + /// use kernel::io::{ 556 + /// Io, 557 + /// Mmio, 558 + /// }; 559 + /// 560 + /// fn do_writes(io: &Mmio) -> Result { 561 + /// // 32-bit write of value `1` at address `0x10`. 562 + /// io.try_write(0x10, 1u32)?; 563 + /// 564 + /// // 8-bit write of value `0xff` at address `0xfff`. 565 + /// io.try_write(0xfff, 0xffu8)?; 566 + /// 567 + /// Ok(()) 568 + /// } 569 + /// ``` 570 + #[inline(always)] 571 + fn try_write<T, L>(&self, location: L, value: T) -> Result 572 + where 573 + L: IoLoc<T>, 574 + Self: IoCapable<L::IoType>, 575 + { 576 + let address = self.io_addr::<L::IoType>(location.offset())?; 577 + let io_value = value.into(); 578 + 579 + // SAFETY: `address` has been validated by `io_addr`. 580 + unsafe { self.io_write(io_value, address) } 581 + 582 + Ok(()) 583 + } 584 + 585 + /// Generic fallible write of a fully-located register value. 586 + /// 587 + /// # Examples 588 + /// 589 + /// Tuples carrying a location and a value can be used with this method: 590 + /// 591 + /// ```no_run 592 + /// use kernel::io::{ 593 + /// register, 594 + /// Io, 595 + /// Mmio, 596 + /// }; 597 + /// 598 + /// register! { 599 + /// VERSION(u32) @ 0x100 { 600 + /// 15:8 major; 601 + /// 7:0 minor; 602 + /// } 603 + /// } 604 + /// 605 + /// impl VERSION { 606 + /// fn new(major: u8, minor: u8) -> Self { 607 + /// VERSION::zeroed().with_major(major).with_minor(minor) 608 + /// } 609 + /// } 610 + /// 611 + /// fn do_write_reg(io: &Mmio) -> Result { 612 + /// 613 + /// io.try_write_reg(VERSION::new(1, 0)) 614 + /// } 615 + /// ``` 616 + #[inline(always)] 617 + fn try_write_reg<T, L, V>(&self, value: V) -> Result 618 + where 619 + L: IoLoc<T>, 620 + V: LocatedRegister<Location = L, Value = T>, 621 + Self: IoCapable<L::IoType>, 622 + { 623 + let (location, value) = value.into_io_op(); 624 + 625 + self.try_write(location, value) 626 + } 627 + 628 + /// Generic fallible update with runtime bounds check. 629 + /// 630 + /// Note: this does not perform any synchronization. The caller is responsible for ensuring 631 + /// exclusive access if required. 632 + /// 633 + /// # Examples 634 + /// 635 + /// Read the u32 value at address `0x10`, increment it, and store the updated value back: 636 + /// 637 + /// ```no_run 638 + /// use kernel::io::{ 639 + /// Io, 640 + /// Mmio, 641 + /// }; 642 + /// 643 + /// fn do_update(io: &Mmio<0x1000>) -> Result { 644 + /// io.try_update(0x10, |v: u32| { 645 + /// v + 1 646 + /// }) 647 + /// } 648 + /// ``` 649 + #[inline(always)] 650 + fn try_update<T, L, F>(&self, location: L, f: F) -> Result 651 + where 652 + L: IoLoc<T>, 653 + Self: IoCapable<L::IoType>, 654 + F: FnOnce(T) -> T, 655 + { 656 + let address = self.io_addr::<L::IoType>(location.offset())?; 657 + 658 + // SAFETY: `address` has been validated by `io_addr`. 659 + let value: T = unsafe { self.io_read(address) }.into(); 660 + let io_value = f(value).into(); 661 + 662 + // SAFETY: `address` has been validated by `io_addr`. 663 + unsafe { self.io_write(io_value, address) } 664 + 665 + Ok(()) 666 + } 667 + 668 + /// Generic infallible read with compile-time bounds check. 
669 + /// 670 + /// # Examples 671 + /// 672 + /// Read a primitive type from an I/O address: 673 + /// 674 + /// ```no_run 675 + /// use kernel::io::{ 676 + /// Io, 677 + /// Mmio, 678 + /// }; 679 + /// 680 + /// fn do_reads(io: &Mmio<0x1000>) { 681 + /// // 32-bit read from address `0x10`. 682 + /// let v: u32 = io.read(0x10); 683 + /// 684 + /// // 8-bit read from the top of the I/O space. 685 + /// let v: u8 = io.read(0xfff); 686 + /// } 687 + /// ``` 688 + #[inline(always)] 689 + fn read<T, L>(&self, location: L) -> T 690 + where 691 + L: IoLoc<T>, 692 + Self: IoKnownSize + IoCapable<L::IoType>, 693 + { 694 + let address = self.io_addr_assert::<L::IoType>(location.offset()); 695 + 696 + // SAFETY: `address` has been validated by `io_addr_assert`. 697 + unsafe { self.io_read(address) }.into() 698 + } 699 + 700 + /// Generic infallible write with compile-time bounds check. 701 + /// 702 + /// # Examples 703 + /// 704 + /// Write a primitive type to an I/O address: 705 + /// 706 + /// ```no_run 707 + /// use kernel::io::{ 708 + /// Io, 709 + /// Mmio, 710 + /// }; 711 + /// 712 + /// fn do_writes(io: &Mmio<0x1000>) { 713 + /// // 32-bit write of value `1` at address `0x10`. 714 + /// io.write(0x10, 1u32); 715 + /// 716 + /// // 8-bit write of value `0xff` at the top of the I/O space. 717 + /// io.write(0xfff, 0xffu8); 718 + /// } 719 + /// ``` 720 + #[inline(always)] 721 + fn write<T, L>(&self, location: L, value: T) 722 + where 723 + L: IoLoc<T>, 724 + Self: IoKnownSize + IoCapable<L::IoType>, 725 + { 726 + let address = self.io_addr_assert::<L::IoType>(location.offset()); 727 + let io_value = value.into(); 728 + 729 + // SAFETY: `address` has been validated by `io_addr_assert`. 730 + unsafe { self.io_write(io_value, address) } 731 + } 732 + 733 + /// Generic infallible write of a fully-located register value. 
734 + /// 735 + /// # Examples 736 + /// 737 + /// Tuples carrying a location and a value can be used with this method: 738 + /// 739 + /// ```no_run 740 + /// use kernel::io::{ 741 + /// register, 742 + /// Io, 743 + /// Mmio, 744 + /// }; 745 + /// 746 + /// register! { 747 + /// VERSION(u32) @ 0x100 { 748 + /// 15:8 major; 749 + /// 7:0 minor; 750 + /// } 751 + /// } 752 + /// 753 + /// impl VERSION { 754 + /// fn new(major: u8, minor: u8) -> Self { 755 + /// VERSION::zeroed().with_major(major).with_minor(minor) 756 + /// } 757 + /// } 758 + /// 759 + /// fn do_write_reg(io: &Mmio<0x1000>) { 760 + /// io.write_reg(VERSION::new(1, 0)); 761 + /// } 762 + /// ``` 763 + #[inline(always)] 764 + fn write_reg<T, L, V>(&self, value: V) 765 + where 766 + L: IoLoc<T>, 767 + V: LocatedRegister<Location = L, Value = T>, 768 + Self: IoKnownSize + IoCapable<L::IoType>, 769 + { 770 + let (location, value) = value.into_io_op(); 771 + 772 + self.write(location, value) 773 + } 774 + 775 + /// Generic infallible update with compile-time bounds check. 776 + /// 777 + /// Note: this does not perform any synchronization. The caller is responsible for ensuring 778 + /// exclusive access if required. 779 + /// 780 + /// # Examples 781 + /// 782 + /// Read the u32 value at address `0x10`, increment it, and store the updated value back: 783 + /// 784 + /// ```no_run 785 + /// use kernel::io::{ 786 + /// Io, 787 + /// Mmio, 788 + /// }; 789 + /// 790 + /// fn do_update(io: &Mmio<0x1000>) { 791 + /// io.update(0x10, |v: u32| { 792 + /// v + 1 793 + /// }) 794 + /// } 795 + /// ``` 796 + #[inline(always)] 797 + fn update<T, L, F>(&self, location: L, f: F) 798 + where 799 + L: IoLoc<T>, 800 + Self: IoKnownSize + IoCapable<L::IoType> + Sized, 801 + F: FnOnce(T) -> T, 802 + { 803 + let address = self.io_addr_assert::<L::IoType>(location.offset()); 804 + 805 + // SAFETY: `address` has been validated by `io_addr_assert`. 
806 + let value: T = unsafe { self.io_read(address) }.into(); 807 + let io_value = f(value).into(); 808 + 809 + // SAFETY: `address` has been validated by `io_addr_assert`. 810 + unsafe { self.io_write(io_value, address) } 405 811 } 406 812 } 407 813 ··· 726 534 } 727 535 } 728 536 729 - // MMIO regions support 8, 16, and 32-bit accesses. 730 - impl<const SIZE: usize> IoCapable<u8> for Mmio<SIZE> {} 731 - impl<const SIZE: usize> IoCapable<u16> for Mmio<SIZE> {} 732 - impl<const SIZE: usize> IoCapable<u32> for Mmio<SIZE> {} 537 + /// Implements [`IoCapable`] on `$mmio` for `$ty` using `$read_fn` and `$write_fn`. 538 + macro_rules! impl_mmio_io_capable { 539 + ($mmio:ident, $(#[$attr:meta])* $ty:ty, $read_fn:ident, $write_fn:ident) => { 540 + $(#[$attr])* 541 + impl<const SIZE: usize> IoCapable<$ty> for $mmio<SIZE> { 542 + unsafe fn io_read(&self, address: usize) -> $ty { 543 + // SAFETY: By the trait invariant `address` is a valid address for MMIO operations. 544 + unsafe { bindings::$read_fn(address as *const c_void) } 545 + } 733 546 547 + unsafe fn io_write(&self, value: $ty, address: usize) { 548 + // SAFETY: By the trait invariant `address` is a valid address for MMIO operations. 549 + unsafe { bindings::$write_fn(value, address as *mut c_void) } 550 + } 551 + } 552 + }; 553 + } 554 + 555 + // MMIO regions support 8, 16, and 32-bit accesses. 556 + impl_mmio_io_capable!(Mmio, u8, readb, writeb); 557 + impl_mmio_io_capable!(Mmio, u16, readw, writew); 558 + impl_mmio_io_capable!(Mmio, u32, readl, writel); 734 559 // MMIO regions on 64-bit systems also support 64-bit accesses. 735 - #[cfg(CONFIG_64BIT)] 736 - impl<const SIZE: usize> IoCapable<u64> for Mmio<SIZE> {} 560 + impl_mmio_io_capable!( 561 + Mmio, 562 + #[cfg(CONFIG_64BIT)] 563 + u64, 564 + readq, 565 + writeq 566 + ); 737 567 738 568 impl<const SIZE: usize> Io for Mmio<SIZE> { 739 569 /// Returns the base address of this mapping. 
··· 769 555 fn maxsize(&self) -> usize { 770 556 self.0.maxsize() 771 557 } 772 - 773 - io_define_read!(fallible, try_read8, call_mmio_read(readb) -> u8); 774 - io_define_read!(fallible, try_read16, call_mmio_read(readw) -> u16); 775 - io_define_read!(fallible, try_read32, call_mmio_read(readl) -> u32); 776 - io_define_read!( 777 - fallible, 778 - #[cfg(CONFIG_64BIT)] 779 - try_read64, 780 - call_mmio_read(readq) -> u64 781 - ); 782 - 783 - io_define_write!(fallible, try_write8, call_mmio_write(writeb) <- u8); 784 - io_define_write!(fallible, try_write16, call_mmio_write(writew) <- u16); 785 - io_define_write!(fallible, try_write32, call_mmio_write(writel) <- u32); 786 - io_define_write!( 787 - fallible, 788 - #[cfg(CONFIG_64BIT)] 789 - try_write64, 790 - call_mmio_write(writeq) <- u64 791 - ); 792 - 793 - io_define_read!(infallible, read8, call_mmio_read(readb) -> u8); 794 - io_define_read!(infallible, read16, call_mmio_read(readw) -> u16); 795 - io_define_read!(infallible, read32, call_mmio_read(readl) -> u32); 796 - io_define_read!( 797 - infallible, 798 - #[cfg(CONFIG_64BIT)] 799 - read64, 800 - call_mmio_read(readq) -> u64 801 - ); 802 - 803 - io_define_write!(infallible, write8, call_mmio_write(writeb) <- u8); 804 - io_define_write!(infallible, write16, call_mmio_write(writew) <- u16); 805 - io_define_write!(infallible, write32, call_mmio_write(writel) <- u32); 806 - io_define_write!( 807 - infallible, 808 - #[cfg(CONFIG_64BIT)] 809 - write64, 810 - call_mmio_write(writeq) <- u64 811 - ); 812 558 } 813 559 814 560 impl<const SIZE: usize> IoKnownSize for Mmio<SIZE> { ··· 786 612 // SAFETY: `Mmio` is a transparent wrapper around `MmioRaw`. 
787 613 unsafe { &*core::ptr::from_ref(raw).cast() } 788 614 } 789 - 790 - io_define_read!(infallible, pub read8_relaxed, call_mmio_read(readb_relaxed) -> u8); 791 - io_define_read!(infallible, pub read16_relaxed, call_mmio_read(readw_relaxed) -> u16); 792 - io_define_read!(infallible, pub read32_relaxed, call_mmio_read(readl_relaxed) -> u32); 793 - io_define_read!( 794 - infallible, 795 - #[cfg(CONFIG_64BIT)] 796 - pub read64_relaxed, 797 - call_mmio_read(readq_relaxed) -> u64 798 - ); 799 - 800 - io_define_read!(fallible, pub try_read8_relaxed, call_mmio_read(readb_relaxed) -> u8); 801 - io_define_read!(fallible, pub try_read16_relaxed, call_mmio_read(readw_relaxed) -> u16); 802 - io_define_read!(fallible, pub try_read32_relaxed, call_mmio_read(readl_relaxed) -> u32); 803 - io_define_read!( 804 - fallible, 805 - #[cfg(CONFIG_64BIT)] 806 - pub try_read64_relaxed, 807 - call_mmio_read(readq_relaxed) -> u64 808 - ); 809 - 810 - io_define_write!(infallible, pub write8_relaxed, call_mmio_write(writeb_relaxed) <- u8); 811 - io_define_write!(infallible, pub write16_relaxed, call_mmio_write(writew_relaxed) <- u16); 812 - io_define_write!(infallible, pub write32_relaxed, call_mmio_write(writel_relaxed) <- u32); 813 - io_define_write!( 814 - infallible, 815 - #[cfg(CONFIG_64BIT)] 816 - pub write64_relaxed, 817 - call_mmio_write(writeq_relaxed) <- u64 818 - ); 819 - 820 - io_define_write!(fallible, pub try_write8_relaxed, call_mmio_write(writeb_relaxed) <- u8); 821 - io_define_write!(fallible, pub try_write16_relaxed, call_mmio_write(writew_relaxed) <- u16); 822 - io_define_write!(fallible, pub try_write32_relaxed, call_mmio_write(writel_relaxed) <- u32); 823 - io_define_write!( 824 - fallible, 825 - #[cfg(CONFIG_64BIT)] 826 - pub try_write64_relaxed, 827 - call_mmio_write(writeq_relaxed) <- u64 828 - ); 829 615 } 616 + 617 + /// [`Mmio`] wrapper using relaxed accessors. 
618 + /// 619 + /// This type provides an implementation of [`Io`] that uses relaxed MMIO I/O operations instead of 620 + /// the regular ones. 621 + /// 622 + /// See [`Mmio::relaxed`] for a usage example. 623 + #[repr(transparent)] 624 + pub struct RelaxedMmio<const SIZE: usize = 0>(Mmio<SIZE>); 625 + 626 + impl<const SIZE: usize> Io for RelaxedMmio<SIZE> { 627 + #[inline] 628 + fn addr(&self) -> usize { 629 + self.0.addr() 630 + } 631 + 632 + #[inline] 633 + fn maxsize(&self) -> usize { 634 + self.0.maxsize() 635 + } 636 + } 637 + 638 + impl<const SIZE: usize> IoKnownSize for RelaxedMmio<SIZE> { 639 + const MIN_SIZE: usize = SIZE; 640 + } 641 + 642 + impl<const SIZE: usize> Mmio<SIZE> { 643 + /// Returns a [`RelaxedMmio`] reference that performs relaxed I/O operations. 644 + /// 645 + /// Relaxed accessors do not provide ordering guarantees with respect to DMA or memory accesses 646 + /// and can be used when such ordering is not required. 647 + /// 648 + /// # Examples 649 + /// 650 + /// ```no_run 651 + /// use kernel::io::{ 652 + /// Io, 653 + /// Mmio, 654 + /// RelaxedMmio, 655 + /// }; 656 + /// 657 + /// fn do_io(io: &Mmio<0x100>) { 658 + /// // The access is performed using `readl_relaxed` instead of `readl`. 659 + /// let v = io.relaxed().read32(0x10); 660 + /// } 661 + /// 662 + /// ``` 663 + /// pub fn relaxed(&self) -> &RelaxedMmio<SIZE> { 664 + // SAFETY: `RelaxedMmio` is `#[repr(transparent)]` over `Mmio`, so `Mmio<SIZE>` and 665 + // `RelaxedMmio<SIZE>` have identical layout. 666 + unsafe { core::mem::transmute(self) } 667 + } 668 + } 669 + 670 + // MMIO regions support 8, 16, and 32-bit accesses. 671 + impl_mmio_io_capable!(RelaxedMmio, u8, readb_relaxed, writeb_relaxed); 672 + impl_mmio_io_capable!(RelaxedMmio, u16, readw_relaxed, writew_relaxed); 673 + impl_mmio_io_capable!(RelaxedMmio, u32, readl_relaxed, writel_relaxed); 674 + // MMIO regions on 64-bit systems also support 64-bit accesses. 
675 + impl_mmio_io_capable!( 676 + RelaxedMmio, 677 + #[cfg(CONFIG_64BIT)] 678 + u64, 679 + readq_relaxed, 680 + writeq_relaxed 681 + );
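The generic accessors introduced above dispatch on a typed location: the location type fixes both the offset and the access width, and an `IoCapable<T>` bound restricts which widths a region supports. A rough userspace model of that shape (names and signatures here are simplified assumptions, not the kernel code):

```rust
// Minimal userspace model of width-typed I/O locations. `IoLoc` ties an
// offset to an access width, so `read` picks the right-sized access at
// compile time. This mirrors the patch's design only loosely.
trait IoLoc {
    type IoType;
    fn offset(&self) -> usize;
}

/// A `u32` location at a fixed byte offset.
struct Loc32(usize);

impl IoLoc for Loc32 {
    type IoType = u32;
    fn offset(&self) -> usize {
        self.0
    }
}

/// Fake I/O region backed by plain memory instead of MMIO.
struct FakeIo(Vec<u8>);

impl FakeIo {
    // Stand-in for the kernel's bounds assertion plus `readl`.
    fn read<L: IoLoc<IoType = u32>>(&self, location: L) -> u32 {
        let off = location.offset();
        u32::from_le_bytes(self.0[off..off + 4].try_into().unwrap())
    }
}

fn main() {
    let io = FakeIo(vec![0xbb, 0xaa, 0xee, 0xff, 0x00, 0x00, 0x00, 0x00]);
    // A 32-bit little-endian read at offset 0.
    assert_eq!(io.read(Loc32(0)), 0xffee_aabb);
    println!("ok");
}
```

Because the width lives in the location's associated type, adding a `Loc8` or `Loc16` changes which accessor is selected without any runtime branching, which is the point of routing everything through one generic `read`.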
+6 -4
rust/kernel/io/mem.rs
··· 54 54 /// use kernel::{ 55 55 /// bindings, 56 56 /// device::Core, 57 + /// io::Io, 57 58 /// of, 58 59 /// platform, 59 60 /// }; ··· 79 78 /// let io = iomem.access(pdev.as_ref())?; 80 79 /// 81 80 /// // Read and write a 32-bit value at `offset`. 82 - /// let data = io.read32_relaxed(offset); 81 + /// let data = io.read32(offset); 83 82 /// 84 - /// io.write32_relaxed(data, offset); 83 + /// io.write32(data, offset); 85 84 /// 86 85 /// # Ok(SampleDriver) 87 86 /// } ··· 118 117 /// use kernel::{ 119 118 /// bindings, 120 119 /// device::Core, 120 + /// io::Io, 121 121 /// of, 122 122 /// platform, 123 123 /// }; ··· 143 141 /// 144 142 /// let io = iomem.access(pdev.as_ref())?; 145 143 /// 146 - /// let data = io.try_read32_relaxed(offset)?; 144 + /// let data = io.try_read32(offset)?; 147 145 /// 148 - /// io.try_write32_relaxed(data, offset)?; 146 + /// io.try_write32(data, offset)?; 149 147 /// 150 148 /// # Ok(SampleDriver) 151 149 /// }
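The `read32`/`try_read32` split used in the examples above reflects two checking strategies: infallible accessors validate the offset against a size known at compile time, fallible ones check the mapping's real size at runtime. A userspace sketch of the split, using a const generic as a stand-in for the kernel's `IoKnownSize::MIN_SIZE` (simplified assumption, not the real API):

```rust
/// Fake I/O region whose minimal size is part of the type, standing in
/// for the kernel's compile-time minimum size. Simplified sketch.
struct FakeIo<const MIN_SIZE: usize> {
    data: Vec<u8>,
}

impl<const MIN_SIZE: usize> FakeIo<MIN_SIZE> {
    /// Infallible read: the offset is validated against the compile-time
    /// minimal size (the kernel turns this into a build-time assertion).
    fn read32(&self, offset: usize) -> u32 {
        assert!(offset + 4 <= MIN_SIZE);
        u32::from_le_bytes(self.data[offset..offset + 4].try_into().unwrap())
    }

    /// Fallible read: the offset is validated at runtime against the
    /// actual size of the mapping, which may exceed `MIN_SIZE`.
    fn try_read32(&self, offset: usize) -> Result<u32, &'static str> {
        if offset + 4 > self.data.len() {
            return Err("out of bounds");
        }
        Ok(u32::from_le_bytes(
            self.data[offset..offset + 4].try_into().unwrap(),
        ))
    }
}

fn main() {
    let io: FakeIo<4> = FakeIo { data: vec![1, 0, 0, 0, 2, 0, 0, 0] };
    assert_eq!(io.read32(0), 1); // within MIN_SIZE
    assert_eq!(io.try_read32(4), Ok(2)); // beyond MIN_SIZE, still mapped
    assert!(io.try_read32(8).is_err()); // beyond the mapping
    println!("ok");
}
```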
+1260
rust/kernel/io/register.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! Macro to define register layout and accessors. 4 + //! 5 + //! The [`register!`](kernel::io::register!) macro provides an intuitive and readable syntax for 6 + //! defining a dedicated type for each register and accessing it using [`Io`](super::Io). Each such 7 + //! type comes with its own field accessors that can return an error if a field's value is invalid. 8 + //! 9 + //! Note: most of the items in this module are public so they can be referenced by the macro, but 10 + //! most are not to be used directly by users. Outside of the `register!` macro itself, the only 11 + //! items you might want to import from this module are [`WithBase`] and [`Array`]. 12 + //! 13 + //! # Simple example 14 + //! 15 + //! ```no_run 16 + //! use kernel::io::register; 17 + //! 18 + //! register! { 19 + //! /// Basic information about the chip. 20 + //! pub BOOT_0(u32) @ 0x00000100 { 21 + //! /// Vendor ID. 22 + //! 15:8 vendor_id; 23 + //! /// Major revision of the chip. 24 + //! 7:4 major_revision; 25 + //! /// Minor revision of the chip. 26 + //! 3:0 minor_revision; 27 + //! } 28 + //! } 29 + //! ``` 30 + //! 31 + //! This defines a 32-bit `BOOT_0` type which can be read from or written to offset `0x100` of an 32 + //! `Io` region, with the described bitfields. For instance, `minor_revision` consists of the 4 33 + //! least significant bits of the type. 34 + //! 35 + //! Fields are instances of [`Bounded`](kernel::num::Bounded) and can be read by calling their 36 + //! getter method, which is named after them. They also have setter methods prefixed with `with_` 37 + //! for runtime values and `with_const_` for constant values. All setters return the updated 38 + //! register value. 39 + //! 40 + //! Fields can also be transparently converted from/to an arbitrary type by using the `=>` and 41 + //! `?=>` syntaxes. 42 + //! 43 + //! 
If present, doc comments above register or field definitions are added to the relevant item 44 + //! they document (the register type itself, or the field's setter and getter methods). 45 + //! 46 + //! Note that multiple registers can be defined in a single `register!` invocation. This can be 47 + //! useful to group related registers together. 48 + //! 49 + //! Here is how the register defined above can be used in code: 50 + //! 51 + //! 52 + //! ```no_run 53 + //! use kernel::{ 54 + //! io::{ 55 + //! register, 56 + //! Io, 57 + //! IoLoc, 58 + //! }, 59 + //! num::Bounded, 60 + //! }; 61 + //! # use kernel::io::Mmio; 62 + //! # register! { 63 + //! # pub BOOT_0(u32) @ 0x00000100 { 64 + //! # 15:8 vendor_id; 65 + //! # 7:4 major_revision; 66 + //! # 3:0 minor_revision; 67 + //! # } 68 + //! # } 69 + //! # fn test(io: &Mmio<0x1000>) { 70 + //! # fn obtain_vendor_id() -> u8 { 0xff } 71 + //! 72 + //! // Read from the register's defined offset (0x100). 73 + //! let boot0 = io.read(BOOT_0); 74 + //! pr_info!("chip revision: {}.{}", boot0.major_revision().get(), boot0.minor_revision().get()); 75 + //! 76 + //! // Update some fields and write the new value back. 77 + //! let new_boot0 = boot0 78 + //! // Constant values. 79 + //! .with_const_major_revision::<3>() 80 + //! .with_const_minor_revision::<10>() 81 + //! // Runtime value. 82 + //! .with_vendor_id(obtain_vendor_id()); 83 + //! io.write_reg(new_boot0); 84 + //! 85 + //! // Or, build a new value from zero and write it: 86 + //! io.write_reg(BOOT_0::zeroed() 87 + //! .with_const_major_revision::<3>() 88 + //! .with_const_minor_revision::<10>() 89 + //! .with_vendor_id(obtain_vendor_id()) 90 + //! ); 91 + //! 92 + //! // Or, read and update the register in a single step. 93 + //! io.update(BOOT_0, |r| r 94 + //! .with_const_major_revision::<3>() 95 + //! .with_const_minor_revision::<10>() 96 + //! .with_vendor_id(obtain_vendor_id()) 97 + //! ); 98 + //! 99 + //! 
// Constant values can also be built using the const setters. 100 + //! const V: BOOT_0 = pin_init::zeroed::<BOOT_0>() 101 + //! .with_const_major_revision::<3>() 102 + //! .with_const_minor_revision::<10>(); 103 + //! # } 104 + //! ``` 105 + //! 106 + //! For more extensive documentation about how to define registers, see the 107 + //! [`register!`](kernel::io::register!) macro. 108 + 109 + use core::marker::PhantomData; 110 + 111 + use crate::io::IoLoc; 112 + 113 + use kernel::build_assert; 114 + 115 + /// Trait implemented by all registers. 116 + pub trait Register: Sized { 117 + /// Backing primitive type of the register. 118 + type Storage: Into<Self> + From<Self>; 119 + 120 + /// Start offset of the register. 121 + /// 122 + /// The interpretation of this offset depends on the type of the register. 123 + const OFFSET: usize; 124 + } 125 + 126 + /// Trait implemented by registers with a fixed offset. 127 + pub trait FixedRegister: Register {} 128 + 129 + /// Allows `()` to be used as the `location` parameter of [`Io::write`](super::Io::write) when 130 + /// passing a [`FixedRegister`] value. 131 + impl<T> IoLoc<T> for () 132 + where 133 + T: FixedRegister, 134 + { 135 + type IoType = T::Storage; 136 + 137 + #[inline(always)] 138 + fn offset(self) -> usize { 139 + T::OFFSET 140 + } 141 + } 142 + 143 + /// A [`FixedRegister`] carries its location in its type. Thus `FixedRegister` values can be used 144 + /// as an [`IoLoc`]. 145 + impl<T> IoLoc<T> for T 146 + where 147 + T: FixedRegister, 148 + { 149 + type IoType = T::Storage; 150 + 151 + #[inline(always)] 152 + fn offset(self) -> usize { 153 + T::OFFSET 154 + } 155 + } 156 + 157 + /// Location of a fixed register. 158 + pub struct FixedRegisterLoc<T: FixedRegister>(PhantomData<T>); 159 + 160 + impl<T: FixedRegister> FixedRegisterLoc<T> { 161 + /// Returns the location of `T`. 162 + #[inline(always)] 163 + // We do not implement `Default` so we can be const. 
164 + #[expect(clippy::new_without_default)] 165 + pub const fn new() -> Self { 166 + Self(PhantomData) 167 + } 168 + } 169 + 170 + impl<T> IoLoc<T> for FixedRegisterLoc<T> 171 + where 172 + T: FixedRegister, 173 + { 174 + type IoType = T::Storage; 175 + 176 + #[inline(always)] 177 + fn offset(self) -> usize { 178 + T::OFFSET 179 + } 180 + } 181 + 182 + /// Trait providing a base address to be added to the offset of a relative register to obtain 183 + /// its actual offset. 184 + /// 185 + /// The `T` generic argument is used to distinguish which base to use, in case a type provides 186 + /// several bases. It is given to the `register!` macro to restrict the use of the register to 187 + /// implementors of this particular variant. 188 + pub trait RegisterBase<T> { 189 + /// Base address to which register offsets are added. 190 + const BASE: usize; 191 + } 192 + 193 + /// Trait implemented by all registers that are relative to a base. 194 + pub trait WithBase { 195 + /// Family of bases applicable to this register. 196 + type BaseFamily; 197 + 198 + /// Returns the absolute location of this type when using `B` as its base. 199 + #[inline(always)] 200 + fn of<B: RegisterBase<Self::BaseFamily>>() -> RelativeRegisterLoc<Self, B> 201 + where 202 + Self: Register, 203 + { 204 + RelativeRegisterLoc::new() 205 + } 206 + } 207 + 208 + /// Trait implemented by relative registers. 209 + pub trait RelativeRegister: Register + WithBase {} 210 + 211 + /// Location of a relative register. 212 + /// 213 + /// This can either be an immediately accessible regular [`RelativeRegister`], or a 214 + /// [`RelativeRegisterArray`] that needs one additional resolution through 215 + /// [`RelativeRegisterLoc::at`]. 
216 + pub struct RelativeRegisterLoc<T: WithBase, B: ?Sized>(PhantomData<T>, PhantomData<B>); 217 + 218 + impl<T, B> RelativeRegisterLoc<T, B> 219 + where 220 + T: Register + WithBase, 221 + B: RegisterBase<T::BaseFamily> + ?Sized, 222 + { 223 + /// Returns the location of a relative register or register array. 224 + #[inline(always)] 225 + // We do not implement `Default` so we can be const. 226 + #[expect(clippy::new_without_default)] 227 + pub const fn new() -> Self { 228 + Self(PhantomData, PhantomData) 229 + } 230 + 231 + // Returns the absolute offset of the relative register using base `B`. 232 + // 233 + // This is implemented as a private const method so it can be reused by the [`IoLoc`] 234 + // implementations of both [`RelativeRegisterLoc`] and [`RelativeRegisterArrayLoc`]. 235 + #[inline] 236 + const fn offset(self) -> usize { 237 + B::BASE + T::OFFSET 238 + } 239 + } 240 + 241 + impl<T, B> IoLoc<T> for RelativeRegisterLoc<T, B> 242 + where 243 + T: RelativeRegister, 244 + B: RegisterBase<T::BaseFamily> + ?Sized, 245 + { 246 + type IoType = T::Storage; 247 + 248 + #[inline(always)] 249 + fn offset(self) -> usize { 250 + RelativeRegisterLoc::offset(self) 251 + } 252 + } 253 + 254 + /// Trait implemented by arrays of registers. 255 + pub trait RegisterArray: Register { 256 + /// Number of elements in the registers array. 257 + const SIZE: usize; 258 + /// Number of bytes between the start of elements in the registers array. 259 + const STRIDE: usize; 260 + } 261 + 262 + /// Location of an array register. 263 + pub struct RegisterArrayLoc<T: RegisterArray>(usize, PhantomData<T>); 264 + 265 + impl<T: RegisterArray> RegisterArrayLoc<T> { 266 + /// Returns the location of register `T` at position `idx`, with build-time validation. 
267 + #[inline(always)] 268 + pub fn new(idx: usize) -> Self { 269 + build_assert!(idx < T::SIZE); 270 + 271 + Self(idx, PhantomData) 272 + } 273 + 274 + /// Attempts to return the location of register `T` at position `idx`, with runtime validation. 275 + #[inline(always)] 276 + pub fn try_new(idx: usize) -> Option<Self> { 277 + if idx < T::SIZE { 278 + Some(Self(idx, PhantomData)) 279 + } else { 280 + None 281 + } 282 + } 283 + } 284 + 285 + impl<T> IoLoc<T> for RegisterArrayLoc<T> 286 + where 287 + T: RegisterArray, 288 + { 289 + type IoType = T::Storage; 290 + 291 + #[inline(always)] 292 + fn offset(self) -> usize { 293 + T::OFFSET + self.0 * T::STRIDE 294 + } 295 + } 296 + 297 + /// Trait providing location builders for [`RegisterArray`]s. 298 + pub trait Array { 299 + /// Returns the location of the register at position `idx`, with build-time validation. 300 + #[inline(always)] 301 + fn at(idx: usize) -> RegisterArrayLoc<Self> 302 + where 303 + Self: RegisterArray, 304 + { 305 + RegisterArrayLoc::new(idx) 306 + } 307 + 308 + /// Returns the location of the register at position `idx`, with runtime validation. 309 + #[inline(always)] 310 + fn try_at(idx: usize) -> Option<RegisterArrayLoc<Self>> 311 + where 312 + Self: RegisterArray, 313 + { 314 + RegisterArrayLoc::try_new(idx) 315 + } 316 + } 317 + 318 + /// Trait implemented by arrays of relative registers. 319 + pub trait RelativeRegisterArray: RegisterArray + WithBase {} 320 + 321 + /// Location of a relative array register. 322 + pub struct RelativeRegisterArrayLoc< 323 + T: RelativeRegisterArray, 324 + B: RegisterBase<T::BaseFamily> + ?Sized, 325 + >(RelativeRegisterLoc<T, B>, usize); 326 + 327 + impl<T, B> RelativeRegisterArrayLoc<T, B> 328 + where 329 + T: RelativeRegisterArray, 330 + B: RegisterBase<T::BaseFamily> + ?Sized, 331 + { 332 + /// Returns the location of register `T` from the base `B` at index `idx`, with build-time 333 + /// validation. 
334 + #[inline(always)] 335 + pub fn new(idx: usize) -> Self { 336 + build_assert!(idx < T::SIZE); 337 + 338 + Self(RelativeRegisterLoc::new(), idx) 339 + } 340 + 341 + /// Attempts to return the location of register `T` from the base `B` at index `idx`, with 342 + /// runtime validation. 343 + #[inline(always)] 344 + pub fn try_new(idx: usize) -> Option<Self> { 345 + if idx < T::SIZE { 346 + Some(Self(RelativeRegisterLoc::new(), idx)) 347 + } else { 348 + None 349 + } 350 + } 351 + } 352 + 353 + /// Methods exclusive to [`RelativeRegisterLoc`]s created with a [`RelativeRegisterArray`]. 354 + impl<T, B> RelativeRegisterLoc<T, B> 355 + where 356 + T: RelativeRegisterArray, 357 + B: RegisterBase<T::BaseFamily> + ?Sized, 358 + { 359 + /// Returns the location of the register at position `idx`, with build-time validation. 360 + #[inline(always)] 361 + pub fn at(self, idx: usize) -> RelativeRegisterArrayLoc<T, B> { 362 + RelativeRegisterArrayLoc::new(idx) 363 + } 364 + 365 + /// Returns the location of the register at position `idx`, with runtime validation. 366 + #[inline(always)] 367 + pub fn try_at(self, idx: usize) -> Option<RelativeRegisterArrayLoc<T, B>> { 368 + RelativeRegisterArrayLoc::try_new(idx) 369 + } 370 + } 371 + 372 + impl<T, B> IoLoc<T> for RelativeRegisterArrayLoc<T, B> 373 + where 374 + T: RelativeRegisterArray, 375 + B: RegisterBase<T::BaseFamily> + ?Sized, 376 + { 377 + type IoType = T::Storage; 378 + 379 + #[inline(always)] 380 + fn offset(self) -> usize { 381 + self.0.offset() + self.1 * T::STRIDE 382 + } 383 + } 384 + 385 + /// Trait implemented by items that contain both a register value and the absolute I/O location at 386 + /// which to write it. 387 + /// 388 + /// Implementors can be used with [`Io::write_reg`](super::Io::write_reg). 389 + pub trait LocatedRegister { 390 + /// Register value to write. 391 + type Value: Register; 392 + /// Full location information at which to write the value. 
393 + type Location: IoLoc<Self::Value>; 394 + 395 + /// Consumes `self` and returns a `(location, value)` tuple describing a valid I/O write 396 + /// operation. 397 + fn into_io_op(self) -> (Self::Location, Self::Value); 398 + } 399 + 400 + impl<T> LocatedRegister for T 401 + where 402 + T: FixedRegister, 403 + { 404 + type Location = FixedRegisterLoc<Self::Value>; 405 + type Value = T; 406 + 407 + #[inline(always)] 408 + fn into_io_op(self) -> (FixedRegisterLoc<T>, T) { 409 + (FixedRegisterLoc::new(), self) 410 + } 411 + } 412 + 413 + /// Defines a dedicated type for a register, including getter and setter methods for its fields and 414 + /// methods to read and write it from an [`Io`](kernel::io::Io) region. 415 + /// 416 + /// This documentation focuses on how to declare registers. See the [module-level 417 + /// documentation](mod@kernel::io::register) for examples of how to access them. 418 + /// 419 + /// There are 4 possible kinds of registers: fixed offset registers, relative registers, arrays of 420 + /// registers, and relative arrays of registers. 421 + /// 422 + /// ## Fixed offset registers 423 + /// 424 + /// These are the simplest kind of registers. Their location is simply an offset inside the I/O 425 + /// region. For instance: 426 + /// 427 + /// ```ignore 428 + /// register! { 429 + /// pub FIXED_REG(u16) @ 0x80 { 430 + /// ... 431 + /// } 432 + /// } 433 + /// ``` 434 + /// 435 + /// This creates a 16-bit register named `FIXED_REG` located at offset `0x80` of an I/O region. 436 + /// 437 + /// These registers' location can be built simply by referencing their name: 438 + /// 439 + /// ```no_run 440 + /// use kernel::{ 441 + /// io::{ 442 + /// register, 443 + /// Io, 444 + /// }, 445 + /// }; 446 + /// # use kernel::io::Mmio; 447 + /// 448 + /// register! 
{ 449 + /// FIXED_REG(u32) @ 0x100 { 450 + /// 15:8 high_byte; 451 + /// 7:0 low_byte; 452 + /// } 453 + /// } 454 + /// 455 + /// # fn test(io: &Mmio<0x1000>) { 456 + /// let val = io.read(FIXED_REG); 457 + /// 458 + /// // Write from an already-existing value. 459 + /// io.write(FIXED_REG, val.with_low_byte(0xff)); 460 + /// 461 + /// // Create a register value from scratch. 462 + /// let val2 = FIXED_REG::zeroed().with_high_byte(0x80); 463 + /// 464 + /// // The location of fixed offset registers is already contained in their type. Thus, the 465 + /// // `location` argument of `Io::write` is technically redundant and can be replaced by `()`. 466 + /// io.write((), val2); 467 + /// 468 + /// // Or, the single-argument `Io::write_reg` can be used. 469 + /// io.write_reg(val2); 470 + /// # } 471 + /// 472 + /// ``` 473 + /// 474 + /// It is possible to create an alias of an existing register with new field definitions by using 475 + /// the `=> ALIAS` syntax. This is useful for cases where a register's interpretation depends on 476 + /// the context: 477 + /// 478 + /// ```no_run 479 + /// use kernel::io::register; 480 + /// 481 + /// register! { 482 + /// /// Scratch register. 483 + /// pub SCRATCH(u32) @ 0x00000200 { 484 + /// 31:0 value; 485 + /// } 486 + /// 487 + /// /// Boot status of the firmware. 488 + /// pub SCRATCH_BOOT_STATUS(u32) => SCRATCH { 489 + /// 0:0 completed; 490 + /// } 491 + /// } 492 + /// ``` 493 + /// 494 + /// In this example, `SCRATCH_BOOT_STATUS` uses the same I/O address as `SCRATCH`, while providing 495 + /// its own `completed` field. 496 + /// 497 + /// ## Relative registers 498 + /// 499 + /// Relative registers can be instantiated several times at a relative offset of a group of bases. 500 + /// For instance, imagine the following I/O space: 501 + /// 502 + /// ```text 503 + /// +-----------------------------+ 504 + /// | ... 
| 505 + /// | | 506 + /// 0x100--->+------------CPU0-------------+ 507 + /// | | 508 + /// 0x110--->+-----------------------------+ 509 + /// | CPU_CTL | 510 + /// +-----------------------------+ 511 + /// | ... | 512 + /// | | 513 + /// | | 514 + /// 0x200--->+------------CPU1-------------+ 515 + /// | | 516 + /// 0x210--->+-----------------------------+ 517 + /// | CPU_CTL | 518 + /// +-----------------------------+ 519 + /// | ... | 520 + /// +-----------------------------+ 521 + /// ``` 522 + /// 523 + /// `CPU0` and `CPU1` both have a `CPU_CTL` register that starts at offset `0x10` of their I/O 524 + /// space segment. Since both instances of `CPU_CTL` share the same layout, we don't want to define 525 + /// them twice and would prefer a way to select which one to use from a single definition. 526 + /// 527 + /// This can be done using the `Base + Offset` syntax when specifying the register's address: 528 + /// 529 + /// ```ignore 530 + /// register! { 531 + /// pub RELATIVE_REG(u32) @ Base + 0x80 { 532 + /// ... 533 + /// } 534 + /// } 535 + /// ``` 536 + /// 537 + /// This creates a register with an offset of `0x80` from a given base. 538 + /// 539 + /// `Base` is an arbitrary type (typically a ZST) to be used as a generic parameter of the 540 + /// [`RegisterBase`] trait to provide the base as a constant, i.e. each type providing a base for 541 + /// this register needs to implement `RegisterBase<Base>`. 542 + /// 543 + /// The location of relative registers can be built using the [`WithBase::of`] method to specify 544 + /// its base. All relative registers implement [`WithBase`]. 545 + /// 546 + /// Here is the above layout translated into code: 547 + /// 548 + /// ```no_run 549 + /// use kernel::{ 550 + /// io::{ 551 + /// register, 552 + /// register::{ 553 + /// RegisterBase, 554 + /// WithBase, 555 + /// }, 556 + /// Io, 557 + /// }, 558 + /// }; 559 + /// # use kernel::io::Mmio; 560 + /// 561 + /// // Type used to identify the base. 
562 + /// pub struct CpuCtlBase; 563 + /// 564 + /// // ZST describing `CPU0`. 565 + /// struct Cpu0; 566 + /// impl RegisterBase<CpuCtlBase> for Cpu0 { 567 + /// const BASE: usize = 0x100; 568 + /// } 569 + /// 570 + /// // ZST describing `CPU1`. 571 + /// struct Cpu1; 572 + /// impl RegisterBase<CpuCtlBase> for Cpu1 { 573 + /// const BASE: usize = 0x200; 574 + /// } 575 + /// 576 + /// // This makes `CPU_CTL` accessible from all implementors of `RegisterBase<CpuCtlBase>`. 577 + /// register! { 578 + /// /// CPU core control. 579 + /// pub CPU_CTL(u32) @ CpuCtlBase + 0x10 { 580 + /// 0:0 start; 581 + /// } 582 + /// } 583 + /// 584 + /// # fn test(io: Mmio<0x1000>) { 585 + /// // Read the status of `Cpu0`. 586 + /// let cpu0_started = io.read(CPU_CTL::of::<Cpu0>()); 587 + /// 588 + /// // Stop `Cpu0`. 589 + /// io.write(WithBase::of::<Cpu0>(), CPU_CTL::zeroed()); 590 + /// # } 591 + /// 592 + /// // Aliases can also be defined for relative registers. 593 + /// register! { 594 + /// /// Alias to CPU core control. 595 + /// pub CPU_CTL_ALIAS(u32) => CpuCtlBase + CPU_CTL { 596 + /// /// Start the aliased CPU core. 597 + /// 1:1 alias_start; 598 + /// } 599 + /// } 600 + /// 601 + /// # fn test2(io: Mmio<0x1000>) { 602 + /// // Start the aliased `CPU0`, leaving its other fields untouched. 603 + /// io.update(CPU_CTL_ALIAS::of::<Cpu0>(), |r| r.with_alias_start(true)); 604 + /// # } 605 + /// ``` 606 + /// 607 + /// ## Arrays of registers 608 + /// 609 + /// Some I/O areas contain consecutive registers that share the same field layout. These areas can 610 + /// be defined as an array of identical registers, allowing them to be accessed by index with 611 + /// compile-time or runtime bound checking: 612 + /// 613 + /// ```ignore 614 + /// register! { 615 + /// pub REGISTER_ARRAY(u8)[10, stride = 4] @ 0x100 { 616 + /// ... 617 + /// } 618 + /// } 619 + /// ``` 620 + /// 621 + /// This defines `REGISTER_ARRAY`, an array of 10 byte-sized registers starting at offset `0x100`. 
Each 622 + /// register is separated from its neighbor by 4 bytes. 623 + /// 624 + /// The `stride` parameter is optional; if unspecified, the registers are laid out consecutively, 625 + /// one after the other. 626 + /// 627 + /// A location for a register in a register array is built using the [`Array::at`] trait method. 628 + /// All arrays of registers implement [`Array`]. 629 + /// 630 + /// ```no_run 631 + /// use kernel::{ 632 + /// io::{ 633 + /// register, 634 + /// register::Array, 635 + /// Io, 636 + /// }, 637 + /// }; 638 + /// # use kernel::io::Mmio; 639 + /// # fn get_scratch_idx() -> usize { 640 + /// # 0x15 641 + /// # } 642 + /// 643 + /// // Array of 64 consecutive registers with the same layout starting at offset `0x80`. 644 + /// register! { 645 + /// /// Scratch registers. 646 + /// pub SCRATCH(u32)[64] @ 0x00000080 { 647 + /// 31:0 value; 648 + /// } 649 + /// } 650 + /// 651 + /// # fn test(io: &Mmio<0x1000>) 652 + /// # -> Result<(), Error> { 653 + /// // Read scratch register 0, i.e. I/O address `0x80`. 654 + /// let scratch_0 = io.read(SCRATCH::at(0)).value(); 655 + /// 656 + /// // Write scratch register 15, i.e. I/O address `0x80 + (15 * 4)`. 657 + /// io.write(Array::at(15), SCRATCH::from(0xffeeaabb)); 658 + /// 659 + /// // This is out of bounds and won't build. 660 + /// // let scratch_128 = io.read(SCRATCH::at(128)).value(); 661 + /// 662 + /// // Runtime-obtained array index. 663 + /// let idx = get_scratch_idx(); 664 + /// // Access on a runtime index returns an error if it is out-of-bounds. 665 + /// let some_scratch = io.read(SCRATCH::try_at(idx).ok_or(EINVAL)?).value(); 666 + /// 667 + /// // Alias to a specific register in an array. 668 + /// // Here `SCRATCH[8]` is used to convey the firmware exit code. 669 + /// register! { 670 + /// /// Firmware exit status code. 
671 + /// pub FIRMWARE_STATUS(u32) => SCRATCH[8] { 672 + /// 7:0 status; 673 + /// } 674 + /// } 675 + /// 676 + /// let status = io.read(FIRMWARE_STATUS).status(); 677 + /// 678 + /// // Non-contiguous register arrays can be defined by adding a stride parameter. 679 + /// // Here, each of the 16 registers of the array is separated by 8 bytes, meaning that the 680 + /// // registers of the two declarations below are interleaved. 681 + /// register! { 682 + /// /// Scratch registers bank 0. 683 + /// pub SCRATCH_INTERLEAVED_0(u32)[16, stride = 8] @ 0x000000c0 { 684 + /// 31:0 value; 685 + /// } 686 + /// 687 + /// /// Scratch registers bank 1. 688 + /// pub SCRATCH_INTERLEAVED_1(u32)[16, stride = 8] @ 0x000000c4 { 689 + /// 31:0 value; 690 + /// } 691 + /// } 692 + /// # Ok(()) 693 + /// # } 694 + /// ``` 695 + /// 696 + /// ## Relative arrays of registers 697 + /// 698 + /// Combining the two features described in the sections above, arrays of registers accessible from 699 + /// a base can also be defined: 700 + /// 701 + /// ```ignore 702 + /// register! { 703 + /// pub RELATIVE_REGISTER_ARRAY(u8)[10, stride = 4] @ Base + 0x100 { 704 + /// ... 705 + /// } 706 + /// } 707 + /// ``` 708 + /// 709 + /// Like relative registers, they implement the [`WithBase`] trait. However the return value of 710 + /// [`WithBase::of`] cannot be used directly as a location and must be further specified using the 711 + /// [`at`](RelativeRegisterLoc::at) method. 712 + /// 713 + /// ```no_run 714 + /// use kernel::{ 715 + /// io::{ 716 + /// register, 717 + /// register::{ 718 + /// RegisterBase, 719 + /// WithBase, 720 + /// }, 721 + /// Io, 722 + /// }, 723 + /// }; 724 + /// # use kernel::io::Mmio; 725 + /// # fn get_scratch_idx() -> usize { 726 + /// # 0x15 727 + /// # } 728 + /// 729 + /// // Type used as parameter of `RegisterBase` to specify the base. 730 + /// pub struct CpuCtlBase; 731 + /// 732 + /// // ZST describing `CPU0`. 
+ /// struct Cpu0;
+ /// impl RegisterBase<CpuCtlBase> for Cpu0 {
+ ///     const BASE: usize = 0x100;
+ /// }
+ ///
+ /// // ZST describing `CPU1`.
+ /// struct Cpu1;
+ /// impl RegisterBase<CpuCtlBase> for Cpu1 {
+ ///     const BASE: usize = 0x200;
+ /// }
+ ///
+ /// // 64 per-cpu scratch registers, arranged as a contiguous array.
+ /// register! {
+ ///     /// Per-CPU scratch registers.
+ ///     pub CPU_SCRATCH(u32)[64] @ CpuCtlBase + 0x00000080 {
+ ///         31:0 value;
+ ///     }
+ /// }
+ ///
+ /// # fn test(io: &Mmio<0x1000>) -> Result<(), Error> {
+ /// // Read scratch register 0 of CPU0.
+ /// let scratch = io.read(CPU_SCRATCH::of::<Cpu0>().at(0));
+ ///
+ /// // Write the retrieved value into scratch register 15 of CPU1.
+ /// io.write(WithBase::of::<Cpu1>().at(15), scratch);
+ ///
+ /// // This won't build.
+ /// // let cpu0_scratch_128 = io.read(CPU_SCRATCH::of::<Cpu0>().at(128)).value();
+ ///
+ /// // Runtime-obtained array index.
+ /// let scratch_idx = get_scratch_idx();
+ /// // Access on a runtime index returns an error if it is out-of-bounds.
+ /// let cpu0_scratch = io.read(
+ ///     CPU_SCRATCH::of::<Cpu0>().try_at(scratch_idx).ok_or(EINVAL)?
+ /// ).value();
+ /// # Ok(())
+ /// # }
+ ///
+ /// // Alias to `SCRATCH[8]` used to convey the firmware exit code.
+ /// register! {
+ ///     /// Per-CPU firmware exit status code.
+ ///     pub CPU_FIRMWARE_STATUS(u32) => CpuCtlBase + CPU_SCRATCH[8] {
+ ///         7:0 status;
+ ///     }
+ /// }
+ ///
+ /// // Non-contiguous relative register arrays can be defined by adding a stride parameter.
+ /// // Here, each of the 16 registers of the array is separated by 8 bytes, meaning that the
+ /// // registers of the two declarations below are interleaved.
+ /// register! {
+ ///     /// Scratch registers bank 0.
+ ///     pub CPU_SCRATCH_INTERLEAVED_0(u32)[16, stride = 8] @ CpuCtlBase + 0x00000d00 {
+ ///         31:0 value;
+ ///     }
+ ///
+ ///     /// Scratch registers bank 1.
+ ///     pub CPU_SCRATCH_INTERLEAVED_1(u32)[16, stride = 8] @ CpuCtlBase + 0x00000d04 {
+ ///         31:0 value;
+ ///     }
+ /// }
+ ///
+ /// # fn test2(io: &Mmio<0x1000>) -> Result<(), Error> {
+ /// let cpu0_status = io.read(CPU_FIRMWARE_STATUS::of::<Cpu0>()).status();
+ /// # Ok(())
+ /// # }
+ /// ```
+ #[macro_export]
+ macro_rules! register {
+     // Entry point for the macro, allowing multiple registers to be defined in one call.
+     // It matches all possible register declaration patterns to dispatch them to the corresponding
+     // `@reg` rule that defines a single register.
+     (
+         $(
+             $(#[$attr:meta])* $vis:vis $name:ident ($storage:ty)
+             $([ $size:expr $(, stride = $stride:expr)? ])?
+             $(@ $($base:ident +)? $offset:literal)?
+             $(=> $alias:ident $(+ $alias_offset:ident)? $([$alias_idx:expr])? )?
+             { $($fields:tt)* }
+         )*
+     ) => {
+         $(
+             $crate::register!(
+                 @reg $(#[$attr])* $vis $name ($storage) $([$size $(, stride = $stride)?])?
+                 $(@ $($base +)? $offset)?
+                 $(=> $alias $(+ $alias_offset)? $([$alias_idx])? )?
+                 { $($fields)* }
+             );
+         )*
+     };
+
+     // All the rules below are private helpers.
+
+     // Creates a register at a fixed offset of the MMIO space.
+     (
+         @reg $(#[$attr:meta])* $vis:vis $name:ident ($storage:ty) @ $offset:literal
+         { $($fields:tt)* }
+     ) => {
+         $crate::register!(@bitfield $(#[$attr])* $vis struct $name($storage) { $($fields)* });
+         $crate::register!(@io_base $name($storage) @ $offset);
+         $crate::register!(@io_fixed $(#[$attr])* $vis $name($storage));
+     };
+
+     // Creates an alias register of fixed offset register `alias` with its own fields.
+     (
+         @reg $(#[$attr:meta])* $vis:vis $name:ident ($storage:ty) => $alias:ident
+         { $($fields:tt)* }
+     ) => {
+         $crate::register!(@bitfield $(#[$attr])* $vis struct $name($storage) { $($fields)* });
+         $crate::register!(
+             @io_base $name($storage) @
+                 <$alias as $crate::io::register::Register>::OFFSET
+         );
+         $crate::register!(@io_fixed $(#[$attr])* $vis $name($storage));
+     };
+
+     // Creates a register at a relative offset from a base address provider.
+     (
+         @reg $(#[$attr:meta])* $vis:vis $name:ident ($storage:ty) @ $base:ident + $offset:literal
+         { $($fields:tt)* }
+     ) => {
+         $crate::register!(@bitfield $(#[$attr])* $vis struct $name($storage) { $($fields)* });
+         $crate::register!(@io_base $name($storage) @ $offset);
+         $crate::register!(@io_relative $vis $name($storage) @ $base);
+     };
+
+     // Creates an alias register of relative offset register `alias` with its own fields.
+     (
+         @reg $(#[$attr:meta])* $vis:vis $name:ident ($storage:ty) => $base:ident + $alias:ident
+         { $($fields:tt)* }
+     ) => {
+         $crate::register!(@bitfield $(#[$attr])* $vis struct $name($storage) { $($fields)* });
+         $crate::register!(
+             @io_base $name($storage) @ <$alias as $crate::io::register::Register>::OFFSET
+         );
+         $crate::register!(@io_relative $vis $name($storage) @ $base);
+     };
+
+     // Creates an array of registers at a fixed offset of the MMIO space.
+     (
+         @reg $(#[$attr:meta])* $vis:vis $name:ident ($storage:ty)
+         [ $size:expr, stride = $stride:expr ] @ $offset:literal { $($fields:tt)* }
+     ) => {
+         ::kernel::static_assert!(::core::mem::size_of::<$storage>() <= $stride);
+
+         $crate::register!(@bitfield $(#[$attr])* $vis struct $name($storage) { $($fields)* });
+         $crate::register!(@io_base $name($storage) @ $offset);
+         $crate::register!(@io_array $vis $name($storage) [ $size, stride = $stride ]);
+     };
+
+     // Shortcut for contiguous array of registers (stride == size of element).
+     (
+         @reg $(#[$attr:meta])* $vis:vis $name:ident ($storage:ty) [ $size:expr ] @ $offset:literal
+         { $($fields:tt)* }
+     ) => {
+         $crate::register!(
+             $(#[$attr])* $vis $name($storage) [ $size, stride = ::core::mem::size_of::<$storage>() ]
+             @ $offset { $($fields)* }
+         );
+     };
+
+     // Creates an alias of register `idx` of array of registers `alias` with its own fields.
+     (
+         @reg $(#[$attr:meta])* $vis:vis $name:ident ($storage:ty) => $alias:ident [ $idx:expr ]
+         { $($fields:tt)* }
+     ) => {
+         ::kernel::static_assert!($idx < <$alias as $crate::io::register::RegisterArray>::SIZE);
+
+         $crate::register!(@bitfield $(#[$attr])* $vis struct $name($storage) { $($fields)* });
+         $crate::register!(
+             @io_base $name($storage) @
+                 <$alias as $crate::io::register::Register>::OFFSET
+                 + $idx * <$alias as $crate::io::register::RegisterArray>::STRIDE
+         );
+         $crate::register!(@io_fixed $(#[$attr])* $vis $name($storage));
+     };
+
+     // Creates an array of registers at a relative offset from a base address provider.
+     (
+         @reg $(#[$attr:meta])* $vis:vis $name:ident ($storage:ty)
+         [ $size:expr, stride = $stride:expr ]
+         @ $base:ident + $offset:literal { $($fields:tt)* }
+     ) => {
+         ::kernel::static_assert!(::core::mem::size_of::<$storage>() <= $stride);
+
+         $crate::register!(@bitfield $(#[$attr])* $vis struct $name($storage) { $($fields)* });
+         $crate::register!(@io_base $name($storage) @ $offset);
+         $crate::register!(
+             @io_relative_array $vis $name($storage) [ $size, stride = $stride ] @ $base + $offset
+         );
+     };
+
+     // Shortcut for contiguous array of relative registers (stride == size of element).
+     (
+         @reg $(#[$attr:meta])* $vis:vis $name:ident ($storage:ty) [ $size:expr ]
+         @ $base:ident + $offset:literal { $($fields:tt)* }
+     ) => {
+         $crate::register!(
+             $(#[$attr])* $vis $name($storage) [ $size, stride = ::core::mem::size_of::<$storage>() ]
+             @ $base + $offset { $($fields)* }
+         );
+     };
+
+     // Creates an alias of register `idx` of relative array of registers `alias` with its own
+     // fields.
+     (
+         @reg $(#[$attr:meta])* $vis:vis $name:ident ($storage:ty)
+         => $base:ident + $alias:ident [ $idx:expr ] { $($fields:tt)* }
+     ) => {
+         ::kernel::static_assert!($idx < <$alias as $crate::io::register::RegisterArray>::SIZE);
+
+         $crate::register!(@bitfield $(#[$attr])* $vis struct $name($storage) { $($fields)* });
+         $crate::register!(
+             @io_base $name($storage) @
+                 <$alias as $crate::io::register::Register>::OFFSET +
+                 $idx * <$alias as $crate::io::register::RegisterArray>::STRIDE
+         );
+         $crate::register!(@io_relative $vis $name($storage) @ $base);
+     };
+
+     // Generates the bitfield for the register.
+     //
+     // `#[allow(non_camel_case_types)]` is added since register names typically use
+     // `SCREAMING_CASE`.
+     (
+         @bitfield $(#[$attr:meta])* $vis:vis struct $name:ident($storage:ty) { $($fields:tt)* }
+     ) => {
+         $crate::register!(@bitfield_core
+             #[allow(non_camel_case_types)]
+             $(#[$attr])* $vis $name $storage
+         );
+         $crate::register!(@bitfield_fields $vis $name $storage { $($fields)* });
+     };
+
+     // Implementations shared by all register types.
+     (@io_base $name:ident($storage:ty) @ $offset:expr) => {
+         impl $crate::io::register::Register for $name {
+             type Storage = $storage;
+
+             const OFFSET: usize = $offset;
+         }
+     };
+
+     // Implementations of fixed registers.
+     (@io_fixed $(#[$attr:meta])* $vis:vis $name:ident ($storage:ty)) => {
+         impl $crate::io::register::FixedRegister for $name {}
+
+         $(#[$attr])*
+         $vis const $name: $crate::io::register::FixedRegisterLoc<$name> =
+             $crate::io::register::FixedRegisterLoc::<$name>::new();
+     };
+
+     // Implementations of relative registers.
+     (@io_relative $vis:vis $name:ident ($storage:ty) @ $base:ident) => {
+         impl $crate::io::register::WithBase for $name {
+             type BaseFamily = $base;
+         }
+
+         impl $crate::io::register::RelativeRegister for $name {}
+     };
+
+     // Implementations of register arrays.
+     (@io_array $vis:vis $name:ident ($storage:ty) [ $size:expr, stride = $stride:expr ]) => {
+         impl $crate::io::register::Array for $name {}
+
+         impl $crate::io::register::RegisterArray for $name {
+             const SIZE: usize = $size;
+             const STRIDE: usize = $stride;
+         }
+     };
+
+     // Implementations of relative array registers.
+     (
+         @io_relative_array $vis:vis $name:ident ($storage:ty) [ $size:expr, stride = $stride:expr ]
+         @ $base:ident + $offset:literal
+     ) => {
+         impl $crate::io::register::WithBase for $name {
+             type BaseFamily = $base;
+         }
+
+         impl $crate::io::register::RegisterArray for $name {
+             const SIZE: usize = $size;
+             const STRIDE: usize = $stride;
+         }
+
+         impl $crate::io::register::RelativeRegisterArray for $name {}
+     };
+
+     // Defines the wrapper `$name` type and its conversions from/to the storage type.
+     (@bitfield_core $(#[$attr:meta])* $vis:vis $name:ident $storage:ty) => {
+         $(#[$attr])*
+         #[repr(transparent)]
+         #[derive(Clone, Copy, PartialEq, Eq)]
+         $vis struct $name {
+             inner: $storage,
+         }
+
+         #[allow(dead_code)]
+         impl $name {
+             /// Creates a bitfield from a raw value.
+             #[inline(always)]
+             $vis const fn from_raw(value: $storage) -> Self {
+                 Self { inner: value }
+             }
+
+             /// Turns this bitfield into its raw value.
+             ///
+             /// This is similar to the [`From`] implementation, but is shorter to invoke in
+             /// most cases.
+             #[inline(always)]
+             $vis const fn into_raw(self) -> $storage {
+                 self.inner
+             }
+         }
+
+         // SAFETY: `$storage` is `Zeroable` and `$name` is transparent.
+         unsafe impl ::pin_init::Zeroable for $name {}
+
+         impl ::core::convert::From<$name> for $storage {
+             #[inline(always)]
+             fn from(val: $name) -> $storage {
+                 val.into_raw()
+             }
+         }
+
+         impl ::core::convert::From<$storage> for $name {
+             #[inline(always)]
+             fn from(val: $storage) -> $name {
+                 Self::from_raw(val)
+             }
+         }
+     };
+
+     // Definitions requiring knowledge of individual fields: private and public field accessors,
+     // and `Debug` implementation.
+     (@bitfield_fields $vis:vis $name:ident $storage:ty {
+         $($(#[doc = $doc:expr])* $hi:literal:$lo:literal $field:ident
+             $(?=> $try_into_type:ty)?
+             $(=> $into_type:ty)?
+             ;
+         )*
+         }
+     ) => {
+         #[allow(dead_code)]
+         impl $name {
+             $(
+                 $crate::register!(@private_field_accessors $vis $name $storage : $hi:$lo $field);
+                 $crate::register!(
+                     @public_field_accessors $(#[doc = $doc])* $vis $name $storage : $hi:$lo $field
+                     $(?=> $try_into_type)?
+                     $(=> $into_type)?
+                 );
+             )*
+         }
+
+         $crate::register!(@debug $name { $($field;)* });
+     };
+
+     // Private field accessors working with the exact `Bounded` type for the field.
+     (
+         @private_field_accessors $vis:vis $name:ident $storage:ty : $hi:tt:$lo:tt $field:ident
+     ) => {
+         ::kernel::macros::paste!(
+             $vis const [<$field:upper _RANGE>]: ::core::ops::RangeInclusive<u8> = $lo..=$hi;
+             $vis const [<$field:upper _MASK>]: $storage =
+                 ((((1 << $hi) - 1) << 1) + 1) - ((1 << $lo) - 1);
+             $vis const [<$field:upper _SHIFT>]: u32 = $lo;
+         );
+
+         ::kernel::macros::paste!(
+             fn [<__ $field>](self) ->
+                 ::kernel::num::Bounded<$storage, { $hi + 1 - $lo }> {
+                 // Left shift to align the field's MSB with the storage MSB.
+                 const ALIGN_TOP: u32 = $storage::BITS - ($hi + 1);
+                 // Right shift to move the top-aligned field to bit 0 of the storage.
+                 const ALIGN_BOTTOM: u32 = ALIGN_TOP + $lo;
+
+                 // Extract the field using two shifts. `Bounded::shr` produces the correctly-sized
+                 // output type.
+                 let val = ::kernel::num::Bounded::<$storage, { $storage::BITS }>::from(
+                     self.inner << ALIGN_TOP
+                 );
+                 val.shr::<ALIGN_BOTTOM, { $hi + 1 - $lo }>()
+             }
+
+             const fn [<__with_ $field>](
+                 mut self,
+                 value: ::kernel::num::Bounded<$storage, { $hi + 1 - $lo }>,
+             ) -> Self
+             {
+                 const MASK: $storage = <$name>::[<$field:upper _MASK>];
+                 const SHIFT: u32 = <$name>::[<$field:upper _SHIFT>];
+
+                 let value = value.get() << SHIFT;
+                 self.inner = (self.inner & !MASK) | value;
+
+                 self
+             }
+         );
+     };
+
+     // Public accessors for fields infallibly (`=>`) converted to a type.
+     (
+         @public_field_accessors $(#[doc = $doc:expr])* $vis:vis $name:ident $storage:ty :
+         $hi:literal:$lo:literal $field:ident => $into_type:ty
+     ) => {
+         ::kernel::macros::paste!(
+
+         $(#[doc = $doc])*
+         #[doc = "Returns the value of this field."]
+         #[inline(always)]
+         $vis fn $field(self) -> $into_type
+         {
+             self.[<__ $field>]().into()
+         }
+
+         $(#[doc = $doc])*
+         #[doc = "Sets this field to the given `value`."]
+         #[inline(always)]
+         $vis fn [<with_ $field>](self, value: $into_type) -> Self
+         {
+             self.[<__with_ $field>](value.into())
+         }
+
+         );
+     };
+
+     // Public accessors for fields fallibly (`?=>`) converted to a type.
+     (
+         @public_field_accessors $(#[doc = $doc:expr])* $vis:vis $name:ident $storage:ty :
+         $hi:tt:$lo:tt $field:ident ?=> $try_into_type:ty
+     ) => {
+         ::kernel::macros::paste!(
+
+         $(#[doc = $doc])*
+         #[doc = "Returns the value of this field."]
+         #[inline(always)]
+         $vis fn $field(self) ->
+             Result<
+                 $try_into_type,
+                 <$try_into_type as ::core::convert::TryFrom<
+                     ::kernel::num::Bounded<$storage, { $hi + 1 - $lo }>
+                 >>::Error
+             >
+         {
+             self.[<__ $field>]().try_into()
+         }
+
+         $(#[doc = $doc])*
+         #[doc = "Sets this field to the given `value`."]
+         #[inline(always)]
+         $vis fn [<with_ $field>](self, value: $try_into_type) -> Self
+         {
+             self.[<__with_ $field>](value.into())
+         }
+
+         );
+     };
+
+     // Public accessors for fields not converted to a type.
+     (
+         @public_field_accessors $(#[doc = $doc:expr])* $vis:vis $name:ident $storage:ty :
+         $hi:tt:$lo:tt $field:ident
+     ) => {
+         ::kernel::macros::paste!(
+
+         $(#[doc = $doc])*
+         #[doc = "Returns the value of this field."]
+         #[inline(always)]
+         $vis fn $field(self) ->
+             ::kernel::num::Bounded<$storage, { $hi + 1 - $lo }>
+         {
+             self.[<__ $field>]()
+         }
+
+         $(#[doc = $doc])*
+         #[doc = "Sets this field to the compile-time constant `VALUE`."]
+         #[inline(always)]
+         $vis const fn [<with_const_ $field>]<const VALUE: $storage>(self) -> Self {
+             self.[<__with_ $field>](
+                 ::kernel::num::Bounded::<$storage, { $hi + 1 - $lo }>::new::<VALUE>()
+             )
+         }
+
+         $(#[doc = $doc])*
+         #[doc = "Sets this field to the given `value`."]
+         #[inline(always)]
+         $vis fn [<with_ $field>]<T>(
+             self,
+             value: T,
+         ) -> Self
+         where T: Into<::kernel::num::Bounded<$storage, { $hi + 1 - $lo }>>,
+         {
+             self.[<__with_ $field>](value.into())
+         }
+
+         $(#[doc = $doc])*
+         #[doc = "Tries to set this field to `value`, returning an error if it is out of range."]
+         #[inline(always)]
+         $vis fn [<try_with_ $field>]<T>(
+             self,
+             value: T,
+         ) -> ::kernel::error::Result<Self>
+         where T: ::kernel::num::TryIntoBounded<$storage, { $hi + 1 - $lo }>,
+         {
+             Ok(
+                 self.[<__with_ $field>](
+                     value.try_into_bounded().ok_or(::kernel::error::code::EOVERFLOW)?
+                 )
+             )
+         }
+
+         );
+     };
+
+     // `Debug` implementation.
+     (@debug $name:ident { $($field:ident;)* }) => {
+         impl ::kernel::fmt::Debug for $name {
+             fn fmt(&self, f: &mut ::kernel::fmt::Formatter<'_>) -> ::kernel::fmt::Result {
+                 f.debug_struct(stringify!($name))
+                     .field("<raw>", &::kernel::prelude::fmt!("{:#x}", self.inner))
+                     $(
+                         .field(stringify!($field), &self.$field())
+                     )*
+                     .finish()
+             }
+         }
+     };
+ }
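The `_MASK` constant generated by `@private_field_accessors` uses the roundabout expression `((((1 << $hi) - 1) << 1) + 1)` instead of the naive `(1 << ($hi + 1)) - 1` so that a field whose top bit is the storage's MSB (e.g. `31:0` on `u32`) does not shift by the full bit width, and field extraction uses two shifts rather than mask-and-shift. A standalone sketch of both pieces of arithmetic, outside the kernel crate:

```rust
// Sketch of the mask arithmetic from `@private_field_accessors`.
// `hi`/`lo` mirror the `$hi:$lo` field bounds.
fn field_mask(hi: u32, lo: u32) -> u32 {
    // `(((1 << hi) - 1) << 1) + 1` equals `(1 << (hi + 1)) - 1`, but the
    // shift amount never reaches the bit width, so `hi == 31` is fine.
    ((((1u32 << hi) - 1) << 1) + 1) - ((1u32 << lo) - 1)
}

// Two-shift extraction as in the generated `__<field>` method: left-align the
// field's MSB with the storage MSB, then shift down to bit 0. The first shift
// discards the bits above `hi`, so no explicit mask is needed.
fn extract(value: u32, hi: u32, lo: u32) -> u32 {
    let align_top = u32::BITS - (hi + 1);
    (value << align_top) >> (align_top + lo)
}

fn main() {
    assert_eq!(field_mask(7, 0), 0xff);
    assert_eq!(field_mask(15, 8), 0xff00);
    // With the naive form this would shift `1u32` by 32 and panic.
    assert_eq!(field_mask(31, 0), 0xffff_ffff);

    assert_eq!(extract(0xffee_aabb, 15, 8), 0xaa);
    println!("ok");
}
```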
+11 -17
rust/kernel/irq/request.rs
···
      }
  }

  /// Callbacks for an IRQ handler.
- pub trait Handler: Sync {
+ pub trait Handler: Sync + 'static {
      /// The hard IRQ handler.
      ///
      /// This is executed in interrupt context, hence all corresponding
···
      }
  }

- impl<T: ?Sized + Handler, A: Allocator> Handler for Box<T, A> {
+ impl<T: ?Sized + Handler, A: Allocator + 'static> Handler for Box<T, A> {
      fn handle(&self, device: &Device<Bound>) -> IrqReturn {
          T::handle(self, device)
      }
···
  ///
  /// * We own an irq handler whose cookie is a pointer to `Self`.
  #[pin_data]
- pub struct Registration<T: Handler + 'static> {
+ pub struct Registration<T: Handler> {
      #[pin]
      inner: Devres<RegistrationInner>,
···
      _pin: PhantomPinned,
  }

- impl<T: Handler + 'static> Registration<T> {
+ impl<T: Handler> Registration<T> {
      /// Registers the IRQ handler with the system for the given IRQ number.
      pub fn new<'a>(
          request: IrqRequest<'a>,
···
  /// # Safety
  ///
  /// This function should be only used as the callback in `request_irq`.
- unsafe extern "C" fn handle_irq_callback<T: Handler + 'static>(
-     _irq: i32,
-     ptr: *mut c_void,
- ) -> c_uint {
+ unsafe extern "C" fn handle_irq_callback<T: Handler>(_irq: i32, ptr: *mut c_void) -> c_uint {
      // SAFETY: `ptr` is a pointer to `Registration<T>` set in `Registration::new`
      let registration = unsafe { &*(ptr as *const Registration<T>) };
      // SAFETY: The irq callback is removed before the device is unbound, so the fact that the irq
···
  }

  /// Callbacks for a threaded IRQ handler.
- pub trait ThreadedHandler: Sync {
+ pub trait ThreadedHandler: Sync + 'static {
      /// The hard IRQ handler.
      ///
      /// This is executed in interrupt context, hence all corresponding
···
      }
  }

- impl<T: ?Sized + ThreadedHandler, A: Allocator> ThreadedHandler for Box<T, A> {
+ impl<T: ?Sized + ThreadedHandler, A: Allocator + 'static> ThreadedHandler for Box<T, A> {
      fn handle(&self, device: &Device<Bound>) -> ThreadedIrqReturn {
          T::handle(self, device)
      }
···
  ///
  /// * We own an irq handler whose cookie is a pointer to `Self`.
  #[pin_data]
- pub struct ThreadedRegistration<T: ThreadedHandler + 'static> {
+ pub struct ThreadedRegistration<T: ThreadedHandler> {
      #[pin]
      inner: Devres<RegistrationInner>,
···
      _pin: PhantomPinned,
  }

- impl<T: ThreadedHandler + 'static> ThreadedRegistration<T> {
+ impl<T: ThreadedHandler> ThreadedRegistration<T> {
      /// Registers the IRQ handler with the system for the given IRQ number.
      pub fn new<'a>(
          request: IrqRequest<'a>,
···
  /// # Safety
  ///
  /// This function should be only used as the callback in `request_threaded_irq`.
- unsafe extern "C" fn handle_threaded_irq_callback<T: ThreadedHandler + 'static>(
+ unsafe extern "C" fn handle_threaded_irq_callback<T: ThreadedHandler>(
      _irq: i32,
      ptr: *mut c_void,
  ) -> c_uint {
···
  /// # Safety
  ///
  /// This function should be only used as the callback in `request_threaded_irq`.
- unsafe extern "C" fn thread_fn_callback<T: ThreadedHandler + 'static>(
-     _irq: i32,
-     ptr: *mut c_void,
- ) -> c_uint {
+ unsafe extern "C" fn thread_fn_callback<T: ThreadedHandler>(_irq: i32, ptr: *mut c_void) -> c_uint {
      // SAFETY: `ptr` is a pointer to `ThreadedRegistration<T>` set in `ThreadedRegistration::new`
      let registration = unsafe { &*(ptr as *const ThreadedRegistration<T>) };
      // SAFETY: The irq callback is removed before the device is unbound, so the fact that the irq
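The `irq/request.rs` hunk above hoists the repeated `+ 'static` bound off every generic item and onto the `Handler`/`ThreadedHandler` traits themselves, so each trait implementor promises `'static` once and all downstream bounds shrink. A minimal standalone sketch of the pattern (the names here are illustrative, not the kernel API):

```rust
// Declaring `'static` as a supertrait bound means every `T: Handler` already
// implies `T: 'static`, so generic users no longer repeat it.
trait Handler: Sync + 'static {
    fn handle(&self) -> u32;
}

// No `+ 'static` needed here: it is implied by `T: Handler`.
struct Registration<T: Handler> {
    handler: T,
}

impl<T: Handler> Registration<T> {
    fn fire(&self) -> u32 {
        self.handler.handle()
    }
}

struct MyHandler;

impl Handler for MyHandler {
    fn handle(&self) -> u32 {
        42
    }
}

fn main() {
    let r = Registration { handler: MyHandler };
    assert_eq!(r.fire(), 42);
    println!("ok");
}
```

The trade-off is that a non-`'static` type can no longer implement the trait at all, which is acceptable here because the registration machinery always required `'static` anyway.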
+3
rust/kernel/lib.rs
···
  // Please see https://github.com/Rust-for-Linux/linux/issues/2 for details on
  // the unstable features in use.
  //
+ // Stable since Rust 1.89.0.
+ #![feature(generic_arg_infer)]
+ //
  // Expected to become stable.
  #![feature(arbitrary_self_types)]
  #![feature(derive_coerce_pointee)]
+68 -2
rust/kernel/num/bounded.rs
···
  /// Returns the wrapped value as the backing type.
  ///
+ /// This is similar to the [`Deref`] implementation, but doesn't enforce the size invariant of
+ /// the [`Bounded`], which might produce slightly less optimal code.
+ ///
  /// # Examples
  ///
  /// ```
···
  /// let v = Bounded::<u32, 4>::new::<7>();
  /// assert_eq!(v.get(), 7u32);
  /// ```
- pub fn get(self) -> T {
-     *self.deref()
+ pub const fn get(self) -> T {
+     self.0
  }

  /// Increases the number of bits usable for `self`.
···
      // SAFETY: Although the backing type has changed, the value is still represented within
      // `N` bits, and with the same signedness.
      unsafe { Bounded::__new(value) }
+ }
+
+ /// Right-shifts `self` by `SHIFT` and returns the result as a `Bounded<_, RES>`, where `RES >=
+ /// N - SHIFT`.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use kernel::num::Bounded;
+ ///
+ /// let v = Bounded::<u32, 16>::new::<0xff00>();
+ /// let v_shifted: Bounded<u32, 8> = v.shr::<8, _>();
+ ///
+ /// assert_eq!(v_shifted.get(), 0xff);
+ /// ```
+ pub fn shr<const SHIFT: u32, const RES: u32>(self) -> Bounded<T, RES> {
+     const { assert!(RES + SHIFT >= N) }
+
+     // SAFETY: We shift the value right by `SHIFT`, reducing the number of bits needed to
+     // represent the shifted value by as much, and just asserted that `RES >= N - SHIFT`.
+     unsafe { Bounded::__new(self.0 >> SHIFT) }
+ }
+
+ /// Left-shifts `self` by `SHIFT` and returns the result as a `Bounded<_, RES>`, where `RES >=
+ /// N + SHIFT`.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use kernel::num::Bounded;
+ ///
+ /// let v = Bounded::<u32, 8>::new::<0xff>();
+ /// let v_shifted: Bounded<u32, 16> = v.shl::<8, _>();
+ ///
+ /// assert_eq!(v_shifted.get(), 0xff00);
+ /// ```
+ pub fn shl<const SHIFT: u32, const RES: u32>(self) -> Bounded<T, RES> {
+     const { assert!(RES >= N + SHIFT) }
+
+     // SAFETY: We shift the value left by `SHIFT`, augmenting the number of bits needed to
+     // represent the shifted value by as much, and just asserted that `RES >= N + SHIFT`.
+     unsafe { Bounded::__new(self.0 << SHIFT) }
  }
}
···
      // SAFETY: A boolean can be represented using a single bit, and thus fits within any
      // integer type for any `N` > 0.
      unsafe { Self::__new(T::from(value)) }
+     }
+ }
+
+ impl<T> Bounded<T, 1>
+ where
+     T: Integer + Zeroable,
+ {
+     /// Converts this [`Bounded`] into a [`bool`].
+     ///
+     /// This is a shorter way of writing `bool::from(self)`.
+     ///
+     /// # Examples
+     ///
+     /// ```
+     /// use kernel::num::Bounded;
+     ///
+     /// assert_eq!(Bounded::<u8, 1>::new::<0>().into_bool(), false);
+     /// assert_eq!(Bounded::<u8, 1>::new::<1>().into_bool(), true);
+     /// ```
+     pub fn into_bool(self) -> bool {
+         self.into()
      }
  }
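The new `shr`/`shl` methods shrink or widen the bit bound alongside the value, with the `RES`/`SHIFT`/`N` relation checked in an inline `const` block so an impossible bound fails at compile time. A toy standalone model of the same idea (this is not the kernel `Bounded` type; `__new` invariants are replaced by a runtime check):

```rust
// Toy model of `Bounded<T, N>`: a u32 known to fit within `N` bits.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Bounded<const N: u32>(u32);

impl<const N: u32> Bounded<N> {
    fn new(v: u32) -> Self {
        // Runtime stand-in for the kernel type's compile-time guarantee.
        assert!(N >= 32 || v < (1u32 << N));
        Self(v)
    }

    fn get(self) -> u32 {
        self.0
    }

    // Mirrors `Bounded::shr`: the result bound must satisfy `RES + SHIFT >= N`,
    // checked at compile time via an inline const block.
    fn shr<const SHIFT: u32, const RES: u32>(self) -> Bounded<RES> {
        const { assert!(RES + SHIFT >= N) }
        Bounded(self.0 >> SHIFT)
    }

    // Mirrors `Bounded::shl`: the result bound must satisfy `RES >= N + SHIFT`.
    fn shl<const SHIFT: u32, const RES: u32>(self) -> Bounded<RES> {
        const { assert!(RES >= N + SHIFT) }
        Bounded(self.0 << SHIFT)
    }
}

fn main() {
    let v: Bounded<16> = Bounded::new(0xff00);

    // 16-bit value shifted right by 8 fits in 8 bits.
    let narrow: Bounded<8> = v.shr::<8, 8>();
    assert_eq!(narrow.get(), 0xff);

    // Shifting back left requires widening the bound again.
    let wide: Bounded<16> = narrow.shl::<8, 16>();
    assert_eq!(wide.get(), 0xff00);
    println!("ok");
}
```

A call like `v.shr::<8, 4>()` would be rejected during monomorphization, which is exactly the property the kernel type relies on to make `register!` field extraction infallible.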
+30 -69
rust/kernel/pci/io.rs
···
      device,
      devres::Devres,
      io::{
-         io_define_read,
-         io_define_write,
          Io,
          IoCapable,
          IoKnownSize,
···
      _marker: PhantomData<S>,
  }

- /// Internal helper macro used to invoke C PCI configuration space read functions.
- ///
- /// This macro is intended to be used by higher-level PCI configuration space access macros
- /// (io_define_read) and provides a unified expansion for infallible vs. fallible read semantics. It
- /// emits a direct call into the corresponding C helper and performs the required cast to the Rust
- /// return type.
- ///
- /// # Parameters
- ///
- /// * `$c_fn` – The C function performing the PCI configuration space read.
- /// * `$self` – The I/O backend object.
- /// * `$ty` – The type of the value to read.
- /// * `$addr` – The PCI configuration space offset to read.
- ///
- /// This macro does not perform any validation; all invariants must be upheld by the higher-level
- /// abstraction invoking it.
- macro_rules! call_config_read {
-     (infallible, $c_fn:ident, $self:ident, $ty:ty, $addr:expr) => {{
-         let mut val: $ty = 0;
-         // SAFETY: By the type invariant `$self.pdev` is a valid address.
-         // CAST: The offset is cast to `i32` because the C functions expect a 32-bit signed offset
-         // parameter. PCI configuration space size is at most 4096 bytes, so the value always fits
-         // within `i32` without truncation or sign change.
-         // Return value from C function is ignored in infallible accessors.
-         let _ret = unsafe { bindings::$c_fn($self.pdev.as_raw(), $addr as i32, &mut val) };
-         val
-     }};
- }
-
- /// Internal helper macro used to invoke C PCI configuration space write functions.
- ///
- /// This macro is intended to be used by higher-level PCI configuration space access macros
- /// (io_define_write) and provides a unified expansion for infallible vs. fallible write semantics.
- /// It emits a direct call into the corresponding C helper and performs the required cast to the
- /// Rust return type.
- ///
- /// # Parameters
- ///
- /// * `$c_fn` – The C function performing the PCI configuration space write.
- /// * `$self` – The I/O backend object.
- /// * `$ty` – The type of the written value.
- /// * `$addr` – The configuration space offset to write.
- /// * `$value` – The value to write.
- ///
- /// This macro does not perform any validation; all invariants must be upheld by the higher-level
- /// abstraction invoking it.
- macro_rules! call_config_write {
-     (infallible, $c_fn:ident, $self:ident, $ty:ty, $addr:expr, $value:expr) => {
-         // SAFETY: By the type invariant `$self.pdev` is a valid address.
-         // CAST: The offset is cast to `i32` because the C functions expect a 32-bit signed offset
-         // parameter. PCI configuration space size is at most 4096 bytes, so the value always fits
-         // within `i32` without truncation or sign change.
-         // Return value from C function is ignored in infallible accessors.
-         let _ret = unsafe { bindings::$c_fn($self.pdev.as_raw(), $addr as i32, $value) };
+ /// Implements [`IoCapable`] on [`ConfigSpace`] for `$ty` using `$read_fn` and `$write_fn`.
+ macro_rules! impl_config_space_io_capable {
+     ($ty:ty, $read_fn:ident, $write_fn:ident) => {
+         impl<'a, S: ConfigSpaceKind> IoCapable<$ty> for ConfigSpace<'a, S> {
+             unsafe fn io_read(&self, address: usize) -> $ty {
+                 let mut val: $ty = 0;
+
+                 // Return value from C function is ignored in infallible accessors.
+                 let _ret =
+                     // SAFETY: By the type invariant `self.pdev` is a valid address.
+                     // CAST: The offset is cast to `i32` because the C functions expect a 32-bit
+                     // signed offset parameter. PCI configuration space size is at most 4096 bytes,
+                     // so the value always fits within `i32` without truncation or sign change.
+                     unsafe { bindings::$read_fn(self.pdev.as_raw(), address as i32, &mut val) };
+
+                 val
+             }
+
+             unsafe fn io_write(&self, value: $ty, address: usize) {
+                 // Return value from C function is ignored in infallible accessors.
+                 let _ret =
+                     // SAFETY: By the type invariant `self.pdev` is a valid address.
+                     // CAST: The offset is cast to `i32` because the C functions expect a 32-bit
+                     // signed offset parameter. PCI configuration space size is at most 4096 bytes,
+                     // so the value always fits within `i32` without truncation or sign change.
+                     unsafe { bindings::$write_fn(self.pdev.as_raw(), address as i32, value) };
+             }
+         }
      };
  }

  // PCI configuration space supports 8, 16, and 32-bit accesses.
- impl<'a, S: ConfigSpaceKind> IoCapable<u8> for ConfigSpace<'a, S> {}
- impl<'a, S: ConfigSpaceKind> IoCapable<u16> for ConfigSpace<'a, S> {}
- impl<'a, S: ConfigSpaceKind> IoCapable<u32> for ConfigSpace<'a, S> {}
+ impl_config_space_io_capable!(u8, pci_read_config_byte, pci_write_config_byte);
+ impl_config_space_io_capable!(u16, pci_read_config_word, pci_write_config_word);
+ impl_config_space_io_capable!(u32, pci_read_config_dword, pci_write_config_dword);

  impl<'a, S: ConfigSpaceKind> Io for ConfigSpace<'a, S> {
      /// Returns the base address of the I/O region. It is always 0 for configuration space.
···
      fn maxsize(&self) -> usize {
          self.pdev.cfg_size().into_raw()
      }
-
-     // PCI configuration space does not support fallible operations.
-     // The default implementations from the Io trait are not used.
-
-     io_define_read!(infallible, read8, call_config_read(pci_read_config_byte) -> u8);
-     io_define_read!(infallible, read16, call_config_read(pci_read_config_word) -> u16);
-     io_define_read!(infallible, read32, call_config_read(pci_read_config_dword) -> u32);
-
-     io_define_write!(infallible, write8, call_config_write(pci_write_config_byte) <- u8);
-     io_define_write!(infallible, write16, call_config_write(pci_write_config_word) <- u16);
-     io_define_write!(infallible, write32, call_config_write(pci_write_config_dword) <- u32);
  }

  impl<'a, S: ConfigSpaceKind> IoKnownSize for ConfigSpace<'a, S> {
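The refactor above collapses separate per-method read/write macros into a single macro that stamps out one `IoCapable<$ty>` impl per access width. The shape of that pattern can be shown standalone; the trait and the in-memory backend below are illustrative stand-ins, not the kernel's `IoCapable`:

```rust
// Illustrative trait mirroring the shape of the kernel's `IoCapable<T>`
// (safe here, since the backend is plain memory rather than MMIO).
trait IoCapable<T> {
    fn io_read(&self, address: usize) -> T;
    fn io_write(&mut self, value: T, address: usize);
}

// Stand-in for a PCI config space: a small little-endian byte buffer.
struct FakeConfigSpace {
    bytes: [u8; 16],
}

// One macro generates the whole impl per access width, like
// `impl_config_space_io_capable!` does in the hunk above.
macro_rules! impl_io_capable {
    ($ty:ty) => {
        impl IoCapable<$ty> for FakeConfigSpace {
            fn io_read(&self, address: usize) -> $ty {
                let size = core::mem::size_of::<$ty>();
                <$ty>::from_le_bytes(self.bytes[address..address + size].try_into().unwrap())
            }

            fn io_write(&mut self, value: $ty, address: usize) {
                let size = core::mem::size_of::<$ty>();
                self.bytes[address..address + size].copy_from_slice(&value.to_le_bytes());
            }
        }
    };
}

// One invocation per supported access width.
impl_io_capable!(u8);
impl_io_capable!(u16);
impl_io_capable!(u32);

fn main() {
    let mut cfg = FakeConfigSpace { bytes: [0; 16] };

    IoCapable::<u16>::io_write(&mut cfg, 0x1234, 0);
    assert_eq!(IoCapable::<u16>::io_read(&cfg, 0), 0x1234);
    // Low byte first: the buffer is little-endian.
    assert_eq!(IoCapable::<u8>::io_read(&cfg, 0), 0x34);
    println!("ok");
}
```

Keeping the read and write halves in one macro rule guarantees each width gains both accessors together, which is why the hunk can delete the freestanding `io_define_read!`/`io_define_write!` overrides entirely.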
+68 -22
samples/rust/rust_driver_pci.rs
···
 //! To make this driver probe, QEMU must be run with `-device pci-testdev`.
 
 use kernel::{
-    device::Bound,
-    device::Core,
+    device::{
+        Bound,
+        Core, //
+    },
     devres::Devres,
-    io::Io,
+    io::{
+        register,
+        register::Array,
+        Io, //
+    },
+    num::Bounded,
     pci,
     prelude::*,
     sync::aref::ARef, //
 };
 
-struct Regs;
+mod regs {
+    use super::*;
 
-impl Regs {
-    const TEST: usize = 0x0;
-    const OFFSET: usize = 0x4;
-    const DATA: usize = 0x8;
-    const COUNT: usize = 0xC;
-    const END: usize = 0x10;
+    register! {
+        pub(super) TEST(u8) @ 0x0 {
+            7:0 index => TestIndex;
+        }
+
+        pub(super) OFFSET(u32) @ 0x4 {
+            31:0 offset;
+        }
+
+        pub(super) DATA(u8) @ 0x8 {
+            7:0 data;
+        }
+
+        pub(super) COUNT(u32) @ 0xC {
+            31:0 count;
+        }
+    }
+
+    pub(super) const END: usize = 0x10;
 }
 
-type Bar0 = pci::Bar<{ Regs::END }>;
+type Bar0 = pci::Bar<{ regs::END }>;
 
 #[derive(Copy, Clone, Debug)]
 struct TestIndex(u8);
+
+impl From<Bounded<u8, 8>> for TestIndex {
+    fn from(value: Bounded<u8, 8>) -> Self {
+        Self(value.into())
+    }
+}
+
+impl From<TestIndex> for Bounded<u8, 8> {
+    fn from(value: TestIndex) -> Self {
+        value.0.into()
+    }
+}
 
 impl TestIndex {
     const NO_EVENTFD: Self = Self(0);
···
 impl SampleDriver {
     fn testdev(index: &TestIndex, bar: &Bar0) -> Result<u32> {
         // Select the test.
-        bar.write8(index.0, Regs::TEST);
+        bar.write_reg(regs::TEST::zeroed().with_index(*index));
 
-        let offset = bar.read32(Regs::OFFSET) as usize;
-        let data = bar.read8(Regs::DATA);
+        let offset = bar.read(regs::OFFSET).into_raw() as usize;
+        let data = bar.read(regs::DATA).into();
 
         // Write `data` to `offset` to increase `count` by one.
         //
         // Note that we need `try_write8`, since `offset` can't be checked at compile-time.
         bar.try_write8(data, offset)?;
 
-        Ok(bar.read32(Regs::COUNT))
+        Ok(bar.read(regs::COUNT).into())
     }
 
     fn config_space(pdev: &pci::Device<Bound>) {
         let config = pdev.config_space();
 
-        // TODO: use the register!() macro for defining PCI configuration space registers once it
-        // has been moved out of nova-core.
+        // Some PCI configuration space registers.
+        register! {
+            VENDOR_ID(u16) @ 0x0 {
+                15:0 vendor_id;
+            }
+
+            REVISION_ID(u8) @ 0x8 {
+                7:0 revision_id;
+            }
+
+            BAR(u32)[6] @ 0x10 {
+                31:0 value;
+            }
+        }
+
         dev_info!(
             pdev,
             "pci-testdev config space read8 rev ID: {:x}\n",
-            config.read8(0x8)
+            config.read(REVISION_ID).revision_id()
         );
 
         dev_info!(
             pdev,
             "pci-testdev config space read16 vendor ID: {:x}\n",
-            config.read16(0)
+            config.read(VENDOR_ID).vendor_id()
         );
 
         dev_info!(
             pdev,
             "pci-testdev config space read32 BAR 0: {:x}\n",
-            config.read32(0x10)
+            config.read(BAR::at(0)).value()
         );
     }
 }
···
         pdev.set_master();
 
         Ok(try_pin_init!(Self {
-            bar <- pdev.iomap_region_sized::<{ Regs::END }>(0, c"rust_driver_pci"),
+            bar <- pdev.iomap_region_sized::<{ regs::END }>(0, c"rust_driver_pci"),
             index: *info,
             _: {
                 let bar = bar.access(pdev.as_ref())?;
···
     fn unbind(pdev: &pci::Device<Core>, this: Pin<&Self>) {
         if let Ok(bar) = this.bar.access(pdev.as_ref()) {
             // Reset pci-testdev by writing a new test index.
-            bar.write8(this.index.0, Regs::TEST);
+            bar.write_reg(regs::TEST::zeroed().with_index(this.index));
         }
     }
 }
+2 -1
scripts/Makefile.build
···
 # The features in this list are the ones allowed for non-`rust/` code.
 #
 # - Stable since Rust 1.87.0: `feature(asm_goto)`.
+# - Stable since Rust 1.89.0: `feature(generic_arg_infer)`.
 # - Expected to become stable: `feature(arbitrary_self_types)`.
 # - To be determined: `feature(used_with_arg)`.
 #
 # Please see https://github.com/Rust-for-Linux/linux/issues/2 for details on
 # the unstable features in use.
-rust_allowed_features := arbitrary_self_types,asm_goto,used_with_arg
+rust_allowed_features := arbitrary_self_types,asm_goto,generic_arg_infer,used_with_arg
 
 # `--out-dir` is required to avoid temporaries being created by `rustc` in the
 # current working directory, which may be not accessible in the out-of-tree
+112
tools/testing/selftests/cgroup/test_memcontrol.c
···
 #include <sys/stat.h>
 #include <sys/types.h>
 #include <unistd.h>
+#include <sys/inotify.h>
 #include <sys/socket.h>
 #include <sys/wait.h>
 #include <arpa/inet.h>
···
 	return ret;
 }
 
+static int read_event(int inotify_fd, int expected_event, int expected_wd)
+{
+	struct inotify_event event;
+	ssize_t len = 0;
+
+	len = read(inotify_fd, &event, sizeof(event));
+	if (len < (ssize_t)sizeof(event))
+		return -1;
+
+	if (event.mask != expected_event || event.wd != expected_wd) {
+		fprintf(stderr,
+			"event does not match expected values: mask %d (expected %d) wd %d (expected %d)\n",
+			event.mask, expected_event, event.wd, expected_wd);
+		return -1;
+	}
+
+	return 0;
+}
+
+static int test_memcg_inotify_delete_file(const char *root)
+{
+	int ret = KSFT_FAIL;
+	char *memcg = NULL;
+	int fd, wd;
+
+	memcg = cg_name(root, "memcg_test_0");
+
+	if (!memcg)
+		goto cleanup;
+
+	if (cg_create(memcg))
+		goto cleanup;
+
+	fd = inotify_init1(0);
+	if (fd == -1)
+		goto cleanup;
+
+	wd = inotify_add_watch(fd, cg_control(memcg, "memory.events"), IN_DELETE_SELF);
+	if (wd == -1)
+		goto cleanup;
+
+	if (cg_destroy(memcg))
+		goto cleanup;
+	free(memcg);
+	memcg = NULL;
+
+	if (read_event(fd, IN_DELETE_SELF, wd))
+		goto cleanup;
+
+	if (read_event(fd, IN_IGNORED, wd))
+		goto cleanup;
+
+	ret = KSFT_PASS;
+
+cleanup:
+	if (fd >= 0)
+		close(fd);
+	if (memcg)
+		cg_destroy(memcg);
+	free(memcg);
+
+	return ret;
+}
+
+static int test_memcg_inotify_delete_dir(const char *root)
+{
+	int ret = KSFT_FAIL;
+	char *memcg = NULL;
+	int fd, wd;
+
+	memcg = cg_name(root, "memcg_test_0");
+
+	if (!memcg)
+		goto cleanup;
+
+	if (cg_create(memcg))
+		goto cleanup;
+
+	fd = inotify_init1(0);
+	if (fd == -1)
+		goto cleanup;
+
+	wd = inotify_add_watch(fd, memcg, IN_DELETE_SELF);
+	if (wd == -1)
+		goto cleanup;
+
+	if (cg_destroy(memcg))
+		goto cleanup;
+	free(memcg);
+	memcg = NULL;
+
+	if (read_event(fd, IN_DELETE_SELF, wd))
+		goto cleanup;
+
+	if (read_event(fd, IN_IGNORED, wd))
+		goto cleanup;
+
+	ret = KSFT_PASS;
+
+cleanup:
+	if (fd >= 0)
+		close(fd);
+	if (memcg)
+		cg_destroy(memcg);
+	free(memcg);
+
+	return ret;
+}
+
 #define T(x) { x, #x }
 struct memcg_test {
 	int (*fn)(const char *root);
···
 	T(test_memcg_oom_group_leaf_events),
 	T(test_memcg_oom_group_parent_events),
 	T(test_memcg_oom_group_score_events),
+	T(test_memcg_inotify_delete_file),
+	T(test_memcg_inotify_delete_dir),
 };
 #undef T
 