Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'acpi-7.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull ACPI support updates from Rafael Wysocki:
"These include an update of the CMOS RTC driver and the related ACPI
and x86 code that, among other things, switches it over to using the
platform device interface for device binding on x86 instead of the PNP
device driver interface (which allows the code in question to be
simplified quite a bit), a major update of the ACPI Time and Alarm
Device (TAD) driver adding an RTC class device interface to it, and
updates of core ACPI drivers that remove some unnecessary and not
really useful code from them.

Apart from that, two drivers are converted to using the platform
driver interface for device binding instead of the ACPI driver one,
which is slated for removal, support for the Performance Limited
register is added to the ACPI CPPC library and there are some
janitorial updates of it and the related cpufreq CPPC driver, the ACPI
processor driver is fixed and cleaned up, and an NVIDIA vendor CPER
record handler is added to the APEI GHES code.

Also, the interface for obtaining a CPU UID from ACPI is consolidated
across architectures and used for fixing a problem with the PCI TPH
Steering Tag on ARM64, there are two updates related to ACPICA, a
minor ACPI OS Services Layer (OSL) update, and a few assorted updates
related to ACPI tables parsing.

Specifics:

- Update maintainers information regarding ACPICA (Rafael Wysocki)

- Replace strncpy() with strscpy_pad() in acpi_ut_safe_strncpy()
(Kees Cook)

- Trigger an ordered system power off after encountering a fatal
error operator in AML (Armin Wolf)

- Enable ACPI FPDT parsing on LoongArch (Xi Ruoyao)

- Remove the temporary stop-gap acpi_pptt_cache_v1_full structure
from the ACPI PPTT parser (Ben Horgan)

- Add support for exposing ACPI FPDT subtables FBPT and S3PT (Nate
DeSimone)

- Address multiple assorted issues and clean up the code in the ACPI
processor idle driver (Huisong Li)

- Replace strlcat() in the ACPI processor idle driver with a better
alternative (Andy Shevchenko)

- Rearrange and clean up acpi_processor_errata_piix4() (Rafael
Wysocki)

- Move reference performance to capabilities and fix an uninitialized
variable in the ACPI CPPC library (Pengjie Zhang)

- Add support for the Performance Limited Register to the ACPI CPPC
library (Sumit Gupta)

- Add cppc_get_perf() API to read performance controls, extend
cppc_set_epp_perf() for FFH/SystemMemory, and make the ACPI CPPC
library warn on missing mandatory DESIRED_PERF register (Sumit
Gupta)

- Modify the cpufreq CPPC driver to update MIN_PERF/MAX_PERF in
target callbacks to allow it to control performance bounds via
standard scaling_min_freq and scaling_max_freq sysfs attributes and
add sysfs documentation for the Performance Limited Register to it
(Sumit Gupta)

- Add ACPI support to the platform device interface in the CMOS RTC
driver, make the ACPI core device enumeration code create a
platform device for the CMOS RTC, and drop CMOS RTC PNP device
support (Rafael Wysocki)

- Consolidate the x86-specific CMOS RTC handling with the ACPI TAD
driver and clean up the CMOS RTC ACPI address space handler (Rafael
Wysocki)

- Enable ACPI alarm in the CMOS RTC driver if advertised in ACPI FADT
and allow that driver to work without a dedicated IRQ if the ACPI
alarm is used (Rafael Wysocki)

- Clean up the ACPI TAD driver in various ways and add an RTC class
device interface, including both the RTC setting/reading and alarm
timer support, to it (Rafael Wysocki)

- Clean up the ACPI AC and ACPI PAD (processor aggregator device)
drivers (Rafael Wysocki)

- Rework checking for duplicate video bus devices and consolidate
pnp.bus_id workarounds handling in the ACPI video bus driver
(Rafael Wysocki)

- Update the ACPI core device drivers to stop setting
acpi_device_name() unnecessarily (Rafael Wysocki)

- Rearrange code using acpi_device_class() in the ACPI core device
drivers and update them to stop setting acpi_device_class()
unnecessarily (Rafael Wysocki)

- Define ACPI_AC_CLASS in one place (Rafael Wysocki)

- Convert the ni903x_wdt watchdog driver and the xen ACPI PAD driver
to bind to platform devices instead of ACPI devices (Rafael
Wysocki)

- Add devm_ghes_register_vendor_record_notifier(), use it in the PCI
hisi driver, and add an NVIDIA vendor CPER record handler (Kai-Heng
Feng)

- Consolidate the interface for obtaining a CPU UID from ACPI across
architectures and use it to address an incorrect PCI TPH Steering Tag
on ARM64 resulting from the invalid assumption that the ACPI
Processor UID would always be the same as the corresponding logical
CPU ID in Linux (Chengwen Feng)"

* tag 'acpi-7.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (73 commits)
ACPICA: Update maintainers information
watchdog: ni903x_wdt: Convert to a platform driver
ACPI: PAD: xen: Convert to a platform driver
ACPI: processor: idle: Reset cpuidle on C-state list changes
cpuidle: Extract and export no-lock variants of cpuidle_unregister_device()
PCI/TPH: Pass ACPI Processor UID to Cache Locality _DSM
ACPI: PPTT: Use acpi_get_cpu_uid() and remove get_acpi_id_for_cpu()
perf: arm_cspmu: Switch to acpi_get_cpu_uid() from get_acpi_id_for_cpu()
ACPI: Centralize acpi_get_cpu_uid() declaration in include/linux/acpi.h
x86/acpi: Add acpi_get_cpu_uid() for unified ACPI CPU UID retrieval
RISC-V: ACPI: Add acpi_get_cpu_uid() for unified ACPI CPU UID retrieval
LoongArch: Add acpi_get_cpu_uid() for unified ACPI CPU UID retrieval
arm64: acpi: Add acpi_get_cpu_uid() for unified ACPI CPU UID retrieval
ACPI: APEI: GHES: Add NVIDIA vendor CPER record handler
PCI: hisi: Use devm_ghes_register_vendor_record_notifier()
ACPI: APEI: GHES: Add devm_ghes_register_vendor_record_notifier()
ACPI: tables: Enable FPDT on LoongArch
ACPI: processor: idle: Fix NULL pointer dereference in hotplug path
ACPI: processor: idle: Reset power_setup_done flag on initialization failure
ACPI: TAD: Add alarm support to the RTC class device interface
...

+1458 -774
+18
Documentation/ABI/testing/sysfs-devices-system-cpu
···
 
 		This file is only present if the cppc-cpufreq driver is in use.
 
+What:		/sys/devices/system/cpu/cpuX/cpufreq/perf_limited
+Date:		February 2026
+Contact:	linux-pm@vger.kernel.org
+Description:	Performance Limited
+
+		Read to check if platform throttling (thermal/power/current
+		limits) caused delivered performance to fall below the
+		requested level. A non-zero value indicates throttling occurred.
+
+		Write the bitmask of bits to clear:
+
+		- 0x1 = clear bit 0 (desired performance excursion)
+		- 0x2 = clear bit 1 (minimum performance excursion)
+		- 0x3 = clear both bits
+
+		The platform sets these bits; OSPM can only clear them.
+
+		This file is only present if the cppc-cpufreq driver is in use.
 
 What:		/sys/devices/system/cpu/cpu*/cache/index3/cache_disable_{0,1}
 Date:		August 2008
+6
Documentation/ABI/testing/sysfs-firmware-acpi
···
 		platform runtime firmware S3 resume, just prior to
 		handoff to the OS waking vector. In nanoseconds.
 
+		FBPT: The raw binary contents of the Firmware Basic Boot
+		      Performance Table (FBPT) subtable.
+
+		S3PT: The raw binary contents of the S3 Performance Table
+		      (S3PT) subtable.
+
 What:		/sys/firmware/acpi/bgrt/
 Date:		January 2012
 Contact:	Matthew Garrett <mjg@redhat.com>
+2 -2
Documentation/PCI/tph.rst
···
 CPU, use the following function::
 
   int pcie_tph_get_cpu_st(struct pci_dev *pdev, enum tph_mem_type type,
-                          unsigned int cpu_uid, u16 *tag);
+                          unsigned int cpu, u16 *tag);
 
 The `type` argument is used to specify the memory type, either volatile
-or persistent, of the target memory. The `cpu_uid` argument specifies the
+or persistent, of the target memory. The `cpu` argument specifies the
 CPU where the memory is associated to.
 
 After the ST value is retrieved, the device driver can use the following
+8
Documentation/admin-guide/kernel-parameters.txt
···
 			unusable. The "log_buf_len" parameter may be useful
 			if you need to capture more output.
 
+	acpi.poweroff_on_fatal=	[ACPI]
+			{0 | 1}
+			Causes the system to poweroff when the ACPI bytecode signals
+			a fatal error. The default value of this setting is 1.
+			Overriding this value should only be done for diagnosing
+			ACPI firmware problems, as the system might behave erratically
+			after having encountered a fatal ACPI error.
+
 	acpi_enforce_resources=	[ACPI]
 			{ strict | lax | no }
 			Check for resource conflicts between native drivers
+7 -1
MAINTAINERS
···
 
 ACPI COMPONENT ARCHITECTURE (ACPICA)
 M:	"Rafael J. Wysocki" <rafael@kernel.org>
-M:	Robert Moore <robert.moore@intel.com>
+M:	Saket Dumbre <saket.dumbre@intel.com>
 L:	linux-acpi@vger.kernel.org
 L:	acpica-devel@lists.linux.dev
 S:	Supported
···
 S:	Maintained
 F:	drivers/video/fbdev/nvidia/
 F:	drivers/video/fbdev/riva/
+
+NVIDIA GHES VENDOR CPER RECORD HANDLER
+M:	Kai-Heng Feng <kaihengf@nvidia.com>
+L:	linux-acpi@vger.kernel.org
+S:	Maintained
+F:	drivers/acpi/apei/nvidia-ghes.c
 
 NVIDIA VRS RTC DRIVER
 M:	Shubhi Garg <shgarg@nvidia.com>
+1 -16
arch/arm64/include/asm/acpi.h
···
 }
 
 struct acpi_madt_generic_interrupt *acpi_cpu_get_madt_gicc(int cpu);
-static inline u32 get_acpi_id_for_cpu(unsigned int cpu)
-{
-	return acpi_cpu_get_madt_gicc(cpu)->uid;
-}
-
-static inline int get_cpu_for_acpi_id(u32 uid)
-{
-	int cpu;
-
-	for (cpu = 0; cpu < nr_cpu_ids; cpu++)
-		if (acpi_cpu_get_madt_gicc(cpu) &&
-		    uid == get_acpi_id_for_cpu(cpu))
-			return cpu;
-
-	return -EINVAL;
-}
+int get_cpu_for_acpi_id(u32 uid);
 
 static inline void arch_fix_phys_package_id(int num, u32 slot) { }
 void __init acpi_init_cpus(void);
+30
arch/arm64/kernel/acpi.c
···
 }
 EXPORT_SYMBOL(acpi_unmap_cpu);
 #endif /* CONFIG_ACPI_HOTPLUG_CPU */
+
+int acpi_get_cpu_uid(unsigned int cpu, u32 *uid)
+{
+	struct acpi_madt_generic_interrupt *gicc;
+
+	if (cpu >= nr_cpu_ids)
+		return -EINVAL;
+
+	gicc = acpi_cpu_get_madt_gicc(cpu);
+	if (!gicc)
+		return -ENODEV;
+
+	*uid = gicc->uid;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(acpi_get_cpu_uid);
+
+int get_cpu_for_acpi_id(u32 uid)
+{
+	u32 cpu_uid;
+	int ret;
+
+	for (int cpu = 0; cpu < nr_cpu_ids; cpu++) {
+		ret = acpi_get_cpu_uid(cpu, &cpu_uid);
+		if (ret == 0 && uid == cpu_uid)
+			return cpu;
+	}
+
+	return -EINVAL;
+}
-5
arch/loongarch/include/asm/acpi.h
···
 
 extern int __init parse_acpi_topology(void);
 
-static inline u32 get_acpi_id_for_cpu(unsigned int cpu)
-{
-	return acpi_core_pic[cpu_logical_map(cpu)].processor_id;
-}
-
 #endif /* !CONFIG_ACPI */
 
 #define ACPI_TABLE_UPGRADE_MAX_PHYS ARCH_LOW_ADDRESS_LIMIT
+9
arch/loongarch/kernel/acpi.c
···
 EXPORT_SYMBOL(acpi_unmap_cpu);
 
 #endif /* CONFIG_ACPI_HOTPLUG_CPU */
+
+int acpi_get_cpu_uid(unsigned int cpu, u32 *uid)
+{
+	if (cpu >= nr_cpu_ids)
+		return -EINVAL;
+	*uid = acpi_core_pic[cpu_logical_map(cpu)].processor_id;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(acpi_get_cpu_uid);
-4
arch/riscv/include/asm/acpi.h
···
 
 void acpi_init_rintc_map(void);
 struct acpi_madt_rintc *acpi_cpu_get_madt_rintc(int cpu);
-static inline u32 get_acpi_id_for_cpu(int cpu)
-{
-	return acpi_cpu_get_madt_rintc(cpu)->uid;
-}
 
 int acpi_get_riscv_isa(struct acpi_table_header *table,
 		       unsigned int cpu, const char **isa);
+16
arch/riscv/kernel/acpi.c
···
 }
 
 #endif /* CONFIG_PCI */
+
+int acpi_get_cpu_uid(unsigned int cpu, u32 *uid)
+{
+	struct acpi_madt_rintc *rintc;
+
+	if (cpu >= nr_cpu_ids)
+		return -EINVAL;
+
+	rintc = acpi_cpu_get_madt_rintc(cpu);
+	if (!rintc)
+		return -ENODEV;
+
+	*uid = rintc->uid;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(acpi_get_cpu_uid);
+6 -3
arch/riscv/kernel/acpi_numa.c
···
 
 static inline int get_cpu_for_acpi_id(u32 uid)
 {
-	int cpu;
+	u32 cpu_uid;
+	int ret;
 
-	for (cpu = 0; cpu < nr_cpu_ids; cpu++)
-		if (uid == get_acpi_id_for_cpu(cpu))
+	for (int cpu = 0; cpu < nr_cpu_ids; cpu++) {
+		ret = acpi_get_cpu_uid(cpu, &cpu_uid);
+		if (ret == 0 && uid == cpu_uid)
 			return cpu;
+	}
 
 	return -EINVAL;
 }
-1
arch/x86/include/asm/cpu.h
···
 
 #ifndef CONFIG_SMP
 #define cpu_physical_id(cpu)	boot_cpu_physical_apicid
-#define cpu_acpi_id(cpu)	0
 #endif /* CONFIG_SMP */
 
 #ifdef CONFIG_HOTPLUG_CPU
-1
arch/x86/include/asm/smp.h
···
 __visible void smp_call_function_single_interrupt(struct pt_regs *r);
 
 #define cpu_physical_id(cpu)	per_cpu(x86_cpu_to_apicid, cpu)
-#define cpu_acpi_id(cpu)	per_cpu(x86_cpu_to_acpiid, cpu)
 
 /*
  * This function is needed by all SMP systems. It must _always_ be valid
+20
arch/x86/kernel/acpi/boot.c
···
 		x86_acpi_os_ioremap;
 EXPORT_SYMBOL_GPL(acpi_os_ioremap);
 #endif
+
+int acpi_get_cpu_uid(unsigned int cpu, u32 *uid)
+{
+	u32 acpi_id;
+
+	if (cpu >= nr_cpu_ids)
+		return -EINVAL;
+
+#ifdef CONFIG_SMP
+	acpi_id = per_cpu(x86_cpu_to_acpiid, cpu);
+	if (acpi_id == CPU_ACPIID_INVALID)
+		return -ENODEV;
+#else
+	acpi_id = 0;
+#endif
+
+	*uid = acpi_id;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(acpi_get_cpu_uid);
+4 -15
arch/x86/kernel/rtc.c
···
 /*
  * RTC related functions
  */
+#include <linux/acpi.h>
 #include <linux/platform_device.h>
 #include <linux/mc146818rtc.h>
 #include <linux/export.h>
-#include <linux/pnp.h>
 
 #include <asm/vsyscall.h>
 #include <asm/x86_init.h>
···
 
 static __init int add_rtc_cmos(void)
 {
-#ifdef CONFIG_PNP
-	static const char * const ids[] __initconst =
-	    { "PNP0b00", "PNP0b01", "PNP0b02", };
-	struct pnp_dev *dev;
-	int i;
+	if (cmos_rtc_platform_device_present)
+		return 0;
 
-	pnp_for_each_dev(dev) {
-		for (i = 0; i < ARRAY_SIZE(ids); i++) {
-			if (compare_pnp_id(dev->id, ids[i]) != 0)
-				return 0;
-		}
-	}
-#endif
 	if (!x86_platform.legacy.rtc)
 		return -ENODEV;
 
 	platform_device_register(&rtc_device);
-	dev_info(&rtc_device.dev,
-		 "registered platform RTC device (no PNP device found)\n");
+	dev_info(&rtc_device.dev, "registered fallback platform RTC device\n");
 
 	return 0;
 }
+3 -2
arch/x86/xen/enlighten_hvm.c
···
 
 static int xen_cpu_up_prepare_hvm(unsigned int cpu)
 {
+	u32 cpu_uid;
 	int rc = 0;
 
 	/*
···
 	 */
 	xen_uninit_lock_cpu(cpu);
 
-	if (cpu_acpi_id(cpu) != CPU_ACPIID_INVALID)
-		per_cpu(xen_vcpu_id, cpu) = cpu_acpi_id(cpu);
+	if (acpi_get_cpu_uid(cpu, &cpu_uid) == 0)
+		per_cpu(xen_vcpu_id, cpu) = cpu_uid;
 	else
 		per_cpu(xen_vcpu_id, cpu) = cpu;
 	xen_vcpu_setup(cpu);
+1 -1
drivers/acpi/Kconfig
···
 
 config ACPI_FPDT
 	bool "ACPI Firmware Performance Data Table (FPDT) support"
-	depends on X86_64 || ARM64
+	depends on X86_64 || ARM64 || LOONGARCH
 	help
 	  Enable support for the Firmware Performance Data Table (FPDT).
 	  This table provides information on the timing of the system
+9 -22
drivers/acpi/ac.c
···
 #include <linux/acpi.h>
 #include <acpi/battery.h>
 
-#define ACPI_AC_CLASS			"ac_adapter"
-#define ACPI_AC_DEVICE_NAME		"AC Adapter"
 #define ACPI_AC_FILE_STATE		"state"
 #define ACPI_AC_NOTIFY_STATUS		0x80
 #define ACPI_AC_STATUS_OFFLINE		0x00
···
 MODULE_DESCRIPTION("ACPI AC Adapter Driver");
 MODULE_LICENSE("GPL");
 
-static int acpi_ac_probe(struct platform_device *pdev);
-static void acpi_ac_remove(struct platform_device *pdev);
-
-static void acpi_ac_notify(acpi_handle handle, u32 event, void *data);
-
 static const struct acpi_device_id ac_device_ids[] = {
 	{"ACPI0003", 0},
 	{"", 0},
 };
 MODULE_DEVICE_TABLE(acpi, ac_device_ids);
-
-#ifdef CONFIG_PM_SLEEP
-static int acpi_ac_resume(struct device *dev);
-#endif
-static SIMPLE_DEV_PM_OPS(acpi_ac_pm, NULL, acpi_ac_resume);
 
 static int ac_sleep_before_get_state_ms;
 static int ac_only;
···
 			msleep(ac_sleep_before_get_state_ms);
 
 		acpi_ac_get_state(ac);
-		acpi_bus_generate_netlink_event(adev->pnp.device_class,
-						dev_name(&adev->dev), event,
-						(u32) ac->state);
-		acpi_notifier_call_chain(adev, event, (u32) ac->state);
+		acpi_bus_generate_netlink_event(ACPI_AC_CLASS,
+						dev_name(&adev->dev), event,
+						ac->state);
+		acpi_notifier_call_chain(ACPI_AC_CLASS, acpi_device_bid(adev),
+					 event, ac->state);
 		power_supply_changed(ac->charger);
 	}
 }
···
 		return -ENOMEM;
 
 	ac->device = adev;
-	strscpy(acpi_device_name(adev), ACPI_AC_DEVICE_NAME);
-	strscpy(acpi_device_class(adev), ACPI_AC_CLASS);
 
 	platform_set_drvdata(pdev, ac);
···
 		goto err_release_ac;
 	}
 
-	pr_info("%s [%s] (%s-line)\n", acpi_device_name(adev),
-		acpi_device_bid(adev), str_on_off(ac->state));
+	pr_info("AC Adapter [%s] (%s-line)\n", acpi_device_bid(adev),
+		str_on_off(ac->state));
 
 	ac->battery_nb.notifier_call = acpi_ac_battery_notify;
 	register_acpi_notifier(&ac->battery_nb);
···
 
 	return 0;
 }
-#else
-#define acpi_ac_resume NULL
 #endif
+
+static SIMPLE_DEV_PM_OPS(acpi_ac_pm, NULL, acpi_ac_resume);
 
 static void acpi_ac_remove(struct platform_device *pdev)
 {
+28
drivers/acpi/acpi_fpdt.c
···
 	.name = "boot",
 };
 
+static BIN_ATTR(FBPT, 0400, sysfs_bin_attr_simple_read, NULL, 0);
+static BIN_ATTR(S3PT, 0400, sysfs_bin_attr_simple_read, NULL, 0);
+
 static struct kobject *fpdt_kobj;
 
 #if defined CONFIG_X86 && defined CONFIG_PHYS_ADDR_T_64BIT
···
 			break;
 		}
 	}
+
+	if (subtable_type == SUBTABLE_FBPT) {
+		bin_attr_FBPT.private = subtable_header;
+		bin_attr_FBPT.size = length;
+		result = sysfs_create_bin_file(fpdt_kobj, &bin_attr_FBPT);
+		if (result)
+			pr_warn("Failed to create FBPT sysfs attribute.\n");
+	} else if (subtable_type == SUBTABLE_S3PT) {
+		bin_attr_S3PT.private = subtable_header;
+		bin_attr_S3PT.size = length;
+		result = sysfs_create_bin_file(fpdt_kobj, &bin_attr_S3PT);
+		if (result)
+			pr_warn("Failed to create S3PT sysfs attribute.\n");
+	}
+
 	return 0;
 
 err:
+	if (bin_attr_FBPT.private) {
+		sysfs_remove_bin_file(fpdt_kobj, &bin_attr_FBPT);
+		bin_attr_FBPT.private = NULL;
+	}
+
+	if (bin_attr_S3PT.private) {
+		sysfs_remove_bin_file(fpdt_kobj, &bin_attr_S3PT);
+		bin_attr_S3PT.private = NULL;
+	}
+
 	if (record_boot)
 		sysfs_remove_group(fpdt_kobj, &boot_attr_group);
 
-4
drivers/acpi/acpi_memhotplug.c
···
 
 #include "internal.h"
 
-#define ACPI_MEMORY_DEVICE_CLASS	"memory"
 #define ACPI_MEMORY_DEVICE_HID		"PNP0C80"
-#define ACPI_MEMORY_DEVICE_NAME		"Hotplug Mem Device"
 
 static const struct acpi_device_id memory_device_ids[] = {
 	{ACPI_MEMORY_DEVICE_HID, 0},
···
 	INIT_LIST_HEAD(&mem_device->res_list);
 	mem_device->device = device;
 	mem_device->mgid = -1;
-	sprintf(acpi_device_name(device), "%s", ACPI_MEMORY_DEVICE_NAME);
-	sprintf(acpi_device_class(device), "%s", ACPI_MEMORY_DEVICE_CLASS);
 	device->driver_data = mem_device;
 
 	/* Get the range from the _CRS */
+7 -21
drivers/acpi/acpi_pad.c
···
 #include <asm/mwait.h>
 #include <xen/xen.h>
 
-#define ACPI_PROCESSOR_AGGREGATOR_CLASS		"acpi_pad"
-#define ACPI_PROCESSOR_AGGREGATOR_DEVICE_NAME	"Processor Aggregator"
 #define ACPI_PROCESSOR_AGGREGATOR_NOTIFY	0x80
 
 #define ACPI_PROCESSOR_AGGREGATOR_STATUS_SUCCESS	0
···
 	mutex_unlock(&isolated_cpus_lock);
 }
 
-static void acpi_pad_notify(acpi_handle handle, u32 event,
-			    void *data)
+static void acpi_pad_notify(acpi_handle handle, u32 event, void *data)
 {
 	struct acpi_device *adev = data;
 
 	switch (event) {
 	case ACPI_PROCESSOR_AGGREGATOR_NOTIFY:
 		acpi_pad_handle_notify(handle);
-		acpi_bus_generate_netlink_event(adev->pnp.device_class,
-						dev_name(&adev->dev), event, 0);
+		acpi_bus_generate_netlink_event("acpi_pad",
+						dev_name(&adev->dev), event, 0);
 		break;
 	default:
 		pr_warn("Unsupported event [0x%x]\n", event);
···
 static int acpi_pad_probe(struct platform_device *pdev)
 {
 	struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
-	acpi_status status;
 
-	strscpy(acpi_device_name(adev), ACPI_PROCESSOR_AGGREGATOR_DEVICE_NAME);
-	strscpy(acpi_device_class(adev), ACPI_PROCESSOR_AGGREGATOR_CLASS);
-
-	status = acpi_install_notify_handler(adev->handle,
-		ACPI_DEVICE_NOTIFY, acpi_pad_notify, adev);
-
-	if (ACPI_FAILURE(status))
-		return -ENODEV;
-
-	return 0;
+	return acpi_dev_install_notify_handler(adev, ACPI_DEVICE_NOTIFY,
+					       acpi_pad_notify, adev);
 }
 
 static void acpi_pad_remove(struct platform_device *pdev)
 {
-	struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
-
 	mutex_lock(&isolated_cpus_lock);
 	acpi_pad_idle_cpus(0);
 	mutex_unlock(&isolated_cpus_lock);
 
-	acpi_remove_notify_handler(adev->handle,
-		ACPI_DEVICE_NOTIFY, acpi_pad_notify);
+	acpi_dev_remove_notify_handler(ACPI_COMPANION(&pdev->dev),
+				       ACPI_DEVICE_NOTIFY, acpi_pad_notify);
 }
 
 static const struct acpi_device_id pad_device_ids[] = {
+1 -21
drivers/acpi/acpi_pnp.c
···
 	{"PNP0401"},		/* ECP Printer Port */
 	/* apple-gmux */
 	{"APP000B"},
-	/* rtc_cmos */
-	{"PNP0b00"},
-	{"PNP0b01"},
-	{"PNP0b02"},
 	/* c6xdigio */
 	{"PNP0400"},		/* Standard LPT Printer Port */
 	{"PNP0401"},		/* ECP Printer Port */
···
 	.attach = acpi_pnp_attach,
 };
 
-/*
- * For CMOS RTC devices, the PNP ACPI scan handler does not work, because
- * there is a CMOS RTC ACPI scan handler installed already, so we need to
- * check those devices and enumerate them to the PNP bus directly.
- */
-static int is_cmos_rtc_device(struct acpi_device *adev)
-{
-	static const struct acpi_device_id ids[] = {
-		{ "PNP0B00" },
-		{ "PNP0B01" },
-		{ "PNP0B02" },
-		{""},
-	};
-	return !acpi_match_device_ids(adev, ids);
-}
-
 bool acpi_is_pnp_device(struct acpi_device *adev)
 {
-	return adev->handler == &acpi_pnp_handler || is_cmos_rtc_device(adev);
+	return adev->handler == &acpi_pnp_handler;
 }
 EXPORT_SYMBOL_GPL(acpi_is_pnp_device);
+13 -18
drivers/acpi/acpi_processor.c
···
 
 static int acpi_processor_errata_piix4(struct pci_dev *dev)
 {
-	u8 value1 = 0;
-	u8 value2 = 0;
-	struct pci_dev *ide_dev = NULL, *isa_dev = NULL;
-
-
 	if (!dev)
 		return -EINVAL;
 
···
 	 * each IDE controller's DMA status to make sure we catch all
 	 * DMA activity.
 	 */
-	ide_dev = pci_get_subsys(PCI_VENDOR_ID_INTEL,
+	dev = pci_get_subsys(PCI_VENDOR_ID_INTEL,
 			   PCI_DEVICE_ID_INTEL_82371AB,
 			   PCI_ANY_ID, PCI_ANY_ID, NULL);
-	if (ide_dev) {
-		errata.piix4.bmisx = pci_resource_start(ide_dev, 4);
+	if (dev) {
+		errata.piix4.bmisx = pci_resource_start(dev, 4);
 		if (errata.piix4.bmisx)
-			dev_dbg(&ide_dev->dev,
+			dev_dbg(&dev->dev,
 				"Bus master activity detection (BM-IDE) erratum enabled\n");
 
-		pci_dev_put(ide_dev);
+		pci_dev_put(dev);
 	}
 
 	/*
···
 	 * disable C3 support if this is enabled, as some legacy
 	 * devices won't operate well if fast DMA is disabled.
 	 */
-	isa_dev = pci_get_subsys(PCI_VENDOR_ID_INTEL,
+	dev = pci_get_subsys(PCI_VENDOR_ID_INTEL,
 			   PCI_DEVICE_ID_INTEL_82371AB_0,
 			   PCI_ANY_ID, PCI_ANY_ID, NULL);
-	if (isa_dev) {
-		pci_read_config_byte(isa_dev, 0x76, &value1);
-		pci_read_config_byte(isa_dev, 0x77, &value2);
+	if (dev) {
+		u8 value1 = 0, value2 = 0;
+
+		pci_read_config_byte(dev, 0x76, &value1);
+		pci_read_config_byte(dev, 0x77, &value2);
 		if ((value1 & 0x80) || (value2 & 0x80)) {
 			errata.piix4.fdma = 1;
-			dev_dbg(&isa_dev->dev,
+			dev_dbg(&dev->dev,
 				"Type-F DMA livelock erratum (C3 disabled)\n");
 		}
-		pci_dev_put(isa_dev);
+		pci_dev_put(dev);
 	}
 
 	break;
···
 	}
 
 	pr->handle = device->handle;
-	strscpy(acpi_device_name(device), ACPI_PROCESSOR_DEVICE_NAME);
-	strscpy(acpi_device_class(device), ACPI_PROCESSOR_CLASS);
 	device->driver_data = pr;
 
 	result = acpi_processor_get_info(device);
+345 -149
drivers/acpi/acpi_tad.c
··· 2 2 /* 3 3 * ACPI Time and Alarm (TAD) Device Driver 4 4 * 5 - * Copyright (C) 2018 Intel Corporation 5 + * Copyright (C) 2018 - 2026 Intel Corporation 6 6 * Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com> 7 7 * 8 - * This driver is based on Section 9.18 of the ACPI 6.2 specification revision. 9 - * 10 - * It only supports the system wakeup capabilities of the TAD. 8 + * This driver is based on ACPI 6.6, Section 9.17. 11 9 * 12 10 * Provided are sysfs attributes, available under the TAD platform device, 13 11 * allowing user space to manage the AC and DC wakeup timers of the TAD: ··· 16 18 * 17 19 * The wakeup events handling and power management of the TAD is expected to 18 20 * be taken care of by the ACPI PM domain attached to its platform device. 21 + * 22 + * If the TAD supports the get/set real time features, as indicated by the 23 + * capability mask returned by _GCP under the TAD object, additional sysfs 24 + * attributes are created allowing the real time to be set and read and an RTC 25 + * class device is registered under the TAD platform device. 19 26 */ 20 27 21 28 #include <linux/acpi.h> 22 29 #include <linux/kernel.h> 30 + #include <linux/ktime.h> 23 31 #include <linux/module.h> 24 32 #include <linux/platform_device.h> 25 33 #include <linux/pm_runtime.h> 34 + #include <linux/rtc.h> 26 35 #include <linux/suspend.h> 27 36 28 37 MODULE_DESCRIPTION("ACPI Time and Alarm (TAD) Device Driver"); 29 38 MODULE_LICENSE("GPL v2"); 30 39 MODULE_AUTHOR("Rafael J. Wysocki"); 31 40 32 - /* ACPI TAD capability flags (ACPI 6.2, Section 9.18.2) */ 41 + /* ACPI TAD capability flags (ACPI 6.6, Section 9.17.2) */ 33 42 #define ACPI_TAD_AC_WAKE BIT(0) 34 43 #define ACPI_TAD_DC_WAKE BIT(1) 35 44 #define ACPI_TAD_RT BIT(2) ··· 53 48 54 49 /* Special value for disabled timer or expired timer wake policy. 
*/ 55 50 #define ACPI_TAD_WAKE_DISABLED (~(u32)0) 51 + 52 + /* ACPI TAD RTC */ 53 + #define ACPI_TAD_TZ_UNSPEC 2047 54 + #define ACPI_TAD_TIME_ISDST 3 56 55 57 56 struct acpi_tad_driver_data { 58 57 u32 capabilities; ··· 76 67 u8 padding[3]; /* must be 0 */ 77 68 } __packed; 78 69 70 + static bool acpi_tad_rt_is_invalid(struct acpi_tad_rt *rt) 71 + { 72 + return rt->year < 1900 || rt->year > 9999 || 73 + rt->month < 1 || rt->month > 12 || 74 + rt->hour > 23 || rt->minute > 59 || rt->second > 59 || 75 + rt->tz < -1440 || 76 + (rt->tz > 1440 && rt->tz != ACPI_TAD_TZ_UNSPEC) || 77 + rt->daylight > 3; 78 + } 79 + 79 80 static int acpi_tad_set_real_time(struct device *dev, struct acpi_tad_rt *rt) 80 81 { 81 82 acpi_handle handle = ACPI_HANDLE(dev); ··· 99 80 unsigned long long retval; 100 81 acpi_status status; 101 82 102 - if (rt->year < 1900 || rt->year > 9999 || 103 - rt->month < 1 || rt->month > 12 || 104 - rt->hour > 23 || rt->minute > 59 || rt->second > 59 || 105 - rt->tz < -1440 || (rt->tz > 1440 && rt->tz != 2047) || 106 - rt->daylight > 3) 107 - return -ERANGE; 83 + if (acpi_tad_rt_is_invalid(rt)) 84 + return -EINVAL; 85 + 86 + rt->valid = 0; 87 + rt->msec = 0; 88 + memset(rt->padding, 0, 3); 108 89 109 90 args[0].buffer.pointer = (u8 *)rt; 110 91 args[0].buffer.length = sizeof(*rt); ··· 152 133 return ret; 153 134 } 154 135 155 - static int acpi_tad_get_real_time(struct device *dev, struct acpi_tad_rt *rt) 136 + static int __acpi_tad_get_real_time(struct device *dev, struct acpi_tad_rt *rt) 156 137 { 157 138 int ret; 158 - 159 - PM_RUNTIME_ACQUIRE(dev, pm); 160 - if (PM_RUNTIME_ACQUIRE_ERR(&pm)) 161 - return -ENXIO; 162 139 163 140 ret = acpi_tad_evaluate_grt(dev, rt); 164 141 if (ret) 165 142 return ret; 166 143 144 + if (acpi_tad_rt_is_invalid(rt)) 145 + return -ENODATA; 146 + 167 147 return 0; 168 148 } 149 + 150 + static int acpi_tad_get_real_time(struct device *dev, struct acpi_tad_rt *rt) 151 + { 152 + PM_RUNTIME_ACQUIRE(dev, pm); 153 + if 
(PM_RUNTIME_ACQUIRE_ERR(&pm)) 154 + return -ENXIO; 155 + 156 + return __acpi_tad_get_real_time(dev, rt); 157 + } 158 + 159 + static int __acpi_tad_wake_set(struct device *dev, char *method, u32 timer_id, 160 + u32 value) 161 + { 162 + acpi_handle handle = ACPI_HANDLE(dev); 163 + union acpi_object args[] = { 164 + { .type = ACPI_TYPE_INTEGER, }, 165 + { .type = ACPI_TYPE_INTEGER, }, 166 + }; 167 + struct acpi_object_list arg_list = { 168 + .pointer = args, 169 + .count = ARRAY_SIZE(args), 170 + }; 171 + unsigned long long retval; 172 + acpi_status status; 173 + 174 + args[0].integer.value = timer_id; 175 + args[1].integer.value = value; 176 + 177 + status = acpi_evaluate_integer(handle, method, &arg_list, &retval); 178 + if (ACPI_FAILURE(status) || retval) 179 + return -EIO; 180 + 181 + return 0; 182 + } 183 + 184 + static int __acpi_tad_wake_read(struct device *dev, char *method, u32 timer_id, 185 + unsigned long long *retval) 186 + { 187 + acpi_handle handle = ACPI_HANDLE(dev); 188 + union acpi_object args[] = { 189 + { .type = ACPI_TYPE_INTEGER, }, 190 + }; 191 + struct acpi_object_list arg_list = { 192 + .pointer = args, 193 + .count = ARRAY_SIZE(args), 194 + }; 195 + acpi_status status; 196 + 197 + args[0].integer.value = timer_id; 198 + 199 + status = acpi_evaluate_integer(handle, method, &arg_list, retval); 200 + if (ACPI_FAILURE(status)) 201 + return -EIO; 202 + 203 + return 0; 204 + } 205 + 206 + /* sysfs interface */ 169 207 170 208 static char *acpi_tad_rt_next_field(char *s, int *val) 171 209 { ··· 243 167 const char *buf, size_t count) 244 168 { 245 169 struct acpi_tad_rt rt; 246 - char *str, *s; 247 - int val, ret = -ENODATA; 170 + int val, ret; 171 + char *s; 248 172 249 - str = kmemdup_nul(buf, count, GFP_KERNEL); 173 + char *str __free(kfree) = kmemdup_nul(buf, count, GFP_KERNEL); 250 174 if (!str) 251 175 return -ENOMEM; 252 176 253 177 s = acpi_tad_rt_next_field(str, &val); 254 178 if (!s) 255 - goto out_free; 179 + return -ENODATA; 256 180 257 
181 rt.year = val; 258 182 259 183 s = acpi_tad_rt_next_field(s, &val); 260 184 if (!s) 261 - goto out_free; 185 + return -ENODATA; 262 186 263 187 rt.month = val; 264 188 265 189 s = acpi_tad_rt_next_field(s, &val); 266 190 if (!s) 267 - goto out_free; 191 + return -ENODATA; 268 192 269 193 rt.day = val; 270 194 271 195 s = acpi_tad_rt_next_field(s, &val); 272 196 if (!s) 273 - goto out_free; 197 + return -ENODATA; 274 198 275 199 rt.hour = val; 276 200 277 201 s = acpi_tad_rt_next_field(s, &val); 278 202 if (!s) 279 - goto out_free; 203 + return -ENODATA; 280 204 281 205 rt.minute = val; 282 206 283 207 s = acpi_tad_rt_next_field(s, &val); 284 208 if (!s) 285 - goto out_free; 209 + return -ENODATA; 286 210 287 211 rt.second = val; 288 212 289 213 s = acpi_tad_rt_next_field(s, &val); 290 214 if (!s) 291 - goto out_free; 215 + return -ENODATA; 292 216 293 217 rt.tz = val; 294 218 295 219 if (kstrtoint(s, 10, &val)) 296 - goto out_free; 220 + return -ENODATA; 297 221 298 222 rt.daylight = val; 299 223 300 - rt.valid = 0; 301 - rt.msec = 0; 302 - memset(rt.padding, 0, 3); 303 - 304 224 ret = acpi_tad_set_real_time(dev, &rt); 225 + if (ret) 226 + return ret; 305 227 306 - out_free: 307 - kfree(str); 308 - return ret ? 
ret : count; 228 + return count; 309 229 } 310 230 311 231 static ssize_t time_show(struct device *dev, struct device_attribute *attr, ··· 321 249 322 250 static DEVICE_ATTR_RW(time); 323 251 324 - static struct attribute *acpi_tad_time_attrs[] = { 325 - &dev_attr_time.attr, 326 - NULL, 327 - }; 328 - static const struct attribute_group acpi_tad_time_attr_group = { 329 - .attrs = acpi_tad_time_attrs, 330 - }; 331 - 332 252 static int acpi_tad_wake_set(struct device *dev, char *method, u32 timer_id, 333 253 u32 value) 334 254 { 335 - acpi_handle handle = ACPI_HANDLE(dev); 336 - union acpi_object args[] = { 337 - { .type = ACPI_TYPE_INTEGER, }, 338 - { .type = ACPI_TYPE_INTEGER, }, 339 - }; 340 - struct acpi_object_list arg_list = { 341 - .pointer = args, 342 - .count = ARRAY_SIZE(args), 343 - }; 344 - unsigned long long retval; 345 - acpi_status status; 346 - 347 - args[0].integer.value = timer_id; 348 - args[1].integer.value = value; 349 - 350 255 PM_RUNTIME_ACQUIRE(dev, pm); 351 256 if (PM_RUNTIME_ACQUIRE_ERR(&pm)) 352 257 return -ENXIO; 353 258 354 - status = acpi_evaluate_integer(handle, method, &arg_list, &retval); 355 - if (ACPI_FAILURE(status) || retval) 356 - return -EIO; 357 - 358 - return 0; 259 + return __acpi_tad_wake_set(dev, method, timer_id, value); 359 260 } 360 261 361 262 static int acpi_tad_wake_write(struct device *dev, const char *buf, char *method, ··· 354 309 static ssize_t acpi_tad_wake_read(struct device *dev, char *buf, char *method, 355 310 u32 timer_id, const char *specval) 356 311 { 357 - acpi_handle handle = ACPI_HANDLE(dev); 358 - union acpi_object args[] = { 359 - { .type = ACPI_TYPE_INTEGER, }, 360 - }; 361 - struct acpi_object_list arg_list = { 362 - .pointer = args, 363 - .count = ARRAY_SIZE(args), 364 - }; 365 312 unsigned long long retval; 366 - acpi_status status; 367 - 368 - args[0].integer.value = timer_id; 313 + int ret; 369 314 370 315 PM_RUNTIME_ACQUIRE(dev, pm); 371 316 if (PM_RUNTIME_ACQUIRE_ERR(&pm)) 372 317 return 
-ENXIO; 373 318 374 - status = acpi_evaluate_integer(handle, method, &arg_list, &retval); 375 - if (ACPI_FAILURE(status)) 376 - return -EIO; 319 + ret = __acpi_tad_wake_read(dev, method, timer_id, &retval); 320 + if (ret) 321 + return ret; 377 322 378 323 if ((u32)retval == ACPI_TAD_WAKE_DISABLED) 379 324 return sprintf(buf, "%s\n", specval); ··· 521 486 522 487 static DEVICE_ATTR_RW(ac_status); 523 488 524 - static struct attribute *acpi_tad_attrs[] = { 525 - &dev_attr_caps.attr, 526 - &dev_attr_ac_alarm.attr, 527 - &dev_attr_ac_policy.attr, 528 - &dev_attr_ac_status.attr, 529 - NULL, 530 - }; 531 - static const struct attribute_group acpi_tad_attr_group = { 532 - .attrs = acpi_tad_attrs, 533 - }; 534 - 535 489 static ssize_t dc_alarm_store(struct device *dev, struct device_attribute *attr, 536 490 const char *buf, size_t count) 537 491 { ··· 569 545 570 546 static DEVICE_ATTR_RW(dc_status); 571 547 572 - static struct attribute *acpi_tad_dc_attrs[] = { 548 + static struct attribute *acpi_tad_attrs[] = { 549 + &dev_attr_caps.attr, 550 + &dev_attr_ac_alarm.attr, 551 + &dev_attr_ac_policy.attr, 552 + &dev_attr_ac_status.attr, 573 553 &dev_attr_dc_alarm.attr, 574 554 &dev_attr_dc_policy.attr, 575 555 &dev_attr_dc_status.attr, 556 + &dev_attr_time.attr, 576 557 NULL, 577 558 }; 578 - static const struct attribute_group acpi_tad_dc_attr_group = { 579 - .attrs = acpi_tad_dc_attrs, 559 + 560 + static umode_t acpi_tad_attr_is_visible(struct kobject *kobj, 561 + struct attribute *a, int n) 562 + { 563 + struct acpi_tad_driver_data *dd = dev_get_drvdata(kobj_to_dev(kobj)); 564 + 565 + if (a == &dev_attr_caps.attr) 566 + return a->mode; 567 + 568 + if ((dd->capabilities & ACPI_TAD_AC_WAKE) && 569 + (a == &dev_attr_ac_alarm.attr || a == &dev_attr_ac_policy.attr || 570 + a == &dev_attr_ac_status.attr)) 571 + return a->mode; 572 + 573 + if ((dd->capabilities & ACPI_TAD_DC_WAKE) && 574 + (a == &dev_attr_dc_alarm.attr || a == &dev_attr_dc_policy.attr || 575 + a == 
&dev_attr_dc_status.attr)) 576 + return a->mode; 577 + 578 + if ((dd->capabilities & ACPI_TAD_RT) && a == &dev_attr_time.attr) 579 + return a->mode; 580 + 581 + return 0; 582 + } 583 + 584 + static const struct attribute_group acpi_tad_attr_group = { 585 + .attrs = acpi_tad_attrs, 586 + .is_visible = acpi_tad_attr_is_visible, 580 587 }; 588 + 589 + static const struct attribute_group *acpi_tad_attr_groups[] = { 590 + &acpi_tad_attr_group, 591 + NULL, 592 + }; 593 + 594 + #ifdef CONFIG_RTC_CLASS 595 + /* RTC class device interface */ 596 + 597 + static void acpi_tad_rt_to_tm(struct acpi_tad_rt *rt, struct rtc_time *tm) 598 + { 599 + tm->tm_year = rt->year - 1900; 600 + tm->tm_mon = rt->month - 1; 601 + tm->tm_mday = rt->day; 602 + tm->tm_hour = rt->hour; 603 + tm->tm_min = rt->minute; 604 + tm->tm_sec = rt->second; 605 + tm->tm_isdst = rt->daylight == ACPI_TAD_TIME_ISDST; 606 + } 607 + 608 + static int acpi_tad_rtc_set_time(struct device *dev, struct rtc_time *tm) 609 + { 610 + struct acpi_tad_rt rt; 611 + 612 + rt.year = tm->tm_year + 1900; 613 + rt.month = tm->tm_mon + 1; 614 + rt.day = tm->tm_mday; 615 + rt.hour = tm->tm_hour; 616 + rt.minute = tm->tm_min; 617 + rt.second = tm->tm_sec; 618 + rt.tz = ACPI_TAD_TZ_UNSPEC; 619 + rt.daylight = ACPI_TAD_TIME_ISDST * !!tm->tm_isdst; 620 + 621 + return acpi_tad_set_real_time(dev, &rt); 622 + } 623 + 624 + static int acpi_tad_rtc_read_time(struct device *dev, struct rtc_time *tm) 625 + { 626 + struct acpi_tad_rt rt; 627 + int ret; 628 + 629 + ret = acpi_tad_get_real_time(dev, &rt); 630 + if (ret) 631 + return ret; 632 + 633 + acpi_tad_rt_to_tm(&rt, tm); 634 + 635 + return 0; 636 + } 637 + 638 + static int acpi_tad_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *t) 639 + { 640 + struct acpi_tad_driver_data *dd = dev_get_drvdata(dev); 641 + s64 value = ACPI_TAD_WAKE_DISABLED; 642 + struct rtc_time tm_now; 643 + struct acpi_tad_rt rt; 644 + int ret; 645 + 646 + PM_RUNTIME_ACQUIRE(dev, pm); 647 + if 
(PM_RUNTIME_ACQUIRE_ERR(&pm)) 648 + return -ENXIO; 649 + 650 + if (t->enabled) { 651 + /* 652 + * The value to pass to _STV is expected to be the number of 653 + * seconds between the time when the timer is programmed and the 654 + * time when it expires represented as a 32-bit integer. 655 + */ 656 + ret = __acpi_tad_get_real_time(dev, &rt); 657 + if (ret) 658 + return ret; 659 + 660 + acpi_tad_rt_to_tm(&rt, &tm_now); 661 + 662 + value = ktime_divns(ktime_sub(rtc_tm_to_ktime(t->time), 663 + rtc_tm_to_ktime(tm_now)), NSEC_PER_SEC); 664 + if (value <= 0 || value > U32_MAX) 665 + return -EINVAL; 666 + } 667 + 668 + ret = __acpi_tad_wake_set(dev, "_STV", ACPI_TAD_AC_TIMER, value); 669 + if (ret && t->enabled) 670 + return ret; 671 + 672 + /* 673 + * If a separate DC alarm timer is supported, set it to the same value 674 + * as the AC alarm timer. 675 + */ 676 + if (dd->capabilities & ACPI_TAD_DC_WAKE) { 677 + ret = __acpi_tad_wake_set(dev, "_STV", ACPI_TAD_DC_TIMER, value); 678 + if (ret && t->enabled) { 679 + __acpi_tad_wake_set(dev, "_STV", ACPI_TAD_AC_TIMER, 680 + ACPI_TAD_WAKE_DISABLED); 681 + return ret; 682 + } 683 + } 684 + 685 + /* Assume success if the alarm is being disabled. */ 686 + return 0; 687 + } 688 + 689 + static int acpi_tad_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *t) 690 + { 691 + unsigned long long retval; 692 + struct rtc_time tm_now; 693 + struct acpi_tad_rt rt; 694 + int ret; 695 + 696 + PM_RUNTIME_ACQUIRE(dev, pm); 697 + if (PM_RUNTIME_ACQUIRE_ERR(&pm)) 698 + return -ENXIO; 699 + 700 + ret = __acpi_tad_get_real_time(dev, &rt); 701 + if (ret) 702 + return ret; 703 + 704 + acpi_tad_rt_to_tm(&rt, &tm_now); 705 + 706 + /* 707 + * Assume that the alarm was set by acpi_tad_rtc_set_alarm(), so the AC 708 + * and DC alarm timer settings are the same and it is sufficient to read 709 + * the former. 
710 + * 711 + * The value returned by _TIV should be the number of seconds till the 712 + * expiration of the timer, represented as a 32-bit integer, or the 713 + * special ACPI_TAD_WAKE_DISABLED value meaning that the timer has 714 + * been disabled. 715 + */ 716 + ret = __acpi_tad_wake_read(dev, "_TIV", ACPI_TAD_AC_TIMER, &retval); 717 + if (ret) 718 + return ret; 719 + 720 + if (retval > U32_MAX) 721 + return -ENODATA; 722 + 723 + t->pending = 0; 724 + 725 + if (retval != ACPI_TAD_WAKE_DISABLED) { 726 + t->enabled = 1; 727 + t->time = rtc_ktime_to_tm(ktime_add_ns(rtc_tm_to_ktime(tm_now), 728 + (u64)retval * NSEC_PER_SEC)); 729 + } else { 730 + t->enabled = 0; 731 + t->time = tm_now; 732 + } 733 + 734 + return 0; 735 + } 736 + 737 + static const struct rtc_class_ops acpi_tad_rtc_ops = { 738 + .read_time = acpi_tad_rtc_read_time, 739 + .set_time = acpi_tad_rtc_set_time, 740 + .set_alarm = acpi_tad_rtc_set_alarm, 741 + .read_alarm = acpi_tad_rtc_read_alarm, 742 + }; 743 + 744 + static void acpi_tad_register_rtc(struct device *dev, unsigned long long caps) 745 + { 746 + struct rtc_device *rtc; 747 + 748 + rtc = devm_rtc_allocate_device(dev); 749 + if (IS_ERR(rtc)) 750 + return; 751 + 752 + rtc->range_min = mktime64(1900, 1, 1, 0, 0, 0); 753 + rtc->range_max = mktime64(9999, 12, 31, 23, 59, 59); 754 + 755 + rtc->ops = &acpi_tad_rtc_ops; 756 + 757 + if (!(caps & ACPI_TAD_AC_WAKE)) 758 + clear_bit(RTC_FEATURE_ALARM, rtc->features); 759 + 760 + devm_rtc_register_device(rtc); 761 + } 762 + #else /* !CONFIG_RTC_CLASS */ 763 + static inline void acpi_tad_register_rtc(struct device *dev, 764 + unsigned long long caps) {} 765 + #endif /* !CONFIG_RTC_CLASS */ 766 + 767 + /* Platform driver interface */ 581 768 582 769 static int acpi_tad_disable_timer(struct device *dev, u32 timer_id) 583 770 { ··· 798 563 static void acpi_tad_remove(struct platform_device *pdev) 799 564 { 800 565 struct device *dev = &pdev->dev; 801 - acpi_handle handle = ACPI_HANDLE(dev); 802 566 struct 
acpi_tad_driver_data *dd = dev_get_drvdata(dev); 803 567 804 568 device_init_wakeup(dev, false); 805 569 806 - if (dd->capabilities & ACPI_TAD_RT) 807 - sysfs_remove_group(&dev->kobj, &acpi_tad_time_attr_group); 808 - 809 - if (dd->capabilities & ACPI_TAD_DC_WAKE) 810 - sysfs_remove_group(&dev->kobj, &acpi_tad_dc_attr_group); 811 - 812 - sysfs_remove_group(&dev->kobj, &acpi_tad_attr_group); 813 - 814 570 scoped_guard(pm_runtime_noresume, dev) { 815 - acpi_tad_disable_timer(dev, ACPI_TAD_AC_TIMER); 816 - acpi_tad_clear_status(dev, ACPI_TAD_AC_TIMER); 571 + if (dd->capabilities & ACPI_TAD_AC_WAKE) { 572 + acpi_tad_disable_timer(dev, ACPI_TAD_AC_TIMER); 573 + acpi_tad_clear_status(dev, ACPI_TAD_AC_TIMER); 574 + } 817 575 if (dd->capabilities & ACPI_TAD_DC_WAKE) { 818 576 acpi_tad_disable_timer(dev, ACPI_TAD_DC_TIMER); 819 577 acpi_tad_clear_status(dev, ACPI_TAD_DC_TIMER); ··· 815 587 816 588 pm_runtime_suspend(dev); 817 589 pm_runtime_disable(dev); 818 - acpi_remove_cmos_rtc_space_handler(handle); 819 590 } 820 591 821 592 static int acpi_tad_probe(struct platform_device *pdev) ··· 824 597 struct acpi_tad_driver_data *dd; 825 598 acpi_status status; 826 599 unsigned long long caps; 827 - int ret; 828 600 829 - ret = acpi_install_cmos_rtc_space_handler(handle); 830 - if (ret < 0) { 831 - dev_info(dev, "Unable to install space handler\n"); 832 - return -ENODEV; 833 - } 834 601 /* 835 602 * Initialization failure messages are mostly about firmware issues, so 836 603 * print them at the "info" level. 
··· 832 611 status = acpi_evaluate_integer(handle, "_GCP", NULL, &caps); 833 612 if (ACPI_FAILURE(status)) { 834 613 dev_info(dev, "Unable to get capabilities\n"); 835 - ret = -ENODEV; 836 - goto remove_handler; 837 - } 838 - 839 - if (!(caps & ACPI_TAD_AC_WAKE)) { 840 - dev_info(dev, "Unsupported capabilities\n"); 841 - ret = -ENODEV; 842 - goto remove_handler; 614 + return -ENODEV; 843 615 } 844 616 845 617 if (!acpi_has_method(handle, "_PRW")) { 846 618 dev_info(dev, "Missing _PRW\n"); 847 - ret = -ENODEV; 848 - goto remove_handler; 619 + caps &= ~(ACPI_TAD_AC_WAKE | ACPI_TAD_DC_WAKE); 849 620 } 850 621 622 + if (!(caps & ACPI_TAD_AC_WAKE)) 623 + caps &= ~ACPI_TAD_DC_WAKE; 624 + 851 625 dd = devm_kzalloc(dev, sizeof(*dd), GFP_KERNEL); 852 - if (!dd) { 853 - ret = -ENOMEM; 854 - goto remove_handler; 855 - } 626 + if (!dd) 627 + return -ENOMEM; 856 628 857 629 dd->capabilities = caps; 858 630 dev_set_drvdata(dev, dd); ··· 856 642 * runtime suspend. Everything else should be taken care of by the ACPI 857 643 * PM domain callbacks. 858 644 */ 859 - device_init_wakeup(dev, true); 860 - dev_pm_set_driver_flags(dev, DPM_FLAG_SMART_SUSPEND | 861 - DPM_FLAG_MAY_SKIP_RESUME); 645 + if (caps & ACPI_TAD_AC_WAKE) { 646 + device_init_wakeup(dev, true); 647 + dev_pm_set_driver_flags(dev, DPM_FLAG_SMART_SUSPEND | 648 + DPM_FLAG_MAY_SKIP_RESUME); 649 + } 650 + 862 651 /* 863 652 * The platform bus type layer tells the ACPI PM domain to power up the 864 653 device, so set the runtime PM status of it to "active".
··· 870 653 pm_runtime_enable(dev); 871 654 pm_runtime_suspend(dev); 872 655 873 - ret = sysfs_create_group(&dev->kobj, &acpi_tad_attr_group); 874 - if (ret) 875 - goto fail; 876 - 877 - if (caps & ACPI_TAD_DC_WAKE) { 878 - ret = sysfs_create_group(&dev->kobj, &acpi_tad_dc_attr_group); 879 - if (ret) 880 - goto fail; 881 - } 882 - 883 - if (caps & ACPI_TAD_RT) { 884 - ret = sysfs_create_group(&dev->kobj, &acpi_tad_time_attr_group); 885 - if (ret) 886 - goto fail; 887 - } 656 + if (caps & ACPI_TAD_RT) 657 + acpi_tad_register_rtc(dev, caps); 888 658 889 659 return 0; 890 - 891 - fail: 892 - acpi_tad_remove(pdev); 893 - /* Don't fallthrough because cmos rtc space handler is removed in acpi_tad_remove() */ 894 - return ret; 895 - 896 - remove_handler: 897 - acpi_remove_cmos_rtc_space_handler(handle); 898 - return ret; 899 660 } 900 661 901 662 static const struct acpi_device_id acpi_tad_ids[] = { ··· 885 690 .driver = { 886 691 .name = "acpi-tad", 887 692 .acpi_match_table = acpi_tad_ids, 693 + .dev_groups = acpi_tad_attr_groups, 888 694 }, 889 695 .probe = acpi_tad_probe, 890 696 .remove = acpi_tad_remove,
+46 -54
drivers/acpi/acpi_video.c
··· 30 30 #include <linux/uaccess.h> 31 31 #include <linux/string_choices.h> 32 32 33 - #define ACPI_VIDEO_BUS_NAME "Video Bus" 34 - #define ACPI_VIDEO_DEVICE_NAME "Video Device" 35 - 36 33 #define MAX_NAME_LEN 20 37 34 38 35 MODULE_AUTHOR("Bruno Ducrot"); ··· 1141 1144 return -ENOMEM; 1142 1145 } 1143 1146 1144 - strscpy(acpi_device_name(device), ACPI_VIDEO_DEVICE_NAME); 1145 - strscpy(acpi_device_class(device), ACPI_VIDEO_CLASS); 1146 - 1147 1147 data->device_id = device_id; 1148 1148 data->video = video; 1149 1149 data->dev = device; ··· 1564 1570 break; 1565 1571 } 1566 1572 1567 - if (acpi_notifier_call_chain(device, event, 0)) 1573 + if (acpi_notifier_call_chain(ACPI_VIDEO_CLASS, acpi_device_bid(device), 1574 + event, 0)) 1568 1575 /* Something vetoed the keypress. */ 1569 1576 keycode = 0; 1570 1577 ··· 1606 1611 if (video_device->backlight) 1607 1612 backlight_force_update(video_device->backlight, 1608 1613 BACKLIGHT_UPDATE_HOTKEY); 1609 - acpi_notifier_call_chain(device, event, 0); 1614 + acpi_notifier_call_chain(ACPI_VIDEO_CLASS, acpi_device_bid(device), 1615 + event, 0); 1610 1616 return; 1611 1617 } 1612 1618 ··· 1640 1644 if (keycode) 1641 1645 may_report_brightness_keys = true; 1642 1646 1643 - acpi_notifier_call_chain(device, event, 0); 1647 + acpi_notifier_call_chain(ACPI_VIDEO_CLASS, acpi_device_bid(device), 1648 + event, 0); 1644 1649 1645 1650 if (keycode && (report_key_events & REPORT_BRIGHTNESS_KEY_EVENTS)) { 1646 1651 input_report_key(input, keycode, 1); ··· 1676 1679 return NOTIFY_OK; 1677 1680 } 1678 1681 return NOTIFY_DONE; 1679 - } 1680 - 1681 - static acpi_status 1682 - acpi_video_bus_match(acpi_handle handle, u32 level, void *context, 1683 - void **return_value) 1684 - { 1685 - struct acpi_device *device = context; 1686 - struct acpi_device *sibling; 1687 - 1688 - if (handle == device->handle) 1689 - return AE_CTRL_TERMINATE; 1690 - 1691 - sibling = acpi_fetch_acpi_dev(handle); 1692 - if (!sibling) 1693 - return AE_OK; 1694 - 1695 - if 
(!strcmp(acpi_device_name(sibling), ACPI_VIDEO_BUS_NAME)) 1696 - return AE_ALREADY_EXISTS; 1697 - 1698 - return AE_OK; 1699 1682 } 1700 1683 1701 1684 static void acpi_video_dev_register_backlight(struct acpi_video_device *device) ··· 1879 1902 snprintf(video->phys, sizeof(video->phys), 1880 1903 "%s/video/input0", acpi_device_hid(video->device)); 1881 1904 1882 - input->name = acpi_device_name(video->device); 1905 + input->name = "Video Bus"; 1883 1906 input->phys = video->phys; 1884 1907 input->id.bustype = BUS_HOST; 1885 1908 input->id.product = 0x06; ··· 1953 1976 return 0; 1954 1977 } 1955 1978 1956 - static int instance; 1979 + static int duplicate_dev_check(struct device *sibling, void *data) 1980 + { 1981 + struct acpi_video_bus *video; 1982 + 1983 + if (sibling == data || !dev_is_auxiliary(sibling)) 1984 + return 0; 1985 + 1986 + guard(mutex)(&video_list_lock); 1987 + 1988 + list_for_each_entry(video, &video_bus_head, entry) { 1989 + if (video == dev_get_drvdata(sibling)) 1990 + return -EEXIST; 1991 + } 1992 + 1993 + return 0; 1994 + } 1995 + 1996 + static bool acpi_video_bus_dev_is_duplicate(struct device *dev) 1997 + { 1998 + return device_for_each_child(dev->parent, dev, duplicate_dev_check); 1999 + } 1957 2000 1958 2001 static int acpi_video_bus_probe(struct auxiliary_device *aux_dev, 1959 2002 const struct auxiliary_device_id *id_unused) 1960 2003 { 1961 2004 struct acpi_device *device = ACPI_COMPANION(&aux_dev->dev); 2005 + static DEFINE_MUTEX(probe_lock); 1962 2006 struct acpi_video_bus *video; 2007 + static int instance; 1963 2008 bool auto_detect; 1964 2009 int error; 1965 - acpi_status status; 1966 2010 1967 - status = acpi_walk_namespace(ACPI_TYPE_DEVICE, 1968 - acpi_dev_parent(device)->handle, 1, 1969 - acpi_video_bus_match, NULL, 1970 - device, NULL); 1971 - if (status == AE_ALREADY_EXISTS) { 2011 + /* Probe one video bus device at a time in case there are duplicates. 
*/ 2012 + guard(mutex)(&probe_lock); 2013 + 2014 + if (!allow_duplicates && acpi_video_bus_dev_is_duplicate(&aux_dev->dev)) { 1972 2015 pr_info(FW_BUG 1973 2016 "Duplicate ACPI video bus devices for the" 1974 2017 " same VGA controller, please try module " 1975 2018 "parameter \"video.allow_duplicates=1\"" 1976 2019 " if the current driver doesn't work.\n"); 1977 - if (!allow_duplicates) 1978 - return -ENODEV; 2020 + return -ENODEV; 1979 2021 } 1980 2022 1981 2023 video = kzalloc_obj(struct acpi_video_bus); 1982 2024 if (!video) 1983 2025 return -ENOMEM; 1984 2026 1985 - /* a hack to fix the duplicate name "VID" problem on T61 */ 1986 - if (!strcmp(device->pnp.bus_id, "VID")) { 2027 + /* 2028 + * A hack to fix the duplicate name "VID" problem on T61 and the 2029 + * duplicate name "VGA" problem on Pa 3553. 2030 + */ 2031 + if (!strcmp(device->pnp.bus_id, "VID") || 2032 + !strcmp(device->pnp.bus_id, "VGA")) { 1987 2033 if (instance) 1988 2034 device->pnp.bus_id[3] = '0' + instance; 1989 - instance++; 1990 - } 1991 - /* a hack to fix the duplicate name "VGA" problem on Pa 3553 */ 1992 - if (!strcmp(device->pnp.bus_id, "VGA")) { 1993 - if (instance) 1994 - device->pnp.bus_id[3] = '0' + instance; 2035 + 1995 2036 instance++; 1996 2037 } 1997 2038 1998 2039 auxiliary_set_drvdata(aux_dev, video); 1999 2040 2000 2041 video->device = device; 2001 - strscpy(acpi_device_name(device), ACPI_VIDEO_BUS_NAME); 2002 - strscpy(acpi_device_class(device), ACPI_VIDEO_CLASS); 2003 2042 device->driver_data = video; 2004 2043 2005 2044 acpi_video_bus_find_cap(video); ··· 2036 2043 */ 2037 2044 acpi_device_fix_up_power_children(device); 2038 2045 2039 - pr_info("%s [%s] (multi-head: %s rom: %s post: %s)\n", 2040 - ACPI_VIDEO_DEVICE_NAME, acpi_device_bid(device), 2041 - str_yes_no(video->flags.multihead), 2042 - str_yes_no(video->flags.rom), 2043 - str_yes_no(video->flags.post)); 2046 + pr_info("Video Device [%s] (multi-head: %s rom: %s post: %s)\n", 2047 + acpi_device_bid(device),
str_yes_no(video->flags.multihead), 2048 + str_yes_no(video->flags.rom), str_yes_no(video->flags.post)); 2049 + 2044 2050 mutex_lock(&video_list_lock); 2045 2051 list_add_tail(&video->entry, &video_bus_head); 2046 2052 mutex_unlock(&video_list_lock);
+1 -2
drivers/acpi/acpica/utnonansi.c
··· 168 168 { 169 169 /* Always terminate destination string */ 170 170 171 - strncpy(dest, source, dest_size); 172 - dest[dest_size - 1] = 0; 171 + strscpy_pad(dest, source, dest_size); 173 172 } 174 173 175 174 #endif
+14
drivers/acpi/apei/Kconfig
··· 74 74 75 75 If unsure say 'n' 76 76 77 + config ACPI_APEI_GHES_NVIDIA 78 + tristate "NVIDIA GHES vendor record handler" 79 + depends on ACPI_APEI_GHES 80 + help 81 + Support for decoding NVIDIA-specific CPER sections delivered via 82 + the APEI GHES vendor record notifier chain. Registers a handler 83 + for the NVIDIA section GUID and logs error signatures, severity, 84 + socket, and diagnostic register address-value pairs. 85 + 86 + Enable on NVIDIA server platforms (e.g. DGX, HGX) that expose 87 + ACPI device NVDA2012 in their firmware tables. 88 + 89 + If unsure, say N. 90 + 77 91 config ACPI_APEI_ERST_DEBUG 78 92 tristate "APEI Error Record Serialization Table (ERST) Debug Support" 79 93 depends on ACPI_APEI
+1
drivers/acpi/apei/Makefile
··· 10 10 einj-y := einj-core.o 11 11 einj-$(CONFIG_ACPI_APEI_EINJ_CXL) += einj-cxl.o 12 12 obj-$(CONFIG_ACPI_APEI_ERST_DEBUG) += erst-dbg.o 13 + obj-$(CONFIG_ACPI_APEI_GHES_NVIDIA) += ghes-nvidia.o 13 14 14 15 apei-y := apei-base.o hest.o erst.o bert.o
+149
drivers/acpi/apei/ghes-nvidia.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * NVIDIA GHES vendor record handler 4 + * 5 + * Copyright (c) 2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved. 6 + */ 7 + 8 + #include <linux/acpi.h> 9 + #include <linux/module.h> 10 + #include <linux/platform_device.h> 11 + #include <linux/types.h> 12 + #include <linux/uuid.h> 13 + #include <acpi/ghes.h> 14 + 15 + static const guid_t nvidia_sec_guid = 16 + GUID_INIT(0x6d5244f2, 0x2712, 0x11ec, 17 + 0xbe, 0xa7, 0xcb, 0x3f, 0xdb, 0x95, 0xc7, 0x86); 18 + 19 + struct cper_sec_nvidia { 20 + char signature[16]; 21 + __le16 error_type; 22 + __le16 error_instance; 23 + u8 severity; 24 + u8 socket; 25 + u8 number_regs; 26 + u8 reserved; 27 + __le64 instance_base; 28 + struct { 29 + __le64 addr; 30 + __le64 val; 31 + } regs[] __counted_by(number_regs); 32 + }; 33 + 34 + struct nvidia_ghes_private { 35 + struct notifier_block nb; 36 + struct device *dev; 37 + }; 38 + 39 + static void nvidia_ghes_print_error(struct device *dev, 40 + const struct cper_sec_nvidia *nvidia_err, 41 + size_t error_data_length, bool fatal) 42 + { 43 + const char *level = fatal ? KERN_ERR : KERN_INFO; 44 + size_t min_size; 45 + 46 + dev_printk(level, dev, "signature: %.16s\n", nvidia_err->signature); 47 + dev_printk(level, dev, "error_type: %u\n", le16_to_cpu(nvidia_err->error_type)); 48 + dev_printk(level, dev, "error_instance: %u\n", le16_to_cpu(nvidia_err->error_instance)); 49 + dev_printk(level, dev, "severity: %u\n", nvidia_err->severity); 50 + dev_printk(level, dev, "socket: %u\n", nvidia_err->socket); 51 + dev_printk(level, dev, "number_regs: %u\n", nvidia_err->number_regs); 52 + dev_printk(level, dev, "instance_base: 0x%016llx\n", 53 + le64_to_cpu(nvidia_err->instance_base)); 54 + 55 + if (nvidia_err->number_regs == 0) 56 + return; 57 + 58 + /* 59 + * Validate that all registers fit within error_data_length. 60 + * Each register pair is two little-endian u64s. 
61 + */ 62 + min_size = struct_size(nvidia_err, regs, nvidia_err->number_regs); 63 + if (error_data_length < min_size) { 64 + dev_err(dev, "Invalid number_regs %u (section size %zu, need %zu)\n", 65 + nvidia_err->number_regs, error_data_length, min_size); 66 + return; 67 + } 68 + 69 + for (int i = 0; i < nvidia_err->number_regs; i++) 70 + dev_printk(level, dev, "register[%d]: address=0x%016llx value=0x%016llx\n", 71 + i, le64_to_cpu(nvidia_err->regs[i].addr), 72 + le64_to_cpu(nvidia_err->regs[i].val)); 73 + } 74 + 75 + static int nvidia_ghes_notify(struct notifier_block *nb, 76 + unsigned long event, void *data) 77 + { 78 + struct acpi_hest_generic_data *gdata = data; 79 + struct nvidia_ghes_private *priv; 80 + const struct cper_sec_nvidia *nvidia_err; 81 + guid_t sec_guid; 82 + 83 + import_guid(&sec_guid, gdata->section_type); 84 + if (!guid_equal(&sec_guid, &nvidia_sec_guid)) 85 + return NOTIFY_DONE; 86 + 87 + priv = container_of(nb, struct nvidia_ghes_private, nb); 88 + 89 + if (acpi_hest_get_error_length(gdata) < sizeof(*nvidia_err)) { 90 + dev_err(priv->dev, "Section too small (%d < %zu)\n", 91 + acpi_hest_get_error_length(gdata), sizeof(*nvidia_err)); 92 + return NOTIFY_OK; 93 + } 94 + 95 + nvidia_err = acpi_hest_get_payload(gdata); 96 + 97 + if (event >= GHES_SEV_RECOVERABLE) 98 + dev_err(priv->dev, "NVIDIA CPER section, error_data_length: %u\n", 99 + acpi_hest_get_error_length(gdata)); 100 + else 101 + dev_info(priv->dev, "NVIDIA CPER section, error_data_length: %u\n", 102 + acpi_hest_get_error_length(gdata)); 103 + 104 + nvidia_ghes_print_error(priv->dev, nvidia_err, acpi_hest_get_error_length(gdata), 105 + event >= GHES_SEV_RECOVERABLE); 106 + 107 + return NOTIFY_OK; 108 + } 109 + 110 + static int nvidia_ghes_probe(struct platform_device *pdev) 111 + { 112 + struct nvidia_ghes_private *priv; 113 + int ret; 114 + 115 + priv = devm_kmalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); 116 + if (!priv) 117 + return -ENOMEM; 118 + 119 + *priv = (struct 
nvidia_ghes_private) { 120 + .nb.notifier_call = nvidia_ghes_notify, 121 + .dev = &pdev->dev, 122 + }; 123 + 124 + ret = devm_ghes_register_vendor_record_notifier(&pdev->dev, &priv->nb); 125 + if (ret) 126 + return dev_err_probe(&pdev->dev, ret, 127 + "Failed to register NVIDIA GHES vendor record notifier\n"); 128 + 129 + return 0; 130 + } 131 + 132 + static const struct acpi_device_id nvidia_ghes_acpi_match[] = { 133 + { "NVDA2012" }, 134 + { } 135 + }; 136 + MODULE_DEVICE_TABLE(acpi, nvidia_ghes_acpi_match); 137 + 138 + static struct platform_driver nvidia_ghes_driver = { 139 + .driver = { 140 + .name = "nvidia-ghes", 141 + .acpi_match_table = nvidia_ghes_acpi_match, 142 + }, 143 + .probe = nvidia_ghes_probe, 144 + }; 145 + module_platform_driver(nvidia_ghes_driver); 146 + 147 + MODULE_AUTHOR("Kai-Heng Feng <kaihengf@nvidia.com>"); 148 + MODULE_DESCRIPTION("NVIDIA GHES vendor CPER record handler"); 149 + MODULE_LICENSE("GPL");
+18
drivers/acpi/apei/ghes.c
··· 689 689 } 690 690 EXPORT_SYMBOL_GPL(ghes_unregister_vendor_record_notifier); 691 691 692 + static void ghes_vendor_record_notifier_destroy(void *nb) 693 + { 694 + ghes_unregister_vendor_record_notifier(nb); 695 + } 696 + 697 + int devm_ghes_register_vendor_record_notifier(struct device *dev, 698 + struct notifier_block *nb) 699 + { 700 + int ret; 701 + 702 + ret = ghes_register_vendor_record_notifier(nb); 703 + if (ret) 704 + return ret; 705 + 706 + return devm_add_action_or_reset(dev, ghes_vendor_record_notifier_destroy, nb); 707 + } 708 + EXPORT_SYMBOL_GPL(devm_ghes_register_vendor_record_notifier); 709 + 692 710 static void ghes_vendor_record_work_func(struct work_struct *work) 693 711 { 694 712 struct ghes_vendor_record_entry *entry;
+3 -6
drivers/acpi/battery.c
··· 33 33 #define ACPI_BATTERY_CAPACITY_VALID(capacity) \ 34 34 ((capacity) != 0 && (capacity) != ACPI_BATTERY_VALUE_UNKNOWN) 35 35 36 - #define ACPI_BATTERY_DEVICE_NAME "Battery" 37 - 38 36 /* Battery power unit: 0 means mW, 1 means mA */ 39 37 #define ACPI_BATTERY_POWER_UNIT_MA 1 40 38 ··· 1078 1080 if (event == ACPI_BATTERY_NOTIFY_INFO) 1079 1081 acpi_battery_refresh(battery); 1080 1082 acpi_battery_update(battery, false); 1081 - acpi_bus_generate_netlink_event(device->pnp.device_class, 1083 + acpi_bus_generate_netlink_event(ACPI_BATTERY_CLASS, 1082 1084 dev_name(&device->dev), event, 1083 1085 acpi_battery_present(battery)); 1084 - acpi_notifier_call_chain(device, event, acpi_battery_present(battery)); 1086 + acpi_notifier_call_chain(ACPI_BATTERY_CLASS, acpi_device_bid(device), 1087 + event, acpi_battery_present(battery)); 1085 1088 /* acpi_battery_update could remove power_supply object */ 1086 1089 if (old && battery->bat) 1087 1090 power_supply_changed(battery->bat); ··· 1228 1229 platform_set_drvdata(pdev, battery); 1229 1230 1230 1231 battery->device = device; 1231 - strscpy(acpi_device_name(device), ACPI_BATTERY_DEVICE_NAME); 1232 - strscpy(acpi_device_class(device), ACPI_BATTERY_CLASS); 1233 1232 1234 1233 result = devm_mutex_init(&pdev->dev, &battery->update_lock); 1235 1234 if (result)
+6 -5
drivers/acpi/button.c
··· 468 468 input_report_key(input, keycode, 0); 469 469 input_sync(input); 470 470 471 - acpi_bus_generate_netlink_event(device->pnp.device_class, 471 + acpi_bus_generate_netlink_event(acpi_device_class(device), 472 472 dev_name(&device->dev), 473 473 event, ++button->pushed); 474 474 } ··· 558 558 goto err_free_button; 559 559 } 560 560 561 - name = acpi_device_name(device); 562 561 class = acpi_device_class(device); 563 562 564 563 if (!strcmp(hid, ACPI_BUTTON_HID_POWER) || 565 564 !strcmp(hid, ACPI_BUTTON_HID_POWERF)) { 566 565 button->type = ACPI_BUTTON_TYPE_POWER; 567 566 handler = acpi_button_notify; 568 - strscpy(name, ACPI_BUTTON_DEVICE_NAME_POWER, MAX_ACPI_DEVICE_NAME_LEN); 567 + name = ACPI_BUTTON_DEVICE_NAME_POWER; 569 568 sprintf(class, "%s/%s", 570 569 ACPI_BUTTON_CLASS, ACPI_BUTTON_SUBCLASS_POWER); 571 570 } else if (!strcmp(hid, ACPI_BUTTON_HID_SLEEP) || 572 571 !strcmp(hid, ACPI_BUTTON_HID_SLEEPF)) { 573 572 button->type = ACPI_BUTTON_TYPE_SLEEP; 574 573 handler = acpi_button_notify; 575 - strscpy(name, ACPI_BUTTON_DEVICE_NAME_SLEEP, MAX_ACPI_DEVICE_NAME_LEN); 574 + name = ACPI_BUTTON_DEVICE_NAME_SLEEP; 576 575 sprintf(class, "%s/%s", 577 576 ACPI_BUTTON_CLASS, ACPI_BUTTON_SUBCLASS_SLEEP); 578 577 } else if (!strcmp(hid, ACPI_BUTTON_HID_LID)) { 579 578 button->type = ACPI_BUTTON_TYPE_LID; 580 579 handler = acpi_lid_notify; 581 - strscpy(name, ACPI_BUTTON_DEVICE_NAME_LID, MAX_ACPI_DEVICE_NAME_LEN); 580 + name = ACPI_BUTTON_DEVICE_NAME_LID; 582 581 sprintf(class, "%s/%s", 583 582 ACPI_BUTTON_CLASS, ACPI_BUTTON_SUBCLASS_LID); 584 583 input->open = acpi_lid_input_open; ··· 697 698 acpi_button_remove_fs(button); 698 699 input_unregister_device(button->input); 699 700 kfree(button); 701 + 702 + memset(acpi_device_class(adev), 0, sizeof(acpi_device_class)); 700 703 } 701 704 702 705 static int param_set_lid_init_state(const char *val,
drivers/acpi/cppc_acpi.c (+247 -46)

···
	show_cppc_data(cppc_get_perf_caps, cppc_perf_caps, highest_perf);
	show_cppc_data(cppc_get_perf_caps, cppc_perf_caps, lowest_perf);
	show_cppc_data(cppc_get_perf_caps, cppc_perf_caps, nominal_perf);
+	show_cppc_data(cppc_get_perf_caps, cppc_perf_caps, reference_perf);
	show_cppc_data(cppc_get_perf_caps, cppc_perf_caps, lowest_nonlinear_perf);
	show_cppc_data(cppc_get_perf_caps, cppc_perf_caps, guaranteed_perf);
	show_cppc_data(cppc_get_perf_caps, cppc_perf_caps, lowest_freq);
	show_cppc_data(cppc_get_perf_caps, cppc_perf_caps, nominal_freq);

-	show_cppc_data(cppc_get_perf_ctrs, cppc_perf_fb_ctrs, reference_perf);
	show_cppc_data(cppc_get_perf_ctrs, cppc_perf_fb_ctrs, wraparound_time);

	/* Check for valid access_width, otherwise, fallback to using bit_width */
···
	per_cpu(cpu_pcc_subspace_idx, pr->id) = pcc_subspace_id;

	/*
+	 * In CPPC v1, DESIRED_PERF is mandatory. In CPPC v2, it is optional
+	 * only when AUTO_SEL_ENABLE is supported.
+	 */
+	if (!CPC_SUPPORTED(&cpc_ptr->cpc_regs[DESIRED_PERF]) &&
+	    (!osc_sb_cppc2_support_acked ||
+	     !CPC_SUPPORTED(&cpc_ptr->cpc_regs[AUTO_SEL_ENABLE])))
+		pr_warn("Desired perf. register is mandatory if CPPC v2 is not supported "
+			"or autonomous selection is disabled\n");
+
+	/*
	 * Initialize the remaining cpc_regs as unsupported.
	 * Example: In case FW exposes CPPC v2, the below loop will initialize
	 * LOWEST_FREQ and NOMINAL_FREQ regs as unsupported
···
{
	struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpunum);
	struct cpc_register_resource *highest_reg, *lowest_reg,
-		*lowest_non_linear_reg, *nominal_reg, *guaranteed_reg,
-		*low_freq_reg = NULL, *nom_freq_reg = NULL;
-	u64 high, low, guaranteed, nom, min_nonlinear, low_f = 0, nom_f = 0;
+		*lowest_non_linear_reg, *nominal_reg, *reference_reg,
+		*guaranteed_reg, *low_freq_reg = NULL, *nom_freq_reg = NULL;
+	u64 high, low, guaranteed, nom, ref, min_nonlinear,
+		low_f = 0, nom_f = 0;
	int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpunum);
	struct cppc_pcc_data *pcc_ss_data = NULL;
	int ret = 0, regs_in_pcc = 0;
···
	lowest_reg = &cpc_desc->cpc_regs[LOWEST_PERF];
	lowest_non_linear_reg = &cpc_desc->cpc_regs[LOW_NON_LINEAR_PERF];
	nominal_reg = &cpc_desc->cpc_regs[NOMINAL_PERF];
+	reference_reg = &cpc_desc->cpc_regs[REFERENCE_PERF];
	low_freq_reg = &cpc_desc->cpc_regs[LOWEST_FREQ];
	nom_freq_reg = &cpc_desc->cpc_regs[NOMINAL_FREQ];
	guaranteed_reg = &cpc_desc->cpc_regs[GUARANTEED_PERF];
···
	/* Are any of the regs PCC ?*/
	if (CPC_IN_PCC(highest_reg) || CPC_IN_PCC(lowest_reg) ||
	    CPC_IN_PCC(lowest_non_linear_reg) || CPC_IN_PCC(nominal_reg) ||
+	    (CPC_SUPPORTED(reference_reg) && CPC_IN_PCC(reference_reg)) ||
	    CPC_IN_PCC(low_freq_reg) || CPC_IN_PCC(nom_freq_reg) ||
	    CPC_IN_PCC(guaranteed_reg)) {
		if (pcc_ss_id < 0) {
···
		}
	}

-	cpc_read(cpunum, highest_reg, &high);
+	ret = cpc_read(cpunum, highest_reg, &high);
+	if (ret)
+		goto out_err;
	perf_caps->highest_perf = high;

-	cpc_read(cpunum, lowest_reg, &low);
+	ret = cpc_read(cpunum, lowest_reg, &low);
+	if (ret)
+		goto out_err;
	perf_caps->lowest_perf = low;

-	cpc_read(cpunum, nominal_reg, &nom);
+	ret = cpc_read(cpunum, nominal_reg, &nom);
+	if (ret)
+		goto out_err;
	perf_caps->nominal_perf = nom;
+
+	/*
+	 * If reference perf register is not supported then we should
+	 * use the nominal perf value
+	 */
+	if (CPC_SUPPORTED(reference_reg)) {
+		ret = cpc_read(cpunum, reference_reg, &ref);
+		if (ret)
+			goto out_err;
+	} else {
+		ref = nom;
+	}
+	perf_caps->reference_perf = ref;

	if (guaranteed_reg->type != ACPI_TYPE_BUFFER ||
	    IS_NULL_REG(&guaranteed_reg->cpc_entry.reg)) {
		perf_caps->guaranteed_perf = 0;
	} else {
-		cpc_read(cpunum, guaranteed_reg, &guaranteed);
+		ret = cpc_read(cpunum, guaranteed_reg, &guaranteed);
+		if (ret)
+			goto out_err;
		perf_caps->guaranteed_perf = guaranteed;
	}

-	cpc_read(cpunum, lowest_non_linear_reg, &min_nonlinear);
+	ret = cpc_read(cpunum, lowest_non_linear_reg, &min_nonlinear);
+	if (ret)
+		goto out_err;
	perf_caps->lowest_nonlinear_perf = min_nonlinear;

-	if (!high || !low || !nom || !min_nonlinear)
+	if (!high || !low || !nom || !ref || !min_nonlinear) {
		ret = -EFAULT;
+		goto out_err;
+	}

	/* Read optional lowest and nominal frequencies if present */
-	if (CPC_SUPPORTED(low_freq_reg))
-		cpc_read(cpunum, low_freq_reg, &low_f);
+	if (CPC_SUPPORTED(low_freq_reg)) {
+		ret = cpc_read(cpunum, low_freq_reg, &low_f);
+		if (ret)
+			goto out_err;
+	}

-	if (CPC_SUPPORTED(nom_freq_reg))
-		cpc_read(cpunum, nom_freq_reg, &nom_f);
+	if (CPC_SUPPORTED(nom_freq_reg)) {
+		ret = cpc_read(cpunum, nom_freq_reg, &nom_f);
+		if (ret)
+			goto out_err;
+	}

	perf_caps->lowest_freq = low_f;
	perf_caps->nominal_freq = nom_f;
···
bool cppc_perf_ctrs_in_pcc_cpu(unsigned int cpu)
{
	struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu);
-	struct cpc_register_resource *ref_perf_reg;
-
-	/*
-	 * If reference perf register is not supported then we should use the
-	 * nominal perf value
-	 */
-	ref_perf_reg = &cpc_desc->cpc_regs[REFERENCE_PERF];
-	if (!CPC_SUPPORTED(ref_perf_reg))
-		ref_perf_reg = &cpc_desc->cpc_regs[NOMINAL_PERF];

	return CPC_IN_PCC(&cpc_desc->cpc_regs[DELIVERED_CTR]) ||
		CPC_IN_PCC(&cpc_desc->cpc_regs[REFERENCE_CTR]) ||
-		CPC_IN_PCC(&cpc_desc->cpc_regs[CTR_WRAP_TIME]) ||
-		CPC_IN_PCC(ref_perf_reg);
+		CPC_IN_PCC(&cpc_desc->cpc_regs[CTR_WRAP_TIME]);
}
EXPORT_SYMBOL_GPL(cppc_perf_ctrs_in_pcc_cpu);
···
{
	struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpunum);
	struct cpc_register_resource *delivered_reg, *reference_reg,
-		*ref_perf_reg, *ctr_wrap_reg;
+		*ctr_wrap_reg;
	int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpunum);
	struct cppc_pcc_data *pcc_ss_data = NULL;
-	u64 delivered, reference, ref_perf, ctr_wrap_time;
+	u64 delivered, reference, ctr_wrap_time;
	int ret = 0, regs_in_pcc = 0;

	if (!cpc_desc) {
···
	delivered_reg = &cpc_desc->cpc_regs[DELIVERED_CTR];
	reference_reg = &cpc_desc->cpc_regs[REFERENCE_CTR];
-	ref_perf_reg = &cpc_desc->cpc_regs[REFERENCE_PERF];
	ctr_wrap_reg = &cpc_desc->cpc_regs[CTR_WRAP_TIME];
-
-	/*
-	 * If reference perf register is not supported then we should
-	 * use the nominal perf value
-	 */
-	if (!CPC_SUPPORTED(ref_perf_reg))
-		ref_perf_reg = &cpc_desc->cpc_regs[NOMINAL_PERF];

	/* Are any of the regs PCC ?*/
	if (CPC_IN_PCC(delivered_reg) || CPC_IN_PCC(reference_reg) ||
-	    CPC_IN_PCC(ctr_wrap_reg) || CPC_IN_PCC(ref_perf_reg)) {
+	    CPC_IN_PCC(ctr_wrap_reg)) {
		if (pcc_ss_id < 0) {
			pr_debug("Invalid pcc_ss_id\n");
			return -ENODEV;
···
		}
	}

-	cpc_read(cpunum, delivered_reg, &delivered);
-	cpc_read(cpunum, reference_reg, &reference);
-	cpc_read(cpunum, ref_perf_reg, &ref_perf);
+	ret = cpc_read(cpunum, delivered_reg, &delivered);
+	if (ret)
+		goto out_err;
+
+	ret = cpc_read(cpunum, reference_reg, &reference);
+	if (ret)
+		goto out_err;

	/*
	 * Per spec, if ctr_wrap_time optional register is unsupported, then the
···
	 * platform
	 */
	ctr_wrap_time = (u64)(~((u64)0));
-	if (CPC_SUPPORTED(ctr_wrap_reg))
-		cpc_read(cpunum, ctr_wrap_reg, &ctr_wrap_time);
+	if (CPC_SUPPORTED(ctr_wrap_reg)) {
+		ret = cpc_read(cpunum, ctr_wrap_reg, &ctr_wrap_time);
+		if (ret)
+			goto out_err;
+	}

-	if (!delivered || !reference || !ref_perf) {
+	if (!delivered || !reference) {
		ret = -EFAULT;
		goto out_err;
	}

	perf_fb_ctrs->delivered = delivered;
	perf_fb_ctrs->reference = reference;
-	perf_fb_ctrs->reference_perf = ref_perf;
	perf_fb_ctrs->wraparound_time = ctr_wrap_time;
out_err:
	if (regs_in_pcc)
···
	struct cpc_register_resource *auto_sel_reg;
	struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu);
	struct cppc_pcc_data *pcc_ss_data = NULL;
+	bool autosel_ffh_sysmem;
+	bool epp_ffh_sysmem;
	int ret;

	if (!cpc_desc) {
···
	auto_sel_reg = &cpc_desc->cpc_regs[AUTO_SEL_ENABLE];
	epp_set_reg = &cpc_desc->cpc_regs[ENERGY_PERF];
+
+	epp_ffh_sysmem = CPC_SUPPORTED(epp_set_reg) &&
+		(CPC_IN_FFH(epp_set_reg) || CPC_IN_SYSTEM_MEMORY(epp_set_reg));
+	autosel_ffh_sysmem = CPC_SUPPORTED(auto_sel_reg) &&
+		(CPC_IN_FFH(auto_sel_reg) || CPC_IN_SYSTEM_MEMORY(auto_sel_reg));

	if (CPC_IN_PCC(epp_set_reg) || CPC_IN_PCC(auto_sel_reg)) {
		if (pcc_ss_id < 0) {
···
		ret = send_pcc_cmd(pcc_ss_id, CMD_WRITE);
		up_write(&pcc_ss_data->pcc_lock);
	} else if (osc_cpc_flexible_adr_space_confirmed &&
-		   CPC_SUPPORTED(epp_set_reg) && CPC_IN_FFH(epp_set_reg)) {
-		ret = cpc_write(cpu, epp_set_reg, perf_ctrls->energy_perf);
+		   (epp_ffh_sysmem || autosel_ffh_sysmem)) {
+		if (autosel_ffh_sysmem) {
+			ret = cpc_write(cpu, auto_sel_reg, enable);
+			if (ret)
+				return ret;
+		}
+
+		if (epp_ffh_sysmem) {
+			ret = cpc_write(cpu, epp_set_reg,
+					perf_ctrls->energy_perf);
+			if (ret)
+				return ret;
+		}
	} else {
		ret = -ENOTSUPP;
-		pr_debug("_CPC in PCC and _CPC in FFH are not supported\n");
+		pr_debug("_CPC in PCC/FFH/SystemMemory are not supported\n");
	}

	return ret;
···
EXPORT_SYMBOL_GPL(cppc_set_enable);

/**
+ * cppc_get_perf - Get a CPU's performance controls.
+ * @cpu: CPU for which to get performance controls.
+ * @perf_ctrls: ptr to cppc_perf_ctrls. See cppc_acpi.h
+ *
+ * Return: 0 for success with perf_ctrls, -ERRNO otherwise.
+ */
+int cppc_get_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
+{
+	struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu);
+	struct cpc_register_resource *desired_perf_reg,
+		*min_perf_reg, *max_perf_reg,
+		*energy_perf_reg, *auto_sel_reg;
+	u64 desired_perf = 0, min = 0, max = 0, energy_perf = 0, auto_sel = 0;
+	int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu);
+	struct cppc_pcc_data *pcc_ss_data = NULL;
+	int ret = 0, regs_in_pcc = 0;
+
+	if (!cpc_desc) {
+		pr_debug("No CPC descriptor for CPU:%d\n", cpu);
+		return -ENODEV;
+	}
+
+	if (!perf_ctrls) {
+		pr_debug("Invalid perf_ctrls pointer\n");
+		return -EINVAL;
+	}
+
+	desired_perf_reg = &cpc_desc->cpc_regs[DESIRED_PERF];
+	min_perf_reg = &cpc_desc->cpc_regs[MIN_PERF];
+	max_perf_reg = &cpc_desc->cpc_regs[MAX_PERF];
+	energy_perf_reg = &cpc_desc->cpc_regs[ENERGY_PERF];
+	auto_sel_reg = &cpc_desc->cpc_regs[AUTO_SEL_ENABLE];
+
+	/* Are any of the regs PCC ?*/
+	if (CPC_IN_PCC(desired_perf_reg) || CPC_IN_PCC(min_perf_reg) ||
+	    CPC_IN_PCC(max_perf_reg) || CPC_IN_PCC(energy_perf_reg) ||
+	    CPC_IN_PCC(auto_sel_reg)) {
+		if (pcc_ss_id < 0) {
+			pr_debug("Invalid pcc_ss_id for CPU:%d\n", cpu);
+			return -ENODEV;
+		}
+		pcc_ss_data = pcc_data[pcc_ss_id];
+		regs_in_pcc = 1;
+		down_write(&pcc_ss_data->pcc_lock);
+		/* Ring doorbell once to update PCC subspace */
+		if (send_pcc_cmd(pcc_ss_id, CMD_READ) < 0) {
+			ret = -EIO;
+			goto out_err;
+		}
+	}
+
+	/* Read optional elements if present */
+	if (CPC_SUPPORTED(max_perf_reg)) {
+		ret = cpc_read(cpu, max_perf_reg, &max);
+		if (ret)
+			goto out_err;
+	}
+	perf_ctrls->max_perf = max;
+
+	if (CPC_SUPPORTED(min_perf_reg)) {
+		ret = cpc_read(cpu, min_perf_reg, &min);
+		if (ret)
+			goto out_err;
+	}
+	perf_ctrls->min_perf = min;
+
+	if (CPC_SUPPORTED(desired_perf_reg)) {
+		ret = cpc_read(cpu, desired_perf_reg, &desired_perf);
+		if (ret)
+			goto out_err;
+	}
+	perf_ctrls->desired_perf = desired_perf;
+
+	if (CPC_SUPPORTED(energy_perf_reg)) {
+		ret = cpc_read(cpu, energy_perf_reg, &energy_perf);
+		if (ret)
+			goto out_err;
+	}
+	perf_ctrls->energy_perf = energy_perf;
+
+	if (CPC_SUPPORTED(auto_sel_reg)) {
+		ret = cpc_read(cpu, auto_sel_reg, &auto_sel);
+		if (ret)
+			goto out_err;
+	}
+	perf_ctrls->auto_sel = (bool)auto_sel;
+
+out_err:
+	if (regs_in_pcc)
+		up_write(&pcc_ss_data->pcc_lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(cppc_get_perf);
+
+/**
 * cppc_set_perf - Set a CPU's performance controls.
 * @cpu: CPU for which to set performance controls.
 * @perf_ctrls: ptr to cppc_perf_ctrls. See cppc_acpi.h
···
	return ret;
}
EXPORT_SYMBOL_GPL(cppc_set_perf);
+
+/**
+ * cppc_get_perf_limited - Get the Performance Limited register value.
+ * @cpu: CPU from which to get Performance Limited register.
+ * @perf_limited: Pointer to store the Performance Limited value.
+ *
+ * The returned value contains sticky status bits indicating platform-imposed
+ * performance limitations.
+ *
+ * Return: 0 for success, -EIO on failure, -EOPNOTSUPP if not supported.
+ */
+int cppc_get_perf_limited(int cpu, u64 *perf_limited)
+{
+	return cppc_get_reg_val(cpu, PERF_LIMITED, perf_limited);
+}
+EXPORT_SYMBOL_GPL(cppc_get_perf_limited);
+
+/**
+ * cppc_set_perf_limited() - Clear bits in the Performance Limited register.
+ * @cpu: CPU on which to write register.
+ * @bits_to_clear: Bitmask of bits to clear in the perf_limited register.
+ *
+ * The Performance Limited register contains two sticky bits set by platform:
+ * - Bit 0 (Desired_Excursion): Set when delivered performance is constrained
+ *   below desired performance. Not used when Autonomous Selection is enabled.
+ * - Bit 1 (Minimum_Excursion): Set when delivered performance is constrained
+ *   below minimum performance.
+ *
+ * These bits are sticky and remain set until OSPM explicitly clears them.
+ * This function only allows clearing bits (the platform sets them).
+ *
+ * Return: 0 for success, -EINVAL for invalid bits, -EIO on register
+ *	   access failure, -EOPNOTSUPP if not supported.
+ */
+int cppc_set_perf_limited(int cpu, u64 bits_to_clear)
+{
+	u64 current_val, new_val;
+	int ret;
+
+	/* Only bits 0 and 1 are valid */
+	if (bits_to_clear & ~CPPC_PERF_LIMITED_MASK)
+		return -EINVAL;
+
+	if (!bits_to_clear)
+		return 0;
+
+	ret = cppc_get_perf_limited(cpu, &current_val);
+	if (ret)
+		return ret;
+
+	/* Clear the specified bits */
+	new_val = current_val & ~bits_to_clear;
+
+	return cppc_set_reg_val(cpu, PERF_LIMITED, new_val);
+}
+EXPORT_SYMBOL_GPL(cppc_set_perf_limited);

/**
 * cppc_get_transition_latency - returns frequency transition latency in ns
drivers/acpi/ec.c (-6)

···

#include "internal.h"

-#define ACPI_EC_CLASS		"embedded_controller"
-#define ACPI_EC_DEVICE_NAME	"Embedded Controller"
-
/* EC status register */
#define ACPI_EC_FLAG_OBF	0x01	/* Output buffer full */
#define ACPI_EC_FLAG_IBF	0x02	/* Input buffer full */
···
	struct acpi_device *device = ACPI_COMPANION(&pdev->dev);
	struct acpi_ec *ec;
	int ret;
-
-	strscpy(acpi_device_name(device), ACPI_EC_DEVICE_NAME);
-	strscpy(acpi_device_class(device), ACPI_EC_CLASS);

	if (boot_ec && (boot_ec->handle == device->handle ||
			!strcmp(acpi_device_hid(device), ACPI_ECDT_HID))) {
drivers/acpi/event.c (+4 -3)

···
/* ACPI notifier chain */
static BLOCKING_NOTIFIER_HEAD(acpi_chain_head);

-int acpi_notifier_call_chain(struct acpi_device *dev, u32 type, u32 data)
+int acpi_notifier_call_chain(const char *device_class,
+			     const char *bus_id, u32 type, u32 data)
{
	struct acpi_bus_event event;

-	strscpy(event.device_class, dev->pnp.device_class);
-	strscpy(event.bus_id, dev->pnp.bus_id);
+	strscpy(event.device_class, device_class);
+	strscpy(event.bus_id, bus_id);
	event.type = type;
	event.data = data;
	return (blocking_notifier_call_chain(&acpi_chain_head, 0, (void *)&event)
drivers/acpi/osl.c (+18 -1)

···
#include <linux/module.h>
#include <linux/kernel.h>
+#include <linux/panic.h>
+#include <linux/reboot.h>
#include <linux/slab.h>
#include <linux/mm.h>
#include <linux/highmem.h>
···
static bool acpi_os_initialized;
unsigned int acpi_sci_irq = INVALID_ACPI_IRQ;
bool acpi_permanent_mmap = false;
+
+static bool poweroff_on_fatal = true;
+module_param(poweroff_on_fatal, bool, 0);
+MODULE_PARM_DESC(poweroff_on_fatal, "Poweroff when encountering a fatal ACPI error");

/*
 * This list of permanent mappings is for memory that may be accessed from
···
acpi_status acpi_os_signal(u32 function, void *info)
{
+	struct acpi_signal_fatal_info *fatal_info;
+
	switch (function) {
	case ACPI_SIGNAL_FATAL:
-		pr_err("Fatal opcode executed\n");
+		fatal_info = info;
+		pr_emerg("Fatal error while evaluating ACPI control method\n");
+		pr_emerg("Type 0x%X Code 0x%X Argument 0x%X\n",
+			 fatal_info->type, fatal_info->code, fatal_info->argument);
+
+		if (poweroff_on_fatal)
+			orderly_poweroff(true);
+		else
+			add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
+
		break;
	case ACPI_SIGNAL_BREAKPOINT:
		/*
drivers/acpi/pci_link.c (-4)

···

#include "internal.h"

-#define ACPI_PCI_LINK_CLASS		"pci_irq_routing"
-#define ACPI_PCI_LINK_DEVICE_NAME	"PCI Interrupt Link"
#define ACPI_PCI_LINK_MAX_POSSIBLE	16

static int acpi_pci_link_add(struct acpi_device *device,
···
		return -ENOMEM;

	link->device = device;
-	strscpy(acpi_device_name(device), ACPI_PCI_LINK_DEVICE_NAME);
-	strscpy(acpi_device_class(device), ACPI_PCI_LINK_CLASS);
	device->driver_data = link;

	mutex_lock(&acpi_link_lock);
drivers/acpi/pci_root.c (+2 -7)

···
#include <linux/platform_data/x86/apple.h>
#include "internal.h"

-#define ACPI_PCI_ROOT_CLASS		"pci_bridge"
-#define ACPI_PCI_ROOT_DEVICE_NAME	"PCI Root Bridge"
static int acpi_pci_root_add(struct acpi_device *device,
			    const struct acpi_device_id *not_used);
static void acpi_pci_root_remove(struct acpi_device *device);
···
	root->device = device;
	root->segment = segment & 0xFFFF;
-	strscpy(acpi_device_name(device), ACPI_PCI_ROOT_DEVICE_NAME);
-	strscpy(acpi_device_class(device), ACPI_PCI_ROOT_CLASS);
	device->driver_data = root;

	if (hotadd && dmar_device_add(handle)) {
···
		goto end;
	}

-	pr_info("%s [%s] (domain %04x %pR)\n",
-		acpi_device_name(device), acpi_device_bid(device),
-		root->segment, &root->secondary);
+	pr_info("PCI Root Bridge [%s] (domain %04x %pR)\n",
+		acpi_device_bid(device), root->segment, &root->secondary);

	root->mcfg_addr = acpi_pci_root_get_mcfg_addr(handle);
drivers/acpi/power.c (-4)

···
#include "sleep.h"
#include "internal.h"

-#define ACPI_POWER_CLASS		"power_resource"
-#define ACPI_POWER_DEVICE_NAME		"Power Resource"
#define ACPI_POWER_RESOURCE_STATE_OFF	0x00
#define ACPI_POWER_RESOURCE_STATE_ON	0x01
#define ACPI_POWER_RESOURCE_STATE_UNKNOWN 0xFF
···
	mutex_init(&resource->resource_lock);
	INIT_LIST_HEAD(&resource->list_node);
	INIT_LIST_HEAD(&resource->dependents);
-	strscpy(acpi_device_name(device), ACPI_POWER_DEVICE_NAME);
-	strscpy(acpi_device_class(device), ACPI_POWER_CLASS);
	device->power.state = ACPI_STATE_UNKNOWN;
	device->flags.match_driver = true;
drivers/acpi/pptt.c (+43 -38)

···
#include <linux/cacheinfo.h>
#include <acpi/processor.h>

-/*
- * The acpi_pptt_cache_v1 in actbl2.h, which is imported from acpica,
- * only contains the cache_id field rather than all the fields of the
- * Cache Type Structure. Use this alternative structure until it is
- * resolved in acpica.
- */
-struct acpi_pptt_cache_v1_full {
-	struct acpi_subtable_header header;
-	u16 reserved;
-	u32 flags;
-	u32 next_level_of_cache;
-	u32 size;
-	u32 number_of_sets;
-	u8 associativity;
-	u8 attributes;
-	u16 line_size;
-	u32 cache_id;
-} __packed;
-
static struct acpi_subtable_header *fetch_pptt_subtable(struct acpi_table_header *table_hdr,
							u32 pptt_ref)
{
···
	return (struct acpi_pptt_cache *)fetch_pptt_subtable(table_hdr, pptt_ref);
}

-static struct acpi_pptt_cache_v1_full *upgrade_pptt_cache(struct acpi_pptt_cache *cache)
+static struct acpi_pptt_cache_v1 *upgrade_pptt_cache(struct acpi_pptt_cache *cache)
{
-	if (cache->header.length < sizeof(struct acpi_pptt_cache_v1_full))
+	if (cache->header.length < sizeof(struct acpi_pptt_cache_v1))
		return NULL;

	/* No use for v1 if the only additional field is invalid */
	if (!(cache->flags & ACPI_PPTT_CACHE_ID_VALID))
		return NULL;

-	return (struct acpi_pptt_cache_v1_full *)cache;
+	return (struct acpi_pptt_cache_v1 *)cache;
}

static struct acpi_subtable_header *acpi_get_pptt_resource(struct acpi_table_header *table_hdr,
···
				 struct acpi_pptt_cache *found_cache,
				 struct acpi_pptt_processor *cpu_node)
{
-	struct acpi_pptt_cache_v1_full *found_cache_v1;
+	struct acpi_pptt_cache_v1 *found_cache_v1;

	this_leaf->fw_token = cpu_node;
	if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID)
···
{
	struct acpi_pptt_cache *found_cache;
	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
-	u32 acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+	u32 acpi_cpu_id;
	struct cacheinfo *this_leaf;
	unsigned int index = 0;
	struct acpi_pptt_processor *cpu_node = NULL;
+
+	if (acpi_get_cpu_uid(cpu, &acpi_cpu_id) != 0)
+		return;

	while (index < get_cpu_cacheinfo(cpu)->num_leaves) {
		this_leaf = this_cpu_ci->info_list + index;
···
			       unsigned int cpu, int level, int flag)
{
	struct acpi_pptt_processor *cpu_node;
-	u32 acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+	u32 acpi_cpu_id;
+
+	if (acpi_get_cpu_uid(cpu, &acpi_cpu_id) != 0)
+		return -ENOENT;

	cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
	if (cpu_node) {
···
 *
 * Check the node representing a CPU for a given flag.
 *
- * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found or
- *	   the table revision isn't new enough.
+ * Return: -ENOENT if can't get CPU's ACPI Processor UID, the PPTT doesn't
+ *	   exist, the CPU cannot be found or the table revision isn't new
+ *	   enough.
 * 1, any passed flag set
 * 0, flag unset
 */
static int check_acpi_cpu_flag(unsigned int cpu, int rev, u32 flag)
{
	struct acpi_table_header *table;
-	u32 acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+	u32 acpi_cpu_id;
	struct acpi_pptt_processor *cpu_node = NULL;
	int ret = -ENOENT;
+
+	if (acpi_get_cpu_uid(cpu, &acpi_cpu_id) != 0)
+		return -ENOENT;

	table = acpi_get_pptt();
	if (!table)
···
 * in the PPTT. Errors caused by lack of a PPTT table, or otherwise, return 0
 * indicating we didn't find any cache levels.
 *
- * Return: -ENOENT if no PPTT table or no PPTT processor struct found.
+ * Return: -ENOENT if no PPTT table, can't get CPU's ACPI Process UID or no PPTT
+ *	   processor struct found.
 * 0 on success.
 */
int acpi_get_cache_info(unsigned int cpu, unsigned int *levels,
···

	pr_debug("Cache Setup: find cache levels for CPU=%d\n", cpu);

-	acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+	if (acpi_get_cpu_uid(cpu, &acpi_cpu_id))
+		return -ENOENT;
+
	cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
	if (!cpu_node)
		return -ENOENT;
···
 * It may not exist in single CPU systems. In simple multi-CPU systems,
 * it may be equal to the package topology level.
 *
- * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found
- * or there is no toplogy level above the CPU..
+ * Return: -ENOENT if the PPTT doesn't exist, can't get CPU's ACPI
+ * Processor UID, the CPU cannot be found or there is no toplogy level
+ * above the CPU.
 * Otherwise returns a value which represents the package for this CPU.
 */
···
	if (!table)
		return -ENOENT;

-	acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+	if (acpi_get_cpu_uid(cpu, &acpi_cpu_id) != 0)
+		return -ENOENT;
+
	cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
	if (!cpu_node || !cpu_node->parent)
		return -ENOENT;
···
	cpumask_clear(cpus);

	for_each_possible_cpu(cpu) {
-		acpi_id = get_acpi_id_for_cpu(cpu);
+		if (acpi_get_cpu_uid(cpu, &acpi_id) != 0)
+			continue;
+
		cpu_node = acpi_find_processor_node(table_hdr, acpi_id);

		while (cpu_node) {
···
	for_each_possible_cpu(cpu) {
		bool empty;
		int level = 1;
-		u32 acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+		u32 acpi_cpu_id;
		struct acpi_pptt_cache *cache;
		struct acpi_pptt_processor *cpu_node;
+
+		if (acpi_get_cpu_uid(cpu, &acpi_cpu_id) != 0)
+			continue;

		cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
		if (!cpu_node)
···

		empty = true;
		for (int i = 0; i < ARRAY_SIZE(cache_type); i++) {
-			struct acpi_pptt_cache_v1_full *cache_v1;
+			struct acpi_pptt_cache_v1 *cache_v1;

			cache = acpi_find_cache_node(table, acpi_cpu_id, cache_type[i],
						     level, &cpu_node);
···
	for_each_possible_cpu(cpu) {
		bool empty;
		int level = 1;
-		u32 acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+		u32 acpi_cpu_id;
		struct acpi_pptt_cache *cache;
		struct acpi_pptt_processor *cpu_node;
+
+		if (acpi_get_cpu_uid(cpu, &acpi_cpu_id) != 0)
+			continue;

		cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
		if (!cpu_node)
···
		empty = true;
		for (int i = 0; i < ARRAY_SIZE(cache_type); i++) {
-			struct acpi_pptt_cache_v1_full *cache_v1;
+			struct acpi_pptt_cache_v1 *cache_v1;

			cache = acpi_find_cache_node(table, acpi_cpu_id, cache_type[i],
						     level, &cpu_node);
drivers/acpi/processor_driver.c (+8 -14)

···
{
	struct acpi_device *device = data;
	struct acpi_processor *pr;
-	int saved;
+	int saved, ev_data = 0;

	if (device->handle != handle)
		return;
···
	case ACPI_PROCESSOR_NOTIFY_PERFORMANCE:
		saved = pr->performance_platform_limit;
		acpi_processor_ppc_has_changed(pr, 1);
-		if (saved == pr->performance_platform_limit)
-			break;
-		acpi_bus_generate_netlink_event(device->pnp.device_class,
-						dev_name(&device->dev), event,
-						pr->performance_platform_limit);
+		ev_data = pr->performance_platform_limit;
+		if (saved == ev_data)
+			return;
+
		break;
	case ACPI_PROCESSOR_NOTIFY_POWER:
		acpi_processor_power_state_has_changed(pr);
-		acpi_bus_generate_netlink_event(device->pnp.device_class,
-						dev_name(&device->dev), event, 0);
		break;
	case ACPI_PROCESSOR_NOTIFY_THROTTLING:
		acpi_processor_tstate_has_changed(pr);
-		acpi_bus_generate_netlink_event(device->pnp.device_class,
-						dev_name(&device->dev), event, 0);
		break;
	case ACPI_PROCESSOR_NOTIFY_HIGEST_PERF_CHANGED:
		cpufreq_update_limits(pr->id);
-		acpi_bus_generate_netlink_event(device->pnp.device_class,
-						dev_name(&device->dev), event, 0);
		break;
	default:
		acpi_handle_debug(handle, "Unsupported event [0x%x]\n", event);
-		break;
+		return;
	}

-	return;
+	acpi_bus_generate_netlink_event("processor", dev_name(&device->dev),
+					event, ev_data);
}

static int __acpi_processor_start(struct acpi_device *device);
+48 -40
drivers/acpi/processor_idle.c
··· 819 819 drv->state_count = count; 820 820 } 821 821 822 - static inline void acpi_processor_cstate_first_run_checks(void) 822 + static inline void acpi_processor_update_max_cstate(void) 823 823 { 824 - static int first_run; 825 - 826 - if (first_run) 827 - return; 828 824 dmi_check_system(processor_power_dmi_table); 829 825 max_cstate = acpi_processor_cstate_check(max_cstate); 830 826 if (max_cstate < ACPI_C_STATES_MAX) 831 827 pr_notice("processor limited to max C-state %d\n", max_cstate); 832 - 833 - first_run++; 834 828 835 829 if (nocst) 836 830 return; ··· 834 840 #else 835 841 836 842 static inline int disabled_by_idle_boot_param(void) { return 0; } 837 - static inline void acpi_processor_cstate_first_run_checks(void) { } 843 + static inline void acpi_processor_update_max_cstate(void) { } 838 844 static int acpi_processor_get_cstate_info(struct acpi_processor *pr) 839 845 { 840 846 return -ENODEV; ··· 1010 1016 result->arch_flags = parent->arch_flags; 1011 1017 result->index = parent->index; 1012 1018 1013 - strscpy(result->desc, local->desc, ACPI_CX_DESC_LEN); 1014 - strlcat(result->desc, "+", ACPI_CX_DESC_LEN); 1015 - strlcat(result->desc, parent->desc, ACPI_CX_DESC_LEN); 1019 + scnprintf(result->desc, ACPI_CX_DESC_LEN, "%s+%s", local->desc, parent->desc); 1016 1020 return true; 1017 1021 } 1018 1022 ··· 1060 1068 stash_composite_state(curr_level, flpi); 1061 1069 flat_state_cnt++; 1062 1070 flpi++; 1071 + if (flat_state_cnt >= ACPI_PROCESSOR_MAX_POWER) 1072 + break; 1063 1073 } 1064 1074 } 1065 1075 } ··· 1267 1273 1268 1274 int acpi_processor_hotplug(struct acpi_processor *pr) 1269 1275 { 1276 + struct cpuidle_device *dev = per_cpu(acpi_cpuidle_device, pr->id); 1270 1277 int ret = 0; 1271 - struct cpuidle_device *dev; 1272 1278 1273 1279 if (disabled_by_idle_boot_param()) 1274 1280 return 0; 1275 1281 1276 - if (!pr->flags.power_setup_done) 1282 + if (!pr->flags.power_setup_done || !dev) 1277 1283 return -ENODEV; 1278 1284 1279 - dev = 
per_cpu(acpi_cpuidle_device, pr->id); 1280 1285 cpuidle_pause_and_lock(); 1281 1286 cpuidle_disable_device(dev); 1282 1287 ret = acpi_processor_get_power_info(pr); ··· 1307 1314 */ 1308 1315 1309 1316 if (pr->id == 0 && cpuidle_get_driver() == &acpi_idle_driver) { 1310 - 1311 1317 /* Protect against cpu-hotplug */ 1312 1318 cpus_read_lock(); 1319 + 1320 + /* Unregister cpuidle device of all CPUs */ 1313 1321 cpuidle_pause_and_lock(); 1314 - 1315 - /* Disable all cpuidle devices */ 1316 - for_each_online_cpu(cpu) { 1317 - _pr = per_cpu(processors, cpu); 1318 - if (!_pr || !_pr->flags.power_setup_done) 1319 - continue; 1322 + for_each_possible_cpu(cpu) { 1320 1323 dev = per_cpu(acpi_cpuidle_device, cpu); 1321 - cpuidle_disable_device(dev); 1322 - } 1323 - 1324 - /* Populate Updated C-state information */ 1325 - acpi_processor_get_power_info(pr); 1326 - acpi_processor_setup_cpuidle_states(pr); 1327 - 1328 - /* Enable all cpuidle devices */ 1329 - for_each_online_cpu(cpu) { 1330 1324 _pr = per_cpu(processors, cpu); 1331 - if (!_pr || !_pr->flags.power_setup_done) 1325 + if (!_pr || !_pr->flags.power || !dev) 1332 1326 continue; 1333 - acpi_processor_get_power_info(_pr); 1334 - if (_pr->flags.power) { 1335 - dev = per_cpu(acpi_cpuidle_device, cpu); 1336 - acpi_processor_setup_cpuidle_dev(_pr, dev); 1337 - cpuidle_enable_device(dev); 1338 - } 1327 + 1328 + cpuidle_unregister_device_no_lock(dev); 1329 + kfree(dev); 1330 + _pr->flags.power = 0; 1339 1331 } 1340 1332 cpuidle_resume_and_unlock(); 1333 + 1334 + /* 1335 + * Unregister ACPI idle driver, reinitialize ACPI idle states 1336 + * and register ACPI idle driver again. 1337 + */ 1338 + acpi_processor_unregister_idle_driver(); 1339 + acpi_processor_register_idle_driver(); 1340 + 1341 + /* 1342 + * Reinitialize power information of all CPUs and re-register 1343 + * all cpuidle devices. Now idle states is ok to use, can enable 1344 + * cpuidle of each CPU safely one by one. 
1345 + */ 1346 + for_each_possible_cpu(cpu) { 1347 + _pr = per_cpu(processors, cpu); 1348 + if (!_pr) 1349 + continue; 1350 + acpi_processor_power_init(_pr); 1351 + } 1352 + 1341 1353 cpus_read_unlock(); 1342 1354 } 1343 1355 ··· 1355 1357 int ret = -ENODEV; 1356 1358 int cpu; 1357 1359 1360 + acpi_processor_update_max_cstate(); 1361 + 1358 1362 /* 1359 1363 * ACPI idle driver is used by all possible CPUs. 1360 1364 * Use the processor power info of one in them to set up idle states. ··· 1368 1368 if (!pr) 1369 1369 continue; 1370 1370 1371 - acpi_processor_cstate_first_run_checks(); 1372 1371 ret = acpi_processor_get_power_info(pr); 1373 1372 if (!ret) { 1374 1373 pr->flags.power_setup_done = 1; ··· 1383 1384 1384 1385 ret = cpuidle_register_driver(&acpi_idle_driver); 1385 1386 if (ret) { 1387 + pr->flags.power_setup_done = 0; 1386 1388 pr_debug("register %s failed.\n", acpi_idle_driver.name); 1387 1389 return; 1388 1390 } ··· 1392 1392 1393 1393 void acpi_processor_unregister_idle_driver(void) 1394 1394 { 1395 + struct acpi_processor *pr; 1396 + int cpu; 1397 + 1395 1398 cpuidle_unregister_driver(&acpi_idle_driver); 1399 + for_each_possible_cpu(cpu) { 1400 + pr = per_cpu(processors, cpu); 1401 + if (!pr) 1402 + continue; 1403 + pr->flags.power_setup_done = 0; 1404 + } 1396 1405 } 1397 1406 1398 1407 void acpi_processor_power_init(struct acpi_processor *pr) ··· 1417 1408 1418 1409 if (disabled_by_idle_boot_param()) 1419 1410 return; 1420 - 1421 - acpi_processor_cstate_first_run_checks(); 1422 1411 1423 1412 if (!acpi_processor_get_power_info(pr)) 1424 1413 pr->flags.power_setup_done = 1; ··· 1438 1431 */ 1439 1432 if (cpuidle_register_device(dev)) { 1440 1433 per_cpu(acpi_cpuidle_device, pr->id) = NULL; 1434 + pr->flags.power_setup_done = 0; 1441 1435 kfree(dev); 1442 1436 } 1443 1437 }
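Editor's note: the processor_idle.c hunk above replaces a strscpy()/strlcat() chain with a single scnprintf() call when naming a combined local+parent LPI state. A minimal userspace sketch of the same bounded "A+B" composition, using standard snprintf() in place of the kernel's scnprintf() (the state names below are illustrative, not from the patch):

```c
#include <stdio.h>
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* Compose "local+parent" into dst in one formatted write.
 * snprintf() truncates to len-1 characters and always NUL-terminates,
 * so no separate concatenation steps are needed. */
void compose_desc(char *dst, size_t len, const char *local, const char *parent)
{
    snprintf(dst, len, "%s+%s", local, parent);
}
```

The single formatted write is also safer than chained concatenation: there is exactly one place where the buffer bound is applied.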
+6 -1
drivers/acpi/riscv/rhct.c
··· 44 44 struct acpi_rhct_isa_string *isa_node; 45 45 struct acpi_table_rhct *rhct; 46 46 u32 *hart_info_node_offset; 47 - u32 acpi_cpu_id = get_acpi_id_for_cpu(cpu); 47 + u32 acpi_cpu_id; 48 + int ret; 48 49 49 50 BUG_ON(acpi_disabled); 51 + 52 + ret = acpi_get_cpu_uid(cpu, &acpi_cpu_id); 53 + if (ret != 0) 54 + return ret; 50 55 51 56 if (!table) { 52 57 rhct = acpi_get_rhct();
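Editor's note: the rhct.c change above swaps get_acpi_id_for_cpu(), which cannot report failure, for acpi_get_cpu_uid(), which returns an error code and passes the UID back through a pointer. A hedged userspace sketch of that out-parameter pattern; the lookup table here is entirely hypothetical (a real implementation consults ACPI):

```c
#include <stdint.h>
#include <stddef.h>
#include <errno.h>
#include <assert.h>

/* Hypothetical CPU-to-UID map standing in for the ACPI namespace. */
static const struct { int cpu; uint32_t uid; } uid_map[] = {
    { 0, 0x10 }, { 1, 0x11 },
};

/* Error-reporting lookup: 0 on success with *uid filled in,
 * -ENODEV when the CPU has no known UID. */
int get_cpu_uid(int cpu, uint32_t *uid)
{
    for (size_t i = 0; i < sizeof(uid_map) / sizeof(uid_map[0]); i++) {
        if (uid_map[i].cpu == cpu) {
            *uid = uid_map[i].uid;
            return 0;
        }
    }
    return -ENODEV;
}
```

Callers check the return value before using the UID, which is exactly what the `ret != 0` guard added in the hunk does.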
-4
drivers/acpi/sbs.c
··· 26 26 27 27 #include "sbshc.h" 28 28 29 - #define ACPI_SBS_CLASS "sbs" 30 - #define ACPI_AC_CLASS "ac_adapter" 31 29 #define ACPI_SBS_DEVICE_NAME "Smart Battery System" 32 30 #define ACPI_BATTERY_DIR_NAME "BAT%i" 33 31 #define ACPI_AC_DIR_NAME "AC0" ··· 646 648 647 649 sbs->hc = dev_get_drvdata(pdev->dev.parent); 648 650 sbs->device = device; 649 - strscpy(acpi_device_name(device), ACPI_SBS_DEVICE_NAME); 650 - strscpy(acpi_device_class(device), ACPI_SBS_CLASS); 651 651 652 652 result = acpi_charger_add(sbs); 653 653 if (result && result != -ENODEV)
-6
drivers/acpi/sbshc.c
··· 18 18 #include "sbshc.h" 19 19 #include "internal.h" 20 20 21 - #define ACPI_SMB_HC_CLASS "smbus_host_ctl" 22 - #define ACPI_SMB_HC_DEVICE_NAME "ACPI SMBus HC" 23 - 24 21 struct acpi_smb_hc { 25 22 struct acpi_ec *ec; 26 23 struct mutex lock; ··· 247 250 pr_err("error obtaining _EC.\n"); 248 251 return -EIO; 249 252 } 250 - 251 - strscpy(acpi_device_name(device), ACPI_SMB_HC_DEVICE_NAME); 252 - strscpy(acpi_device_class(device), ACPI_SMB_HC_CLASS); 253 253 254 254 hc = kzalloc_obj(struct acpi_smb_hc); 255 255 if (!hc)
+5 -8
drivers/acpi/thermal.c
··· 35 35 #include "internal.h" 36 36 37 37 #define ACPI_THERMAL_CLASS "thermal_zone" 38 - #define ACPI_THERMAL_DEVICE_NAME "Thermal Zone" 39 38 #define ACPI_THERMAL_NOTIFY_TEMPERATURE 0x80 40 39 #define ACPI_THERMAL_NOTIFY_THRESHOLDS 0x81 41 40 #define ACPI_THERMAL_NOTIFY_DEVICES 0x82 ··· 340 341 thermal_zone_for_each_trip(tz->thermal_zone, 341 342 acpi_thermal_adjust_trip, &atd); 342 343 acpi_queue_thermal_check(tz); 343 - acpi_bus_generate_netlink_event(adev->pnp.device_class, 344 + acpi_bus_generate_netlink_event(ACPI_THERMAL_CLASS, 344 345 dev_name(&adev->dev), event, 0); 345 346 } 346 347 ··· 542 543 { 543 544 struct acpi_thermal *tz = thermal_zone_device_priv(thermal); 544 545 545 - acpi_bus_generate_netlink_event(tz->device->pnp.device_class, 546 + acpi_bus_generate_netlink_event(ACPI_THERMAL_CLASS, 546 547 dev_name(&tz->device->dev), 547 548 ACPI_THERMAL_NOTIFY_HOT, 1); 548 549 } ··· 551 552 { 552 553 struct acpi_thermal *tz = thermal_zone_device_priv(thermal); 553 554 554 - acpi_bus_generate_netlink_event(tz->device->pnp.device_class, 555 + acpi_bus_generate_netlink_event(ACPI_THERMAL_CLASS, 555 556 dev_name(&tz->device->dev), 556 557 ACPI_THERMAL_NOTIFY_CRITICAL, 1); 557 558 ··· 799 800 800 801 tz->device = device; 801 802 strscpy(tz->name, device->pnp.bus_id); 802 - strscpy(acpi_device_name(device), ACPI_THERMAL_DEVICE_NAME); 803 - strscpy(acpi_device_class(device), ACPI_THERMAL_CLASS); 804 803 805 804 acpi_thermal_aml_dependency_fix(tz); 806 805 ··· 876 879 mutex_init(&tz->thermal_check_lock); 877 880 INIT_WORK(&tz->thermal_check_work, acpi_thermal_check_fn); 878 881 879 - pr_info("%s [%s] (%ld C)\n", acpi_device_name(device), 880 - acpi_device_bid(device), deci_kelvin_to_celsius(tz->temp_dk)); 882 + pr_info("Thermal Zone [%s] (%ld C)\n", acpi_device_bid(device), 883 + deci_kelvin_to_celsius(tz->temp_dk)); 881 884 882 885 result = acpi_dev_install_notify_handler(device, ACPI_DEVICE_NOTIFY, 883 886 acpi_thermal_notify, tz);
+48 -40
drivers/acpi/x86/cmos_rtc.c
··· 18 18 #include "../internal.h" 19 19 20 20 static const struct acpi_device_id acpi_cmos_rtc_ids[] = { 21 - { "PNP0B00" }, 22 - { "PNP0B01" }, 23 - { "PNP0B02" }, 24 - {} 21 + { "ACPI000E", 1 }, /* ACPI Time and Alarm Device (TAD) */ 22 + ACPI_CMOS_RTC_IDS 25 23 }; 26 24 27 - static acpi_status 28 - acpi_cmos_rtc_space_handler(u32 function, acpi_physical_address address, 29 - u32 bits, u64 *value64, 30 - void *handler_context, void *region_context) 25 + bool cmos_rtc_platform_device_present; 26 + 27 + static acpi_status acpi_cmos_rtc_space_handler(u32 function, 28 + acpi_physical_address address, 29 + u32 bits, u64 *value64, 30 + void *handler_context, 31 + void *region_context) 31 32 { 32 - int i; 33 + unsigned int i, bytes = DIV_ROUND_UP(bits, 8); 33 34 u8 *value = (u8 *)value64; 34 35 35 36 if (address > 0xff || !value64) 36 37 return AE_BAD_PARAMETER; 37 38 38 - if (function != ACPI_WRITE && function != ACPI_READ) 39 - return AE_BAD_PARAMETER; 39 + guard(spinlock_irq)(&rtc_lock); 40 40 41 - spin_lock_irq(&rtc_lock); 42 - 43 - for (i = 0; i < DIV_ROUND_UP(bits, 8); ++i, ++address, ++value) 44 - if (function == ACPI_READ) 45 - *value = CMOS_READ(address); 46 - else 41 + if (function == ACPI_WRITE) { 42 + for (i = 0; i < bytes; i++, address++, value++) 47 43 CMOS_WRITE(*value, address); 48 44 49 - spin_unlock_irq(&rtc_lock); 45 + return AE_OK; 46 + } 50 47 51 - return AE_OK; 48 + if (function == ACPI_READ) { 49 + for (i = 0; i < bytes; i++, address++, value++) 50 + *value = CMOS_READ(address); 51 + 52 + return AE_OK; 53 + } 54 + 55 + return AE_BAD_PARAMETER; 52 56 } 53 57 54 - int acpi_install_cmos_rtc_space_handler(acpi_handle handle) 58 + static int acpi_install_cmos_rtc_space_handler(acpi_handle handle) 55 59 { 60 + static bool cmos_rtc_space_handler_present __read_mostly; 56 61 acpi_status status; 57 62 63 + if (cmos_rtc_space_handler_present) 64 + return 0; 65 + 58 66 status = acpi_install_address_space_handler(handle, 59 - ACPI_ADR_SPACE_CMOS, 60 - 
&acpi_cmos_rtc_space_handler, 61 - NULL, NULL); 67 + ACPI_ADR_SPACE_CMOS, 68 + acpi_cmos_rtc_space_handler, 69 + NULL, NULL); 62 70 if (ACPI_FAILURE(status)) { 63 - pr_err("Error installing CMOS-RTC region handler\n"); 71 + pr_err("Failed to install CMOS-RTC address space handler\n"); 64 72 return -ENODEV; 65 73 } 66 74 75 + cmos_rtc_space_handler_present = true; 76 + 67 77 return 1; 68 78 } 69 - EXPORT_SYMBOL_GPL(acpi_install_cmos_rtc_space_handler); 70 79 71 - void acpi_remove_cmos_rtc_space_handler(acpi_handle handle) 80 + static int acpi_cmos_rtc_attach(struct acpi_device *adev, 81 + const struct acpi_device_id *id) 72 82 { 73 - if (ACPI_FAILURE(acpi_remove_address_space_handler(handle, 74 - ACPI_ADR_SPACE_CMOS, &acpi_cmos_rtc_space_handler))) 75 - pr_err("Error removing CMOS-RTC region handler\n"); 76 - } 77 - EXPORT_SYMBOL_GPL(acpi_remove_cmos_rtc_space_handler); 83 + int ret; 78 84 79 - static int acpi_cmos_rtc_attach_handler(struct acpi_device *adev, const struct acpi_device_id *id) 80 - { 81 - return acpi_install_cmos_rtc_space_handler(adev->handle); 82 - } 85 + ret = acpi_install_cmos_rtc_space_handler(adev->handle); 86 + if (ret < 0) 87 + return ret; 83 88 84 - static void acpi_cmos_rtc_detach_handler(struct acpi_device *adev) 85 - { 86 - acpi_remove_cmos_rtc_space_handler(adev->handle); 89 + if (IS_ERR_OR_NULL(acpi_create_platform_device(adev, NULL))) { 90 + pr_err("Failed to create a platform device for %s\n", (char *)id->id); 91 + return 0; 92 + } else if (!id->driver_data) { 93 + cmos_rtc_platform_device_present = true; 94 + } 95 + return 1; 87 96 } 88 97 89 98 static struct acpi_scan_handler cmos_rtc_handler = { 90 99 .ids = acpi_cmos_rtc_ids, 91 - .attach = acpi_cmos_rtc_attach_handler, 92 - .detach = acpi_cmos_rtc_detach_handler, 100 + .attach = acpi_cmos_rtc_attach, 93 101 }; 94 102 95 103 void __init acpi_cmos_rtc_init(void)
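Editor's note: the reworked CMOS address-space handler above converts the access width in bits to a byte count with DIV_ROUND_UP and copies one byte at a time through a `u8 *` view of the 64-bit value. A userspace sketch of that loop against a simulated 256-byte register file (the byte layout in the output value follows the kernel's cast trick, so it is host-byte-order dependent):

```c
#include <stdint.h>
#include <assert.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static uint8_t cmos[256];  /* simulated CMOS register file */

/* Read `bits` worth of data starting at `address` into *value64,
 * one byte at a time, mirroring the handler's read path. */
int cmos_read_bits(uint32_t address, uint32_t bits, uint64_t *value64)
{
    unsigned int bytes = DIV_ROUND_UP(bits, 8);
    uint8_t *value = (uint8_t *)value64;

    if (address > 0xff || !value64 || bytes > 8)
        return -1;  /* stand-in for AE_BAD_PARAMETER */

    *value64 = 0;
    for (unsigned int i = 0; i < bytes; i++, address++, value++)
        *value = cmos[address];
    return 0;
}
```

A 12-bit access rounds up to 2 bytes; the guard rejects out-of-range addresses just as the handler returns AE_BAD_PARAMETER for addresses above 0xff.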
+10
drivers/base/auxiliary.c
··· 496 496 } 497 497 EXPORT_SYMBOL_GPL(__devm_auxiliary_device_create); 498 498 499 + /** 500 + * dev_is_auxiliary - check if the device is an auxiliary one 501 + * @dev: device to check 502 + */ 503 + bool dev_is_auxiliary(struct device *dev) 504 + { 505 + return dev->bus == &auxiliary_bus_type; 506 + } 507 + EXPORT_SYMBOL_GPL(dev_is_auxiliary); 508 + 499 509 void __init auxiliary_bus_init(void) 500 510 { 501 511 WARN_ON(bus_register(&auxiliary_bus_type));
+92 -12
drivers/cpufreq/cppc_cpufreq.c
··· 50 50 static DEFINE_PER_CPU(struct cppc_freq_invariance, cppc_freq_inv); 51 51 static struct kthread_worker *kworker_fie; 52 52 53 - static int cppc_perf_from_fbctrs(struct cppc_perf_fb_ctrs *fb_ctrs_t0, 53 + static int cppc_perf_from_fbctrs(u64 reference_perf, 54 + struct cppc_perf_fb_ctrs *fb_ctrs_t0, 54 55 struct cppc_perf_fb_ctrs *fb_ctrs_t1); 55 56 56 57 /** ··· 71 70 struct cppc_perf_fb_ctrs fb_ctrs = {0}; 72 71 struct cppc_cpudata *cpu_data; 73 72 unsigned long local_freq_scale; 74 - u64 perf; 73 + u64 perf, ref_perf; 75 74 76 75 cpu_data = cppc_fi->cpu_data; 77 76 ··· 80 79 return; 81 80 } 82 81 83 - perf = cppc_perf_from_fbctrs(&cppc_fi->prev_perf_fb_ctrs, &fb_ctrs); 82 + ref_perf = cpu_data->perf_caps.reference_perf; 83 + perf = cppc_perf_from_fbctrs(ref_perf, 84 + &cppc_fi->prev_perf_fb_ctrs, &fb_ctrs); 84 85 if (!perf) 85 86 return; 86 87 ··· 290 287 } 291 288 #endif /* CONFIG_ACPI_CPPC_CPUFREQ_FIE */ 292 289 290 + static void cppc_cpufreq_update_perf_limits(struct cppc_cpudata *cpu_data, 291 + struct cpufreq_policy *policy) 292 + { 293 + struct cppc_perf_caps *caps = &cpu_data->perf_caps; 294 + u32 min_perf, max_perf; 295 + 296 + min_perf = cppc_khz_to_perf(caps, policy->min); 297 + max_perf = cppc_khz_to_perf(caps, policy->max); 298 + 299 + cpu_data->perf_ctrls.min_perf = 300 + clamp_t(u32, min_perf, caps->lowest_perf, caps->highest_perf); 301 + cpu_data->perf_ctrls.max_perf = 302 + clamp_t(u32, max_perf, caps->lowest_perf, caps->highest_perf); 303 + } 304 + 293 305 static int cppc_cpufreq_set_target(struct cpufreq_policy *policy, 294 306 unsigned int target_freq, 295 307 unsigned int relation) ··· 316 298 317 299 cpu_data->perf_ctrls.desired_perf = 318 300 cppc_khz_to_perf(&cpu_data->perf_caps, target_freq); 301 + cppc_cpufreq_update_perf_limits(cpu_data, policy); 302 + 319 303 freqs.old = policy->cur; 320 304 freqs.new = target_freq; 321 305 ··· 342 322 343 323 desired_perf = cppc_khz_to_perf(&cpu_data->perf_caps, target_freq); 344 324 
cpu_data->perf_ctrls.desired_perf = desired_perf; 345 - ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls); 325 + cppc_cpufreq_update_perf_limits(cpu_data, policy); 346 326 327 + ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls); 347 328 if (ret) { 348 329 pr_debug("Failed to set target on CPU:%d. ret:%d\n", 349 330 cpu, ret); ··· 615 594 goto free_mask; 616 595 } 617 596 597 + ret = cppc_get_perf(cpu, &cpu_data->perf_ctrls); 598 + if (ret) { 599 + pr_debug("Err reading CPU%d perf ctrls: ret:%d\n", cpu, ret); 600 + goto free_mask; 601 + } 602 + 618 603 return cpu_data; 619 604 620 605 free_mask: ··· 750 723 return (u32)t1 - (u32)t0; 751 724 } 752 725 753 - static int cppc_perf_from_fbctrs(struct cppc_perf_fb_ctrs *fb_ctrs_t0, 726 + static int cppc_perf_from_fbctrs(u64 reference_perf, 727 + struct cppc_perf_fb_ctrs *fb_ctrs_t0, 754 728 struct cppc_perf_fb_ctrs *fb_ctrs_t1) 755 729 { 756 730 u64 delta_reference, delta_delivered; 757 - u64 reference_perf; 758 - 759 - reference_perf = fb_ctrs_t0->reference_perf; 760 731 761 732 delta_reference = get_delta(fb_ctrs_t1->reference, 762 733 fb_ctrs_t0->reference); ··· 791 766 struct cpufreq_policy *policy __free(put_cpufreq_policy) = cpufreq_cpu_get(cpu); 792 767 struct cppc_perf_fb_ctrs fb_ctrs_t0 = {0}, fb_ctrs_t1 = {0}; 793 768 struct cppc_cpudata *cpu_data; 794 - u64 delivered_perf; 769 + u64 delivered_perf, reference_perf; 795 770 int ret; 796 771 797 772 if (!policy) ··· 808 783 return 0; 809 784 } 810 785 811 - delivered_perf = cppc_perf_from_fbctrs(&fb_ctrs_t0, &fb_ctrs_t1); 786 + reference_perf = cpu_data->perf_caps.reference_perf; 787 + delivered_perf = cppc_perf_from_fbctrs(reference_perf, 788 + &fb_ctrs_t0, &fb_ctrs_t1); 812 789 if (!delivered_perf) 813 790 goto out_invalid_counters; 814 791 ··· 876 849 static ssize_t store_auto_select(struct cpufreq_policy *policy, 877 850 const char *buf, size_t count) 878 851 { 852 + struct cppc_cpudata *cpu_data = policy->driver_data; 879 853 bool val; 880 854 int ret; 881 855 
··· 887 859 ret = cppc_set_auto_sel(policy->cpu, val); 888 860 if (ret) 889 861 return ret; 862 + 863 + cpu_data->perf_ctrls.auto_sel = val; 864 + 865 + if (val) { 866 + u32 old_min_perf = cpu_data->perf_ctrls.min_perf; 867 + u32 old_max_perf = cpu_data->perf_ctrls.max_perf; 868 + 869 + /* 870 + * When enabling autonomous selection, program MIN_PERF and 871 + * MAX_PERF from current policy limits so that the platform 872 + * uses the correct performance bounds immediately. 873 + */ 874 + cppc_cpufreq_update_perf_limits(cpu_data, policy); 875 + 876 + ret = cppc_set_perf(policy->cpu, &cpu_data->perf_ctrls); 877 + if (ret) { 878 + cpu_data->perf_ctrls.min_perf = old_min_perf; 879 + cpu_data->perf_ctrls.max_perf = old_max_perf; 880 + cppc_set_auto_sel(policy->cpu, false); 881 + cpu_data->perf_ctrls.auto_sel = false; 882 + return ret; 883 + } 884 + } 890 885 891 886 return count; 892 887 } ··· 961 910 CPPC_CPUFREQ_ATTR_RW_U64(auto_act_window, cppc_get_auto_act_window, 962 911 cppc_set_auto_act_window) 963 912 964 - CPPC_CPUFREQ_ATTR_RW_U64(energy_performance_preference_val, 965 - cppc_get_epp_perf, cppc_set_epp) 913 + static ssize_t 914 + show_energy_performance_preference_val(struct cpufreq_policy *policy, char *buf) 915 + { 916 + return cppc_cpufreq_sysfs_show_u64(policy->cpu, cppc_get_epp_perf, buf); 917 + } 918 + 919 + static ssize_t 920 + store_energy_performance_preference_val(struct cpufreq_policy *policy, 921 + const char *buf, size_t count) 922 + { 923 + struct cppc_cpudata *cpu_data = policy->driver_data; 924 + u64 val; 925 + int ret; 926 + 927 + ret = kstrtou64(buf, 0, &val); 928 + if (ret) 929 + return ret; 930 + 931 + ret = cppc_set_epp(policy->cpu, val); 932 + if (ret) 933 + return ret; 934 + 935 + cpu_data->perf_ctrls.energy_perf = val; 936 + 937 + return count; 938 + } 939 + 940 + CPPC_CPUFREQ_ATTR_RW_U64(perf_limited, cppc_get_perf_limited, 941 + cppc_set_perf_limited) 966 942 967 943 cpufreq_freq_attr_ro(freqdomain_cpus); 968 944 
cpufreq_freq_attr_rw(auto_select); 969 945 cpufreq_freq_attr_rw(auto_act_window); 970 946 cpufreq_freq_attr_rw(energy_performance_preference_val); 947 + cpufreq_freq_attr_rw(perf_limited); 971 948 972 949 static struct freq_attr *cppc_cpufreq_attr[] = { 973 950 &freqdomain_cpus, 974 951 &auto_select, 975 952 &auto_act_window, 976 953 &energy_performance_preference_val, 954 + &perf_limited, 977 955 NULL, 978 956 }; 979 957
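Editor's note: with reference_perf moved out of the feedback-counter struct, the cppc_cpufreq hunks above pass it into cppc_perf_from_fbctrs() explicitly; delivered performance is then reference_perf scaled by the ratio of delivered to reference counter deltas, with get_delta() tolerating 32-bit counter wraparound. A userspace sketch of that arithmetic (the exact guard condition in get_delta() is paraphrased from the driver, not quoted):

```c
#include <stdint.h>
#include <assert.h>

/* Counters are at least 32 bits wide: if values exceed 32 bits, subtract
 * directly; otherwise subtract in 32 bits so a wrapped counter still
 * yields the correct delta. */
uint64_t get_delta(uint64_t t1, uint64_t t0)
{
    if (t1 > t0 || t0 > (uint64_t)~(uint32_t)0)
        return t1 - t0;
    return (uint32_t)t1 - (uint32_t)t0;
}

/* delivered_perf = reference_perf * delta_delivered / delta_reference */
uint64_t perf_from_fbctrs(uint64_t reference_perf,
                          uint64_t ref0, uint64_t del0,
                          uint64_t ref1, uint64_t del1)
{
    uint64_t delta_reference = get_delta(ref1, ref0);
    uint64_t delta_delivered = get_delta(del1, del0);

    if (!delta_reference || !delta_delivered)
        return 0;  /* caller treats 0 as "counters did not advance" */
    return reference_perf * delta_delivered / delta_reference;
}
```

If the delivered counter advanced half as fast as the reference counter, the CPU ran at half of reference performance over the sampling window.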
+18 -10
drivers/cpuidle/cpuidle.c
··· 714 714 715 715 EXPORT_SYMBOL_GPL(cpuidle_register_device); 716 716 717 + void cpuidle_unregister_device_no_lock(struct cpuidle_device *dev) 718 + { 719 + if (!dev || dev->registered == 0) 720 + return; 721 + 722 + lockdep_assert_held(&cpuidle_lock); 723 + 724 + cpuidle_disable_device(dev); 725 + 726 + cpuidle_remove_sysfs(dev); 727 + 728 + __cpuidle_unregister_device(dev); 729 + 730 + cpuidle_coupled_unregister_device(dev); 731 + } 732 + EXPORT_SYMBOL_GPL(cpuidle_unregister_device_no_lock); 733 + 717 734 /** 718 735 * cpuidle_unregister_device - unregisters a CPU's idle PM feature 719 736 * @dev: the cpu ··· 741 724 return; 742 725 743 726 cpuidle_pause_and_lock(); 744 - 745 - cpuidle_disable_device(dev); 746 - 747 - cpuidle_remove_sysfs(dev); 748 - 749 - __cpuidle_unregister_device(dev); 750 - 751 - cpuidle_coupled_unregister_device(dev); 752 - 727 + cpuidle_unregister_device_no_lock(dev); 753 728 cpuidle_resume_and_unlock(); 754 729 } 755 - 756 730 EXPORT_SYMBOL_GPL(cpuidle_unregister_device); 757 731 758 732 /**
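Editor's note: the cpuidle.c hunk above splits cpuidle_unregister_device() into a lock-asserting core, cpuidle_unregister_device_no_lock(), plus a thin wrapper that takes the lock, so the hotplug path (which already holds cpuidle_lock) can reuse the core. A minimal userspace sketch of that lock-split refactoring, with a plain flag standing in for the lock and an assert() standing in for lockdep_assert_held():

```c
#include <assert.h>

static int lock_held;   /* stand-in for the kernel's cpuidle_lock */
static int registered;  /* stand-in for dev->registered */

static void take_lock(void) { lock_held = 1; }
static void drop_lock(void) { lock_held = 0; }

/* Core teardown; the caller must already hold the lock. */
void unregister_device_no_lock(void)
{
    assert(lock_held);  /* userspace stand-in for lockdep_assert_held() */
    registered = 0;
}

/* Convenience wrapper for callers that do not hold the lock. */
void unregister_device(void)
{
    take_lock();
    unregister_device_no_lock();
    drop_lock();
}
```

The pattern keeps one copy of the teardown logic while letting both lock-holding and lock-free callers use it without deadlocking.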
-2
drivers/gpu/drm/amd/include/amd_acpi.h
··· 26 26 27 27 #include <linux/types.h> 28 28 29 - #define ACPI_AC_CLASS "ac_adapter" 30 - 31 29 struct atif_verify_interface { 32 30 u16 size; /* structure size in bytes (includes size field) */ 33 31 u16 version; /* version */
-2
drivers/gpu/drm/radeon/radeon_acpi.c
··· 44 44 static inline bool radeon_atpx_dgpu_req_power_for_displays(void) { return false; } 45 45 #endif 46 46 47 - #define ACPI_AC_CLASS "ac_adapter" 48 - 49 47 struct atif_verify_interface { 50 48 u16 size; /* structure size in bytes (includes size field) */ 51 49 u16 version; /* version */
+1 -11
drivers/pci/controller/pcie-hisi-error.c
··· 287 287 288 288 priv->nb.notifier_call = hisi_pcie_notify_error; 289 289 priv->dev = &pdev->dev; 290 - ret = ghes_register_vendor_record_notifier(&priv->nb); 290 + ret = devm_ghes_register_vendor_record_notifier(&pdev->dev, &priv->nb); 291 291 if (ret) { 292 292 dev_err(&pdev->dev, 293 293 "Failed to register hisi pcie controller error handler with apei\n"); 294 294 return ret; 295 295 } 296 296 297 - platform_set_drvdata(pdev, priv); 298 - 299 297 return 0; 300 - } 301 - 302 - static void hisi_pcie_error_handler_remove(struct platform_device *pdev) 303 - { 304 - struct hisi_pcie_error_private *priv = platform_get_drvdata(pdev); 305 - 306 - ghes_unregister_vendor_record_notifier(&priv->nb); 307 298 } 308 299 309 300 static const struct acpi_device_id hisi_pcie_acpi_match[] = { ··· 308 317 .acpi_match_table = hisi_pcie_acpi_match, 309 318 }, 310 319 .probe = hisi_pcie_error_handler_probe, 311 - .remove = hisi_pcie_error_handler_remove, 312 320 }; 313 321 module_platform_driver(hisi_pcie_error_handler_driver); 314 322
+11 -5
drivers/pci/tph.c
··· 236 236 * with a specific CPU 237 237 * @pdev: PCI device 238 238 * @mem_type: target memory type (volatile or persistent RAM) 239 - * @cpu_uid: associated CPU id 239 + * @cpu: associated CPU id 240 240 * @tag: Steering Tag to be returned 241 241 * 242 242 * Return the Steering Tag for a target memory that is associated with a 243 - * specific CPU as indicated by cpu_uid. 243 + * specific CPU as indicated by cpu. 244 244 * 245 245 * Return: 0 if success, otherwise negative value (-errno) 246 246 */ 247 247 int pcie_tph_get_cpu_st(struct pci_dev *pdev, enum tph_mem_type mem_type, 248 - unsigned int cpu_uid, u16 *tag) 248 + unsigned int cpu, u16 *tag) 249 249 { 250 250 #ifdef CONFIG_ACPI 251 251 struct pci_dev *rp; 252 252 acpi_handle rp_acpi_handle; 253 253 union st_info info; 254 + u32 cpu_uid; 255 + int ret; 256 + 257 + ret = acpi_get_cpu_uid(cpu, &cpu_uid); 258 + if (ret != 0) 259 + return ret; 254 260 255 261 rp = pcie_find_root_port(pdev); 256 262 if (!rp || !rp->bus || !rp->bus->bridge) ··· 271 265 272 266 *tag = tph_extract_tag(mem_type, pdev->tph_req_type, &info); 273 267 274 - pci_dbg(pdev, "get steering tag: mem_type=%s, cpu_uid=%d, tag=%#04x\n", 268 + pci_dbg(pdev, "get steering tag: mem_type=%s, cpu=%d, tag=%#04x\n", 275 269 (mem_type == TPH_MEM_TYPE_VM) ? "volatile" : "persistent", 276 - cpu_uid, *tag); 270 + cpu, *tag); 277 271 278 272 return 0; 279 273 #else
+4 -2
drivers/perf/arm_cspmu/arm_cspmu.c
··· 1107 1107 { 1108 1108 struct acpi_apmt_node *apmt_node; 1109 1109 int affinity_flag; 1110 + u32 cpu_uid; 1110 1111 int cpu; 1112 + int ret; 1111 1113 1112 1114 apmt_node = arm_cspmu_apmt_node(cspmu->dev); 1113 1115 affinity_flag = apmt_node->flags & ACPI_APMT_FLAGS_AFFINITY; 1114 1116 1115 1117 if (affinity_flag == ACPI_APMT_FLAGS_AFFINITY_PROC) { 1116 1118 for_each_possible_cpu(cpu) { 1117 - if (apmt_node->proc_affinity == 1118 - get_acpi_id_for_cpu(cpu)) { 1119 + ret = acpi_get_cpu_uid(cpu, &cpu_uid); 1120 + if (ret == 0 && apmt_node->proc_affinity == cpu_uid) { 1119 1121 cpumask_set_cpu(cpu, &cspmu->associated_cpus); 1120 1122 break; 1121 1123 }
-2
drivers/platform/x86/hp/hp-wmi.c
··· 58 58 #define HP_POWER_LIMIT_DEFAULT 0x00 59 59 #define HP_POWER_LIMIT_NO_CHANGE 0xFF 60 60 61 - #define ACPI_AC_CLASS "ac_adapter" 62 - 63 61 #define zero_if_sup(tmp) (zero_insize_support?0:sizeof(tmp)) // use when zero insize is required 64 62 65 63 enum hp_thermal_profile_omen_v0 {
-1
drivers/platform/x86/lenovo/wmi-capdata.c
··· 53 53 #define LENOVO_CAPABILITY_DATA_01_GUID "7A8F5407-CB67-4D6E-B547-39B3BE018154" 54 54 #define LENOVO_FAN_TEST_DATA_GUID "B642801B-3D21-45DE-90AE-6E86F164FB21" 55 55 56 - #define ACPI_AC_CLASS "ac_adapter" 57 56 #define ACPI_AC_NOTIFY_STATUS 0x80 58 57 59 58 #define LWMI_FEATURE_ID_FAN_TEST 0x05
+32 -113
drivers/rtc/rtc-cmos.c
··· 27 27 28 28 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 29 29 30 + #include <linux/acpi.h> 30 31 #include <linux/kernel.h> 31 32 #include <linux/module.h> 32 33 #include <linux/init.h> ··· 216 215 217 216 /*----------------------------------------------------------------*/ 218 217 218 + static bool cmos_no_alarm(struct cmos_rtc *cmos) 219 + { 220 + return !is_valid_irq(cmos->irq) && !cmos_use_acpi_alarm(); 221 + } 222 + 219 223 static int cmos_read_time(struct device *dev, struct rtc_time *t) 220 224 { 221 225 int ret; ··· 292 286 }; 293 287 294 288 /* This not only a rtc_op, but also called directly */ 295 - if (!is_valid_irq(cmos->irq)) 289 + if (cmos_no_alarm(cmos)) 296 290 return -ETIMEDOUT; 297 291 298 292 /* Basic alarms only support hour, minute, and seconds fields. ··· 525 519 int ret; 526 520 527 521 /* This not only a rtc_op, but also called directly */ 528 - if (!is_valid_irq(cmos->irq)) 522 + if (cmos_no_alarm(cmos)) 529 523 return -EIO; 530 524 531 525 ret = cmos_validate_alarm(dev, t); ··· 822 816 #ifdef CONFIG_X86 823 817 static void use_acpi_alarm_quirks(void) 824 818 { 819 + if (acpi_gbl_FADT.flags & ACPI_FADT_FIXED_RTC) 820 + return; 821 + 825 822 switch (boot_cpu_data.x86_vendor) { 826 823 case X86_VENDOR_INTEL: 827 824 if (dmi_get_bios_year() < 2015) ··· 838 829 default: 839 830 return; 840 831 } 841 - if (!is_hpet_enabled()) 842 - return; 843 832 844 833 use_acpi_alarm = true; 845 834 } ··· 1101 1094 dev_dbg(dev, "IRQ %d is already in use\n", rtc_irq); 1102 1095 goto cleanup1; 1103 1096 } 1104 - } else { 1097 + } else if (!cmos_use_acpi_alarm()) { 1105 1098 clear_bit(RTC_FEATURE_ALARM, cmos_rtc.rtc->features); 1106 1099 } 1107 1100 ··· 1126 1119 acpi_rtc_event_setup(dev); 1127 1120 1128 1121 dev_info(dev, "%s%s, %d bytes nvram%s\n", 1129 - !is_valid_irq(rtc_irq) ? "no alarms" : 1122 + cmos_no_alarm(&cmos_rtc) ? "no alarms" : 1130 1123 cmos_rtc.mon_alrm ? "alarms up to one year" : 1131 1124 cmos_rtc.day_alrm ? 
"alarms up to one month" : 1132 1125 "alarms up to one day", ··· 1152 1145 static void cmos_do_shutdown(int rtc_irq) 1153 1146 { 1154 1147 spin_lock_irq(&rtc_lock); 1155 - if (is_valid_irq(rtc_irq)) 1148 + if (!cmos_no_alarm(&cmos_rtc)) 1156 1149 cmos_irq_disable(&cmos_rtc, RTC_IRQMASK); 1157 1150 spin_unlock_irq(&rtc_lock); 1158 1151 } ··· 1376 1369 1377 1370 static SIMPLE_DEV_PM_OPS(cmos_pm_ops, cmos_suspend, cmos_resume); 1378 1371 1379 - /*----------------------------------------------------------------*/ 1380 - 1381 - /* On non-x86 systems, a "CMOS" RTC lives most naturally on platform_bus. 1382 - * ACPI systems always list these as PNPACPI devices, and pre-ACPI PCs 1383 - * probably list them in similar PNPBIOS tables; so PNP is more common. 1384 - * 1385 - * We don't use legacy "poke at the hardware" probing. Ancient PCs that 1386 - * predate even PNPBIOS should set up platform_bus devices. 1387 - */ 1388 - 1389 - #ifdef CONFIG_PNP 1390 - 1391 - #include <linux/pnp.h> 1392 - 1393 - static int cmos_pnp_probe(struct pnp_dev *pnp, const struct pnp_device_id *id) 1394 - { 1395 - int irq; 1396 - 1397 - if (pnp_port_start(pnp, 0) == 0x70 && !pnp_irq_valid(pnp, 0)) { 1398 - irq = 0; 1399 - #ifdef CONFIG_X86 1400 - /* Some machines contain a PNP entry for the RTC, but 1401 - * don't define the IRQ. It should always be safe to 1402 - * hardcode it on systems with a legacy PIC. 
1403 - */ 1404 - if (nr_legacy_irqs()) 1405 - irq = RTC_IRQ; 1406 - #endif 1407 - } else { 1408 - irq = pnp_irq(pnp, 0); 1409 - } 1410 - 1411 - return cmos_do_probe(&pnp->dev, pnp_get_resource(pnp, IORESOURCE_IO, 0), irq); 1412 - } 1413 - 1414 - static void cmos_pnp_remove(struct pnp_dev *pnp) 1415 - { 1416 - cmos_do_remove(&pnp->dev); 1417 - } 1418 - 1419 - static void cmos_pnp_shutdown(struct pnp_dev *pnp) 1420 - { 1421 - struct device *dev = &pnp->dev; 1422 - struct cmos_rtc *cmos = dev_get_drvdata(dev); 1423 - 1424 - if (system_state == SYSTEM_POWER_OFF) { 1425 - int retval = cmos_poweroff(dev); 1426 - 1427 - if (cmos_aie_poweroff(dev) < 0 && !retval) 1428 - return; 1429 - } 1430 - 1431 - cmos_do_shutdown(cmos->irq); 1432 - } 1433 - 1434 - static const struct pnp_device_id rtc_ids[] = { 1435 - { .id = "PNP0b00", }, 1436 - { .id = "PNP0b01", }, 1437 - { .id = "PNP0b02", }, 1438 - { }, 1439 - }; 1440 - MODULE_DEVICE_TABLE(pnp, rtc_ids); 1441 - 1442 - static struct pnp_driver cmos_pnp_driver = { 1443 - .name = driver_name, 1444 - .id_table = rtc_ids, 1445 - .probe = cmos_pnp_probe, 1446 - .remove = cmos_pnp_remove, 1447 - .shutdown = cmos_pnp_shutdown, 1448 - 1449 - /* flag ensures resume() gets called, and stops syslog spam */ 1450 - .flags = PNP_DRIVER_RES_DO_NOT_CHANGE, 1451 - .driver = { 1452 - .pm = &cmos_pm_ops, 1453 - }, 1454 - }; 1455 - 1456 - #endif /* CONFIG_PNP */ 1457 - 1458 1372 #ifdef CONFIG_OF 1459 1373 static const struct of_device_id of_cmos_match[] = { 1460 1374 { ··· 1404 1476 #else 1405 1477 static inline void cmos_of_init(struct platform_device *pdev) {} 1406 1478 #endif 1479 + 1480 + #ifdef CONFIG_ACPI 1481 + static const struct acpi_device_id acpi_cmos_rtc_ids[] = { 1482 + ACPI_CMOS_RTC_IDS 1483 + }; 1484 + MODULE_DEVICE_TABLE(acpi, acpi_cmos_rtc_ids); 1485 + #endif 1486 + 1407 1487 /*----------------------------------------------------------------*/ 1408 1488 1409 1489 /* Platform setup should have set up an RTC device, when PNP is ··· 1466 
1530 .name = driver_name, 1467 1531 .pm = &cmos_pm_ops, 1468 1532 .of_match_table = of_match_ptr(of_cmos_match), 1533 + .acpi_match_table = ACPI_PTR(acpi_cmos_rtc_ids), 1469 1534 } 1470 1535 }; 1471 1536 1472 - #ifdef CONFIG_PNP 1473 - static bool pnp_driver_registered; 1474 - #endif 1475 1537 static bool platform_driver_registered; 1476 1538 1477 1539 static int __init cmos_init(void) 1478 1540 { 1479 - int retval = 0; 1541 + int retval; 1480 1542 1481 - #ifdef CONFIG_PNP 1482 - retval = pnp_register_driver(&cmos_pnp_driver); 1483 - if (retval == 0) 1484 - pnp_driver_registered = true; 1485 - #endif 1486 - 1487 - if (!cmos_rtc.dev) { 1488 - retval = platform_driver_probe(&cmos_platform_driver, 1489 - cmos_platform_probe); 1490 - if (retval == 0) 1491 - platform_driver_registered = true; 1492 - } 1493 - 1494 - if (retval == 0) 1543 + if (cmos_rtc.dev) 1495 1544 return 0; 1496 1545 1497 - #ifdef CONFIG_PNP 1498 - if (pnp_driver_registered) 1499 - pnp_unregister_driver(&cmos_pnp_driver); 1500 - #endif 1501 - return retval; 1546 + retval = platform_driver_probe(&cmos_platform_driver, cmos_platform_probe); 1547 + if (retval) 1548 + return retval; 1549 + 1550 + platform_driver_registered = true; 1551 + 1552 + return 0; 1502 1553 } 1503 1554 module_init(cmos_init); 1504 1555 1505 1556 static void __exit cmos_exit(void) 1506 1557 { 1507 - #ifdef CONFIG_PNP 1508 - if (pnp_driver_registered) 1509 - pnp_unregister_driver(&cmos_pnp_driver); 1510 - #endif 1511 1558 if (platform_driver_registered) 1512 1559 platform_driver_unregister(&cmos_platform_driver); 1513 1560 }
+14 -13
drivers/watchdog/ni903x_wdt.c
··· 8 8 #include <linux/interrupt.h> 9 9 #include <linux/io.h> 10 10 #include <linux/module.h> 11 + #include <linux/platform_device.h> 11 12 #include <linux/watchdog.h> 12 13 13 14 #define NIWD_CONTROL 0x01 ··· 178 177 .get_timeleft = ni903x_wdd_get_timeleft, 179 178 }; 180 179 181 - static int ni903x_acpi_add(struct acpi_device *device) 180 + static int ni903x_acpi_probe(struct platform_device *pdev) 182 181 { 183 - struct device *dev = &device->dev; 182 + struct device *dev = &pdev->dev; 184 183 struct watchdog_device *wdd; 185 184 struct ni903x_wdt *wdt; 186 185 acpi_status status; ··· 190 189 if (!wdt) 191 190 return -ENOMEM; 192 191 193 - device->driver_data = wdt; 192 + platform_set_drvdata(pdev, wdt); 194 193 wdt->dev = dev; 195 194 196 - status = acpi_walk_resources(device->handle, METHOD_NAME__CRS, 195 + status = acpi_walk_resources(ACPI_HANDLE(dev), METHOD_NAME__CRS, 197 196 ni903x_resources, wdt); 198 197 if (ACPI_FAILURE(status) || wdt->io_base == 0) { 199 198 dev_err(dev, "failed to get resources\n"); ··· 225 224 return 0; 226 225 } 227 226 228 - static void ni903x_acpi_remove(struct acpi_device *device) 227 + static void ni903x_acpi_remove(struct platform_device *pdev) 229 228 { 230 - struct ni903x_wdt *wdt = acpi_driver_data(device); 229 + struct ni903x_wdt *wdt = platform_get_drvdata(pdev); 231 230 232 231 ni903x_wdd_stop(&wdt->wdd); 233 232 watchdog_unregister_device(&wdt->wdd); ··· 239 238 }; 240 239 MODULE_DEVICE_TABLE(acpi, ni903x_device_ids); 241 240 242 - static struct acpi_driver ni903x_acpi_driver = { 243 - .name = NIWD_NAME, 244 - .ids = ni903x_device_ids, 245 - .ops = { 246 - .add = ni903x_acpi_add, 247 - .remove = ni903x_acpi_remove, 241 + static struct platform_driver ni903x_acpi_driver = { 242 + .probe = ni903x_acpi_probe, 243 + .remove = ni903x_acpi_remove, 244 + .driver = { 245 + .name = NIWD_NAME, 246 + .acpi_match_table = ni903x_device_ids, 248 247 }, 249 248 }; 250 249 251 - module_acpi_driver(ni903x_acpi_driver); 250 + 
module_platform_driver(ni903x_acpi_driver); 252 251 253 252 MODULE_DESCRIPTION("NI 903x Watchdog"); 254 253 MODULE_AUTHOR("Jeff Westfahl <jeff.westfahl@ni.com>");
+12 -11
drivers/xen/xen-acpi-pad.c
··· 11 11 #include <linux/kernel.h> 12 12 #include <linux/types.h> 13 13 #include <linux/acpi.h> 14 + #include <linux/platform_device.h> 14 15 #include <xen/xen.h> 15 16 #include <xen/interface/version.h> 16 17 #include <xen/xen-ops.h> ··· 108 107 } 109 108 } 110 109 111 - static int acpi_pad_add(struct acpi_device *device) 110 + static int acpi_pad_probe(struct platform_device *pdev) 112 111 { 112 + struct acpi_device *device = ACPI_COMPANION(&pdev->dev); 113 113 acpi_status status; 114 114 115 115 strcpy(acpi_device_name(device), ACPI_PROCESSOR_AGGREGATOR_DEVICE_NAME); ··· 124 122 return 0; 125 123 } 126 124 127 - static void acpi_pad_remove(struct acpi_device *device) 125 + static void acpi_pad_remove(struct platform_device *pdev) 128 126 { 129 127 mutex_lock(&xen_cpu_lock); 130 128 xen_acpi_pad_idle_cpus(0); 131 129 mutex_unlock(&xen_cpu_lock); 132 130 133 - acpi_remove_notify_handler(device->handle, 131 + acpi_remove_notify_handler(ACPI_HANDLE(&pdev->dev), 134 132 ACPI_DEVICE_NOTIFY, acpi_pad_notify); 135 133 } 136 134 ··· 139 137 {"", 0}, 140 138 }; 141 139 142 - static struct acpi_driver acpi_pad_driver = { 143 - .name = "processor_aggregator", 144 - .class = ACPI_PROCESSOR_AGGREGATOR_CLASS, 145 - .ids = pad_device_ids, 146 - .ops = { 147 - .add = acpi_pad_add, 148 - .remove = acpi_pad_remove, 140 + static struct platform_driver acpi_pad_driver = { 141 + .probe = acpi_pad_probe, 142 + .remove = acpi_pad_remove, 143 + .driver = { 144 + .name = "acpi_processor_aggregator", 145 + .acpi_match_table = pad_device_ids, 149 146 }, 150 147 }; 151 148 ··· 158 157 if (!xen_running_on_version_or_later(4, 2)) 159 158 return -ENODEV; 160 159 161 - return acpi_bus_register_driver(&acpi_pad_driver); 160 + return platform_driver_register(&acpi_pad_driver); 162 161 } 163 162 subsys_initcall(xen_acpi_pad_init);
+4 -10
include/acpi/acpi_bus.h
···
 	u32 data;
 };
 
+#define ACPI_AC_CLASS			"ac_adapter"
+
 extern struct kobject *acpi_kobj;
 extern int acpi_bus_generate_netlink_event(const char*, const char*, u8, int);
 void acpi_bus_private_data_handler(acpi_handle, void *);
···
 void acpi_dev_remove_notify_handler(struct acpi_device *adev,
 				    u32 handler_type,
 				    acpi_notify_handler handler);
-extern int acpi_notifier_call_chain(struct acpi_device *, u32, u32);
+extern int acpi_notifier_call_chain(const char *device_class,
+				    const char *bus_id, u32 type, u32 data);
 extern int register_acpi_notifier(struct notifier_block *);
 extern int unregister_acpi_notifier(struct notifier_block *);
···
 #ifdef CONFIG_X86
 bool acpi_device_override_status(struct acpi_device *adev, unsigned long long *status);
 bool acpi_quirk_skip_acpi_ac_and_battery(void);
-int acpi_install_cmos_rtc_space_handler(acpi_handle handle);
-void acpi_remove_cmos_rtc_space_handler(acpi_handle handle);
 int acpi_quirk_skip_serdev_enumeration(struct device *controller_parent, bool *skip);
 #else
 static inline bool acpi_device_override_status(struct acpi_device *adev,
···
 static inline bool acpi_quirk_skip_acpi_ac_and_battery(void)
 {
 	return false;
 }
-static inline int acpi_install_cmos_rtc_space_handler(acpi_handle handle)
-{
-	return 1;
-}
-static inline void acpi_remove_cmos_rtc_space_handler(acpi_handle handle)
-{
-}
 static inline int
 acpi_quirk_skip_serdev_enumeration(struct device *controller_parent, bool *skip)
+21 -1
include/acpi/cppc_acpi.h
···
 #define CPPC_EPP_PERFORMANCE_PREF	0x00
 #define CPPC_EPP_ENERGY_EFFICIENCY_PREF	0xFF
 
+#define CPPC_PERF_LIMITED_DESIRED_EXCURSION	BIT(0)
+#define CPPC_PERF_LIMITED_MINIMUM_EXCURSION	BIT(1)
+#define CPPC_PERF_LIMITED_MASK		(CPPC_PERF_LIMITED_DESIRED_EXCURSION | \
+					 CPPC_PERF_LIMITED_MINIMUM_EXCURSION)
+
 /* Each register has the folowing format. */
 struct cpc_reg {
 	u8 descriptor;
···
 	u32 guaranteed_perf;
 	u32 highest_perf;
 	u32 nominal_perf;
+	u32 reference_perf;
 	u32 lowest_perf;
 	u32 lowest_nonlinear_perf;
 	u32 lowest_freq;
···
 struct cppc_perf_fb_ctrs {
 	u64 reference;
 	u64 delivered;
-	u64 reference_perf;
 	u64 wraparound_time;
 };
···
 extern int cppc_get_nominal_perf(int cpunum, u64 *nominal_perf);
 extern int cppc_get_highest_perf(int cpunum, u64 *highest_perf);
 extern int cppc_get_perf_ctrs(int cpu, struct cppc_perf_fb_ctrs *perf_fb_ctrs);
+extern int cppc_get_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls);
 extern int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls);
 extern int cppc_set_enable(int cpu, bool enable);
 extern int cppc_get_perf_caps(int cpu, struct cppc_perf_caps *caps);
···
 extern int cppc_set_auto_act_window(int cpu, u64 auto_act_window);
 extern int cppc_get_auto_sel(int cpu, bool *enable);
 extern int cppc_set_auto_sel(int cpu, bool enable);
+extern int cppc_get_perf_limited(int cpu, u64 *perf_limited);
+extern int cppc_set_perf_limited(int cpu, u64 bits_to_clear);
 extern int amd_get_highest_perf(unsigned int cpu, u32 *highest_perf);
 extern int amd_get_boost_ratio_numerator(unsigned int cpu, u64 *numerator);
 extern int amd_detect_prefcore(bool *detected);
···
 	return -EOPNOTSUPP;
 }
 static inline int cppc_get_perf_ctrs(int cpu, struct cppc_perf_fb_ctrs *perf_fb_ctrs)
+{
+	return -EOPNOTSUPP;
+}
+static inline int cppc_get_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
 {
 	return -EOPNOTSUPP;
 }
···
 	return -EOPNOTSUPP;
 }
 static inline int cppc_set_auto_sel(int cpu, bool enable)
+{
+	return -EOPNOTSUPP;
+}
+static inline int cppc_get_perf_limited(int cpu, u64 *perf_limited)
+{
+	return -EOPNOTSUPP;
+}
+static inline int cppc_set_perf_limited(int cpu, u64 bits_to_clear)
 {
 	return -EOPNOTSUPP;
 }
+11
include/acpi/ghes.h
···
  */
 void ghes_unregister_vendor_record_notifier(struct notifier_block *nb);
 
+/**
+ * devm_ghes_register_vendor_record_notifier - device-managed vendor
+ * record notifier registration.
+ * @dev: device that owns the notifier lifetime
+ * @nb: pointer to the notifier_block structure of the vendor record handler
+ *
+ * Return: 0 on success, negative errno on failure.
+ */
+int devm_ghes_register_vendor_record_notifier(struct device *dev,
+					      struct notifier_block *nb);
+
 struct list_head *ghes_get_devices(void);
 
 void ghes_estatus_pool_region_free(unsigned long addr, u32 size);
-2
include/acpi/processor.h
···
 
 #include <asm/acpi.h>
 
-#define ACPI_PROCESSOR_CLASS		"processor"
-#define ACPI_PROCESSOR_DEVICE_NAME	"Processor"
 #define ACPI_PROCESSOR_DEVICE_HID	"ACPI0007"
 #define ACPI_PROCESSOR_CONTAINER_HID	"ACPI0010"
+21
include/linux/acpi.h
···
 
 acpi_handle acpi_get_processor_handle(int cpu);
 
+/**
+ * acpi_get_cpu_uid() - Get the ACPI Processor UID from the MADT table
+ * @cpu: Logical CPU number (0-based)
+ * @uid: Pointer to store the ACPI Processor UID
+ *
+ * Return: 0 on success (ACPI Processor UID stored in *uid);
+ *	   -EINVAL if the CPU number is invalid or out of range;
+ *	   -ENODEV if the ACPI Processor UID for the CPU is not found.
+ */
+int acpi_get_cpu_uid(unsigned int cpu, u32 *uid);
+
 #ifdef CONFIG_ACPI_HOTPLUG_IOAPIC
 int acpi_get_ioapic_id(acpi_handle handle, u32 gsi_base, u64 *phys_addr);
 #endif
···
 int acpi_mrrm_max_mem_region(void);
 #endif
 
+#define ACPI_CMOS_RTC_IDS	\
+	{ "PNP0B00", },		\
+	{ "PNP0B01", },		\
+	{ "PNP0B02", },		\
+	{ "", }
+
+extern bool cmos_rtc_platform_device_present;
+
 #else	/* !CONFIG_ACPI */
···
 {
 	return 1;
 }
+
+#define cmos_rtc_platform_device_present	false
 
 #endif	/* !CONFIG_ACPI */
+2
include/linux/auxiliary_bus.h
···
 	__devm_auxiliary_device_create(dev, KBUILD_MODNAME, devname, \
 				       platform_data, 0)
 
+bool dev_is_auxiliary(struct device *dev);
+
 /**
  * module_auxiliary_driver() - Helper macro for registering an auxiliary driver
  * @__auxiliary_driver: auxiliary driver struct
+2
include/linux/cpuidle.h
···
 extern void cpuidle_unregister_driver(struct cpuidle_driver *drv);
 extern int cpuidle_register_device(struct cpuidle_device *dev);
 extern void cpuidle_unregister_device(struct cpuidle_device *dev);
+extern void cpuidle_unregister_device_no_lock(struct cpuidle_device *dev);
 extern int cpuidle_register(struct cpuidle_driver *drv,
 			    const struct cpumask *const coupled_cpus);
 extern void cpuidle_unregister(struct cpuidle_driver *drv);
···
 static inline int cpuidle_register_device(struct cpuidle_device *dev)
 {return -ENODEV; }
 static inline void cpuidle_unregister_device(struct cpuidle_device *dev) { }
+static inline void cpuidle_unregister_device_no_lock(struct cpuidle_device *dev) { }
 static inline int cpuidle_register(struct cpuidle_driver *drv,
 				   const struct cpumask *const coupled_cpus)
 {return -ENODEV; }
+2 -2
include/linux/pci-tph.h
···
 			    unsigned int index, u16 tag);
 int pcie_tph_get_cpu_st(struct pci_dev *dev,
 			enum tph_mem_type mem_type,
-			unsigned int cpu_uid, u16 *tag);
+			unsigned int cpu, u16 *tag);
 void pcie_disable_tph(struct pci_dev *pdev);
 int pcie_enable_tph(struct pci_dev *pdev, int mode);
 u16 pcie_tph_get_st_table_size(struct pci_dev *pdev);
···
 { return -EINVAL; }
 static inline int pcie_tph_get_cpu_st(struct pci_dev *dev,
 				      enum tph_mem_type mem_type,
-				      unsigned int cpu_uid, u16 *tag)
+				      unsigned int cpu, u16 *tag)
 { return -EINVAL; }
 static inline void pcie_disable_tph(struct pci_dev *pdev) { }
 static inline int pcie_enable_tph(struct pci_dev *pdev, int mode)