Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'asoc-fix-v7.0-merge-window' of https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus

ASoC: Fixes for v7.0 merge window

A reasonably small set of fixes and quirks that came in during the merge
window; there's one more pending that I'll send tomorrow if you didn't
send a PR already.

+2030 -1018
+3 -1
.mailmap
···
 Alexander Lobakin <alobakin@pm.me> <bloodyreaper@yandex.ru>
 Alexander Mikhalitsyn <alexander@mihalicyn.com> <alexander.mikhalitsyn@virtuozzo.com>
 Alexander Mikhalitsyn <alexander@mihalicyn.com> <aleksandr.mikhalitsyn@canonical.com>
+Alexander Mikhalitsyn <alexander@mihalicyn.com> <aleksandr.mikhalitsyn@futurfusion.io>
 Alexander Sverdlin <alexander.sverdlin@gmail.com> <alexander.sverdlin.ext@nsn.com>
 Alexander Sverdlin <alexander.sverdlin@gmail.com> <alexander.sverdlin@gmx.de>
 Alexander Sverdlin <alexander.sverdlin@gmail.com> <alexander.sverdlin@nokia.com>
···
 Subbaraman Narayanamurthy <quic_subbaram@quicinc.com> <subbaram@codeaurora.org>
 Subhash Jadavani <subhashj@codeaurora.org>
 Sudarshan Rajagopalan <quic_sudaraja@quicinc.com> <sudaraja@codeaurora.org>
-Sudeep Holla <sudeep.holla@arm.com> Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
+Sudeep Holla <sudeep.holla@kernel.org> Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
+Sudeep Holla <sudeep.holla@kernel.org> <sudeep.holla@arm.com>
 Sumit Garg <sumit.garg@kernel.org> <sumit.garg@linaro.org>
 Sumit Semwal <sumit.semwal@ti.com>
 Surabhi Vishnoi <quic_svishnoi@quicinc.com> <svishnoi@codeaurora.org>
-10
Documentation/ABI/testing/sysfs-class-tsm
···
     signals when the PCI layer is able to support establishment of
     link encryption and other device-security features coordinated
     through a platform tsm.
-
-What:        /sys/class/tsm/tsmN/streamH.R.E
-Contact:     linux-pci@vger.kernel.org
-Description:
-    (RO) When a host bridge has established a secure connection via
-    the platform TSM, symlink appears. The primary function of this
-    is have a system global review of TSM resource consumption
-    across host bridges. The link points to the endpoint PCI device
-    and matches the same link published by the host bridge. See
-    Documentation/ABI/testing/sysfs-devices-pci-host-bridge.
+5
Documentation/admin-guide/kernel-parameters.txt
···
     If there are multiple matching configurations changing
     the same attribute, the last one is used.
 
+    liveupdate=    [KNL,EARLY]
+        Format: <bool>
+        Enable Live Update Orchestrator (LUO).
+        Default: off.
+
     load_ramdisk=    [RAM] [Deprecated]
 
     lockd.nlm_grace_period=P  [NFS] Assign grace period.
+3 -3
Documentation/devicetree/bindings/sound/asahi-kasei,ak4458.yaml
···
   reg:
     maxItems: 1
 
-  avdd-supply:
+  AVDD-supply:
     description: Analog power supply
 
-  dvdd-supply:
+  DVDD-supply:
     description: Digital power supply
 
   reset-gpios:
···
     properties:
       dsd-path: false
 
-additionalProperties: false
+unevaluatedProperties: false
 
 examples:
   - |
+6 -3
Documentation/devicetree/bindings/sound/asahi-kasei,ak5558.yaml
···
   reg:
     maxItems: 1
 
-  avdd-supply:
+  AVDD-supply:
     description: A 1.8V supply that powers up the AVDD pin.
 
-  dvdd-supply:
+  DVDD-supply:
     description: A 1.2V supply that powers up the DVDD pin.
 
   reset-gpios:
···
   - compatible
   - reg
 
-additionalProperties: false
+allOf:
+  - $ref: dai-common.yaml#
+
+unevaluatedProperties: false
 
 examples:
   - |
+35 -18
MAINTAINERS
···
 ACPI FOR ARM64 (ACPI/arm64)
 M: Lorenzo Pieralisi <lpieralisi@kernel.org>
 M: Hanjun Guo <guohanjun@huawei.com>
-M: Sudeep Holla <sudeep.holla@arm.com>
+M: Sudeep Holla <sudeep.holla@kernel.org>
 L: linux-acpi@vger.kernel.org
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
···
 F: include/linux/acpi_rimt.h
 
 ACPI PCC(Platform Communication Channel) MAILBOX DRIVER
-M: Sudeep Holla <sudeep.holla@arm.com>
+M: Sudeep Holla <sudeep.holla@kernel.org>
 L: linux-acpi@vger.kernel.org
 S: Supported
 F: drivers/mailbox/pcc.c
···
 F: arch/arm/mach-footbridge/
 
 ARM/FREESCALE IMX / MXC ARM ARCHITECTURE
-M: Shawn Guo <shawnguo@kernel.org>
+M: Frank Li <Frank.Li@nxp.com>
 M: Sascha Hauer <s.hauer@pengutronix.de>
 R: Pengutronix Kernel Team <kernel@pengutronix.de>
 R: Fabio Estevam <festevam@gmail.com>
 L: imx@lists.linux.dev
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/frank.li/linux.git
 F: Documentation/devicetree/bindings/firmware/fsl*
 F: Documentation/devicetree/bindings/firmware/nxp*
 F: arch/arm/boot/dts/nxp/imx/
···
 N: \bmxc[^\d]
 
 ARM/FREESCALE LAYERSCAPE ARM ARCHITECTURE
-M: Shawn Guo <shawnguo@kernel.org>
+M: Frank Li <Frank.Li@nxp.com>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/frank.li/linux.git
 F: arch/arm/boot/dts/nxp/ls/
 F: arch/arm64/boot/dts/freescale/fsl-*
 F: arch/arm64/boot/dts/freescale/qoriq-*
 
 ARM/FREESCALE VYBRID ARM ARCHITECTURE
-M: Shawn Guo <shawnguo@kernel.org>
+M: Frank Li <Frank.Li@nxp.com>
 M: Sascha Hauer <s.hauer@pengutronix.de>
 R: Pengutronix Kernel Team <kernel@pengutronix.de>
 R: Stefan Agner <stefan@agner.ch>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/frank.li/linux.git
 F: arch/arm/boot/dts/nxp/vf/
 F: arch/arm/mach-imx/*vf610*
 
···
 
 ARM/VERSATILE EXPRESS PLATFORM
 M: Liviu Dudau <liviu.dudau@arm.com>
-M: Sudeep Holla <sudeep.holla@arm.com>
+M: Sudeep Holla <sudeep.holla@kernel.org>
 M: Lorenzo Pieralisi <lpieralisi@kernel.org>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
···
 
 CPU FREQUENCY DRIVERS - VEXPRESS SPC ARM BIG LITTLE
 M: Viresh Kumar <viresh.kumar@linaro.org>
-M: Sudeep Holla <sudeep.holla@arm.com>
+M: Sudeep Holla <sudeep.holla@kernel.org>
 L: linux-pm@vger.kernel.org
 S: Maintained
 W: http://www.arm.com/products/processors/technologies/biglittleprocessing.php
···
 
 CPUIDLE DRIVER - ARM PSCI
 M: Lorenzo Pieralisi <lpieralisi@kernel.org>
-M: Sudeep Holla <sudeep.holla@arm.com>
+M: Sudeep Holla <sudeep.holla@kernel.org>
 M: Ulf Hansson <ulf.hansson@linaro.org>
 L: linux-pm@vger.kernel.org
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
···
 F: tools/firewire/
 
 FIRMWARE FRAMEWORK FOR ARMV8-A
-M: Sudeep Holla <sudeep.holla@arm.com>
+M: Sudeep Holla <sudeep.holla@kernel.org>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: drivers/firmware/arm_ffa/
···
 F: scripts/gendwarfksyms/
 
 GENERIC ARCHITECTURE TOPOLOGY
-M: Sudeep Holla <sudeep.holla@arm.com>
+M: Sudeep Holla <sudeep.holla@kernel.org>
 L: linux-kernel@vger.kernel.org
 S: Maintained
 F: drivers/base/arch_topology.c
···
 F: Documentation/ABI/testing/sysfs-devices-platform-kunpeng_hccs
 F: drivers/soc/hisilicon/kunpeng_hccs.c
 F: drivers/soc/hisilicon/kunpeng_hccs.h
+
+HISILICON SOC HHA DRIVER
+M: Yushan Wang <wangyushan12@huawei.com>
+S: Maintained
+F: drivers/cache/hisi_soc_hha.c
 
 HISILICON LPC BUS DRIVER
 M: Jay Fang <f.fangjian@huawei.com>
···
 F: include/linux/mailbox/arm_mhuv2_message.h
 
 MAILBOX ARM MHUv3
-M: Sudeep Holla <sudeep.holla@arm.com>
+M: Sudeep Holla <sudeep.holla@kernel.org>
 M: Cristian Marussi <cristian.marussi@arm.com>
 L: linux-kernel@vger.kernel.org
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
···
 PIN CONTROLLER - FREESCALE
 M: Dong Aisheng <aisheng.dong@nxp.com>
 M: Fabio Estevam <festevam@gmail.com>
-M: Shawn Guo <shawnguo@kernel.org>
+M: Frank Li <Frank.Li@nxp.com>
 M: Jacky Bai <ping.bai@nxp.com>
 R: Pengutronix Kernel Team <kernel@pengutronix.de>
 R: NXP S32 Linux Team <s32@nxp.com>
···
 F: Documentation/devicetree/bindings/net/pse-pd/
 F: drivers/net/pse-pd/
 F: net/ethtool/pse-pd.c
+
+PSP SECURITY PROTOCOL
+M: Daniel Zahka <daniel.zahka@gmail.com>
+M: Jakub Kicinski <kuba@kernel.org>
+M: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
+F: Documentation/netlink/specs/psp.yaml
+F: Documentation/networking/psp.rst
+F: include/net/psp/
+F: include/net/psp.h
+F: include/uapi/linux/psp.h
+F: net/psp/
+K: struct\ psp(_assoc|_dev|hdr)\b
 
 PSTORE FILESYSTEM
 M: Kees Cook <kees@kernel.org>
···
 SECURE MONITOR CALL(SMC) CALLING CONVENTION (SMCCC)
 M: Mark Rutland <mark.rutland@arm.com>
 M: Lorenzo Pieralisi <lpieralisi@kernel.org>
-M: Sudeep Holla <sudeep.holla@arm.com>
+M: Sudeep Holla <sudeep.holla@kernel.org>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: drivers/firmware/smccc/
···
 F: drivers/mfd/syscon.c
 
 SYSTEM CONTROL & POWER/MANAGEMENT INTERFACE (SCPI/SCMI) Message Protocol drivers
-M: Sudeep Holla <sudeep.holla@arm.com>
+M: Sudeep Holla <sudeep.holla@kernel.org>
 R: Cristian Marussi <cristian.marussi@arm.com>
 L: arm-scmi@vger.kernel.org
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
···
 
 TRUSTED SERVICES TEE DRIVER
 M: Balint Dobszay <balint.dobszay@arm.com>
-M: Sudeep Holla <sudeep.holla@arm.com>
+M: Sudeep Holla <sudeep.holla@kernel.org>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L: trusted-services@lists.trustedfirmware.org
 S: Maintained
+1 -1
Makefile
···
 VERSION = 6
 PATCHLEVEL = 19
 SUBLEVEL = 0
-EXTRAVERSION = -rc8
+EXTRAVERSION =
 NAME = Baby Opossum Posse
 
 # *DOCUMENTATION*
+4 -1
arch/arm/include/asm/string.h
···
 extern void *__memset64(uint64_t *, uint32_t low, __kernel_size_t, uint32_t hi);
 static inline void *memset64(uint64_t *p, uint64_t v, __kernel_size_t n)
 {
-    return __memset64(p, v, n * 8, v >> 32);
+    if (IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))
+        return __memset64(p, v, n * 8, v >> 32);
+    else
+        return __memset64(p, v >> 32, n * 8, v);
 }
 
 /*
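The helper takes the 64-bit pattern as two 32-bit register halves, so which half must land at the lower address depends on byte order; the hunk above swaps the halves for big-endian builds. A standalone sketch of the same split, using __BYTE_ORDER__ in place of the kernel's CONFIG_CPU_LITTLE_ENDIAN test (memset64_ref is a hypothetical reference implementation, not the kernel's):

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Hypothetical reference: fill n u64 slots with v by storing the
     * two 32-bit halves in increasing-address order. */
    static void *memset64_ref(uint64_t *p, uint64_t v, size_t n)
    {
    #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
        uint32_t first = (uint32_t)v;          /* low half at the lower address */
        uint32_t second = (uint32_t)(v >> 32);
    #else
        uint32_t first = (uint32_t)(v >> 32);  /* big-endian: high half first */
        uint32_t second = (uint32_t)v;
    #endif
        for (size_t i = 0; i < n; i++) {
            uint32_t pair[2] = { first, second };
            memcpy(&p[i], pair, sizeof(pair));
        }
        return p;
    }
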
+4 -3
arch/x86/include/asm/kfence.h
···
 {
     unsigned int level;
     pte_t *pte = lookup_address(addr, &level);
-    pteval_t val;
+    pteval_t val, new;
 
     if (WARN_ON(!pte || level != PG_LEVEL_4K))
         return false;
···
         return true;
 
     /*
-     * Otherwise, invert the entire PTE. This avoids writing out an
+     * Otherwise, flip the Present bit, taking care to avoid writing an
      * L1TF-vulnerable PTE (not present, without the high address bits
      * set).
      */
-    set_pte(pte, __pte(~val));
+    new = val ^ _PAGE_PRESENT;
+    set_pte(pte, __pte(flip_protnone_guard(val, new, PTE_PFN_MASK)));
 
     /*
      * If the page was protected (non-present) and we're making it
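On CPUs affected by L1TF, a PTE that is marked not-present but still carries a plausible physical address can be dereferenced speculatively, so x86 inverts the address bits whenever the Present bit is cleared; the hunk above switches from inverting the whole PTE to flipping only Present and letting the guard invert the PFN field. A simplified standalone sketch of that guard (the bit layout is illustrative, not the real x86 one):

    #include <stdint.h>

    #define PAGE_PRESENT 0x1ULL
    #define PFN_MASK     0x000ffffffffff000ULL /* illustrative PFN field */

    /* If the Present bit changes across an update, invert the PFN bits
     * so a not-present entry never carries a real physical address. */
    static uint64_t flip_guard(uint64_t oldval, uint64_t newval, uint64_t mask)
    {
        if ((oldval ^ newval) & PAGE_PRESENT)
            newval ^= mask;
        return newval;
    }

    /* Toggle protection on a page, as the kfence hunk does. */
    static uint64_t toggle_present(uint64_t val)
    {
        return flip_guard(val, val ^ PAGE_PRESENT, PFN_MASK);
    }
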
+2 -2
arch/x86/include/asm/vmware.h
···
       "b" (in1),
       "c" (cmd),
       "d" (0)
-    : "cc", "memory");
+    : "di", "si", "cc", "memory");
     return out0;
 }
 
···
       "b" (in1),
       "c" (cmd),
       "d" (0)
-    : "cc", "memory");
+    : "di", "si", "cc", "memory");
     return out0;
 }
 
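The fix declares %edi and %esi in the clobber lists: the hypercall sequence can modify them, and without the declaration the compiler is free to keep live values in those registers across the asm statement. A minimal sketch of the pattern (GCC/Clang x86 extended asm; the "nop" is a stand-in for the real hypercall instruction):

    #include <stdint.h>

    static inline uint32_t hypercall_sketch(uint32_t cmd, uint32_t in1)
    {
        uint32_t out0;

        /* "di"/"si" tell the compiler those registers are trashed. */
        asm volatile ("nop" /* stand-in for the hypercall insn */
                      : "=a" (out0)
                      : "a" (0), "b" (in1), "c" (cmd), "d" (0)
                      : "di", "si", "cc", "memory");
        return out0;
    }
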
+2 -1
arch/x86/kvm/irq.c
···
      */
     spin_lock_irq(&kvm->irqfds.lock);
 
-    if (irqfd->irq_entry.type == KVM_IRQ_ROUTING_MSI) {
+    if (irqfd->irq_entry.type == KVM_IRQ_ROUTING_MSI ||
+        WARN_ON_ONCE(irqfd->irq_bypass_vcpu)) {
         ret = kvm_pi_update_irte(irqfd, NULL);
         if (ret)
             pr_info("irq bypass consumer (eventfd %p) unregistration fails: %d\n",
+2 -2
arch/x86/kvm/svm/avic.c
···
 
 static int avic_init_backing_page(struct kvm_vcpu *vcpu)
 {
+    u32 max_id = x2avic_enabled ? x2avic_max_physical_id : AVIC_MAX_PHYSICAL_ID;
     struct kvm_svm *kvm_svm = to_kvm_svm(vcpu->kvm);
     struct vcpu_svm *svm = to_svm(vcpu);
     u32 id = vcpu->vcpu_id;
···
      * avic_vcpu_load() expects to be called if and only if the vCPU has
      * fully initialized AVIC.
      */
-    if ((!x2avic_enabled && id > AVIC_MAX_PHYSICAL_ID) ||
-        (id > x2avic_max_physical_id)) {
+    if (id > max_id) {
         kvm_set_apicv_inhibit(vcpu->kvm, APICV_INHIBIT_REASON_PHYSICAL_ID_TOO_BIG);
         vcpu->arch.apic->apicv_active = false;
         return 0;
+2
arch/x86/kvm/svm/svm.c
···
      */
     kvm_cpu_cap_clear(X86_FEATURE_BUS_LOCK_DETECT);
     kvm_cpu_cap_clear(X86_FEATURE_MSR_IMM);
+
+    kvm_setup_xss_caps();
 }
 
 static __init int svm_hardware_setup(void)
+2
arch/x86/kvm/vmx/vmx.c
···
         kvm_cpu_cap_clear(X86_FEATURE_SHSTK);
         kvm_cpu_cap_clear(X86_FEATURE_IBT);
     }
+
+    kvm_setup_xss_caps();
 }
 
 static bool vmx_is_io_intercepted(struct kvm_vcpu *vcpu,
+17 -13
arch/x86/kvm/x86.c
···
 };
 #endif
 
+void kvm_setup_xss_caps(void)
+{
+    if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
+        kvm_caps.supported_xss = 0;
+
+    if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK) &&
+        !kvm_cpu_cap_has(X86_FEATURE_IBT))
+        kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
+
+    if ((kvm_caps.supported_xss & XFEATURE_MASK_CET_ALL) != XFEATURE_MASK_CET_ALL) {
+        kvm_cpu_cap_clear(X86_FEATURE_SHSTK);
+        kvm_cpu_cap_clear(X86_FEATURE_IBT);
+        kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
+    }
+}
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_setup_xss_caps);
+
 static inline void kvm_ops_update(struct kvm_x86_init_ops *ops)
 {
     memcpy(&kvm_x86_ops, ops->runtime_ops, sizeof(kvm_x86_ops));
···
     /* KVM always ignores guest PAT for shadow paging. */
     if (!tdp_enabled)
         kvm_caps.supported_quirks &= ~KVM_X86_QUIRK_IGNORE_GUEST_PAT;
-
-    if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
-        kvm_caps.supported_xss = 0;
-
-    if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK) &&
-        !kvm_cpu_cap_has(X86_FEATURE_IBT))
-        kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
-
-    if ((kvm_caps.supported_xss & XFEATURE_MASK_CET_ALL) != XFEATURE_MASK_CET_ALL) {
-        kvm_cpu_cap_clear(X86_FEATURE_SHSTK);
-        kvm_cpu_cap_clear(X86_FEATURE_IBT);
-        kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
-    }
 
     if (kvm_caps.has_tsc_control) {
         /*
+2
arch/x86/kvm/x86.h
···
 
 extern bool enable_pmu;
 
+void kvm_setup_xss_caps(void);
+
 /*
  * Get a filtered version of KVM's supported XCR0 that strips out dynamic
  * features for which the current process doesn't (yet) have permission to use.
+16 -3
drivers/android/binder.c
···
  * @t:         the binder transaction that failed
  * @data_size: the user provided data size for the transaction
  * @error:     enum binder_driver_return_protocol returned to sender
+ *
+ * Note that t->buffer is not safe to access here, as it may have been
+ * released (or not yet allocated). Callers should guarantee all the
+ * transaction items used here are safe to access.
  */
 static void binder_netlink_report(struct binder_proc *proc,
                   struct binder_transaction *t,
···
             goto err_dead_proc_or_thread;
         }
     } else {
+        /*
+         * Make a transaction copy. It is not safe to access 't' after
+         * binder_proc_transaction() reported a pending frozen. The
+         * target could thaw and consume the transaction at any point.
+         * Instead, use a safe 't_copy' for binder_netlink_report().
+         */
+        struct binder_transaction t_copy = *t;
+
         BUG_ON(target_node == NULL);
         BUG_ON(t->buffer->async_transaction != 1);
         return_error = binder_proc_transaction(t, target_proc, NULL);
···
          */
         if (return_error == BR_TRANSACTION_PENDING_FROZEN) {
             tcomplete->type = BINDER_WORK_TRANSACTION_PENDING;
-            binder_netlink_report(proc, t, tr->data_size,
+            binder_netlink_report(proc, &t_copy, tr->data_size,
                           return_error);
         }
         binder_enqueue_thread_work(thread, tcomplete);
···
     return;
 
 err_dead_proc_or_thread:
-    binder_txn_error("%d:%d dead process or thread\n",
-        thread->pid, proc->pid);
+    binder_txn_error("%d:%d %s process or thread\n",
+        proc->pid, thread->pid,
+        return_error == BR_FROZEN_REPLY ? "frozen" : "dead");
     return_error_line = __LINE__;
     binder_dequeue_work(proc, tcomplete);
 err_translate_failed:
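Once binder_proc_transaction() hands the transaction to the target, the target may thaw and consume (free) it at any point, so the later netlink report must not dereference t; the fix snapshots the transaction onto the stack first. The same snapshot-before-publish pattern in miniature (all names here are illustrative, not binder's):

    #include <stdio.h>

    struct txn { int debug_id; unsigned long data_size; };

    /* Stand-in: after this returns, another owner may free 't'. */
    static int publish(struct txn *t) { (void)t; return 1; }

    static void report(const struct txn *t)
    {
        printf("txn %d pending (%lu bytes)\n", t->debug_id, t->data_size);
    }

    static int send(struct txn *t)
    {
        struct txn snapshot = *t;   /* copy while 't' is still ours */
        int ret = publish(t);       /* ownership may transfer here */

        if (ret)
            report(&snapshot);      /* never touch 't' again */
        return ret;
    }
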
+6 -11
drivers/android/binder/rust_binderfs.c
···
     mutex_lock(&binderfs_minors_mutex);
     if (++info->device_count <= info->mount_opts.max)
         minor = ida_alloc_max(&binderfs_minors,
-                      use_reserve ? BINDERFS_MAX_MINOR :
-                                    BINDERFS_MAX_MINOR_CAPPED,
+                      use_reserve ? BINDERFS_MAX_MINOR - 1 :
+                                    BINDERFS_MAX_MINOR_CAPPED - 1,
                       GFP_KERNEL);
     else
         minor = -ENOSPC;
···
     if (!device)
         return -ENOMEM;
 
-    /* If we have already created a binder-control node, return. */
-    if (info->control_dentry) {
-        ret = 0;
-        goto out;
-    }
-
     ret = -ENOMEM;
     inode = new_inode(sb);
     if (!inode)
···
     /* Reserve a new minor number for the new device. */
     mutex_lock(&binderfs_minors_mutex);
     minor = ida_alloc_max(&binderfs_minors,
-                  use_reserve ? BINDERFS_MAX_MINOR :
-                                BINDERFS_MAX_MINOR_CAPPED,
+                  use_reserve ? BINDERFS_MAX_MINOR - 1 :
+                                BINDERFS_MAX_MINOR_CAPPED - 1,
                   GFP_KERNEL);
     mutex_unlock(&binderfs_minors_mutex);
     if (minor < 0) {
···
 
     inode->i_private = device;
     info->control_dentry = dentry;
-    d_add(dentry, inode);
+    d_make_persistent(dentry, inode);
+    dput(dentry);
 
     return 0;
 
+70 -39
drivers/android/binder/thread.rs
···
     sync::atomic::{AtomicU32, Ordering},
 };
 
+fn is_aligned(value: usize, to: usize) -> bool {
+    value % to == 0
+}
+
 /// Stores the layout of the scatter-gather entries. This is used during the `translate_objects`
 /// call and is discarded when it returns.
 struct ScatterGatherState {
···
 }
 
 /// This entry specifies that a fixup should happen at `target_offset` of the
-/// buffer. If `skip` is nonzero, then the fixup is a `binder_fd_array_object`
-/// and is applied later. Otherwise if `skip` is zero, then the size of the
-/// fixup is `sizeof::<u64>()` and `pointer_value` is written to the buffer.
-struct PointerFixupEntry {
-    /// The number of bytes to skip, or zero for a `binder_buffer_object` fixup.
-    skip: usize,
-    /// The translated pointer to write when `skip` is zero.
-    pointer_value: u64,
-    /// The offset at which the value should be written. The offset is relative
-    /// to the original buffer.
-    target_offset: usize,
+/// buffer.
+enum PointerFixupEntry {
+    /// A fixup for a `binder_buffer_object`.
+    Fixup {
+        /// The translated pointer to write.
+        pointer_value: u64,
+        /// The offset at which the value should be written. The offset is relative
+        /// to the original buffer.
+        target_offset: usize,
+    },
+    /// A skip for a `binder_fd_array_object`.
+    Skip {
+        /// The number of bytes to skip.
+        skip: usize,
+        /// The offset at which the skip should happen. The offset is relative
+        /// to the original buffer.
+        target_offset: usize,
+    },
 }
 
 /// Return type of `apply_and_validate_fixup_in_parent`.
···
 
         parent_entry.fixup_min_offset = info.new_min_offset;
         parent_entry.pointer_fixups.push(
-            PointerFixupEntry {
-                skip: 0,
+            PointerFixupEntry::Fixup {
                 pointer_value: buffer_ptr_in_user_space,
                 target_offset: info.target_offset,
             },
···
         let num_fds = usize::try_from(obj.num_fds).map_err(|_| EINVAL)?;
         let fds_len = num_fds.checked_mul(size_of::<u32>()).ok_or(EINVAL)?;
 
+        if !is_aligned(parent_offset, size_of::<u32>()) {
+            return Err(EINVAL.into());
+        }
+
         let info = sg_state.validate_parent_fixup(parent_index, parent_offset, fds_len)?;
         view.alloc.info_add_fd_reserve(num_fds)?;
 
···
             }
         };
 
+        if !is_aligned(parent_entry.sender_uaddr, size_of::<u32>()) {
+            return Err(EINVAL.into());
+        }
+
         parent_entry.fixup_min_offset = info.new_min_offset;
         parent_entry
             .pointer_fixups
             .push(
-                PointerFixupEntry {
+                PointerFixupEntry::Skip {
                     skip: fds_len,
-                    pointer_value: 0,
                     target_offset: info.target_offset,
                 },
                 GFP_KERNEL,
···
             .sender_uaddr
             .checked_add(parent_offset)
             .ok_or(EINVAL)?;
+
         let mut fda_bytes = KVec::new();
         UserSlice::new(UserPtr::from_addr(fda_uaddr as _), fds_len)
             .read_all(&mut fda_bytes, GFP_KERNEL)?;
···
         let mut reader =
             UserSlice::new(UserPtr::from_addr(sg_entry.sender_uaddr), sg_entry.length).reader();
         for fixup in &mut sg_entry.pointer_fixups {
-            let fixup_len = if fixup.skip == 0 {
-                size_of::<u64>()
-            } else {
-                fixup.skip
+            let (fixup_len, fixup_offset) = match fixup {
+                PointerFixupEntry::Fixup { target_offset, .. } => {
+                    (size_of::<u64>(), *target_offset)
+                }
+                PointerFixupEntry::Skip {
+                    skip,
+                    target_offset,
+                } => (*skip, *target_offset),
             };
 
-            let target_offset_end = fixup.target_offset.checked_add(fixup_len).ok_or(EINVAL)?;
-            if fixup.target_offset < end_of_previous_fixup || offset_end < target_offset_end {
+            let target_offset_end = fixup_offset.checked_add(fixup_len).ok_or(EINVAL)?;
+            if fixup_offset < end_of_previous_fixup || offset_end < target_offset_end {
                 pr_warn!(
                     "Fixups oob {} {} {} {}",
-                    fixup.target_offset,
+                    fixup_offset,
                     end_of_previous_fixup,
                     offset_end,
                     target_offset_end
···
             }
 
             let copy_off = end_of_previous_fixup;
-            let copy_len = fixup.target_offset - end_of_previous_fixup;
+            let copy_len = fixup_offset - end_of_previous_fixup;
             if let Err(err) = alloc.copy_into(&mut reader, copy_off, copy_len) {
                 pr_warn!("Failed copying into alloc: {:?}", err);
                 return Err(err.into());
             }
-            if fixup.skip == 0 {
-                let res = alloc.write::<u64>(fixup.target_offset, &fixup.pointer_value);
+            if let PointerFixupEntry::Fixup { pointer_value, .. } = fixup {
+                let res = alloc.write::<u64>(fixup_offset, pointer_value);
                 if let Err(err) = res {
                     pr_warn!("Failed copying ptr into alloc: {:?}", err);
                     return Err(err.into());
···
 
         let data_size = trd.data_size.try_into().map_err(|_| EINVAL)?;
         let aligned_data_size = ptr_align(data_size).ok_or(EINVAL)?;
-        let offsets_size = trd.offsets_size.try_into().map_err(|_| EINVAL)?;
-        let aligned_offsets_size = ptr_align(offsets_size).ok_or(EINVAL)?;
-        let buffers_size = tr.buffers_size.try_into().map_err(|_| EINVAL)?;
-        let aligned_buffers_size = ptr_align(buffers_size).ok_or(EINVAL)?;
+        let offsets_size: usize = trd.offsets_size.try_into().map_err(|_| EINVAL)?;
+        let buffers_size: usize = tr.buffers_size.try_into().map_err(|_| EINVAL)?;
         let aligned_secctx_size = match secctx.as_ref() {
             Some((_offset, ctx)) => ptr_align(ctx.len()).ok_or(EINVAL)?,
             None => 0,
         };
 
+        if !is_aligned(offsets_size, size_of::<u64>()) {
+            return Err(EINVAL.into());
+        }
+        if !is_aligned(buffers_size, size_of::<u64>()) {
+            return Err(EINVAL.into());
+        }
+
         // This guarantees that at least `sizeof(usize)` bytes will be allocated.
         let len = usize::max(
             aligned_data_size
-                .checked_add(aligned_offsets_size)
-                .and_then(|sum| sum.checked_add(aligned_buffers_size))
+                .checked_add(offsets_size)
+                .and_then(|sum| sum.checked_add(buffers_size))
                 .and_then(|sum| sum.checked_add(aligned_secctx_size))
                 .ok_or(ENOMEM)?,
-            size_of::<usize>(),
+            size_of::<u64>(),
         );
         let secctx_off = aligned_data_size + offsets_size + buffers_size;
         let mut alloc =
             match to_process.buffer_alloc(debug_id, len, is_oneway, self.process.task.pid()) {
                 Ok(alloc) => alloc,
···
         }
 
         let offsets_start = aligned_data_size;
-        let offsets_end = aligned_data_size + aligned_offsets_size;
+        let offsets_end = aligned_data_size + offsets_size;
 
         // This state is used for BINDER_TYPE_PTR objects.
         let sg_state = sg_state.insert(ScatterGatherState {
             unused_buffer_space: UnusedBufferSpace {
                 offset: offsets_end,
-                limit: len,
+                limit: offsets_end + buffers_size,
             },
             sg_entries: KVec::new(),
             ancestors: KVec::new(),
···
         // Traverse the objects specified.
         let mut view = AllocationView::new(&mut alloc, data_size);
         for (index, index_offset) in (offsets_start..offsets_end)
-            .step_by(size_of::<usize>())
+            .step_by(size_of::<u64>())
             .enumerate()
         {
-            let offset = view.alloc.read(index_offset)?;
+            let offset: usize = view
+                .alloc
+                .read::<u64>(index_offset)?
+                .try_into()
+                .map_err(|_| EINVAL)?;
 
-            if offset < end_of_previous_object {
+            if offset < end_of_previous_object || !is_aligned(offset, size_of::<u32>()) {
                 pr_warn!("Got transaction with invalid offset.");
                 return Err(EINVAL.into());
             }
···
             }
 
             // Update the indexes containing objects to clean up.
-            let offset_after_object = index_offset + size_of::<usize>();
+            let offset_after_object = index_offset + size_of::<u64>();
             view.alloc
                 .set_info_offsets(offsets_start..offset_after_object);
         }
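The refactor replaces a sentinel encoding, where skip == 0 implicitly meant "pointer fixup", with explicit Fixup and Skip variants so each case only carries the fields that mean something for it. The equivalent shape as a C tagged union, for illustration only:

    #include <stdint.h>
    #include <stddef.h>

    enum fixup_kind { FIXUP_POINTER, FIXUP_SKIP };

    /* The tag makes the two kinds explicit instead of overloading
     * "skip == 0" as a sentinel. */
    struct pointer_fixup_entry {
        enum fixup_kind kind;
        size_t target_offset;          /* meaningful for both kinds */
        union {
            uint64_t pointer_value;    /* FIXUP_POINTER only */
            size_t skip;               /* FIXUP_SKIP only */
        };
    };

    static size_t fixup_len(const struct pointer_fixup_entry *f)
    {
        return f->kind == FIXUP_POINTER ? sizeof(uint64_t) : f->skip;
    }
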
+4 -4
drivers/android/binderfs.c
···
     mutex_lock(&binderfs_minors_mutex);
     if (++info->device_count <= info->mount_opts.max)
         minor = ida_alloc_max(&binderfs_minors,
-                      use_reserve ? BINDERFS_MAX_MINOR :
-                                    BINDERFS_MAX_MINOR_CAPPED,
+                      use_reserve ? BINDERFS_MAX_MINOR - 1 :
+                                    BINDERFS_MAX_MINOR_CAPPED - 1,
                       GFP_KERNEL);
     else
         minor = -ENOSPC;
···
     /* Reserve a new minor number for the new device. */
     mutex_lock(&binderfs_minors_mutex);
     minor = ida_alloc_max(&binderfs_minors,
-                  use_reserve ? BINDERFS_MAX_MINOR :
-                                BINDERFS_MAX_MINOR_CAPPED,
+                  use_reserve ? BINDERFS_MAX_MINOR - 1 :
+                                BINDERFS_MAX_MINOR_CAPPED - 1,
                   GFP_KERNEL);
     mutex_unlock(&binderfs_minors_mutex);
     if (minor < 0) {
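The "- 1" matters because ida_alloc_max()'s max argument is an inclusive upper bound: it allocates an ID from [0, max]. Passing the pool size itself therefore allowed one minor too many. A sketch of the corrected usage (NR_MINORS is a hypothetical pool size, not a binderfs constant):

    #include <linux/idr.h>

    #define NR_MINORS 64 /* hypothetical pool size */

    static int reserve_minor(struct ida *pool)
    {
        /* Allocates from [0, NR_MINORS - 1]; passing NR_MINORS would
         * hand out NR_MINORS itself as a valid ID. */
        return ida_alloc_max(pool, NR_MINORS - 1, GFP_KERNEL);
    }
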
+12 -33
drivers/block/loop.c
···
 }
 
 static int
-loop_set_status(struct loop_device *lo, blk_mode_t mode,
-        struct block_device *bdev, const struct loop_info64 *info)
+loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
 {
     int err;
     bool partscan = false;
     bool size_changed = false;
     unsigned int memflags;
 
-    /*
-     * If we don't hold exclusive handle for the device, upgrade to it
-     * here to avoid changing device under exclusive owner.
-     */
-    if (!(mode & BLK_OPEN_EXCL)) {
-        err = bd_prepare_to_claim(bdev, loop_set_status, NULL);
-        if (err)
-            goto out_reread_partitions;
-    }
-
     err = mutex_lock_killable(&lo->lo_mutex);
     if (err)
-        goto out_abort_claiming;
-
+        return err;
     if (lo->lo_state != Lo_bound) {
         err = -ENXIO;
         goto out_unlock;
···
     }
 out_unlock:
     mutex_unlock(&lo->lo_mutex);
-out_abort_claiming:
-    if (!(mode & BLK_OPEN_EXCL))
-        bd_abort_claiming(bdev, loop_set_status);
-out_reread_partitions:
     if (partscan)
         loop_reread_partitions(lo);
 
···
 }
 
 static int
-loop_set_status_old(struct loop_device *lo, blk_mode_t mode,
-            struct block_device *bdev,
-            const struct loop_info __user *arg)
+loop_set_status_old(struct loop_device *lo, const struct loop_info __user *arg)
 {
     struct loop_info info;
     struct loop_info64 info64;
···
     if (copy_from_user(&info, arg, sizeof (struct loop_info)))
         return -EFAULT;
     loop_info64_from_old(&info, &info64);
-    return loop_set_status(lo, mode, bdev, &info64);
+    return loop_set_status(lo, &info64);
 }
 
 static int
-loop_set_status64(struct loop_device *lo, blk_mode_t mode,
-          struct block_device *bdev,
-          const struct loop_info64 __user *arg)
+loop_set_status64(struct loop_device *lo, const struct loop_info64 __user *arg)
 {
     struct loop_info64 info64;
 
     if (copy_from_user(&info64, arg, sizeof (struct loop_info64)))
         return -EFAULT;
     return loop_set_status(lo, &info64);
 }
 
 static int
···
     case LOOP_SET_STATUS:
         err = -EPERM;
         if ((mode & BLK_OPEN_WRITE) || capable(CAP_SYS_ADMIN))
-            err = loop_set_status_old(lo, mode, bdev, argp);
+            err = loop_set_status_old(lo, argp);
         break;
     case LOOP_GET_STATUS:
         return loop_get_status_old(lo, argp);
     case LOOP_SET_STATUS64:
         err = -EPERM;
         if ((mode & BLK_OPEN_WRITE) || capable(CAP_SYS_ADMIN))
-            err = loop_set_status64(lo, mode, bdev, argp);
+            err = loop_set_status64(lo, argp);
         break;
     case LOOP_GET_STATUS64:
         return loop_get_status64(lo, argp);
···
 }
 
 static int
-loop_set_status_compat(struct loop_device *lo, blk_mode_t mode,
-               struct block_device *bdev,
-               const struct compat_loop_info __user *arg)
+loop_set_status_compat(struct loop_device *lo,
+               const struct compat_loop_info __user *arg)
 {
     struct loop_info64 info64;
     int ret;
···
     ret = loop_info64_from_compat(arg, &info64);
     if (ret < 0)
         return ret;
-    return loop_set_status(lo, mode, bdev, &info64);
+    return loop_set_status(lo, &info64);
 }
 
 static int
···
 
     switch(cmd) {
     case LOOP_SET_STATUS:
-        err = loop_set_status_compat(lo, mode, bdev,
+        err = loop_set_status_compat(lo,
             (const struct compat_loop_info __user *)arg);
         break;
     case LOOP_GET_STATUS:
+21 -12
drivers/block/rbd.c
···
     rbd_assert(!need_exclusive_lock(img_req) ||
            __rbd_is_lock_owner(rbd_dev));
 
-    if (rbd_img_is_write(img_req)) {
-        rbd_assert(!img_req->snapc);
+    if (test_bit(IMG_REQ_CHILD, &img_req->flags)) {
+        rbd_assert(!rbd_img_is_write(img_req));
+    } else {
+        struct request *rq = blk_mq_rq_from_pdu(img_req);
+        u64 off = (u64)blk_rq_pos(rq) << SECTOR_SHIFT;
+        u64 len = blk_rq_bytes(rq);
+        u64 mapping_size;
+
         down_read(&rbd_dev->header_rwsem);
-        img_req->snapc = ceph_get_snap_context(rbd_dev->header.snapc);
+        mapping_size = rbd_dev->mapping.size;
+        if (rbd_img_is_write(img_req)) {
+            rbd_assert(!img_req->snapc);
+            img_req->snapc =
+                ceph_get_snap_context(rbd_dev->header.snapc);
+        }
         up_read(&rbd_dev->header_rwsem);
+
+        if (unlikely(off + len > mapping_size)) {
+            rbd_warn(rbd_dev, "beyond EOD (%llu~%llu > %llu)",
+                 off, len, mapping_size);
+            img_req->pending.result = -EIO;
+            return;
+        }
     }
 
     for_each_obj_request(img_req, obj_req) {
···
     struct request *rq = blk_mq_rq_from_pdu(img_request);
     u64 offset = (u64)blk_rq_pos(rq) << SECTOR_SHIFT;
     u64 length = blk_rq_bytes(rq);
-    u64 mapping_size;
     int result;
 
     /* Ignore/skip any zero-length requests */
···
     blk_mq_start_request(rq);
 
     down_read(&rbd_dev->header_rwsem);
-    mapping_size = rbd_dev->mapping.size;
     rbd_img_capture_header(img_request);
     up_read(&rbd_dev->header_rwsem);
-
-    if (offset + length > mapping_size) {
-        rbd_warn(rbd_dev, "beyond EOD (%llu~%llu > %llu)", offset,
-             length, mapping_size);
-        result = -EIO;
-        goto err_img_request;
-    }
 
     dout("%s rbd_dev %p img_req %p %s %llu~%llu\n", __func__, rbd_dev,
         img_request, obj_op_name(op_type), offset, length);
+1 -14
drivers/crypto/ccp/sev-dev-tsm.c
···
 
 MODULE_IMPORT_NS("PCI_IDE");
 
-#define TIO_DEFAULT_NR_IDE_STREAMS 1
-
-static uint nr_ide_streams = TIO_DEFAULT_NR_IDE_STREAMS;
-module_param_named(ide_nr, nr_ide_streams, uint, 0644);
-MODULE_PARM_DESC(ide_nr, "Set the maximum number of IDE streams per PHB");
-
 #define dev_to_sp(dev) ((struct sp_device *)dev_get_drvdata(dev))
 #define dev_to_psp(dev) ((struct psp_device *)(dev_to_sp(dev)->psp_data))
 #define dev_to_sev(dev) ((struct sev_device *)(dev_to_psp(dev)->sev_data))
···
 static int stream_alloc(struct pci_dev *pdev, struct pci_ide **ide,
             unsigned int tc)
 {
-    struct pci_dev *rp = pcie_find_root_port(pdev);
     struct pci_ide *ide1;
 
     if (ide[tc]) {
···
         return -EBUSY;
     }
 
-    /* FIXME: find a better way */
-    if (nr_ide_streams != TIO_DEFAULT_NR_IDE_STREAMS)
-        pci_notice(pdev, "Enable non-default %d streams", nr_ide_streams);
-    pci_ide_set_nr_streams(to_pci_host_bridge(rp->bus->bridge), nr_ide_streams);
-
     ide1 = pci_ide_stream_alloc(pdev);
     if (!ide1)
         return -EFAULT;
 
-    /* Blindly assign streamid=0 to TC=0, and so on */
-    ide1->stream_id = tc;
+    ide1->stream_id = ide1->host_bridge_stream;
 
     ide[tc] = ide1;
 
+1 -1
drivers/gpio/gpio-loongson-64bit.c
···
     chip->irq.num_parents = data->intr_num;
     chip->irq.parents = devm_kcalloc(&pdev->dev, data->intr_num,
                      sizeof(*chip->irq.parents), GFP_KERNEL);
-    if (!chip->parent)
+    if (!chip->irq.parents)
         return -ENOMEM;
 
     for (i = 0; i < data->intr_num; i++) {
+1
drivers/gpio/gpiolib-acpi-core.c
···
     while (element < end) {
         switch (element->type) {
         case ACPI_TYPE_LOCAL_REFERENCE:
+        case ACPI_TYPE_STRING:
             element += 3;
             fallthrough;
         case ACPI_TYPE_INTEGER:
+9 -9
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
···
 
     /* Make sure restore workers don't access the BO any more */
     mutex_lock(&process_info->lock);
-    list_del(&mem->validate_list);
+    if (!list_empty(&mem->validate_list))
+        list_del_init(&mem->validate_list);
     mutex_unlock(&process_info->lock);
-
-    /* Cleanup user pages and MMU notifiers */
-    if (amdgpu_ttm_tt_get_usermm(mem->bo->tbo.ttm)) {
-        amdgpu_hmm_unregister(mem->bo);
-        mutex_lock(&process_info->notifier_lock);
-        amdgpu_hmm_range_free(mem->range);
-        mutex_unlock(&process_info->notifier_lock);
-    }
 
     ret = reserve_bo_and_cond_vms(mem, NULL, BO_VM_ALL, &ctx);
     if (unlikely(ret))
         return ret;
+
+    /* Cleanup user pages and MMU notifiers */
+    if (amdgpu_ttm_tt_get_usermm(mem->bo->tbo.ttm)) {
+        amdgpu_hmm_unregister(mem->bo);
+        amdgpu_hmm_range_free(mem->range);
+        mem->range = NULL;
+    }
 
     amdgpu_amdkfd_remove_eviction_fence(mem->bo,
                     process_info->eviction_fence);
-3
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
···
         return -ENODEV;
     }
 
-    if (amdgpu_aspm == -1 && !pcie_aspm_enabled(pdev))
-        amdgpu_aspm = 0;
-
     if (amdgpu_virtual_display ||
         amdgpu_device_asic_has_dc_support(pdev, flags & AMD_ASIC_MASK))
         supports_atomic = true;
+1 -1
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
···
     if (r)
         goto failure;
 
-    if ((adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x50) {
+    if ((adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x52) {
         r = mes_v11_0_set_hw_resources_1(&adev->mes);
         if (r) {
             DRM_ERROR("failed mes_v11_0_set_hw_resources_1, r=%d\n", r);
+29 -8
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_cm_common.c
···
 #define NUMBER_REGIONS 32
 #define NUMBER_SW_SEGMENTS 16
 
-bool cm3_helper_translate_curve_to_hw_format(
-    const struct dc_transfer_func *output_tf,
-    struct pwl_params *lut_params, bool fixpoint)
+#define DC_LOGGER \
+    ctx->logger
+
+bool cm3_helper_translate_curve_to_hw_format(struct dc_context *ctx,
+    const struct dc_transfer_func *output_tf,
+    struct pwl_params *lut_params, bool fixpoint)
 {
     struct curve_points3 *corner_points;
     struct pwl_result_data *rgb_resulted;
···
         if (seg_distr[k] != -1)
             hw_points += (1 << seg_distr[k]);
     }
+
+    // DCN3+ have 257 pts in lieu of no separate slope registers
+    // Prior HW had 256 base+slope pairs
+    // Shaper LUT (i.e. fixpoint == true) is still 256 bases and 256 deltas
+    hw_points = fixpoint ? (hw_points - 1) : hw_points;
 
     j = 0;
     for (k = 0; k < (region_end - region_start); k++) {
···
     corner_points[1].green.slope = dc_fixpt_zero;
     corner_points[1].blue.slope = dc_fixpt_zero;
 
-    // DCN3+ have 257 pts in lieu of no separate slope registers
-    // Prior HW had 256 base+slope pairs
     lut_params->hw_points_num = hw_points + 1;
 
     k = 0;
···
     if (fixpoint == true) {
         i = 1;
         while (i != hw_points + 2) {
+            uint32_t red_clamp;
+            uint32_t green_clamp;
+            uint32_t blue_clamp;
+
             if (i >= hw_points) {
                 if (dc_fixpt_lt(rgb_plus_1->red, rgb->red))
                     rgb_plus_1->red = dc_fixpt_add(rgb->red,
···
                         rgb_minus_1->delta_blue);
             }
 
-            rgb->delta_red_reg = dc_fixpt_clamp_u0d10(rgb->delta_red);
-            rgb->delta_green_reg = dc_fixpt_clamp_u0d10(rgb->delta_green);
-            rgb->delta_blue_reg = dc_fixpt_clamp_u0d10(rgb->delta_blue);
+            rgb->delta_red = dc_fixpt_sub(rgb_plus_1->red, rgb->red);
+            rgb->delta_green = dc_fixpt_sub(rgb_plus_1->green, rgb->green);
+            rgb->delta_blue = dc_fixpt_sub(rgb_plus_1->blue, rgb->blue);
+
+            red_clamp = dc_fixpt_clamp_u0d14(rgb->delta_red);
+            green_clamp = dc_fixpt_clamp_u0d14(rgb->delta_green);
+            blue_clamp = dc_fixpt_clamp_u0d14(rgb->delta_blue);
+
+            if (red_clamp >> 10 || green_clamp >> 10 || blue_clamp >> 10)
+                DC_LOG_ERROR("Losing delta precision while programming shaper LUT.");
+
+            rgb->delta_red_reg = red_clamp & 0x3ff;
+            rgb->delta_green_reg = green_clamp & 0x3ff;
+            rgb->delta_blue_reg = blue_clamp & 0x3ff;
             rgb->red_reg = dc_fixpt_clamp_u0d14(rgb->red);
             rgb->green_reg = dc_fixpt_clamp_u0d14(rgb->green);
             rgb->blue_reg = dc_fixpt_clamp_u0d14(rgb->blue);
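The reworked fixpoint path recomputes each delta, clamps it through the U0.14 helper, and then checks whether the value actually fits the 10-bit delta register field before masking it down, logging when precision is lost. The clamp-and-detect step in isolation (field widths illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* Fit a 14-bit fixed-point delta into a 10-bit register field and
     * report when the dropped upper bits were nonzero. */
    static uint32_t clamp_delta_to_10_bits(uint32_t delta_u0d14)
    {
        if (delta_u0d14 >> 10)
            fprintf(stderr, "losing delta precision (0x%x)\n", delta_u0d14);
        return delta_u0d14 & 0x3ff;
    }
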
+1 -1
drivers/gpu/drm/amd/display/dc/dwb/dcn30/dcn30_cm_common.h
···
                const struct pwl_params *params,
                const struct dcn3_xfer_func_reg *reg);
 
-bool cm3_helper_translate_curve_to_hw_format(
+bool cm3_helper_translate_curve_to_hw_format(struct dc_context *ctx,
     const struct dc_transfer_func *output_tf,
     struct pwl_params *lut_params, bool fixpoint);
 
+5 -4
drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
···
     if (plane_state->blend_tf.type == TF_TYPE_HWPWL)
         blend_lut = &plane_state->blend_tf.pwl;
     else if (plane_state->blend_tf.type == TF_TYPE_DISTRIBUTED_POINTS) {
-        result = cm3_helper_translate_curve_to_hw_format(
+        result = cm3_helper_translate_curve_to_hw_format(plane_state->ctx,
             &plane_state->blend_tf, &dpp_base->regamma_params, false);
         if (!result)
             return result;
···
     if (plane_state->in_transfer_func.type == TF_TYPE_HWPWL)
         params = &plane_state->in_transfer_func.pwl;
     else if (plane_state->in_transfer_func.type == TF_TYPE_DISTRIBUTED_POINTS &&
-        cm3_helper_translate_curve_to_hw_format(&plane_state->in_transfer_func,
-            &dpp_base->degamma_params, false))
+        cm3_helper_translate_curve_to_hw_format(plane_state->ctx,
+            &plane_state->in_transfer_func,
+            &dpp_base->degamma_params, false))
         params = &dpp_base->degamma_params;
 
     result = dpp_base->funcs->dpp_program_gamcor_lut(dpp_base, params);
···
         params = &stream->out_transfer_func.pwl;
     else if (pipe_ctx->stream->out_transfer_func.type ==
             TF_TYPE_DISTRIBUTED_POINTS &&
-        cm3_helper_translate_curve_to_hw_format(
+        cm3_helper_translate_curve_to_hw_format(stream->ctx,
             &stream->out_transfer_func,
             &mpc->blender_params, false))
         params = &mpc->blender_params;
+10 -8
drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
···
     if (plane_state->blend_tf.type == TF_TYPE_HWPWL)
         lut_params = &plane_state->blend_tf.pwl;
     else if (plane_state->blend_tf.type == TF_TYPE_DISTRIBUTED_POINTS) {
-        result = cm3_helper_translate_curve_to_hw_format(&plane_state->blend_tf,
-            &dpp_base->regamma_params, false);
+        result = cm3_helper_translate_curve_to_hw_format(plane_state->ctx,
+            &plane_state->blend_tf,
+            &dpp_base->regamma_params, false);
         if (!result)
             return result;
···
         lut_params = &plane_state->in_shaper_func.pwl;
     else if (plane_state->in_shaper_func.type == TF_TYPE_DISTRIBUTED_POINTS) {
         // TODO: dpp_base replace
-        ASSERT(false);
-        cm3_helper_translate_curve_to_hw_format(&plane_state->in_shaper_func,
-            &dpp_base->shaper_params, true);
+        cm3_helper_translate_curve_to_hw_format(plane_state->ctx,
+            &plane_state->in_shaper_func,
+            &dpp_base->shaper_params, true);
         lut_params = &dpp_base->shaper_params;
     }
 
···
     if (plane_state->in_transfer_func.type == TF_TYPE_HWPWL)
         params = &plane_state->in_transfer_func.pwl;
     else if (plane_state->in_transfer_func.type == TF_TYPE_DISTRIBUTED_POINTS &&
-        cm3_helper_translate_curve_to_hw_format(&plane_state->in_transfer_func,
-            &dpp_base->degamma_params, false))
+        cm3_helper_translate_curve_to_hw_format(plane_state->ctx,
+            &plane_state->in_transfer_func,
+            &dpp_base->degamma_params, false))
         params = &dpp_base->degamma_params;
 
     dpp_base->funcs->dpp_program_gamcor_lut(dpp_base, params);
···
         params = &stream->out_transfer_func.pwl;
     else if (pipe_ctx->stream->out_transfer_func.type ==
             TF_TYPE_DISTRIBUTED_POINTS &&
-        cm3_helper_translate_curve_to_hw_format(
+        cm3_helper_translate_curve_to_hw_format(stream->ctx,
             &stream->out_transfer_func,
             &mpc->blender_params, false))
         params = &mpc->blender_params;
+9 -7
drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
···
     if (mcm_luts.lut1d_func->type == TF_TYPE_HWPWL)
         m_lut_params.pwl = &mcm_luts.lut1d_func->pwl;
     else if (mcm_luts.lut1d_func->type == TF_TYPE_DISTRIBUTED_POINTS) {
-        rval = cm3_helper_translate_curve_to_hw_format(
+        rval = cm3_helper_translate_curve_to_hw_format(mpc->ctx,
             mcm_luts.lut1d_func,
             &dpp_base->regamma_params, false);
         m_lut_params.pwl = rval ? &dpp_base->regamma_params : NULL;
···
         m_lut_params.pwl = &mcm_luts.shaper->pwl;
     else if (mcm_luts.shaper->type == TF_TYPE_DISTRIBUTED_POINTS) {
         ASSERT(false);
-        rval = cm3_helper_translate_curve_to_hw_format(
+        rval = cm3_helper_translate_curve_to_hw_format(mpc->ctx,
             mcm_luts.shaper,
             &dpp_base->regamma_params, true);
         m_lut_params.pwl = rval ? &dpp_base->regamma_params : NULL;
···
     if (plane_state->blend_tf.type == TF_TYPE_HWPWL)
         lut_params = &plane_state->blend_tf.pwl;
     else if (plane_state->blend_tf.type == TF_TYPE_DISTRIBUTED_POINTS) {
-        rval = cm3_helper_translate_curve_to_hw_format(&plane_state->blend_tf,
-            &dpp_base->regamma_params, false);
+        rval = cm3_helper_translate_curve_to_hw_format(plane_state->ctx,
+            &plane_state->blend_tf,
+            &dpp_base->regamma_params, false);
         lut_params = rval ? &dpp_base->regamma_params : NULL;
     }
     result = mpc->funcs->program_1dlut(mpc, lut_params, mpcc_id);
···
         lut_params = &plane_state->in_shaper_func.pwl;
     else if (plane_state->in_shaper_func.type == TF_TYPE_DISTRIBUTED_POINTS) {
         // TODO: dpp_base replace
-        rval = cm3_helper_translate_curve_to_hw_format(&plane_state->in_shaper_func,
-            &dpp_base->shaper_params, true);
+        rval = cm3_helper_translate_curve_to_hw_format(plane_state->ctx,
+            &plane_state->in_shaper_func,
+            &dpp_base->shaper_params, true);
         lut_params = rval ? &dpp_base->shaper_params : NULL;
     }
     result &= mpc->funcs->program_shaper(mpc, lut_params, mpcc_id);
···
         params = &stream->out_transfer_func.pwl;
     else if (pipe_ctx->stream->out_transfer_func.type ==
             TF_TYPE_DISTRIBUTED_POINTS &&
-        cm3_helper_translate_curve_to_hw_format(
+        cm3_helper_translate_curve_to_hw_format(stream->ctx,
             &stream->out_transfer_func,
             &mpc->blender_params, false))
         params = &mpc->blender_params;
+15
drivers/gpu/drm/bridge/imx/imx8mp-hdmi-pai.c
···
 #include <linux/module.h>
 #include <linux/of_platform.h>
 #include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
 #include <linux/regmap.h>
 #include <drm/bridge/dw_hdmi.h>
 #include <sound/asoundef.h>
···
 
 struct imx8mp_hdmi_pai {
     struct regmap *regmap;
+    struct device *dev;
 };
 
 static void imx8mp_hdmi_pai_enable(struct dw_hdmi *dw_hdmi, int channel,
···
     const struct dw_hdmi_plat_data *pdata = dw_hdmi_to_plat_data(dw_hdmi);
     struct imx8mp_hdmi_pai *hdmi_pai = pdata->priv_audio;
     int val;
+
+    if (pm_runtime_resume_and_get(hdmi_pai->dev) < 0)
+        return;
 
     /* PAI set control extended */
     val = WTMK_HIGH(3) | WTMK_LOW(3);
···
 
     /* Stop PAI */
     regmap_write(hdmi_pai->regmap, HTX_PAI_CTRL, 0);
+
+    pm_runtime_put_sync(hdmi_pai->dev);
 }
 
 static const struct regmap_config imx8mp_hdmi_pai_regmap_config = {
···
     struct imx8mp_hdmi_pai *hdmi_pai;
     struct resource *res;
     void __iomem *base;
+    int ret;
 
     hdmi_pai = devm_kzalloc(dev, sizeof(*hdmi_pai), GFP_KERNEL);
     if (!hdmi_pai)
···
     plat_data->enable_audio = imx8mp_hdmi_pai_enable;
     plat_data->disable_audio = imx8mp_hdmi_pai_disable;
     plat_data->priv_audio = hdmi_pai;
+
+    hdmi_pai->dev = dev;
+    ret = devm_pm_runtime_enable(dev);
+    if (ret < 0) {
+        dev_err(dev, "failed to enable PM runtime: %d\n", ret);
+        return ret;
+    }
 
     return 0;
 }
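The driver now takes a runtime-PM reference before programming the PAI registers and drops it once the block is stopped, with devm_pm_runtime_enable() arming runtime PM at probe. The get/put pairing in a minimal sketch (the widget_* names are illustrative):

    #include <linux/pm_runtime.h>

    struct widget_priv { struct device *dev; };

    static int widget_start(struct widget_priv *priv)
    {
        int ret;

        ret = pm_runtime_resume_and_get(priv->dev);
        if (ret < 0)
            return ret; /* not powered; no put owed on this path */

        /* ... program registers while the device is guaranteed on ... */
        return 0;
    }

    static void widget_stop(struct widget_priv *priv)
    {
        /* ... quiesce the hardware ... */
        pm_runtime_put_sync(priv->dev); /* balances widget_start() */
    }
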
+13 -23
drivers/gpu/drm/gma500/psb_irq.c
···
 void gma_irq_preinstall(struct drm_device *dev)
 {
     struct drm_psb_private *dev_priv = to_drm_psb_private(dev);
-    struct drm_crtc *crtc;
     unsigned long irqflags;
 
     spin_lock_irqsave(&dev_priv->irqmask_lock, irqflags);
···
     PSB_WSGX32(0x00000000, PSB_CR_EVENT_HOST_ENABLE);
     PSB_RSGX32(PSB_CR_EVENT_HOST_ENABLE);
 
-    drm_for_each_crtc(crtc, dev) {
-        struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc);
-
-        if (vblank->enabled) {
-            u32 mask = drm_crtc_index(crtc) ? _PSB_VSYNC_PIPEB_FLAG :
-                           _PSB_VSYNC_PIPEA_FLAG;
-            dev_priv->vdc_irq_mask |= mask;
-        }
-    }
+    if (dev->vblank[0].enabled)
+        dev_priv->vdc_irq_mask |= _PSB_VSYNC_PIPEA_FLAG;
+    if (dev->vblank[1].enabled)
+        dev_priv->vdc_irq_mask |= _PSB_VSYNC_PIPEB_FLAG;
 
     /* Revisit this area - want per device masks ? */
     if (dev_priv->ops->hotplug)
···
 void gma_irq_postinstall(struct drm_device *dev)
 {
     struct drm_psb_private *dev_priv = to_drm_psb_private(dev);
-    struct drm_crtc *crtc;
     unsigned long irqflags;
+    unsigned int i;
 
     spin_lock_irqsave(&dev_priv->irqmask_lock, irqflags);
 
···
     PSB_WVDC32(dev_priv->vdc_irq_mask, PSB_INT_ENABLE_R);
     PSB_WVDC32(0xFFFFFFFF, PSB_HWSTAM);
 
-    drm_for_each_crtc(crtc, dev) {
-        struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc);
-
-        if (vblank->enabled)
-            gma_enable_pipestat(dev_priv, drm_crtc_index(crtc), PIPE_VBLANK_INTERRUPT_ENABLE);
+    for (i = 0; i < dev->num_crtcs; ++i) {
+        if (dev->vblank[i].enabled)
+            gma_enable_pipestat(dev_priv, i, PIPE_VBLANK_INTERRUPT_ENABLE);
         else
-            gma_disable_pipestat(dev_priv, drm_crtc_index(crtc), PIPE_VBLANK_INTERRUPT_ENABLE);
+            gma_disable_pipestat(dev_priv, i, PIPE_VBLANK_INTERRUPT_ENABLE);
     }
 
     if (dev_priv->ops->hotplug_enable)
···
 {
     struct drm_psb_private *dev_priv = to_drm_psb_private(dev);
     struct pci_dev *pdev = to_pci_dev(dev->dev);
-    struct drm_crtc *crtc;
     unsigned long irqflags;
+    unsigned int i;
 
     if (!dev_priv->irq_enabled)
         return;
···
 
     PSB_WVDC32(0xFFFFFFFF, PSB_HWSTAM);
 
-    drm_for_each_crtc(crtc, dev) {
-        struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc);
-
-        if (vblank->enabled)
-            gma_disable_pipestat(dev_priv, drm_crtc_index(crtc), PIPE_VBLANK_INTERRUPT_ENABLE);
+    for (i = 0; i < dev->num_crtcs; ++i) {
+        if (dev->vblank[i].enabled)
+            gma_disable_pipestat(dev_priv, i, PIPE_VBLANK_INTERRUPT_ENABLE);
     }
 
     dev_priv->vdc_irq_mask &= _PSB_IRQ_SGX_FLAG |
+12 -19
drivers/gpu/drm/mgag200/mgag200_bmc.c
···
 // SPDX-License-Identifier: GPL-2.0-only
 
 #include <linux/delay.h>
+#include <linux/iopoll.h>
 
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_edid.h>
···
 void mgag200_bmc_stop_scanout(struct mga_device *mdev)
 {
     u8 tmp;
-    int iter_max;
+    int ret;
 
     /*
      * 1 - The first step is to inform the BMC of an upcoming mode
···
 
     /*
      * 3a- The third step is to verify if there is an active scan.
      * We are waiting for a 0 on remhsyncsts (<XSPAREREG<0>).
      */
-    iter_max = 300;
-    while (!(tmp & 0x1) && iter_max) {
-        WREG8(DAC_INDEX, MGA1064_SPAREREG);
-        tmp = RREG8(DAC_DATA);
-        udelay(1000);
-        iter_max--;
-    }
+    ret = read_poll_timeout(RREG_DAC, tmp, !(tmp & 0x1),
+                            1000, 300000, false,
+                            MGA1064_SPAREREG);
+    if (ret == -ETIMEDOUT)
+        return;
 
     /*
-     * 3b- This step occurs only if the remove is actually
+     * 3b- This step occurs only if the remote BMC is actually
      * scanning. We are waiting for the end of the frame which is
      * a 1 on remvsyncsts (XSPAREREG<1>)
      */
-    if (iter_max) {
-        iter_max = 300;
-        while ((tmp & 0x2) && iter_max) {
-            WREG8(DAC_INDEX, MGA1064_SPAREREG);
-            tmp = RREG8(DAC_DATA);
-            udelay(1000);
-            iter_max--;
-        }
-    }
+    (void)read_poll_timeout(RREG_DAC, tmp, (tmp & 0x2),
+                            1000, 300000, false,
+                            MGA1064_SPAREREG);
 }
 
 void mgag200_bmc_start_scanout(struct mga_device *mdev)
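read_poll_timeout(op, val, cond, sleep_us, timeout_us, sleep_before_read, args...) from <linux/iopoll.h> packages the open-coded "read, test, sleep, bounded retries" loop: here it polls the DAC status every 1 ms for up to 300 ms and returns -ETIMEDOUT on expiry. The shape of such a conversion on a made-up device (my_dev, read_status, and STATUS_BUSY are illustrative):

    #include <linux/bits.h>
    #include <linux/io.h>
    #include <linux/iopoll.h>

    #define STATUS_BUSY BIT(0) /* illustrative status bit */

    struct my_dev { void __iomem *regs; };

    static u8 read_status(struct my_dev *mdev)
    {
        return readb(mdev->regs); /* hypothetical status register */
    }

    static int wait_until_idle(struct my_dev *mdev)
    {
        u8 status;

        /* Poll every 1 ms, give up after 300 ms. */
        return read_poll_timeout(read_status, status,
                                 !(status & STATUS_BUSY),
                                 1000, 300000, false, mdev);
    }
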
+6
drivers/gpu/drm/mgag200/mgag200_drv.h
···
 #define DAC_INDEX 0x3c00
 #define DAC_DATA 0x3c0a
 
+#define RREG_DAC(reg) \
+    ({ \
+        WREG8(DAC_INDEX, reg); \
+        RREG8(DAC_DATA); \
+    }) \
+
 #define WREG_DAC(reg, v) \
     do { \
         WREG8(DAC_INDEX, reg); \
+1 -1
drivers/gpu/drm/nouveau/include/nvif/client.h
···
 
 int nvif_client_ctor(struct nvif_client *parent, const char *name, struct nvif_client *);
 void nvif_client_dtor(struct nvif_client *);
-int nvif_client_suspend(struct nvif_client *);
+int nvif_client_suspend(struct nvif_client *, bool);
 int nvif_client_resume(struct nvif_client *);
 
 /*XXX*/
+1 -1
drivers/gpu/drm/nouveau/include/nvif/driver.h
···
     const char *name;
     int (*init)(const char *name, u64 device, const char *cfg,
             const char *dbg, void **priv);
-    int (*suspend)(void *priv);
+    int (*suspend)(void *priv, bool runtime);
     int (*resume)(void *priv);
     int (*ioctl)(void *priv, void *data, u32 size, void **hack);
     void __iomem *(*map)(void *priv, u64 handle, u32 size);
+2 -1
drivers/gpu/drm/nouveau/include/nvkm/core/device.h
···
 #ifndef __NVKM_DEVICE_H__
 #define __NVKM_DEVICE_H__
 #include <core/oclass.h>
+#include <core/suspend_state.h>
 #include <core/intr.h>
 enum nvkm_subdev_type;
 
···
     void *(*dtor)(struct nvkm_device *);
     int (*preinit)(struct nvkm_device *);
     int (*init)(struct nvkm_device *);
-    void (*fini)(struct nvkm_device *, bool suspend);
+    void (*fini)(struct nvkm_device *, enum nvkm_suspend_state suspend);
     int (*irq)(struct nvkm_device *);
     resource_size_t (*resource_addr)(struct nvkm_device *, enum nvkm_bar_id);
     resource_size_t (*resource_size)(struct nvkm_device *, enum nvkm_bar_id);
+1 -1
drivers/gpu/drm/nouveau/include/nvkm/core/engine.h
···
     int (*oneinit)(struct nvkm_engine *);
     int (*info)(struct nvkm_engine *, u64 mthd, u64 *data);
     int (*init)(struct nvkm_engine *);
-    int (*fini)(struct nvkm_engine *, bool suspend);
+    int (*fini)(struct nvkm_engine *, enum nvkm_suspend_state suspend);
     int (*reset)(struct nvkm_engine *);
     int (*nonstall)(struct nvkm_engine *);
     void (*intr)(struct nvkm_engine *);
+3 -2
drivers/gpu/drm/nouveau/include/nvkm/core/object.h
···
 #ifndef __NVKM_OBJECT_H__
 #define __NVKM_OBJECT_H__
 #include <core/oclass.h>
+#include <core/suspend_state.h>
 struct nvkm_event;
 struct nvkm_gpuobj;
 struct nvkm_uevent;
···
 struct nvkm_object_func {
     void *(*dtor)(struct nvkm_object *);
     int (*init)(struct nvkm_object *);
-    int (*fini)(struct nvkm_object *, bool suspend);
+    int (*fini)(struct nvkm_object *, enum nvkm_suspend_state suspend);
     int (*mthd)(struct nvkm_object *, u32 mthd, void *data, u32 size);
     int (*ntfy)(struct nvkm_object *, u32 mthd, struct nvkm_event **);
     int (*map)(struct nvkm_object *, void *argv, u32 argc,
···
 void nvkm_object_del(struct nvkm_object **);
 void *nvkm_object_dtor(struct nvkm_object *);
 int nvkm_object_init(struct nvkm_object *);
-int nvkm_object_fini(struct nvkm_object *, bool suspend);
+int nvkm_object_fini(struct nvkm_object *, enum nvkm_suspend_state);
 int nvkm_object_mthd(struct nvkm_object *, u32 mthd, void *data, u32 size);
 int nvkm_object_ntfy(struct nvkm_object *, u32 mthd, struct nvkm_event **);
 int nvkm_object_map(struct nvkm_object *, void *argv, u32 argc,
+1 -1
drivers/gpu/drm/nouveau/include/nvkm/core/oproxy.h
··· 13 13 struct nvkm_oproxy_func { 14 14 void (*dtor[2])(struct nvkm_oproxy *); 15 15 int (*init[2])(struct nvkm_oproxy *); 16 - int (*fini[2])(struct nvkm_oproxy *, bool suspend); 16 + int (*fini[2])(struct nvkm_oproxy *, enum nvkm_suspend_state suspend); 17 17 }; 18 18 19 19 void nvkm_oproxy_ctor(const struct nvkm_oproxy_func *,
+2 -2
drivers/gpu/drm/nouveau/include/nvkm/core/subdev.h
··· 40 40 int (*oneinit)(struct nvkm_subdev *); 41 41 int (*info)(struct nvkm_subdev *, u64 mthd, u64 *data); 42 42 int (*init)(struct nvkm_subdev *); 43 - int (*fini)(struct nvkm_subdev *, bool suspend); 43 + int (*fini)(struct nvkm_subdev *, enum nvkm_suspend_state suspend); 44 44 void (*intr)(struct nvkm_subdev *); 45 45 }; 46 46 ··· 65 65 int nvkm_subdev_preinit(struct nvkm_subdev *); 66 66 int nvkm_subdev_oneinit(struct nvkm_subdev *); 67 67 int nvkm_subdev_init(struct nvkm_subdev *); 68 - int nvkm_subdev_fini(struct nvkm_subdev *, bool suspend); 68 + int nvkm_subdev_fini(struct nvkm_subdev *, enum nvkm_suspend_state suspend); 69 69 int nvkm_subdev_info(struct nvkm_subdev *, u64, u64 *); 70 70 void nvkm_subdev_intr(struct nvkm_subdev *); 71 71
+11
drivers/gpu/drm/nouveau/include/nvkm/core/suspend_state.h
··· 1 + /* SPDX-License-Identifier: MIT */ 2 + #ifndef __NVKM_SUSPEND_STATE_H__ 3 + #define __NVKM_SUSPEND_STATE_H__ 4 + 5 + enum nvkm_suspend_state { 6 + NVKM_POWEROFF, 7 + NVKM_SUSPEND, 8 + NVKM_RUNTIME_SUSPEND, 9 + }; 10 + 11 + #endif
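The new header turns the old boolean suspend flag into a three-way state, which the rest of this series threads through every ->fini() path. A sketch of how a callback can branch on it, following the convention visible in these hunks (NVKM_POWEROFF means full teardown, the two suspend states preserve state for resume); my_subdev_fini() and my_release_fw() are illustrative names, not code from the series:

static int my_subdev_fini(struct nvkm_subdev *subdev,
			  enum nvkm_suspend_state suspend)
{
	switch (suspend) {
	case NVKM_POWEROFF:
		my_release_fw(subdev);	/* nothing will resume, free it all */
		break;
	case NVKM_SUSPEND:
	case NVKM_RUNTIME_SUSPEND:
		/* keep state so resume can restore it quickly */
		break;
	}
	return 0;
}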
+6
drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h
··· 44 44 * NVKM_GSP_RPC_REPLY_NOWAIT - If specified, immediately return to the 45 45 * caller after the GSP RPC command is issued. 46 46 * 47 + * NVKM_GSP_RPC_REPLY_NOSEQ - If specified, exactly like NOWAIT 48 + * but don't emit RPC sequence number. 49 + * 47 50 * NVKM_GSP_RPC_REPLY_RECV - If specified, wait and receive the entire GSP 48 51 * RPC message after the GSP RPC command is issued. 49 52 * ··· 56 53 */ 57 54 enum nvkm_gsp_rpc_reply_policy { 58 55 NVKM_GSP_RPC_REPLY_NOWAIT = 0, 56 + NVKM_GSP_RPC_REPLY_NOSEQ, 59 57 NVKM_GSP_RPC_REPLY_RECV, 60 58 NVKM_GSP_RPC_REPLY_POLL, 61 59 }; ··· 245 241 246 242 /* The size of the registry RPC */ 247 243 size_t registry_rpc_size; 244 + 245 + u32 rpc_seq; 248 246 249 247 #ifdef CONFIG_DEBUG_FS 250 248 /*
-2
drivers/gpu/drm/nouveau/nouveau_display.c
··· 352 352 353 353 static const struct drm_mode_config_funcs nouveau_mode_config_funcs = { 354 354 .fb_create = nouveau_user_framebuffer_create, 355 - .atomic_commit = drm_atomic_helper_commit, 356 - .atomic_check = drm_atomic_helper_check, 357 355 }; 358 356 359 357
+1 -1
drivers/gpu/drm/nouveau/nouveau_drm.c
··· 983 983 } 984 984 985 985 NV_DEBUG(drm, "suspending object tree...\n"); 986 - ret = nvif_client_suspend(&drm->_client); 986 + ret = nvif_client_suspend(&drm->_client, runtime); 987 987 if (ret) 988 988 goto fail_client; 989 989
+8 -2
drivers/gpu/drm/nouveau/nouveau_nvif.c
··· 62 62 } 63 63 64 64 static int 65 - nvkm_client_suspend(void *priv) 65 + nvkm_client_suspend(void *priv, bool runtime) 66 66 { 67 67 struct nvkm_client *client = priv; 68 - return nvkm_object_fini(&client->object, true); 68 + enum nvkm_suspend_state state; 69 + 70 + if (runtime) 71 + state = NVKM_RUNTIME_SUSPEND; 72 + else 73 + state = NVKM_SUSPEND; 74 + return nvkm_object_fini(&client->object, state); 69 75 } 70 76 71 77 static int
+2 -2
drivers/gpu/drm/nouveau/nvif/client.c
··· 30 30 #include <nvif/if0000.h> 31 31 32 32 int 33 - nvif_client_suspend(struct nvif_client *client) 33 + nvif_client_suspend(struct nvif_client *client, bool runtime) 34 34 { 35 - return client->driver->suspend(client->object.priv); 35 + return client->driver->suspend(client->object.priv, runtime); 36 36 } 37 37 38 38 int
+2 -2
drivers/gpu/drm/nouveau/nvkm/core/engine.c
··· 41 41 if (engine->func->reset) 42 42 return engine->func->reset(engine); 43 43 44 - nvkm_subdev_fini(&engine->subdev, false); 44 + nvkm_subdev_fini(&engine->subdev, NVKM_POWEROFF); 45 45 return nvkm_subdev_init(&engine->subdev); 46 46 } 47 47 ··· 98 98 } 99 99 100 100 static int 101 - nvkm_engine_fini(struct nvkm_subdev *subdev, bool suspend) 101 + nvkm_engine_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend) 102 102 { 103 103 struct nvkm_engine *engine = nvkm_engine(subdev); 104 104 if (engine->func->fini)
+2 -2
drivers/gpu/drm/nouveau/nvkm/core/ioctl.c
··· 141 141 } 142 142 ret = -EEXIST; 143 143 } 144 - nvkm_object_fini(object, false); 144 + nvkm_object_fini(object, NVKM_POWEROFF); 145 145 } 146 146 147 147 nvkm_object_del(&object); ··· 160 160 nvif_ioctl(object, "delete size %d\n", size); 161 161 if (!(ret = nvif_unvers(ret, &data, &size, args->none))) { 162 162 nvif_ioctl(object, "delete\n"); 163 - nvkm_object_fini(object, false); 163 + nvkm_object_fini(object, NVKM_POWEROFF); 164 164 nvkm_object_del(&object); 165 165 } 166 166
+16 -4
drivers/gpu/drm/nouveau/nvkm/core/object.c
··· 142 142 } 143 143 144 144 int 145 - nvkm_object_fini(struct nvkm_object *object, bool suspend) 145 + nvkm_object_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend) 146 146 { 147 - const char *action = suspend ? "suspend" : "fini"; 147 + const char *action; 148 148 struct nvkm_object *child; 149 149 s64 time; 150 150 int ret; 151 151 152 + switch (suspend) { 153 + case NVKM_POWEROFF: 154 + default: 155 + action = "fini"; 156 + break; 157 + case NVKM_SUSPEND: 158 + action = "suspend"; 159 + break; 160 + case NVKM_RUNTIME_SUSPEND: 161 + action = "runtime"; 162 + break; 163 + } 152 164 nvif_debug(object, "%s children...\n", action); 153 165 time = ktime_to_us(ktime_get()); 154 166 list_for_each_entry_reverse(child, &object->tree, head) { ··· 224 212 225 213 fail_child: 226 214 list_for_each_entry_continue_reverse(child, &object->tree, head) 227 - nvkm_object_fini(child, false); 215 + nvkm_object_fini(child, NVKM_POWEROFF); 228 216 fail: 229 217 nvif_error(object, "init failed with %d\n", ret); 230 218 if (object->func->fini) 231 - object->func->fini(object, false); 219 + object->func->fini(object, NVKM_POWEROFF); 232 220 return ret; 233 221 } 234 222
+1 -1
drivers/gpu/drm/nouveau/nvkm/core/oproxy.c
··· 87 87 } 88 88 89 89 static int 90 - nvkm_oproxy_fini(struct nvkm_object *object, bool suspend) 90 + nvkm_oproxy_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend) 91 91 { 92 92 struct nvkm_oproxy *oproxy = nvkm_oproxy(object); 93 93 int ret;
+15 -3
drivers/gpu/drm/nouveau/nvkm/core/subdev.c
··· 51 51 } 52 52 53 53 int 54 - nvkm_subdev_fini(struct nvkm_subdev *subdev, bool suspend) 54 + nvkm_subdev_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend) 55 55 { 56 56 struct nvkm_device *device = subdev->device; 57 - const char *action = suspend ? "suspend" : subdev->use.enabled ? "fini" : "reset"; 57 + const char *action; 58 58 s64 time; 59 59 60 + switch (suspend) { 61 + case NVKM_POWEROFF: 62 + default: 63 + action = subdev->use.enabled ? "fini" : "reset"; 64 + break; 65 + case NVKM_SUSPEND: 66 + action = "suspend"; 67 + break; 68 + case NVKM_RUNTIME_SUSPEND: 69 + action = "runtime"; 70 + break; 71 + } 60 72 nvkm_trace(subdev, "%s running...\n", action); 61 73 time = ktime_to_us(ktime_get()); 62 74 ··· 198 186 nvkm_subdev_unref(struct nvkm_subdev *subdev) 199 187 { 200 188 if (refcount_dec_and_mutex_lock(&subdev->use.refcount, &subdev->use.mutex)) { 201 - nvkm_subdev_fini(subdev, false); 189 + nvkm_subdev_fini(subdev, NVKM_POWEROFF); 202 190 mutex_unlock(&subdev->use.mutex); 203 191 } 204 192 }
+1 -1
drivers/gpu/drm/nouveau/nvkm/core/uevent.c
··· 73 73 } 74 74 75 75 static int 76 - nvkm_uevent_fini(struct nvkm_object *object, bool suspend) 76 + nvkm_uevent_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend) 77 77 { 78 78 struct nvkm_uevent *uevent = nvkm_uevent(object); 79 79
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/ce/ga100.c
··· 46 46 } 47 47 48 48 int 49 - ga100_ce_fini(struct nvkm_engine *engine, bool suspend) 49 + ga100_ce_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend) 50 50 { 51 51 nvkm_inth_block(&engine->subdev.inth); 52 52 return 0;
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/ce/priv.h
··· 14 14 15 15 int ga100_ce_oneinit(struct nvkm_engine *); 16 16 int ga100_ce_init(struct nvkm_engine *); 17 - int ga100_ce_fini(struct nvkm_engine *, bool); 17 + int ga100_ce_fini(struct nvkm_engine *, enum nvkm_suspend_state); 18 18 int ga100_ce_nonstall(struct nvkm_engine *); 19 19 20 20 u32 gb202_ce_grce_mask(struct nvkm_device *);
+17 -5
drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
··· 2936 2936 } 2937 2937 2938 2938 int 2939 - nvkm_device_fini(struct nvkm_device *device, bool suspend) 2939 + nvkm_device_fini(struct nvkm_device *device, enum nvkm_suspend_state suspend) 2940 2940 { 2941 - const char *action = suspend ? "suspend" : "fini"; 2941 + const char *action; 2942 2942 struct nvkm_subdev *subdev; 2943 2943 int ret; 2944 2944 s64 time; 2945 2945 2946 + switch (suspend) { 2947 + case NVKM_POWEROFF: 2948 + default: 2949 + action = "fini"; 2950 + break; 2951 + case NVKM_SUSPEND: 2952 + action = "suspend"; 2953 + break; 2954 + case NVKM_RUNTIME_SUSPEND: 2955 + action = "runtime"; 2956 + break; 2957 + } 2946 2958 nvdev_trace(device, "%s running...\n", action); 2947 2959 time = ktime_to_us(ktime_get()); 2948 2960 ··· 3044 3032 if (ret) 3045 3033 return ret; 3046 3034 3047 - nvkm_device_fini(device, false); 3035 + nvkm_device_fini(device, NVKM_POWEROFF); 3048 3036 3049 3037 nvdev_trace(device, "init running...\n"); 3050 3038 time = ktime_to_us(ktime_get()); ··· 3072 3060 3073 3061 fail_subdev: 3074 3062 list_for_each_entry_from(subdev, &device->subdev, head) 3075 - nvkm_subdev_fini(subdev, false); 3063 + nvkm_subdev_fini(subdev, NVKM_POWEROFF); 3076 3064 fail: 3077 - nvkm_device_fini(device, false); 3065 + nvkm_device_fini(device, NVKM_POWEROFF); 3078 3066 3079 3067 nvdev_error(device, "init failed with %d\n", ret); 3080 3068 return ret;
+2 -2
drivers/gpu/drm/nouveau/nvkm/engine/device/pci.c
··· 1605 1605 } 1606 1606 1607 1607 static void 1608 - nvkm_device_pci_fini(struct nvkm_device *device, bool suspend) 1608 + nvkm_device_pci_fini(struct nvkm_device *device, enum nvkm_suspend_state suspend) 1609 1609 { 1610 1610 struct nvkm_device_pci *pdev = nvkm_device_pci(device); 1611 - if (suspend) { 1611 + if (suspend != NVKM_POWEROFF) { 1612 1612 pci_disable_device(pdev->pdev); 1613 1613 pdev->suspend = true; 1614 1614 }
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/device/priv.h
··· 56 56 const char *name, const char *cfg, const char *dbg, 57 57 struct nvkm_device *); 58 58 int nvkm_device_init(struct nvkm_device *); 59 - int nvkm_device_fini(struct nvkm_device *, bool suspend); 59 + int nvkm_device_fini(struct nvkm_device *, enum nvkm_suspend_state suspend); 60 60 #endif
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/device/user.c
··· 218 218 } 219 219 220 220 static int 221 - nvkm_udevice_fini(struct nvkm_object *object, bool suspend) 221 + nvkm_udevice_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend) 222 222 { 223 223 struct nvkm_udevice *udev = nvkm_udevice(object); 224 224 struct nvkm_device *device = udev->device;
+2 -2
drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c
··· 99 99 } 100 100 101 101 static int 102 - nvkm_disp_fini(struct nvkm_engine *engine, bool suspend) 102 + nvkm_disp_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend) 103 103 { 104 104 struct nvkm_disp *disp = nvkm_disp(engine); 105 105 struct nvkm_outp *outp; 106 106 107 107 if (disp->func->fini) 108 - disp->func->fini(disp, suspend); 108 + disp->func->fini(disp, suspend != NVKM_POWEROFF); 109 109 110 110 list_for_each_entry(outp, &disp->outps, head) { 111 111 if (outp->func->fini)
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/disp/chan.c
··· 128 128 } 129 129 130 130 static int 131 - nvkm_disp_chan_fini(struct nvkm_object *object, bool suspend) 131 + nvkm_disp_chan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend) 132 132 { 133 133 struct nvkm_disp_chan *chan = nvkm_disp_chan(object); 134 134
+2 -2
drivers/gpu/drm/nouveau/nvkm/engine/falcon.c
··· 93 93 } 94 94 95 95 static int 96 - nvkm_falcon_fini(struct nvkm_engine *engine, bool suspend) 96 + nvkm_falcon_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend) 97 97 { 98 98 struct nvkm_falcon *falcon = nvkm_falcon(engine); 99 99 struct nvkm_device *device = falcon->engine.subdev.device; 100 100 const u32 base = falcon->addr; 101 101 102 - if (!suspend) { 102 + if (suspend == NVKM_POWEROFF) { 103 103 nvkm_memory_unref(&falcon->core); 104 104 if (falcon->external) { 105 105 vfree(falcon->data.data);
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/fifo/base.c
··· 122 122 } 123 123 124 124 static int 125 - nvkm_fifo_fini(struct nvkm_engine *engine, bool suspend) 125 + nvkm_fifo_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend) 126 126 { 127 127 struct nvkm_fifo *fifo = nvkm_fifo(engine); 128 128 struct nvkm_runl *runl;
+3 -3
drivers/gpu/drm/nouveau/nvkm/engine/fifo/uchan.c
··· 72 72 }; 73 73 74 74 static int 75 - nvkm_uchan_object_fini_1(struct nvkm_oproxy *oproxy, bool suspend) 75 + nvkm_uchan_object_fini_1(struct nvkm_oproxy *oproxy, enum nvkm_suspend_state suspend) 76 76 { 77 77 struct nvkm_uobj *uobj = container_of(oproxy, typeof(*uobj), oproxy); 78 78 struct nvkm_chan *chan = uobj->chan; ··· 87 87 nvkm_chan_cctx_bind(chan, ectx->engn, NULL); 88 88 89 89 if (refcount_dec_and_test(&ectx->uses)) 90 - nvkm_object_fini(ectx->object, false); 90 + nvkm_object_fini(ectx->object, NVKM_POWEROFF); 91 91 mutex_unlock(&chan->cgrp->mutex); 92 92 } 93 93 ··· 269 269 } 270 270 271 271 static int 272 - nvkm_uchan_fini(struct nvkm_object *object, bool suspend) 272 + nvkm_uchan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend) 273 273 { 274 274 struct nvkm_chan *chan = nvkm_uchan(object)->chan; 275 275
+2 -2
drivers/gpu/drm/nouveau/nvkm/engine/gr/base.c
··· 168 168 } 169 169 170 170 static int 171 - nvkm_gr_fini(struct nvkm_engine *engine, bool suspend) 171 + nvkm_gr_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend) 172 172 { 173 173 struct nvkm_gr *gr = nvkm_gr(engine); 174 174 if (gr->func->fini) 175 - return gr->func->fini(gr, suspend); 175 + return gr->func->fini(gr, suspend != NVKM_POWEROFF); 176 176 return 0; 177 177 } 178 178
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
··· 2330 2330 2331 2331 WARN_ON(gf100_gr_fecs_halt_pipeline(gr)); 2332 2332 2333 - subdev->func->fini(subdev, false); 2333 + subdev->func->fini(subdev, NVKM_POWEROFF); 2334 2334 nvkm_mc_disable(device, subdev->type, subdev->inst); 2335 2335 if (gr->func->gpccs.reset) 2336 2336 gr->func->gpccs.reset(gr);
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/gr/nv04.c
··· 1158 1158 } 1159 1159 1160 1160 static int 1161 - nv04_gr_chan_fini(struct nvkm_object *object, bool suspend) 1161 + nv04_gr_chan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend) 1162 1162 { 1163 1163 struct nv04_gr_chan *chan = nv04_gr_chan(object); 1164 1164 struct nv04_gr *gr = chan->gr;
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/gr/nv10.c
··· 951 951 } 952 952 953 953 static int 954 - nv10_gr_chan_fini(struct nvkm_object *object, bool suspend) 954 + nv10_gr_chan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend) 955 955 { 956 956 struct nv10_gr_chan *chan = nv10_gr_chan(object); 957 957 struct nv10_gr *gr = chan->gr;
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/gr/nv20.c
··· 27 27 } 28 28 29 29 int 30 - nv20_gr_chan_fini(struct nvkm_object *object, bool suspend) 30 + nv20_gr_chan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend) 31 31 { 32 32 struct nv20_gr_chan *chan = nv20_gr_chan(object); 33 33 struct nv20_gr *gr = chan->gr;
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/gr/nv20.h
··· 31 31 32 32 void *nv20_gr_chan_dtor(struct nvkm_object *); 33 33 int nv20_gr_chan_init(struct nvkm_object *); 34 - int nv20_gr_chan_fini(struct nvkm_object *, bool); 34 + int nv20_gr_chan_fini(struct nvkm_object *, enum nvkm_suspend_state); 35 35 #endif
+2 -2
drivers/gpu/drm/nouveau/nvkm/engine/gr/nv40.c
··· 89 89 } 90 90 91 91 static int 92 - nv40_gr_chan_fini(struct nvkm_object *object, bool suspend) 92 + nv40_gr_chan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend) 93 93 { 94 94 struct nv40_gr_chan *chan = nv40_gr_chan(object); 95 95 struct nv40_gr *gr = chan->gr; ··· 101 101 nvkm_mask(device, 0x400720, 0x00000001, 0x00000000); 102 102 103 103 if (nvkm_rd32(device, 0x40032c) == inst) { 104 - if (suspend) { 104 + if (suspend != NVKM_POWEROFF) { 105 105 nvkm_wr32(device, 0x400720, 0x00000000); 106 106 nvkm_wr32(device, 0x400784, inst); 107 107 nvkm_mask(device, 0x400310, 0x00000020, 0x00000020);
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv44.c
··· 65 65 } 66 66 67 67 static int 68 - nv44_mpeg_chan_fini(struct nvkm_object *object, bool suspend) 68 + nv44_mpeg_chan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend) 69 69 { 70 70 71 71 struct nv44_mpeg_chan *chan = nv44_mpeg_chan(object);
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/sec2/base.c
··· 37 37 } 38 38 39 39 static int 40 - nvkm_sec2_fini(struct nvkm_engine *engine, bool suspend) 40 + nvkm_sec2_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend) 41 41 { 42 42 struct nvkm_sec2 *sec2 = nvkm_sec2(engine); 43 43 struct nvkm_subdev *subdev = &sec2->engine.subdev;
+2 -2
drivers/gpu/drm/nouveau/nvkm/engine/xtensa.c
··· 76 76 } 77 77 78 78 static int 79 - nvkm_xtensa_fini(struct nvkm_engine *engine, bool suspend) 79 + nvkm_xtensa_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend) 80 80 { 81 81 struct nvkm_xtensa *xtensa = nvkm_xtensa(engine); 82 82 struct nvkm_device *device = xtensa->engine.subdev.device; ··· 85 85 nvkm_wr32(device, base + 0xd84, 0); /* INTR_EN */ 86 86 nvkm_wr32(device, base + 0xd94, 0); /* FIFO_CTRL */ 87 87 88 - if (!suspend) 88 + if (suspend == NVKM_POWEROFF) 89 89 nvkm_memory_unref(&xtensa->gpu_fw); 90 90 return 0; 91 91 }
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/acr/base.c
··· 182 182 } 183 183 184 184 static int 185 - nvkm_acr_fini(struct nvkm_subdev *subdev, bool suspend) 185 + nvkm_acr_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend) 186 186 { 187 187 if (!subdev->use.enabled) 188 188 return 0;
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/bar/base.c
··· 90 90 } 91 91 92 92 static int 93 - nvkm_bar_fini(struct nvkm_subdev *subdev, bool suspend) 93 + nvkm_bar_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend) 94 94 { 95 95 struct nvkm_bar *bar = nvkm_bar(subdev); 96 96
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c
··· 577 577 } 578 578 579 579 static int 580 - nvkm_clk_fini(struct nvkm_subdev *subdev, bool suspend) 580 + nvkm_clk_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend) 581 581 { 582 582 struct nvkm_clk *clk = nvkm_clk(subdev); 583 583 flush_work(&clk->work);
+2 -2
drivers/gpu/drm/nouveau/nvkm/subdev/devinit/base.c
··· 67 67 } 68 68 69 69 static int 70 - nvkm_devinit_fini(struct nvkm_subdev *subdev, bool suspend) 70 + nvkm_devinit_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend) 71 71 { 72 72 struct nvkm_devinit *init = nvkm_devinit(subdev); 73 73 /* force full reinit on resume */ 74 - if (suspend) 74 + if (suspend != NVKM_POWEROFF) 75 75 init->post = true; 76 76 return 0; 77 77 }
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/fault/base.c
··· 51 51 } 52 52 53 53 static int 54 - nvkm_fault_fini(struct nvkm_subdev *subdev, bool suspend) 54 + nvkm_fault_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend) 55 55 { 56 56 struct nvkm_fault *fault = nvkm_fault(subdev); 57 57 if (fault->func->fini)
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/fault/user.c
··· 56 56 } 57 57 58 58 static int 59 - nvkm_ufault_fini(struct nvkm_object *object, bool suspend) 59 + nvkm_ufault_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend) 60 60 { 61 61 struct nvkm_fault_buffer *buffer = nvkm_fault_buffer(object); 62 62 buffer->fault->func->buffer.fini(buffer);
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/gpio/base.c
··· 144 144 } 145 145 146 146 static int 147 - nvkm_gpio_fini(struct nvkm_subdev *subdev, bool suspend) 147 + nvkm_gpio_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend) 148 148 { 149 149 struct nvkm_gpio *gpio = nvkm_gpio(subdev); 150 150 u32 mask = (1ULL << gpio->func->lines) - 1;
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/base.c
··· 48 48 } 49 49 50 50 static int 51 - nvkm_gsp_fini(struct nvkm_subdev *subdev, bool suspend) 51 + nvkm_gsp_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend) 52 52 { 53 53 struct nvkm_gsp *gsp = nvkm_gsp(subdev); 54 54
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/gh100.c
··· 17 17 #include <nvhw/ref/gh100/dev_riscv_pri.h> 18 18 19 19 int 20 - gh100_gsp_fini(struct nvkm_gsp *gsp, bool suspend) 20 + gh100_gsp_fini(struct nvkm_gsp *gsp, enum nvkm_suspend_state suspend) 21 21 { 22 22 struct nvkm_falcon *falcon = &gsp->falcon; 23 23 int ret, time = 4000;
+4 -4
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/priv.h
··· 59 59 void (*dtor)(struct nvkm_gsp *); 60 60 int (*oneinit)(struct nvkm_gsp *); 61 61 int (*init)(struct nvkm_gsp *); 62 - int (*fini)(struct nvkm_gsp *, bool suspend); 62 + int (*fini)(struct nvkm_gsp *, enum nvkm_suspend_state suspend); 63 63 int (*reset)(struct nvkm_gsp *); 64 64 65 65 struct { ··· 75 75 void tu102_gsp_fwsec_sb_dtor(struct nvkm_gsp *); 76 76 int tu102_gsp_oneinit(struct nvkm_gsp *); 77 77 int tu102_gsp_init(struct nvkm_gsp *); 78 - int tu102_gsp_fini(struct nvkm_gsp *, bool suspend); 78 + int tu102_gsp_fini(struct nvkm_gsp *, enum nvkm_suspend_state suspend); 79 79 int tu102_gsp_reset(struct nvkm_gsp *); 80 80 u64 tu102_gsp_wpr_heap_size(struct nvkm_gsp *); 81 81 ··· 87 87 88 88 int gh100_gsp_oneinit(struct nvkm_gsp *); 89 89 int gh100_gsp_init(struct nvkm_gsp *); 90 - int gh100_gsp_fini(struct nvkm_gsp *, bool suspend); 90 + int gh100_gsp_fini(struct nvkm_gsp *, enum nvkm_suspend_state suspend); 91 91 92 92 void r535_gsp_dtor(struct nvkm_gsp *); 93 93 int r535_gsp_oneinit(struct nvkm_gsp *); 94 94 int r535_gsp_init(struct nvkm_gsp *); 95 - int r535_gsp_fini(struct nvkm_gsp *, bool suspend); 95 + int r535_gsp_fini(struct nvkm_gsp *, enum nvkm_suspend_state suspend); 96 96 97 97 int nvkm_gsp_new_(const struct nvkm_gsp_fwif *, struct nvkm_device *, enum nvkm_subdev_type, int, 98 98 struct nvkm_gsp **);
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/fbsr.c
··· 208 208 } 209 209 210 210 static int 211 - r535_fbsr_suspend(struct nvkm_gsp *gsp) 211 + r535_fbsr_suspend(struct nvkm_gsp *gsp, bool runtime) 212 212 { 213 213 struct nvkm_subdev *subdev = &gsp->subdev; 214 214 struct nvkm_device *device = subdev->device;
+4 -4
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/gsp.c
··· 704 704 705 705 build_registry(gsp, rpc); 706 706 707 - return nvkm_gsp_rpc_wr(gsp, rpc, NVKM_GSP_RPC_REPLY_NOWAIT); 707 + return nvkm_gsp_rpc_wr(gsp, rpc, NVKM_GSP_RPC_REPLY_NOSEQ); 708 708 709 709 fail: 710 710 clean_registry(gsp); ··· 921 921 info->pciConfigMirrorSize = device->pci->func->cfg.size; 922 922 r535_gsp_acpi_info(gsp, &info->acpiMethodData); 923 923 924 - return nvkm_gsp_rpc_wr(gsp, info, NVKM_GSP_RPC_REPLY_NOWAIT); 924 + return nvkm_gsp_rpc_wr(gsp, info, NVKM_GSP_RPC_REPLY_NOSEQ); 925 925 } 926 926 927 927 static int ··· 1721 1721 } 1722 1722 1723 1723 int 1724 - r535_gsp_fini(struct nvkm_gsp *gsp, bool suspend) 1724 + r535_gsp_fini(struct nvkm_gsp *gsp, enum nvkm_suspend_state suspend) 1725 1725 { 1726 1726 struct nvkm_rm *rm = gsp->rm; 1727 1727 int ret; ··· 1748 1748 sr->sysmemAddrOfSuspendResumeData = gsp->sr.radix3.lvl0.addr; 1749 1749 sr->sizeOfSuspendResumeData = len; 1750 1750 1751 - ret = rm->api->fbsr->suspend(gsp); 1751 + ret = rm->api->fbsr->suspend(gsp, suspend == NVKM_RUNTIME_SUSPEND); 1752 1752 if (ret) { 1753 1753 nvkm_gsp_mem_dtor(&gsp->sr.meta); 1754 1754 nvkm_gsp_radix3_dtor(gsp, &gsp->sr.radix3);
+6
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c
··· 557 557 558 558 switch (policy) { 559 559 case NVKM_GSP_RPC_REPLY_NOWAIT: 560 + case NVKM_GSP_RPC_REPLY_NOSEQ: 560 561 break; 561 562 case NVKM_GSP_RPC_REPLY_RECV: 562 563 reply = r535_gsp_msg_recv(gsp, fn, gsp_rpc_len); ··· 588 587 print_hex_dump(KERN_INFO, "rpc: ", DUMP_PREFIX_OFFSET, 16, 1, 589 588 rpc->data, rpc->length - sizeof(*rpc), true); 590 589 } 590 + 591 + if (policy == NVKM_GSP_RPC_REPLY_NOSEQ) 592 + rpc->sequence = 0; 593 + else 594 + rpc->sequence = gsp->rpc_seq++; 591 595 592 596 ret = r535_gsp_cmdq_push(gsp, rpc); 593 597 if (ret)
+4 -4
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r570/fbsr.c
··· 62 62 } 63 63 64 64 static int 65 - r570_fbsr_init(struct nvkm_gsp *gsp, struct sg_table *sgt, u64 size) 65 + r570_fbsr_init(struct nvkm_gsp *gsp, struct sg_table *sgt, u64 size, bool runtime) 66 66 { 67 67 NV2080_CTRL_INTERNAL_FBSR_INIT_PARAMS *ctrl; 68 68 struct nvkm_gsp_object memlist; ··· 81 81 ctrl->hClient = gsp->internal.client.object.handle; 82 82 ctrl->hSysMem = memlist.handle; 83 83 ctrl->sysmemAddrOfSuspendResumeData = gsp->sr.meta.addr; 84 - ctrl->bEnteringGcoffState = 1; 84 + ctrl->bEnteringGcoffState = runtime ? 1 : 0; 85 85 86 86 ret = nvkm_gsp_rm_ctrl_wr(&gsp->internal.device.subdevice, ctrl); 87 87 if (ret) ··· 92 92 } 93 93 94 94 static int 95 - r570_fbsr_suspend(struct nvkm_gsp *gsp) 95 + r570_fbsr_suspend(struct nvkm_gsp *gsp, bool runtime) 96 96 { 97 97 struct nvkm_subdev *subdev = &gsp->subdev; 98 98 struct nvkm_device *device = subdev->device; ··· 133 133 return ret; 134 134 135 135 /* Initialise FBSR on RM. */ 136 - ret = r570_fbsr_init(gsp, &gsp->sr.fbsr, size); 136 + ret = r570_fbsr_init(gsp, &gsp->sr.fbsr, size, runtime); 137 137 if (ret) { 138 138 nvkm_gsp_sg_free(device, &gsp->sr.fbsr); 139 139 return ret;
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r570/gsp.c
··· 176 176 info->bIsPrimary = video_is_primary_device(device->dev); 177 177 info->bPreserveVideoMemoryAllocations = false; 178 178 179 - return nvkm_gsp_rpc_wr(gsp, info, NVKM_GSP_RPC_REPLY_NOWAIT); 179 + return nvkm_gsp_rpc_wr(gsp, info, NVKM_GSP_RPC_REPLY_NOSEQ); 180 180 } 181 181 182 182 static void
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/rm.h
··· 78 78 } *device; 79 79 80 80 const struct nvkm_rm_api_fbsr { 81 - int (*suspend)(struct nvkm_gsp *); 81 + int (*suspend)(struct nvkm_gsp *, bool runtime); 82 82 void (*resume)(struct nvkm_gsp *); 83 83 } *fbsr; 84 84
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/tu102.c
··· 161 161 } 162 162 163 163 int 164 - tu102_gsp_fini(struct nvkm_gsp *gsp, bool suspend) 164 + tu102_gsp_fini(struct nvkm_gsp *gsp, enum nvkm_suspend_state suspend) 165 165 { 166 166 u32 mbox0 = 0xff, mbox1 = 0xff; 167 167 int ret;
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c
··· 135 135 } 136 136 137 137 static int 138 - nvkm_i2c_fini(struct nvkm_subdev *subdev, bool suspend) 138 + nvkm_i2c_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend) 139 139 { 140 140 struct nvkm_i2c *i2c = nvkm_i2c(subdev); 141 141 struct nvkm_i2c_pad *pad;
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/instmem/base.c
··· 176 176 } 177 177 178 178 static int 179 - nvkm_instmem_fini(struct nvkm_subdev *subdev, bool suspend) 179 + nvkm_instmem_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend) 180 180 { 181 181 struct nvkm_instmem *imem = nvkm_instmem(subdev); 182 182 int ret;
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/pci/base.c
··· 74 74 } 75 75 76 76 static int 77 - nvkm_pci_fini(struct nvkm_subdev *subdev, bool suspend) 77 + nvkm_pci_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend) 78 78 { 79 79 struct nvkm_pci *pci = nvkm_pci(subdev); 80 80
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
··· 77 77 } 78 78 79 79 static int 80 - nvkm_pmu_fini(struct nvkm_subdev *subdev, bool suspend) 80 + nvkm_pmu_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend) 81 81 { 82 82 struct nvkm_pmu *pmu = nvkm_pmu(subdev); 83 83
+3 -3
drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c
··· 341 341 } 342 342 343 343 static int 344 - nvkm_therm_fini(struct nvkm_subdev *subdev, bool suspend) 344 + nvkm_therm_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend) 345 345 { 346 346 struct nvkm_therm *therm = nvkm_therm(subdev); 347 347 348 348 if (therm->func->fini) 349 349 therm->func->fini(therm); 350 350 351 - nvkm_therm_fan_fini(therm, suspend); 352 - nvkm_therm_sensor_fini(therm, suspend); 351 + nvkm_therm_fan_fini(therm, suspend != NVKM_POWEROFF); 352 + nvkm_therm_sensor_fini(therm, suspend != NVKM_POWEROFF); 353 353 354 354 if (suspend) { 355 355 therm->suspend = therm->mode;
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/timer/base.c
··· 149 149 } 150 150 151 151 static int 152 - nvkm_timer_fini(struct nvkm_subdev *subdev, bool suspend) 152 + nvkm_timer_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend) 153 153 { 154 154 struct nvkm_timer *tmr = nvkm_timer(subdev); 155 155 tmr->func->alarm_fini(tmr);
+4 -2
drivers/gpu/drm/xe/xe_guc.c
··· 1618 1618 return xe_guc_submit_start(guc); 1619 1619 } 1620 1620 1621 - void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p) 1621 + int xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p) 1622 1622 { 1623 1623 struct xe_gt *gt = guc_to_gt(guc); 1624 1624 unsigned int fw_ref; ··· 1630 1630 if (!IS_SRIOV_VF(gt_to_xe(gt))) { 1631 1631 fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT); 1632 1632 if (!fw_ref) 1633 - return; 1633 + return -EIO; 1634 1634 1635 1635 status = xe_mmio_read32(&gt->mmio, GUC_STATUS); 1636 1636 ··· 1658 1658 1659 1659 drm_puts(p, "\n"); 1660 1660 xe_guc_submit_print(guc, p); 1661 + 1662 + return 0; 1661 1663 } 1662 1664 1663 1665 /**
+1 -1
drivers/gpu/drm/xe/xe_guc.h
··· 45 45 int xe_guc_self_cfg64(struct xe_guc *guc, u16 key, u64 val); 46 46 void xe_guc_irq_handler(struct xe_guc *guc, const u16 iir); 47 47 void xe_guc_sanitize(struct xe_guc *guc); 48 - void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p); 48 + int xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p); 49 49 int xe_guc_reset_prepare(struct xe_guc *guc); 50 50 void xe_guc_reset_wait(struct xe_guc *guc); 51 51 void xe_guc_stop_prepare(struct xe_guc *guc);
+1 -1
drivers/gpu/drm/xe/xe_migrate.c
··· 1201 1201 } 1202 1202 1203 1203 /** 1204 - * xe_get_migrate_exec_queue() - Get the execution queue from migrate context. 1204 + * xe_migrate_exec_queue() - Get the execution queue from migrate context. 1205 1205 * @migrate: Migrate context. 1206 1206 * 1207 1207 * Return: Pointer to execution queue on success, error on failure
+10 -3
drivers/gpu/drm/xe/xe_pm.c
··· 8 8 #include <linux/fault-inject.h> 9 9 #include <linux/pm_runtime.h> 10 10 #include <linux/suspend.h> 11 + #include <linux/dmi.h> 11 12 12 13 #include <drm/drm_managed.h> 13 14 #include <drm/ttm/ttm_placement.h> ··· 358 357 359 358 static u32 vram_threshold_value(struct xe_device *xe) 360 359 { 361 - /* FIXME: D3Cold temporarily disabled by default on BMG */ 362 - if (xe->info.platform == XE_BATTLEMAGE) 363 - return 0; 360 + if (xe->info.platform == XE_BATTLEMAGE) { 361 + const char *product_name; 362 + 363 + product_name = dmi_get_system_info(DMI_PRODUCT_NAME); 364 + if (product_name && strstr(product_name, "NUC13RNG")) { 365 + drm_warn(&xe->drm, "BMG + D3Cold not supported on this platform\n"); 366 + return 0; 367 + } 368 + } 364 369 365 370 return DEFAULT_VRAM_THRESHOLD; 366 371 }
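This narrows the earlier blanket D3Cold disable on Battlemage to one DMI-identified system. If more affected machines turn up, the same check is commonly expressed as a dmi_system_id table; a hypothetical sketch of that alternative (not what the driver does above; table and function names are placeholders):

#include <linux/dmi.h>

static const struct dmi_system_id bmg_no_d3cold[] = {
	{
		.matches = {
			DMI_MATCH(DMI_PRODUCT_NAME, "NUC13RNG"),
		},
	},
	{ } /* terminator */
};

static bool bmg_d3cold_blocked(void)
{
	/* dmi_check_system() returns the number of matching entries */
	return dmi_check_system(bmg_no_d3cold) != 0;
}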
+1 -1
drivers/gpu/drm/xe/xe_query.c
··· 491 491 492 492 if (copy_to_user(*ptr, topo, sizeof(*topo))) 493 493 return -EFAULT; 494 - *ptr += sizeof(topo); 494 + *ptr += sizeof(*topo); 495 495 496 496 if (copy_to_user(*ptr, mask, mask_size)) 497 497 return -EFAULT;
+1 -1
drivers/gpu/drm/xe/xe_tlb_inval.c
··· 115 115 } 116 116 117 117 /** 118 - * xe_gt_tlb_inval_init - Initialize TLB invalidation state 118 + * xe_gt_tlb_inval_init_early() - Initialize TLB invalidation state 119 119 * @gt: GT structure 120 120 * 121 121 * Initialize TLB invalidation state, purely software initialization, should
+1 -1
drivers/gpu/drm/xe/xe_tlb_inval_job.c
··· 164 164 } 165 165 166 166 /** 167 - * xe_tlb_inval_alloc_dep() - TLB invalidation job alloc dependency 167 + * xe_tlb_inval_job_alloc_dep() - TLB invalidation job alloc dependency 168 168 * @job: TLB invalidation job to alloc dependency for 169 169 * 170 170 * Allocate storage for a dependency in the TLB invalidation fence. This
+14 -3
drivers/hwmon/acpi_power_meter.c
··· 47 47 static int cap_in_hardware; 48 48 static bool force_cap_on; 49 49 50 + static DEFINE_MUTEX(acpi_notify_lock); 51 + 50 52 static int can_cap_in_hardware(void) 51 53 { 52 54 return force_cap_on || cap_in_hardware; ··· 825 823 826 824 resource = acpi_driver_data(device); 827 825 826 + guard(mutex)(&acpi_notify_lock); 827 + 828 828 switch (event) { 829 829 case METER_NOTIFY_CONFIG: 830 + if (!IS_ERR(resource->hwmon_dev)) 831 + hwmon_device_unregister(resource->hwmon_dev); 832 + 830 833 mutex_lock(&resource->lock); 834 + 831 835 free_capabilities(resource); 832 836 remove_domain_devices(resource); 833 - hwmon_device_unregister(resource->hwmon_dev); 834 837 res = read_capabilities(resource); 835 838 if (res) 836 839 dev_err_once(&device->dev, "read capabilities failed.\n"); 837 840 res = read_domain_devices(resource); 838 841 if (res && res != -ENODEV) 839 842 dev_err_once(&device->dev, "read domain devices failed.\n"); 843 + 844 + mutex_unlock(&resource->lock); 845 + 840 846 resource->hwmon_dev = 841 847 hwmon_device_register_with_info(&device->dev, 842 848 ACPI_POWER_METER_NAME, ··· 853 843 power_extra_groups); 854 844 if (IS_ERR(resource->hwmon_dev)) 855 845 dev_err_once(&device->dev, "register hwmon device failed.\n"); 856 - mutex_unlock(&resource->lock); 846 + 857 847 break; 858 848 case METER_NOTIFY_TRIP: 859 849 sysfs_notify(&device->dev.kobj, NULL, POWER_AVERAGE_NAME); ··· 963 953 return; 964 954 965 955 resource = acpi_driver_data(device); 966 - hwmon_device_unregister(resource->hwmon_dev); 956 + if (!IS_ERR(resource->hwmon_dev)) 957 + hwmon_device_unregister(resource->hwmon_dev); 967 958 968 959 remove_domain_devices(resource); 969 960 free_capabilities(resource);
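The notify handler now serializes itself with guard(mutex)() from <linux/cleanup.h>, which takes the lock and releases it automatically when the enclosing scope exits, on every return path. The general shape, with placeholder names:

#include <linux/cleanup.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(my_lock);

static void my_notify_handler(void)
{
	guard(mutex)(&my_lock);	/* dropped automatically at any exit */

	/* ... work that must not run concurrently with itself ... */
}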
+8
drivers/hwmon/dell-smm-hwmon.c
··· 1640 1640 .driver_data = (void *)&i8k_fan_control_data[I8K_FAN_30A3_31A3], 1641 1641 }, 1642 1642 { 1643 + .ident = "Dell G15 5510", 1644 + .matches = { 1645 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 1646 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Dell G15 5510"), 1647 + }, 1648 + .driver_data = (void *)&i8k_fan_control_data[I8K_FAN_30A3_31A3], 1649 + }, 1650 + { 1643 1651 .ident = "Dell G15 5511", 1644 1652 .matches = { 1645 1653 DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+3 -3
drivers/hwmon/gpio-fan.c
··· 148 148 int ret; 149 149 150 150 ret = pm_runtime_put_sync(fan_data->dev); 151 - if (ret < 0) 151 + if (ret < 0 && ret != -ENOSYS) 152 152 return ret; 153 153 } 154 154 ··· 291 291 { 292 292 struct gpio_fan_data *fan_data = dev_get_drvdata(dev); 293 293 unsigned long rpm; 294 - int ret = count; 294 + int ret; 295 295 296 296 if (kstrtoul(buf, 10, &rpm)) 297 297 return -EINVAL; ··· 308 308 exit_unlock: 309 309 mutex_unlock(&fan_data->lock); 310 310 311 - return ret; 311 + return ret ? ret : count; 312 312 } 313 313 314 314 static DEVICE_ATTR_RW(pwm1);
+1
drivers/hwmon/occ/common.c
··· 749 749 * are dynamically allocated, we cannot use the existing kernel macros which 750 750 * stringify the name argument. 751 751 */ 752 + __printf(7, 8) 752 753 static void occ_init_attribute(struct occ_attribute *attr, int mode, 753 754 ssize_t (*show)(struct device *dev, struct device_attribute *attr, char *buf), 754 755 ssize_t (*store)(struct device *dev, struct device_attribute *attr,
+2 -1
drivers/i2c/busses/i2c-imx.c
··· 1103 1103 1104 1104 case IMX_I2C_STATE_READ_BLOCK_DATA_LEN: 1105 1105 i2c_imx_isr_read_block_data_len(i2c_imx); 1106 - i2c_imx->state = IMX_I2C_STATE_READ_CONTINUE; 1106 + if (i2c_imx->state == IMX_I2C_STATE_READ_BLOCK_DATA_LEN) 1107 + i2c_imx->state = IMX_I2C_STATE_READ_CONTINUE; 1107 1108 break; 1108 1109 1109 1110 case IMX_I2C_STATE_WRITE:
+1 -1
drivers/iommu/intel/pasid.h
··· 24 24 25 25 #define PASID_FLAG_NESTED BIT(1) 26 26 #define PASID_FLAG_PAGE_SNOOP BIT(2) 27 - #define PASID_FLAG_PWSNP BIT(2) 27 + #define PASID_FLAG_PWSNP BIT(3) 28 28 29 29 /* 30 30 * The PASID_FLAG_FL5LP flag Indicates using 5-level paging for first-
+3
drivers/net/ethernet/adi/adin1110.c
··· 1089 1089 1090 1090 reset_gpio = devm_gpiod_get_optional(&priv->spidev->dev, "reset", 1091 1091 GPIOD_OUT_LOW); 1092 + if (IS_ERR(reset_gpio)) 1093 + return dev_err_probe(&priv->spidev->dev, PTR_ERR(reset_gpio), 1094 + "failed to get reset gpio\n"); 1092 1095 if (reset_gpio) { 1093 1096 /* MISO pin is used for internal configuration, can't have 1094 1097 * anyone else disturbing the SDO line.
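The added IS_ERR() check covers the third possible return of devm_gpiod_get_optional(): a valid descriptor, NULL when the property is simply absent, or an ERR_PTR() on a real failure such as -EPROBE_DEFER, so the error and not-present cases need distinct handling. A minimal sketch of the pattern, with placeholder names:

#include <linux/gpio/consumer.h>

static int my_handle_reset(struct device *dev)
{
	struct gpio_desc *reset;

	reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
	if (IS_ERR(reset))	/* real error, e.g. -EPROBE_DEFER */
		return PTR_ERR(reset);
	if (reset)		/* line present: assert it */
		gpiod_set_value_cansleep(reset, 1);
	/* NULL: no reset line wired up, nothing to do */
	return 0;
}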
+20 -19
drivers/net/ethernet/cavium/liquidio/lio_main.c
··· 3505 3505 */ 3506 3506 netdev->netdev_ops = &lionetdevops; 3507 3507 3508 + lio = GET_LIO(netdev); 3509 + 3510 + memset(lio, 0, sizeof(struct lio)); 3511 + 3512 + lio->ifidx = ifidx_or_pfnum; 3513 + 3514 + props = &octeon_dev->props[i]; 3515 + props->gmxport = resp->cfg_info.linfo.gmxport; 3516 + props->netdev = netdev; 3517 + 3518 + /* Point to the properties for octeon device to which this 3519 + * interface belongs. 3520 + */ 3521 + lio->oct_dev = octeon_dev; 3522 + lio->octprops = props; 3523 + lio->netdev = netdev; 3524 + 3508 3525 retval = netif_set_real_num_rx_queues(netdev, num_oqueues); 3509 3526 if (retval) { 3510 3527 dev_err(&octeon_dev->pci_dev->dev, ··· 3537 3520 WRITE_ONCE(sc->caller_is_done, true); 3538 3521 goto setup_nic_dev_free; 3539 3522 } 3540 - 3541 - lio = GET_LIO(netdev); 3542 - 3543 - memset(lio, 0, sizeof(struct lio)); 3544 - 3545 - lio->ifidx = ifidx_or_pfnum; 3546 - 3547 - props = &octeon_dev->props[i]; 3548 - props->gmxport = resp->cfg_info.linfo.gmxport; 3549 - props->netdev = netdev; 3550 3523 3551 3524 lio->linfo.num_rxpciq = num_oqueues; 3552 3525 lio->linfo.num_txpciq = num_iqueues; ··· 3602 3595 /* MTU range: 68 - 16000 */ 3603 3596 netdev->min_mtu = LIO_MIN_MTU_SIZE; 3604 3597 netdev->max_mtu = LIO_MAX_MTU_SIZE; 3605 - 3606 - /* Point to the properties for octeon device to which this 3607 - * interface belongs. 3608 - */ 3609 - lio->oct_dev = octeon_dev; 3610 - lio->octprops = props; 3611 - lio->netdev = netdev; 3612 3598 3613 3599 dev_dbg(&octeon_dev->pci_dev->dev, 3614 3600 "if%d gmx: %d hw_addr: 0x%llx\n", i, ··· 3750 3750 if (!devlink) { 3751 3751 device_unlock(&octeon_dev->pci_dev->dev); 3752 3752 dev_err(&octeon_dev->pci_dev->dev, "devlink alloc failed\n"); 3753 + i--; 3753 3754 goto setup_nic_dev_free; 3754 3755 } 3755 3756 ··· 3766 3765 3767 3766 setup_nic_dev_free: 3768 3767 3769 - while (i--) { 3768 + do { 3770 3769 dev_err(&octeon_dev->pci_dev->dev, 3771 3770 "NIC ifidx:%d Setup failed\n", i); 3772 3771 liquidio_destroy_nic_device(octeon_dev, i); 3773 - } 3772 + } while (i--); 3774 3773 3775 3774 setup_nic_dev_done: 3776 3775
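The switch from while (i--) to do { } while (i--) changes which indices the error path tears down, which is the point of the fix. For a starting index of 2 (illustrative, not driver code):

/*
 * while (i--)        with i == 2: body runs for i = 1, 0
 *                    (the failing index 2 is skipped)
 * do { } while (i--) with i == 2: body runs for i = 2, 1, 0
 *                    (the failing index is cleaned up as well)
 *
 * Hence the extra i-- before the goto in the devlink failure path:
 * at that point interface i was never set up, so it must again be
 * excluded from the do/while form.
 */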
+2 -2
drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
··· 2212 2212 2213 2213 setup_nic_dev_free: 2214 2214 2215 - while (i--) { 2215 + do { 2216 2216 dev_err(&octeon_dev->pci_dev->dev, 2217 2217 "NIC ifidx:%d Setup failed\n", i); 2218 2218 liquidio_destroy_nic_device(octeon_dev, i); 2219 - } 2219 + } while (i--); 2220 2220 2221 2221 setup_nic_dev_done: 2222 2222
+10
drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
··· 1531 1531 } 1532 1532 1533 1533 if_id = (status & 0xFFFF0000) >> 16; 1534 + if (if_id >= ethsw->sw_attr.num_ifs) { 1535 + dev_err(dev, "Invalid if_id %d in IRQ status\n", if_id); 1536 + goto out; 1537 + } 1534 1538 port_priv = ethsw->ports[if_id]; 1535 1539 1536 1540 if (status & DPSW_IRQ_EVENT_LINK_CHANGED) ··· 3025 3021 &ethsw->sw_attr); 3026 3022 if (err) { 3027 3023 dev_err(dev, "dpsw_get_attributes err %d\n", err); 3024 + goto err_close; 3025 + } 3026 + 3027 + if (!ethsw->sw_attr.num_ifs) { 3028 + dev_err(dev, "DPSW device has no interfaces\n"); 3029 + err = -ENODEV; 3028 3030 goto err_close; 3029 3031 } 3030 3032
+7 -4
drivers/net/ethernet/freescale/enetc/enetc.c
··· 2512 2512 struct enetc_hw *hw = &si->hw; 2513 2513 int err; 2514 2514 2515 - /* set SI cache attributes */ 2516 - enetc_wr(hw, ENETC_SICAR0, 2517 - ENETC_SICAR_RD_COHERENT | ENETC_SICAR_WR_COHERENT); 2518 - enetc_wr(hw, ENETC_SICAR1, ENETC_SICAR_MSI); 2515 + if (is_enetc_rev1(si)) { 2516 + /* set SI cache attributes */ 2517 + enetc_wr(hw, ENETC_SICAR0, 2518 + ENETC_SICAR_RD_COHERENT | ENETC_SICAR_WR_COHERENT); 2519 + enetc_wr(hw, ENETC_SICAR1, ENETC_SICAR_MSI); 2520 + } 2521 + 2519 2522 /* enable SI */ 2520 2523 enetc_wr(hw, ENETC_SIMR, ENETC_SIMR_EN); 2521 2524
+3 -3
drivers/net/ethernet/freescale/enetc/enetc4_pf.c
··· 59 59 60 60 if (si != 0) { 61 61 __raw_writel(upper, hw->port + ENETC4_PSIPMAR0(si)); 62 - __raw_writew(lower, hw->port + ENETC4_PSIPMAR1(si)); 62 + __raw_writel(lower, hw->port + ENETC4_PSIPMAR1(si)); 63 63 } else { 64 64 __raw_writel(upper, hw->port + ENETC4_PMAR0); 65 - __raw_writew(lower, hw->port + ENETC4_PMAR1); 65 + __raw_writel(lower, hw->port + ENETC4_PMAR1); 66 66 } 67 67 } 68 68 ··· 73 73 u16 lower; 74 74 75 75 upper = __raw_readl(hw->port + ENETC4_PSIPMAR0(si)); 76 - lower = __raw_readw(hw->port + ENETC4_PSIPMAR1(si)); 76 + lower = __raw_readl(hw->port + ENETC4_PSIPMAR1(si)); 77 77 78 78 put_unaligned_le32(upper, addr); 79 79 put_unaligned_le16(lower, addr + 4);
-4
drivers/net/ethernet/freescale/enetc/enetc_cbdr.c
··· 74 74 if (!user->ring) 75 75 return -ENOMEM; 76 76 77 - /* set CBDR cache attributes */ 78 - enetc_wr(hw, ENETC_SICAR2, 79 - ENETC_SICAR_RD_COHERENT | ENETC_SICAR_WR_COHERENT); 80 - 81 77 regs.pir = hw->reg + ENETC_SICBDRPIR; 82 78 regs.cir = hw->reg + ENETC_SICBDRCIR; 83 79 regs.mr = hw->reg + ENETC_SICBDRMR;
+14 -3
drivers/net/ethernet/freescale/enetc/enetc_hw.h
··· 708 708 #define ENETC_RFSE_EN BIT(15) 709 709 #define ENETC_RFSE_MODE_BD 2 710 710 711 + static inline void enetc_get_primary_mac_addr(struct enetc_hw *hw, u8 *addr) 712 + { 713 + u32 upper; 714 + u16 lower; 715 + 716 + upper = __raw_readl(hw->reg + ENETC_SIPMAR0); 717 + lower = __raw_readl(hw->reg + ENETC_SIPMAR1); 718 + 719 + put_unaligned_le32(upper, addr); 720 + put_unaligned_le16(lower, addr + 4); 721 + } 722 + 711 723 static inline void enetc_load_primary_mac_addr(struct enetc_hw *hw, 712 724 struct net_device *ndev) 713 725 { 714 - u8 addr[ETH_ALEN] __aligned(4); 726 + u8 addr[ETH_ALEN]; 715 727 716 - *(u32 *)addr = __raw_readl(hw->reg + ENETC_SIPMAR0); 717 - *(u16 *)(addr + 4) = __raw_readw(hw->reg + ENETC_SIPMAR1); 728 + enetc_get_primary_mac_addr(hw, addr); 718 729 eth_hw_addr_set(ndev, addr); 719 730 } 720 731
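Dropping the __aligned(4) attribute together with the raw pointer casts works because put_unaligned_le32()/put_unaligned_le16() are safe on any byte buffer. The pattern in isolation (a sketch; the include assumes the current <linux/unaligned.h> home of these helpers):

#include <linux/unaligned.h>

/* Compose a 6-byte MAC address from two little-endian register reads. */
static void mac_from_regs(u32 upper, u16 lower, u8 addr[6])
{
	put_unaligned_le32(upper, addr);	/* bytes 0..3 */
	put_unaligned_le16(lower, addr + 4);	/* bytes 4..5 */
}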
+51 -26
drivers/net/ethernet/google/gve/gve_ethtool.c
··· 152 152 u64 tmp_rx_pkts, tmp_rx_hsplit_pkt, tmp_rx_bytes, tmp_rx_hsplit_bytes, 153 153 tmp_rx_skb_alloc_fail, tmp_rx_buf_alloc_fail, 154 154 tmp_rx_desc_err_dropped_pkt, tmp_rx_hsplit_unsplit_pkt, 155 - tmp_tx_pkts, tmp_tx_bytes; 155 + tmp_tx_pkts, tmp_tx_bytes, 156 + tmp_xdp_tx_errors, tmp_xdp_redirect_errors; 156 157 u64 rx_buf_alloc_fail, rx_desc_err_dropped_pkt, rx_hsplit_unsplit_pkt, 157 158 rx_pkts, rx_hsplit_pkt, rx_skb_alloc_fail, rx_bytes, tx_pkts, tx_bytes, 158 - tx_dropped; 159 - int stats_idx, base_stats_idx, max_stats_idx; 159 + tx_dropped, xdp_tx_errors, xdp_redirect_errors; 160 + int rx_base_stats_idx, max_rx_stats_idx, max_tx_stats_idx; 161 + int stats_idx, stats_region_len, nic_stats_len; 160 162 struct stats *report_stats; 161 163 int *rx_qid_to_stats_idx; 162 164 int *tx_qid_to_stats_idx; ··· 200 198 for (rx_pkts = 0, rx_bytes = 0, rx_hsplit_pkt = 0, 201 199 rx_skb_alloc_fail = 0, rx_buf_alloc_fail = 0, 202 200 rx_desc_err_dropped_pkt = 0, rx_hsplit_unsplit_pkt = 0, 201 + xdp_tx_errors = 0, xdp_redirect_errors = 0, 203 202 ring = 0; 204 203 ring < priv->rx_cfg.num_queues; ring++) { 205 204 if (priv->rx) { ··· 218 215 rx->rx_desc_err_dropped_pkt; 219 216 tmp_rx_hsplit_unsplit_pkt = 220 217 rx->rx_hsplit_unsplit_pkt; 218 + tmp_xdp_tx_errors = rx->xdp_tx_errors; 219 + tmp_xdp_redirect_errors = 220 + rx->xdp_redirect_errors; 221 221 } while (u64_stats_fetch_retry(&priv->rx[ring].statss, 222 222 start)); 223 223 rx_pkts += tmp_rx_pkts; ··· 230 224 rx_buf_alloc_fail += tmp_rx_buf_alloc_fail; 231 225 rx_desc_err_dropped_pkt += tmp_rx_desc_err_dropped_pkt; 232 226 rx_hsplit_unsplit_pkt += tmp_rx_hsplit_unsplit_pkt; 227 + xdp_tx_errors += tmp_xdp_tx_errors; 228 + xdp_redirect_errors += tmp_xdp_redirect_errors; 233 229 } 234 230 } 235 231 for (tx_pkts = 0, tx_bytes = 0, tx_dropped = 0, ring = 0; ··· 257 249 data[i++] = rx_bytes; 258 250 data[i++] = tx_bytes; 259 251 /* total rx dropped packets */ 260 - data[i++] = rx_skb_alloc_fail + rx_buf_alloc_fail + 261 - rx_desc_err_dropped_pkt; 252 + data[i++] = rx_skb_alloc_fail + rx_desc_err_dropped_pkt + 253 + xdp_tx_errors + xdp_redirect_errors; 262 254 data[i++] = tx_dropped; 263 255 data[i++] = priv->tx_timeo_cnt; 264 256 data[i++] = rx_skb_alloc_fail; ··· 273 265 data[i++] = priv->stats_report_trigger_cnt; 274 266 i = GVE_MAIN_STATS_LEN; 275 267 276 - /* For rx cross-reporting stats, start from nic rx stats in report */ 277 - base_stats_idx = GVE_TX_STATS_REPORT_NUM * num_tx_queues + 278 - GVE_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues; 279 - /* The boundary between driver stats and NIC stats shifts if there are 280 - * stopped queues. 
281 - */ 282 - base_stats_idx += NIC_RX_STATS_REPORT_NUM * num_stopped_rxqs + 283 - NIC_TX_STATS_REPORT_NUM * num_stopped_txqs; 284 - max_stats_idx = NIC_RX_STATS_REPORT_NUM * 285 - (priv->rx_cfg.num_queues - num_stopped_rxqs) + 286 - base_stats_idx; 268 + rx_base_stats_idx = 0; 269 + max_rx_stats_idx = 0; 270 + max_tx_stats_idx = 0; 271 + stats_region_len = priv->stats_report_len - 272 + sizeof(struct gve_stats_report); 273 + nic_stats_len = (NIC_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues + 274 + NIC_TX_STATS_REPORT_NUM * num_tx_queues) * sizeof(struct stats); 275 + if (unlikely((stats_region_len - 276 + nic_stats_len) % sizeof(struct stats))) { 277 + net_err_ratelimited("Starting index of NIC stats should be multiple of stats size"); 278 + } else { 279 + /* For rx cross-reporting stats, 280 + * start from nic rx stats in report 281 + */ 282 + rx_base_stats_idx = (stats_region_len - nic_stats_len) / 283 + sizeof(struct stats); 284 + /* The boundary between driver stats and NIC stats 285 + * shifts if there are stopped queues 286 + */ 287 + rx_base_stats_idx += NIC_RX_STATS_REPORT_NUM * 288 + num_stopped_rxqs + NIC_TX_STATS_REPORT_NUM * 289 + num_stopped_txqs; 290 + max_rx_stats_idx = NIC_RX_STATS_REPORT_NUM * 291 + (priv->rx_cfg.num_queues - num_stopped_rxqs) + 292 + rx_base_stats_idx; 293 + max_tx_stats_idx = NIC_TX_STATS_REPORT_NUM * 294 + (num_tx_queues - num_stopped_txqs) + 295 + max_rx_stats_idx; 296 + } 287 297 /* Preprocess the stats report for rx, map queue id to start index */ 288 298 skip_nic_stats = false; 289 - for (stats_idx = base_stats_idx; stats_idx < max_stats_idx; 299 + for (stats_idx = rx_base_stats_idx; stats_idx < max_rx_stats_idx; 290 300 stats_idx += NIC_RX_STATS_REPORT_NUM) { 291 301 u32 stat_name = be32_to_cpu(report_stats[stats_idx].stat_name); 292 302 u32 queue_id = be32_to_cpu(report_stats[stats_idx].queue_id); ··· 337 311 tmp_rx_buf_alloc_fail = rx->rx_buf_alloc_fail; 338 312 tmp_rx_desc_err_dropped_pkt = 339 313 rx->rx_desc_err_dropped_pkt; 314 + tmp_xdp_tx_errors = rx->xdp_tx_errors; 315 + tmp_xdp_redirect_errors = 316 + rx->xdp_redirect_errors; 340 317 } while (u64_stats_fetch_retry(&priv->rx[ring].statss, 341 318 start)); 342 319 data[i++] = tmp_rx_bytes; ··· 350 321 data[i++] = rx->rx_frag_alloc_cnt; 351 322 /* rx dropped packets */ 352 323 data[i++] = tmp_rx_skb_alloc_fail + 353 - tmp_rx_buf_alloc_fail + 354 - tmp_rx_desc_err_dropped_pkt; 324 + tmp_rx_desc_err_dropped_pkt + 325 + tmp_xdp_tx_errors + 326 + tmp_xdp_redirect_errors; 355 327 data[i++] = rx->rx_copybreak_pkt; 356 328 data[i++] = rx->rx_copied_pkt; 357 329 /* stats from NIC */ ··· 384 354 i += priv->rx_cfg.num_queues * NUM_GVE_RX_CNTS; 385 355 } 386 356 387 - /* For tx cross-reporting stats, start from nic tx stats in report */ 388 - base_stats_idx = max_stats_idx; 389 - max_stats_idx = NIC_TX_STATS_REPORT_NUM * 390 - (num_tx_queues - num_stopped_txqs) + 391 - max_stats_idx; 392 - /* Preprocess the stats report for tx, map queue id to start index */ 393 357 skip_nic_stats = false; 394 - for (stats_idx = base_stats_idx; stats_idx < max_stats_idx; 358 + /* NIC TX stats start right after NIC RX stats */ 359 + for (stats_idx = max_rx_stats_idx; stats_idx < max_tx_stats_idx; 395 360 stats_idx += NIC_TX_STATS_REPORT_NUM) { 396 361 u32 stat_name = be32_to_cpu(report_stats[stats_idx].stat_name); 397 362 u32 queue_id = be32_to_cpu(report_stats[stats_idx].queue_id);
+2 -2
drivers/net/ethernet/google/gve/gve_main.c
··· 283 283 int tx_stats_num, rx_stats_num; 284 284 285 285 tx_stats_num = (GVE_TX_STATS_REPORT_NUM + NIC_TX_STATS_REPORT_NUM) * 286 - gve_num_tx_queues(priv); 286 + priv->tx_cfg.max_queues; 287 287 rx_stats_num = (GVE_RX_STATS_REPORT_NUM + NIC_RX_STATS_REPORT_NUM) * 288 - priv->rx_cfg.num_queues; 288 + priv->rx_cfg.max_queues; 289 289 priv->stats_report_len = struct_size(priv->stats_report, stats, 290 290 size_add(tx_stats_num, rx_stats_num)); 291 291 priv->stats_report =
-1
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 9030 9030 TCP_FLAG_FIN | 9031 9031 TCP_FLAG_CWR) >> 16); 9032 9032 wr32(&pf->hw, I40E_GLLAN_TSOMSK_L, be32_to_cpu(TCP_FLAG_CWR) >> 16); 9033 - udp_tunnel_get_rx_info(netdev); 9034 9033 9035 9034 return 0; 9036 9035 }
+14 -12
drivers/net/ethernet/intel/ice/ice_main.c
··· 3314 3314 if (ice_is_reset_in_progress(pf->state)) 3315 3315 goto skip_irq; 3316 3316 3317 - if (test_and_clear_bit(ICE_MISC_THREAD_TX_TSTAMP, pf->misc_thread)) { 3318 - /* Process outstanding Tx timestamps. If there is more work, 3319 - * re-arm the interrupt to trigger again. 3320 - */ 3321 - if (ice_ptp_process_ts(pf) == ICE_TX_TSTAMP_WORK_PENDING) { 3322 - wr32(hw, PFINT_OICR, PFINT_OICR_TSYN_TX_M); 3323 - ice_flush(hw); 3324 - } 3325 - } 3317 + if (test_and_clear_bit(ICE_MISC_THREAD_TX_TSTAMP, pf->misc_thread)) 3318 + ice_ptp_process_ts(pf); 3326 3319 3327 3320 skip_irq: 3328 3321 ice_irq_dynamic_ena(hw, NULL, NULL); 3322 + ice_flush(hw); 3323 + 3324 + if (ice_ptp_tx_tstamps_pending(pf)) { 3325 + /* If any new Tx timestamps happened while in interrupt, 3326 + * re-arm the interrupt to trigger it again. 3327 + */ 3328 + wr32(hw, PFINT_OICR, PFINT_OICR_TSYN_TX_M); 3329 + ice_flush(hw); 3330 + } 3329 3331 3330 3332 return IRQ_HANDLED; 3331 3333 } ··· 7809 7807 7810 7808 /* Restore timestamp mode settings after VSI rebuild */ 7811 7809 ice_ptp_restore_timestamp_mode(pf); 7810 + 7811 + /* Start PTP periodic work after VSI is fully rebuilt */ 7812 + ice_ptp_queue_work(pf); 7812 7813 return; 7813 7814 7814 7815 err_vsi_rebuild: ··· 9661 9656 if (err) 9662 9657 netdev_err(netdev, "Failed to open VSI 0x%04X on switch 0x%04X\n", 9663 9658 vsi->vsi_num, vsi->vsw->sw_id); 9664 - 9665 - /* Update existing tunnels information */ 9666 - udp_tunnel_get_rx_info(netdev); 9667 9659 9668 9660 return err; 9669 9661 }
+109 -70
drivers/net/ethernet/intel/ice/ice_ptp.c
··· 573 573 pf = ptp_port_to_pf(ptp_port); 574 574 hw = &pf->hw; 575 575 576 + if (!tx->init) 577 + return; 578 + 576 579 /* Read the Tx ready status first */ 577 580 if (tx->has_ready_bitmap) { 578 581 err = ice_get_phy_tx_tstamp_ready(hw, tx->block, &tstamp_ready); ··· 677 674 pf->ptp.tx_hwtstamp_good += tstamp_good; 678 675 } 679 676 680 - /** 681 - * ice_ptp_tx_tstamp_owner - Process Tx timestamps for all ports on the device 682 - * @pf: Board private structure 683 - */ 684 - static enum ice_tx_tstamp_work ice_ptp_tx_tstamp_owner(struct ice_pf *pf) 677 + static void ice_ptp_tx_tstamp_owner(struct ice_pf *pf) 685 678 { 686 679 struct ice_ptp_port *port; 687 - unsigned int i; 688 680 689 681 mutex_lock(&pf->adapter->ports.lock); 690 682 list_for_each_entry(port, &pf->adapter->ports.ports, list_node) { ··· 691 693 ice_ptp_process_tx_tstamp(tx); 692 694 } 693 695 mutex_unlock(&pf->adapter->ports.lock); 694 - 695 - for (i = 0; i < ICE_GET_QUAD_NUM(pf->hw.ptp.num_lports); i++) { 696 - u64 tstamp_ready; 697 - int err; 698 - 699 - /* Read the Tx ready status first */ 700 - err = ice_get_phy_tx_tstamp_ready(&pf->hw, i, &tstamp_ready); 701 - if (err) 702 - break; 703 - else if (tstamp_ready) 704 - return ICE_TX_TSTAMP_WORK_PENDING; 705 - } 706 - 707 - return ICE_TX_TSTAMP_WORK_DONE; 708 - } 709 - 710 - /** 711 - * ice_ptp_tx_tstamp - Process Tx timestamps for this function. 712 - * @tx: Tx tracking structure to initialize 713 - * 714 - * Returns: ICE_TX_TSTAMP_WORK_PENDING if there are any outstanding incomplete 715 - * Tx timestamps, or ICE_TX_TSTAMP_WORK_DONE otherwise. 716 - */ 717 - static enum ice_tx_tstamp_work ice_ptp_tx_tstamp(struct ice_ptp_tx *tx) 718 - { 719 - bool more_timestamps; 720 - unsigned long flags; 721 - 722 - if (!tx->init) 723 - return ICE_TX_TSTAMP_WORK_DONE; 724 - 725 - /* Process the Tx timestamp tracker */ 726 - ice_ptp_process_tx_tstamp(tx); 727 - 728 - /* Check if there are outstanding Tx timestamps */ 729 - spin_lock_irqsave(&tx->lock, flags); 730 - more_timestamps = tx->init && !bitmap_empty(tx->in_use, tx->len); 731 - spin_unlock_irqrestore(&tx->lock, flags); 732 - 733 - if (more_timestamps) 734 - return ICE_TX_TSTAMP_WORK_PENDING; 735 - 736 - return ICE_TX_TSTAMP_WORK_DONE; 737 696 } 738 697 739 698 /** ··· 1302 1347 /* Do not reconfigure E810 or E830 PHY */ 1303 1348 return; 1304 1349 case ICE_MAC_GENERIC: 1305 - case ICE_MAC_GENERIC_3K_E825: 1306 1350 ice_ptp_port_phy_restart(ptp_port); 1351 + return; 1352 + case ICE_MAC_GENERIC_3K_E825: 1353 + if (linkup) 1354 + ice_ptp_port_phy_restart(ptp_port); 1307 1355 return; 1308 1356 default: 1309 1357 dev_warn(ice_pf_to_dev(pf), "%s: Unknown PHY type\n", __func__); ··· 2621 2663 return idx + tx->offset; 2622 2664 } 2623 2665 2624 - /** 2625 - * ice_ptp_process_ts - Process the PTP Tx timestamps 2626 - * @pf: Board private structure 2627 - * 2628 - * Returns: ICE_TX_TSTAMP_WORK_PENDING if there are any outstanding Tx 2629 - * timestamps that need processing, and ICE_TX_TSTAMP_WORK_DONE otherwise. 
2630 - */ 2631 - enum ice_tx_tstamp_work ice_ptp_process_ts(struct ice_pf *pf) 2666 + void ice_ptp_process_ts(struct ice_pf *pf) 2632 2667 { 2633 2668 switch (pf->ptp.tx_interrupt_mode) { 2634 2669 case ICE_PTP_TX_INTERRUPT_NONE: 2635 2670 /* This device has the clock owner handle timestamps for it */ 2636 - return ICE_TX_TSTAMP_WORK_DONE; 2671 + return; 2637 2672 case ICE_PTP_TX_INTERRUPT_SELF: 2638 2673 /* This device handles its own timestamps */ 2639 - return ice_ptp_tx_tstamp(&pf->ptp.port.tx); 2674 + ice_ptp_process_tx_tstamp(&pf->ptp.port.tx); 2675 + return; 2640 2676 case ICE_PTP_TX_INTERRUPT_ALL: 2641 2677 /* This device handles timestamps for all ports */ 2642 - return ice_ptp_tx_tstamp_owner(pf); 2678 + ice_ptp_tx_tstamp_owner(pf); 2679 + return; 2643 2680 default: 2644 2681 WARN_ONCE(1, "Unexpected Tx timestamp interrupt mode %u\n", 2645 2682 pf->ptp.tx_interrupt_mode); 2646 - return ICE_TX_TSTAMP_WORK_DONE; 2683 + return; 2647 2684 } 2685 + } 2686 + 2687 + static bool ice_port_has_timestamps(struct ice_ptp_tx *tx) 2688 + { 2689 + bool more_timestamps; 2690 + 2691 + scoped_guard(spinlock_irqsave, &tx->lock) { 2692 + if (!tx->init) 2693 + return false; 2694 + 2695 + more_timestamps = !bitmap_empty(tx->in_use, tx->len); 2696 + } 2697 + 2698 + return more_timestamps; 2699 + } 2700 + 2701 + static bool ice_any_port_has_timestamps(struct ice_pf *pf) 2702 + { 2703 + struct ice_ptp_port *port; 2704 + 2705 + scoped_guard(mutex, &pf->adapter->ports.lock) { 2706 + list_for_each_entry(port, &pf->adapter->ports.ports, 2707 + list_node) { 2708 + struct ice_ptp_tx *tx = &port->tx; 2709 + 2710 + if (ice_port_has_timestamps(tx)) 2711 + return true; 2712 + } 2713 + } 2714 + 2715 + return false; 2716 + } 2717 + 2718 + bool ice_ptp_tx_tstamps_pending(struct ice_pf *pf) 2719 + { 2720 + struct ice_hw *hw = &pf->hw; 2721 + unsigned int i; 2722 + 2723 + /* Check software indicator */ 2724 + switch (pf->ptp.tx_interrupt_mode) { 2725 + case ICE_PTP_TX_INTERRUPT_NONE: 2726 + return false; 2727 + case ICE_PTP_TX_INTERRUPT_SELF: 2728 + if (ice_port_has_timestamps(&pf->ptp.port.tx)) 2729 + return true; 2730 + break; 2731 + case ICE_PTP_TX_INTERRUPT_ALL: 2732 + if (ice_any_port_has_timestamps(pf)) 2733 + return true; 2734 + break; 2735 + default: 2736 + WARN_ONCE(1, "Unexpected Tx timestamp interrupt mode %u\n", 2737 + pf->ptp.tx_interrupt_mode); 2738 + break; 2739 + } 2740 + 2741 + /* Check hardware indicator */ 2742 + for (i = 0; i < ICE_GET_QUAD_NUM(hw->ptp.num_lports); i++) { 2743 + u64 tstamp_ready = 0; 2744 + int err; 2745 + 2746 + err = ice_get_phy_tx_tstamp_ready(&pf->hw, i, &tstamp_ready); 2747 + if (err || tstamp_ready) 2748 + return true; 2749 + } 2750 + 2751 + return false; 2648 2752 } 2649 2753 2650 2754 /** ··· 2758 2738 return IRQ_WAKE_THREAD; 2759 2739 case ICE_MAC_E830: 2760 2740 /* E830 can read timestamps in the top half using rd32() */ 2761 - if (ice_ptp_process_ts(pf) == ICE_TX_TSTAMP_WORK_PENDING) { 2741 + ice_ptp_process_ts(pf); 2742 + 2743 + if (ice_ptp_tx_tstamps_pending(pf)) { 2762 2744 /* Process outstanding Tx timestamps. If there 2763 2745 * is more work, re-arm the interrupt to trigger again. 2764 2746 */ ··· 2840 2818 } 2841 2819 2842 2820 /** 2821 + * ice_ptp_queue_work - Queue PTP periodic work for a PF 2822 + * @pf: Board private structure 2823 + * 2824 + * Helper function to queue PTP periodic work after VSI rebuild completes. 2825 + * This ensures that PTP work only runs when VSI structures are ready. 
2826 + */ 2827 + void ice_ptp_queue_work(struct ice_pf *pf) 2828 + { 2829 + if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags) && 2830 + pf->ptp.state == ICE_PTP_READY) 2831 + kthread_queue_delayed_work(pf->ptp.kworker, &pf->ptp.work, 0); 2832 + } 2833 + 2834 + /** 2843 2835 * ice_ptp_prepare_rebuild_sec - Prepare second NAC for PTP reset or rebuild 2844 2836 * @pf: Board private structure 2845 2837 * @rebuild: rebuild if true, prepare if false ··· 2871 2835 struct ice_pf *peer_pf = ptp_port_to_pf(port); 2872 2836 2873 2837 if (!ice_is_primary(&peer_pf->hw)) { 2874 - if (rebuild) 2838 + if (rebuild) { 2839 + /* TODO: When implementing rebuild=true: 2840 + * 1. Ensure secondary PFs' VSIs are rebuilt 2841 + * 2. Call ice_ptp_queue_work(peer_pf) after VSI rebuild 2842 + */ 2875 2843 ice_ptp_rebuild(peer_pf, reset_type); 2876 - else 2844 + } else { 2877 2845 ice_ptp_prepare_for_reset(peer_pf, reset_type); 2846 + } 2878 2847 } 2879 2848 } 2880 2849 } ··· 3024 2983 } 3025 2984 3026 2985 ptp->state = ICE_PTP_READY; 3027 - 3028 - /* Start periodic work going */ 3029 - kthread_queue_delayed_work(ptp->kworker, &ptp->work, 0); 3030 2986 3031 2987 dev_info(ice_pf_to_dev(pf), "PTP reset successful\n"); 3032 2988 return; ··· 3229 3191 { 3230 3192 switch (pf->hw.mac_type) { 3231 3193 case ICE_MAC_GENERIC: 3232 - /* E822 based PHY has the clock owner process the interrupt 3233 - * for all ports. 3194 + case ICE_MAC_GENERIC_3K_E825: 3195 + /* E82x hardware has the clock owner process timestamps for 3196 + * all ports. 3234 3197 */ 3235 3198 if (ice_pf_src_tmr_owned(pf)) 3236 3199 pf->ptp.tx_interrupt_mode = ICE_PTP_TX_INTERRUPT_ALL;
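The pending-work helpers added above rely on the kernel's scoped_guard() from <linux/cleanup.h>, which ties the lock to the block scope so that every early return drops it automatically. A minimal sketch of that pattern, using hypothetical names (tstamp_tracker, tracker_has_work), not the driver's:

#include <linux/bitmap.h>
#include <linux/cleanup.h>
#include <linux/spinlock.h>

/* Hypothetical tracker, used only to illustrate scoped_guard(). */
struct tstamp_tracker {
	spinlock_t lock;
	bool init;
	unsigned long *in_use;
	unsigned int len;
};

static bool tracker_has_work(struct tstamp_tracker *t)
{
	/* The lock is released automatically on every exit from the scope. */
	scoped_guard(spinlock_irqsave, &t->lock) {
		if (!t->init)
			return false;
		return !bitmap_empty(t->in_use, t->len);
	}
	return false;	/* not reachable; keeps the compiler satisfied */
}

Compared with the open-coded spin_lock_irqsave()/spin_unlock_irqrestore() pair that the removed ice_ptp_tx_tstamp() used, there is no unlock to forget on an early-return path.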
+13 -5
drivers/net/ethernet/intel/ice/ice_ptp.h
··· 304 304 s8 ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb); 305 305 void ice_ptp_req_tx_single_tstamp(struct ice_ptp_tx *tx, u8 idx); 306 306 void ice_ptp_complete_tx_single_tstamp(struct ice_ptp_tx *tx); 307 - enum ice_tx_tstamp_work ice_ptp_process_ts(struct ice_pf *pf); 307 + void ice_ptp_process_ts(struct ice_pf *pf); 308 308 irqreturn_t ice_ptp_ts_irq(struct ice_pf *pf); 309 + bool ice_ptp_tx_tstamps_pending(struct ice_pf *pf); 309 310 u64 ice_ptp_read_src_clk_reg(struct ice_pf *pf, 310 311 struct ptp_system_timestamp *sts); 311 312 ··· 318 317 void ice_ptp_init(struct ice_pf *pf); 319 318 void ice_ptp_release(struct ice_pf *pf); 320 319 void ice_ptp_link_change(struct ice_pf *pf, bool linkup); 320 + void ice_ptp_queue_work(struct ice_pf *pf); 321 321 #else /* IS_ENABLED(CONFIG_PTP_1588_CLOCK) */ 322 322 323 323 static inline int ice_ptp_hwtstamp_get(struct net_device *netdev, ··· 347 345 348 346 static inline void ice_ptp_complete_tx_single_tstamp(struct ice_ptp_tx *tx) { } 349 347 350 - static inline bool ice_ptp_process_ts(struct ice_pf *pf) 351 - { 352 - return true; 353 - } 348 + static inline void ice_ptp_process_ts(struct ice_pf *pf) { } 354 349 355 350 static inline irqreturn_t ice_ptp_ts_irq(struct ice_pf *pf) 356 351 { 357 352 return IRQ_HANDLED; 353 + } 354 + 355 + static inline bool ice_ptp_tx_tstamps_pending(struct ice_pf *pf) 356 + { 357 + return false; 358 358 } 359 359 360 360 static inline u64 ice_ptp_read_src_clk_reg(struct ice_pf *pf, ··· 384 380 static inline void ice_ptp_init(struct ice_pf *pf) { } 385 381 static inline void ice_ptp_release(struct ice_pf *pf) { } 386 382 static inline void ice_ptp_link_change(struct ice_pf *pf, bool linkup) 383 + { 384 + } 385 + 386 + static inline void ice_ptp_queue_work(struct ice_pf *pf) 387 387 { 388 388 } 389 389
+15 -6
drivers/net/ethernet/spacemit/k1_emac.c
··· 12 12 #include <linux/dma-mapping.h> 13 13 #include <linux/etherdevice.h> 14 14 #include <linux/ethtool.h> 15 + #include <linux/if_vlan.h> 15 16 #include <linux/interrupt.h> 16 17 #include <linux/io.h> 17 18 #include <linux/iopoll.h> ··· 39 38 40 39 #define EMAC_DEFAULT_BUFSIZE 1536 41 40 #define EMAC_RX_BUF_2K 2048 42 - #define EMAC_RX_BUF_4K 4096 41 + #define EMAC_RX_BUF_MAX FIELD_MAX(RX_DESC_1_BUFFER_SIZE_1_MASK) 43 42 44 43 /* Tuning parameters from SpacemiT */ 45 44 #define EMAC_TX_FRAMES 64 ··· 203 202 { 204 203 /* Destination address for 802.3x Ethernet flow control */ 205 204 u8 fc_dest_addr[ETH_ALEN] = { 0x01, 0x80, 0xc2, 0x00, 0x00, 0x01 }; 206 - 207 - u32 rxirq = 0, dma = 0; 205 + u32 rxirq = 0, dma = 0, frame_sz; 208 206 209 207 regmap_set_bits(priv->regmap_apmu, 210 208 priv->regmap_apmu_offset + APMU_EMAC_CTRL_REG, ··· 227 227 emac_wr(priv, MAC_TRANSMIT_PACKET_START_THRESHOLD, 228 228 DEFAULT_TX_THRESHOLD); 229 229 emac_wr(priv, MAC_RECEIVE_PACKET_START_THRESHOLD, DEFAULT_RX_THRESHOLD); 230 + 231 + /* Set maximum frame size and jabber size based on configured MTU, 232 + * accounting for Ethernet header, double VLAN tags, and FCS. 233 + */ 234 + frame_sz = priv->ndev->mtu + ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN; 235 + 236 + emac_wr(priv, MAC_MAXIMUM_FRAME_SIZE, frame_sz); 237 + emac_wr(priv, MAC_TRANSMIT_JABBER_SIZE, frame_sz); 238 + emac_wr(priv, MAC_RECEIVE_JABBER_SIZE, frame_sz); 230 239 231 240 /* Configure flow control (enabled in emac_adjust_link() later) */ 232 241 emac_set_mac_addr_reg(priv, fc_dest_addr, MAC_FC_SOURCE_ADDRESS_HIGH); ··· 933 924 return -EBUSY; 934 925 } 935 926 936 - frame_len = mtu + ETH_HLEN + ETH_FCS_LEN; 927 + frame_len = mtu + ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN; 937 928 938 929 if (frame_len <= EMAC_DEFAULT_BUFSIZE) 939 930 priv->dma_buf_sz = EMAC_DEFAULT_BUFSIZE; 940 931 else if (frame_len <= EMAC_RX_BUF_2K) 941 932 priv->dma_buf_sz = EMAC_RX_BUF_2K; 942 933 else 943 - priv->dma_buf_sz = EMAC_RX_BUF_4K; 934 + priv->dma_buf_sz = EMAC_RX_BUF_MAX; 944 935 945 936 ndev->mtu = mtu; 946 937 ··· 2034 2025 ndev->hw_features = NETIF_F_SG; 2035 2026 ndev->features |= ndev->hw_features; 2036 2027 2037 - ndev->max_mtu = EMAC_RX_BUF_4K - (ETH_HLEN + ETH_FCS_LEN); 2028 + ndev->max_mtu = EMAC_RX_BUF_MAX - (ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN); 2038 2029 ndev->pcpu_stat_type = NETDEV_PCPU_STAT_DSTATS; 2039 2030 2040 2031 priv = netdev_priv(ndev);
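A quick worked example of the frame-size budget introduced here, assuming the default 1500-byte MTU: 1500 + ETH_HLEN (14) + 2 * VLAN_HLEN (2 * 4, allowing double-tagged frames) + ETH_FCS_LEN (4) = 1526 bytes. That still fits a single EMAC_DEFAULT_BUFSIZE (1536) buffer, so the 2K and FIELD_MAX-derived EMAC_RX_BUF_MAX sizes only come into play for larger MTUs. The same sum now feeds the MAC frame/jabber limits and the RX buffer sizing, which keeps the two from drifting apart.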
+2 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 8042 8042 u32 chan; 8043 8043 8044 8044 if (!ndev || !netif_running(ndev)) 8045 - return 0; 8045 + goto suspend_bsp; 8046 8046 8047 8047 mutex_lock(&priv->lock); 8048 8048 ··· 8082 8082 if (stmmac_fpe_supported(priv)) 8083 8083 ethtool_mmsv_stop(&priv->fpe_cfg.mmsv); 8084 8084 8085 + suspend_bsp: 8085 8086 if (priv->plat->suspend) 8086 8087 return priv->plat->suspend(dev, priv->plat->bsp_priv); 8087 8088
+35 -6
drivers/net/ethernet/ti/cpsw.c
··· 305 305 return 0; 306 306 } 307 307 308 - static void cpsw_ndo_set_rx_mode(struct net_device *ndev) 308 + static void cpsw_ndo_set_rx_mode_work(struct work_struct *work) 309 309 { 310 - struct cpsw_priv *priv = netdev_priv(ndev); 310 + struct cpsw_priv *priv = container_of(work, struct cpsw_priv, rx_mode_work); 311 311 struct cpsw_common *cpsw = priv->cpsw; 312 + struct net_device *ndev = priv->ndev; 312 313 int slave_port = -1; 314 + 315 + rtnl_lock(); 316 + if (!netif_running(ndev)) 317 + goto unlock_rtnl; 318 + 319 + netif_addr_lock_bh(ndev); 313 320 314 321 if (cpsw->data.dual_emac) 315 322 slave_port = priv->emac_port + 1; ··· 325 318 /* Enable promiscuous mode */ 326 319 cpsw_set_promiscious(ndev, true); 327 320 cpsw_ale_set_allmulti(cpsw->ale, IFF_ALLMULTI, slave_port); 328 - return; 321 + goto unlock_addr; 329 322 } else { 330 323 /* Disable promiscuous mode */ 331 324 cpsw_set_promiscious(ndev, false); ··· 338 331 /* add/remove mcast address either for real netdev or for vlan */ 339 332 __hw_addr_ref_sync_dev(&ndev->mc, ndev, cpsw_add_mc_addr, 340 333 cpsw_del_mc_addr); 334 + 335 + unlock_addr: 336 + netif_addr_unlock_bh(ndev); 337 + unlock_rtnl: 338 + rtnl_unlock(); 339 + } 340 + 341 + static void cpsw_ndo_set_rx_mode(struct net_device *ndev) 342 + { 343 + struct cpsw_priv *priv = netdev_priv(ndev); 344 + 345 + schedule_work(&priv->rx_mode_work); 341 346 } 342 347 343 348 static unsigned int cpsw_rxbuf_total_len(unsigned int len) ··· 1491 1472 priv_sl2->ndev = ndev; 1492 1473 priv_sl2->dev = &ndev->dev; 1493 1474 priv_sl2->msg_enable = netif_msg_init(debug_level, CPSW_DEBUG); 1475 + INIT_WORK(&priv_sl2->rx_mode_work, cpsw_ndo_set_rx_mode_work); 1494 1476 1495 1477 if (is_valid_ether_addr(data->slave_data[1].mac_addr)) { 1496 1478 memcpy(priv_sl2->mac_addr, data->slave_data[1].mac_addr, ··· 1673 1653 priv->dev = dev; 1674 1654 priv->msg_enable = netif_msg_init(debug_level, CPSW_DEBUG); 1675 1655 priv->emac_port = 0; 1656 + INIT_WORK(&priv->rx_mode_work, cpsw_ndo_set_rx_mode_work); 1676 1657 1677 1658 if (is_valid_ether_addr(data->slave_data[0].mac_addr)) { 1678 1659 memcpy(priv->mac_addr, data->slave_data[0].mac_addr, ETH_ALEN); ··· 1779 1758 static void cpsw_remove(struct platform_device *pdev) 1780 1759 { 1781 1760 struct cpsw_common *cpsw = platform_get_drvdata(pdev); 1761 + struct net_device *ndev; 1762 + struct cpsw_priv *priv; 1782 1763 int i, ret; 1783 1764 1784 1765 ret = pm_runtime_resume_and_get(&pdev->dev); ··· 1793 1770 return; 1794 1771 } 1795 1772 1796 - for (i = 0; i < cpsw->data.slaves; i++) 1797 - if (cpsw->slaves[i].ndev) 1798 - unregister_netdev(cpsw->slaves[i].ndev); 1773 + for (i = 0; i < cpsw->data.slaves; i++) { 1774 + ndev = cpsw->slaves[i].ndev; 1775 + if (!ndev) 1776 + continue; 1777 + 1778 + priv = netdev_priv(ndev); 1779 + unregister_netdev(ndev); 1780 + disable_work_sync(&priv->rx_mode_work); 1781 + } 1799 1782 1800 1783 cpts_release(cpsw->cpts); 1801 1784 cpdma_ctlr_destroy(cpsw->dma);
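.ndo_set_rx_mode() is invoked with the netdev address-list lock held, i.e. in atomic context, so it cannot take rtnl_lock() or otherwise sleep; deferring the actual filter reprogramming to a work item is the standard way out, and that is what this hunk does. A stripped-down sketch of the pattern with hypothetical foo_* names:

#include <linux/netdevice.h>
#include <linux/rtnetlink.h>
#include <linux/workqueue.h>

struct foo_priv {
	struct net_device *ndev;
	struct work_struct rx_mode_work;
};

static void foo_rx_mode_work(struct work_struct *work)
{
	struct foo_priv *priv = container_of(work, struct foo_priv,
					     rx_mode_work);

	rtnl_lock();
	if (netif_running(priv->ndev)) {
		netif_addr_lock_bh(priv->ndev);
		/* ... reprogram unicast/multicast filters here ... */
		netif_addr_unlock_bh(priv->ndev);
	}
	rtnl_unlock();
}

static void foo_set_rx_mode(struct net_device *ndev)
{
	struct foo_priv *priv = netdev_priv(ndev);

	/* Atomic context: just kick the work and return. */
	schedule_work(&priv->rx_mode_work);
}

The matching teardown is what the hunk adds to cpsw_remove(): unregister_netdev() first, then disable_work_sync(), so a late schedule_work() can neither run nor re-arm against a dead netdev.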
+29 -5
drivers/net/ethernet/ti/cpsw_new.c
··· 248 248 return 0; 249 249 } 250 250 251 - static void cpsw_ndo_set_rx_mode(struct net_device *ndev) 251 + static void cpsw_ndo_set_rx_mode_work(struct work_struct *work) 252 252 { 253 - struct cpsw_priv *priv = netdev_priv(ndev); 253 + struct cpsw_priv *priv = container_of(work, struct cpsw_priv, rx_mode_work); 254 254 struct cpsw_common *cpsw = priv->cpsw; 255 + struct net_device *ndev = priv->ndev; 255 256 257 + rtnl_lock(); 258 + if (!netif_running(ndev)) 259 + goto unlock_rtnl; 260 + 261 + netif_addr_lock_bh(ndev); 256 262 if (ndev->flags & IFF_PROMISC) { 257 263 /* Enable promiscuous mode */ 258 264 cpsw_set_promiscious(ndev, true); 259 265 cpsw_ale_set_allmulti(cpsw->ale, IFF_ALLMULTI, priv->emac_port); 260 - return; 266 + goto unlock_addr; 261 267 } 262 268 263 269 /* Disable promiscuous mode */ ··· 276 270 /* add/remove mcast address either for real netdev or for vlan */ 277 271 __hw_addr_ref_sync_dev(&ndev->mc, ndev, cpsw_add_mc_addr, 278 272 cpsw_del_mc_addr); 273 + 274 + unlock_addr: 275 + netif_addr_unlock_bh(ndev); 276 + unlock_rtnl: 277 + rtnl_unlock(); 278 + } 279 + 280 + static void cpsw_ndo_set_rx_mode(struct net_device *ndev) 281 + { 282 + struct cpsw_priv *priv = netdev_priv(ndev); 283 + 284 + schedule_work(&priv->rx_mode_work); 279 285 } 280 286 281 287 static unsigned int cpsw_rxbuf_total_len(unsigned int len) ··· 1416 1398 priv->msg_enable = netif_msg_init(debug_level, CPSW_DEBUG); 1417 1399 priv->emac_port = i + 1; 1418 1400 priv->tx_packet_min = CPSW_MIN_PACKET_SIZE; 1401 + INIT_WORK(&priv->rx_mode_work, cpsw_ndo_set_rx_mode_work); 1419 1402 1420 1403 if (is_valid_ether_addr(slave_data->mac_addr)) { 1421 1404 ether_addr_copy(priv->mac_addr, slave_data->mac_addr); ··· 1466 1447 1467 1448 static void cpsw_unregister_ports(struct cpsw_common *cpsw) 1468 1449 { 1450 + struct net_device *ndev; 1451 + struct cpsw_priv *priv; 1469 1452 int i = 0; 1470 1453 1471 1454 for (i = 0; i < cpsw->data.slaves; i++) { 1472 - if (!cpsw->slaves[i].ndev) 1455 + ndev = cpsw->slaves[i].ndev; 1456 + if (!ndev) 1473 1457 continue; 1474 1458 1475 - unregister_netdev(cpsw->slaves[i].ndev); 1459 + priv = netdev_priv(ndev); 1460 + unregister_netdev(ndev); 1461 + disable_work_sync(&priv->rx_mode_work); 1476 1462 } 1477 1463 } 1478 1464
+1
drivers/net/ethernet/ti/cpsw_priv.h
··· 391 391 u32 tx_packet_min; 392 392 struct cpsw_ale_ratelimit ale_bc_ratelimit; 393 393 struct cpsw_ale_ratelimit ale_mc_ratelimit; 394 + struct work_struct rx_mode_work; 394 395 }; 395 396 396 397 #define ndev_to_cpsw(ndev) (((struct cpsw_priv *)netdev_priv(ndev))->cpsw)
+3 -2
drivers/net/macvlan.c
··· 1567 1567 /* the macvlan port may be freed by macvlan_uninit when fail to register. 1568 1568 * so we destroy the macvlan port only when it's valid. 1569 1569 */ 1570 - if (create && macvlan_port_get_rtnl(lowerdev)) { 1570 + if (macvlan_port_get_rtnl(lowerdev)) { 1571 1571 macvlan_flush_sources(port, vlan); 1572 - macvlan_port_destroy(port->dev); 1572 + if (create) 1573 + macvlan_port_destroy(port->dev); 1573 1574 } 1574 1575 return err; 1575 1576 }
+2
drivers/net/phy/sfp.c
··· 479 479 linkmode_zero(caps->link_modes); 480 480 linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseX_Full_BIT, 481 481 caps->link_modes); 482 + phy_interface_zero(caps->interfaces); 483 + __set_bit(PHY_INTERFACE_MODE_1000BASEX, caps->interfaces); 482 484 } 483 485 484 486 #define SFP_QUIRK(_v, _p, _s, _f) \
+15 -14
drivers/net/usb/r8152.c
··· 8535 8535 usb_submit_urb(tp->intr_urb, GFP_NOIO); 8536 8536 } 8537 8537 8538 - /* If the device is RTL8152_INACCESSIBLE here then we should do a 8539 - * reset. This is important because the usb_lock_device_for_reset() 8540 - * that happens as a result of usb_queue_reset_device() will silently 8541 - * fail if the device was suspended or if too much time passed. 8542 - * 8543 - * NOTE: The device is locked here so we can directly do the reset. 8544 - * We don't need usb_lock_device_for_reset() because that's just a 8545 - * wrapper over device_lock() and device_resume() (which calls us) 8546 - * does that for us. 8547 - */ 8548 - if (test_bit(RTL8152_INACCESSIBLE, &tp->flags)) 8549 - usb_reset_device(tp->udev); 8550 - 8551 8538 return 0; 8552 8539 } 8553 8540 ··· 8645 8658 static int rtl8152_resume(struct usb_interface *intf) 8646 8659 { 8647 8660 struct r8152 *tp = usb_get_intfdata(intf); 8661 + bool runtime_resume = test_bit(SELECTIVE_SUSPEND, &tp->flags); 8648 8662 int ret; 8649 8663 8650 8664 mutex_lock(&tp->control); 8651 8665 8652 8666 rtl_reset_ocp_base(tp); 8653 8667 8654 - if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) 8668 + if (runtime_resume) 8655 8669 ret = rtl8152_runtime_resume(tp); 8656 8670 else 8657 8671 ret = rtl8152_system_resume(tp); 8658 8672 8659 8673 mutex_unlock(&tp->control); 8674 + 8675 + /* If the device is RTL8152_INACCESSIBLE here then we should do a 8676 + * reset. This is important because the usb_lock_device_for_reset() 8677 + * that happens as a result of usb_queue_reset_device() will silently 8678 + * fail if the device was suspended or if too much time passed. 8679 + * 8680 + * NOTE: The device is locked here so we can directly do the reset. 8681 + * We don't need usb_lock_device_for_reset() because that's just a 8682 + * wrapper over device_lock() and device_resume() (which calls us) 8683 + * does that for us. 8684 + */ 8685 + if (!runtime_resume && test_bit(RTL8152_INACCESSIBLE, &tp->flags)) 8686 + usb_reset_device(tp->udev); 8660 8687 8661 8688 return ret; 8662 8689 }
-2
drivers/net/wireless/intel/iwlwifi/mld/iface.c
··· 55 55 56 56 ieee80211_iter_keys(mld->hw, vif, iwl_mld_cleanup_keys_iter, NULL); 57 57 58 - wiphy_delayed_work_cancel(mld->wiphy, &mld_vif->mlo_scan_start_wk); 59 - 60 58 CLEANUP_STRUCT(mld_vif); 61 59 } 62 60
+2
drivers/net/wireless/intel/iwlwifi/mld/mac80211.c
··· 1759 1759 wiphy_work_cancel(mld->wiphy, &mld_vif->emlsr.unblock_tpt_wk); 1760 1760 wiphy_delayed_work_cancel(mld->wiphy, 1761 1761 &mld_vif->emlsr.check_tpt_wk); 1762 + wiphy_delayed_work_cancel(mld->wiphy, 1763 + &mld_vif->mlo_scan_start_wk); 1762 1764 1763 1765 iwl_mld_reset_cca_40mhz_workaround(mld, vif); 1764 1766 iwl_mld_smps_workaround(mld, vif, true);
+5 -1
drivers/net/wireless/intel/iwlwifi/mvm/d3.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 /* 3 - * Copyright (C) 2012-2014, 2018-2025 Intel Corporation 3 + * Copyright (C) 2012-2014, 2018-2026 Intel Corporation 4 4 * Copyright (C) 2013-2015 Intel Mobile Communications GmbH 5 5 * Copyright (C) 2016-2017 Intel Deutschland GmbH 6 6 */ ··· 3239 3239 3240 3240 IWL_DEBUG_WOWLAN(mvm, "Starting fast suspend flow\n"); 3241 3241 3242 + iwl_mvm_pause_tcm(mvm, true); 3243 + 3242 3244 mvm->fast_resume = true; 3243 3245 set_bit(IWL_MVM_STATUS_IN_D3, &mvm->status); 3244 3246 ··· 3296 3294 IWL_ERR(mvm, "Couldn't get the d3 notif %d\n", ret); 3297 3295 mvm->trans->state = IWL_TRANS_NO_FW; 3298 3296 } 3297 + 3298 + iwl_mvm_resume_tcm(mvm); 3299 3299 3300 3300 out: 3301 3301 clear_bit(IWL_MVM_STATUS_IN_D3, &mvm->status);
+30 -15
drivers/nvme/host/pci.c
··· 816 816 nvme_free_descriptors(req); 817 817 } 818 818 819 + static bool nvme_pci_prp_save_mapping(struct request *req, 820 + struct device *dma_dev, 821 + struct blk_dma_iter *iter) 822 + { 823 + struct nvme_iod *iod = blk_mq_rq_to_pdu(req); 824 + 825 + if (dma_use_iova(&iod->dma_state) || !dma_need_unmap(dma_dev)) 826 + return true; 827 + 828 + if (!iod->nr_dma_vecs) { 829 + struct nvme_queue *nvmeq = req->mq_hctx->driver_data; 830 + 831 + iod->dma_vecs = mempool_alloc(nvmeq->dev->dmavec_mempool, 832 + GFP_ATOMIC); 833 + if (!iod->dma_vecs) { 834 + iter->status = BLK_STS_RESOURCE; 835 + return false; 836 + } 837 + } 838 + 839 + iod->dma_vecs[iod->nr_dma_vecs].addr = iter->addr; 840 + iod->dma_vecs[iod->nr_dma_vecs].len = iter->len; 841 + iod->nr_dma_vecs++; 842 + return true; 843 + } 844 + 819 845 static bool nvme_pci_prp_iter_next(struct request *req, struct device *dma_dev, 820 846 struct blk_dma_iter *iter) 821 847 { ··· 851 825 return true; 852 826 if (!blk_rq_dma_map_iter_next(req, dma_dev, &iod->dma_state, iter)) 853 827 return false; 854 - if (!dma_use_iova(&iod->dma_state) && dma_need_unmap(dma_dev)) { 855 - iod->dma_vecs[iod->nr_dma_vecs].addr = iter->addr; 856 - iod->dma_vecs[iod->nr_dma_vecs].len = iter->len; 857 - iod->nr_dma_vecs++; 858 - } 859 - return true; 828 + return nvme_pci_prp_save_mapping(req, dma_dev, iter); 860 829 } 861 830 862 831 static blk_status_t nvme_pci_setup_data_prp(struct request *req, ··· 864 843 unsigned int prp_len, i; 865 844 __le64 *prp_list; 866 845 867 - if (!dma_use_iova(&iod->dma_state) && dma_need_unmap(nvmeq->dev->dev)) { 868 - iod->dma_vecs = mempool_alloc(nvmeq->dev->dmavec_mempool, 869 - GFP_ATOMIC); 870 - if (!iod->dma_vecs) 871 - return BLK_STS_RESOURCE; 872 - iod->dma_vecs[0].addr = iter->addr; 873 - iod->dma_vecs[0].len = iter->len; 874 - iod->nr_dma_vecs = 1; 875 - } 846 + if (!nvme_pci_prp_save_mapping(req, nvmeq->dev->dev, iter)) 847 + return iter->status; 876 848 877 849 /* 878 850 * PRP1 always points to the start of the DMA transfers. ··· 1233 1219 iod->nr_descriptors = 0; 1234 1220 iod->total_len = 0; 1235 1221 iod->meta_total_len = 0; 1222 + iod->nr_dma_vecs = 0; 1236 1223 1237 1224 ret = nvme_setup_cmd(req->q->queuedata, req); 1238 1225 if (ret)
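The new nvme_pci_prp_save_mapping() helper consolidates a lazy-allocation idiom: the dma_vecs array is only pulled from the mempool on the first vector that actually needs unmapping, with GFP_ATOMIC because this runs in the I/O submission path. A minimal sketch of that idiom, using illustrative names (struct vec, save_vec) rather than the driver's:

#include <linux/mempool.h>
#include <linux/types.h>

struct vec { u64 addr; u32 len; };

static bool save_vec(struct vec **vecs, unsigned int *nr,
		     mempool_t *pool, u64 addr, u32 len)
{
	if (!*nr) {
		/* First entry: allocate the backing array on demand. */
		*vecs = mempool_alloc(pool, GFP_ATOMIC);
		if (!*vecs)
			return false;	/* caller turns this into BLK_STS_RESOURCE */
	}
	(*vecs)[*nr].addr = addr;
	(*vecs)[*nr].len = len;
	(*nr)++;
	return true;
}

Note the matching one-line change at request setup (iod->nr_dma_vecs = 0): the lazy allocation keys off the counter being zero, so it has to be reset for every request.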
+17
drivers/nvme/target/tcp.c
··· 349 349 cmd->req.sg = NULL; 350 350 } 351 351 352 + static void nvmet_tcp_fatal_error(struct nvmet_tcp_queue *queue); 353 + 352 354 static void nvmet_tcp_build_pdu_iovec(struct nvmet_tcp_cmd *cmd) 353 355 { 354 356 struct bio_vec *iov = cmd->iov; 355 357 struct scatterlist *sg; 356 358 u32 length, offset, sg_offset; 359 + unsigned int sg_remaining; 357 360 int nr_pages; 358 361 359 362 length = cmd->pdu_len; ··· 364 361 offset = cmd->rbytes_done; 365 362 cmd->sg_idx = offset / PAGE_SIZE; 366 363 sg_offset = offset % PAGE_SIZE; 364 + if (!cmd->req.sg_cnt || cmd->sg_idx >= cmd->req.sg_cnt) { 365 + nvmet_tcp_fatal_error(cmd->queue); 366 + return; 367 + } 367 368 sg = &cmd->req.sg[cmd->sg_idx]; 369 + sg_remaining = cmd->req.sg_cnt - cmd->sg_idx; 368 370 369 371 while (length) { 372 + if (!sg_remaining) { 373 + nvmet_tcp_fatal_error(cmd->queue); 374 + return; 375 + } 376 + if (!sg->length || sg->length <= sg_offset) { 377 + nvmet_tcp_fatal_error(cmd->queue); 378 + return; 379 + } 370 380 u32 iov_len = min_t(u32, length, sg->length - sg_offset); 371 381 372 382 bvec_set_page(iov, sg_page(sg), iov_len, ··· 387 371 388 372 length -= iov_len; 389 373 sg = sg_next(sg); 374 + sg_remaining--; 390 375 iov++; 391 376 sg_offset = 0; 392 377 }
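Each added check fails the connection via nvmet_tcp_fatal_error() instead of letting a mismatched pdu_len walk off the end of the command's scatterlist. The general shape of such a bounded walk, as a sketch with hypothetical names and a simplified starting offset:

#include <linux/minmax.h>
#include <linux/scatterlist.h>
#include <linux/types.h>

/* Copy out of an sg list without trusting @len from the wire. */
static int bounded_sg_walk(struct scatterlist *sgl, unsigned int sg_cnt,
			   u32 offset, u32 len)
{
	struct scatterlist *sg = sgl;
	unsigned int remaining = sg_cnt;

	while (len) {
		u32 chunk;

		if (!remaining || !sg || offset >= sg->length)
			return -EPROTO;	/* malformed request: stop, don't deref */
		chunk = min(len, sg->length - offset);
		/* ... consume chunk bytes at offset into sg ... */
		len -= chunk;
		offset = 0;
		sg = sg_next(sg);
		remaining--;
	}
	return 0;
}

The real code additionally has to start mid-list (sg_idx derived from rbytes_done), which is why it validates cmd->sg_idx against cmd->req.sg_cnt before even taking the first element.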
+3 -7
drivers/pci/ide.c
··· 11 11 #include <linux/pci_regs.h> 12 12 #include <linux/slab.h> 13 13 #include <linux/sysfs.h> 14 - #include <linux/tsm.h> 15 14 16 15 #include "pci.h" 17 16 ··· 167 168 for (u16 i = 0; i < nr_streams; i++) { 168 169 int pos = __sel_ide_offset(ide_cap, nr_link_ide, i, nr_ide_mem); 169 170 170 - pci_read_config_dword(pdev, pos + PCI_IDE_SEL_CAP, &val); 171 + pci_read_config_dword(pdev, pos + PCI_IDE_SEL_CTL, &val); 171 172 if (val & PCI_IDE_SEL_CTL_EN) 172 173 continue; 173 174 val &= ~PCI_IDE_SEL_CTL_ID; ··· 282 283 /* for SR-IOV case, cover all VFs */ 283 284 num_vf = pci_num_vf(pdev); 284 285 if (num_vf) 285 - rid_end = PCI_DEVID(pci_iov_virtfn_bus(pdev, num_vf), 286 - pci_iov_virtfn_devfn(pdev, num_vf)); 286 + rid_end = PCI_DEVID(pci_iov_virtfn_bus(pdev, num_vf - 1), 287 + pci_iov_virtfn_devfn(pdev, num_vf - 1)); 287 288 else 288 289 rid_end = pci_dev_id(pdev); 289 290 ··· 371 372 372 373 if (ide->partner[PCI_IDE_EP].enable) 373 374 pci_ide_stream_disable(pdev, ide); 374 - 375 - if (ide->tsm_dev) 376 - tsm_ide_stream_unregister(ide); 377 375 378 376 if (ide->partner[PCI_IDE_RP].setup) 379 377 pci_ide_stream_teardown(rp, ide);
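Two small but load-bearing fixes in this hunk. The register read: the loop wants the enable bit, which lives in the Selective IDE Stream control register, so it must read PCI_IDE_SEL_CTL rather than PCI_IDE_SEL_CAP. The SR-IOV range: pci_iov_virtfn_bus() and pci_iov_virtfn_devfn() take a zero-based VF index, so with num_vf VFs the valid indices are 0 through num_vf - 1; the old code computed rid_end from a VF one past the end (with num_vf = 4 it asked for index 4 when the last VF is index 3), which would presumably widen the stream's requester-ID range beyond the devices that actually exist.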
+7
drivers/platform/x86/amd/pmc/pmc-quirks.c
··· 302 302 DMI_MATCH(DMI_BOARD_NAME, "XxKK4NAx_XxSP4NAx"), 303 303 } 304 304 }, 305 + { 306 + .ident = "MECHREVO Wujie 15X Pro", 307 + .driver_data = &quirk_spurious_8042, 308 + .matches = { 309 + DMI_MATCH(DMI_BOARD_NAME, "WUJIE Series-X5SP4NAG"), 310 + } 311 + }, 305 312 {} 306 313 }; 307 314
+32
drivers/platform/x86/classmate-laptop.c
··· 207 207 208 208 acpi = to_acpi_device(dev); 209 209 inputdev = dev_get_drvdata(&acpi->dev); 210 + if (!inputdev) 211 + return -ENXIO; 212 + 210 213 accel = dev_get_drvdata(&inputdev->dev); 214 + if (!accel) 215 + return -ENXIO; 211 216 212 217 return sysfs_emit(buf, "%d\n", accel->sensitivity); 213 218 } ··· 229 224 230 225 acpi = to_acpi_device(dev); 231 226 inputdev = dev_get_drvdata(&acpi->dev); 227 + if (!inputdev) 228 + return -ENXIO; 229 + 232 230 accel = dev_get_drvdata(&inputdev->dev); 231 + if (!accel) 232 + return -ENXIO; 233 233 234 234 r = kstrtoul(buf, 0, &sensitivity); 235 235 if (r) ··· 266 256 267 257 acpi = to_acpi_device(dev); 268 258 inputdev = dev_get_drvdata(&acpi->dev); 259 + if (!inputdev) 260 + return -ENXIO; 261 + 269 262 accel = dev_get_drvdata(&inputdev->dev); 263 + if (!accel) 264 + return -ENXIO; 270 265 271 266 return sysfs_emit(buf, "%d\n", accel->g_select); 272 267 } ··· 288 273 289 274 acpi = to_acpi_device(dev); 290 275 inputdev = dev_get_drvdata(&acpi->dev); 276 + if (!inputdev) 277 + return -ENXIO; 278 + 291 279 accel = dev_get_drvdata(&inputdev->dev); 280 + if (!accel) 281 + return -ENXIO; 292 282 293 283 r = kstrtoul(buf, 0, &g_select); 294 284 if (r) ··· 322 302 323 303 acpi = to_acpi_device(input->dev.parent); 324 304 accel = dev_get_drvdata(&input->dev); 305 + if (!accel) 306 + return -ENXIO; 325 307 326 308 cmpc_accel_set_sensitivity_v4(acpi->handle, accel->sensitivity); 327 309 cmpc_accel_set_g_select_v4(acpi->handle, accel->g_select); ··· 571 549 572 550 acpi = to_acpi_device(dev); 573 551 inputdev = dev_get_drvdata(&acpi->dev); 552 + if (!inputdev) 553 + return -ENXIO; 554 + 574 555 accel = dev_get_drvdata(&inputdev->dev); 556 + if (!accel) 557 + return -ENXIO; 575 558 576 559 return sysfs_emit(buf, "%d\n", accel->sensitivity); 577 560 } ··· 593 566 594 567 acpi = to_acpi_device(dev); 595 568 inputdev = dev_get_drvdata(&acpi->dev); 569 + if (!inputdev) 570 + return -ENXIO; 571 + 596 572 accel = dev_get_drvdata(&inputdev->dev); 573 + if (!accel) 574 + return -ENXIO; 597 575 598 576 r = kstrtoul(buf, 0, &sensitivity); 599 577 if (r)
+5
drivers/platform/x86/hp/hp-bioscfg/bioscfg.c
··· 696 696 return ret; 697 697 } 698 698 699 + if (!str_value || !str_value[0]) { 700 + pr_debug("Ignoring attribute with empty name\n"); 701 + goto pack_attr_exit; 702 + } 703 + 699 704 /* All duplicate attributes found are ignored */ 700 705 duplicate = kset_find_obj(temp_kset, str_value); 701 706 if (duplicate) {
+1 -1
drivers/platform/x86/intel/plr_tpmi.c
··· 316 316 snprintf(name, sizeof(name), "domain%d", i); 317 317 318 318 dentry = debugfs_create_dir(name, plr->dbgfs_dir); 319 - debugfs_create_file("status", 0444, dentry, &plr->die_info[i], 319 + debugfs_create_file("status", 0644, dentry, &plr->die_info[i], 320 320 &plr_status_fops); 321 321 } 322 322
+2 -2
drivers/platform/x86/intel/telemetry/debugfs.c
··· 449 449 for (index = 0; index < debugfs_conf->pss_ltr_evts; index++) { 450 450 seq_printf(s, "%-32s\t%u\n", 451 451 debugfs_conf->pss_ltr_data[index].name, 452 - pss_s0ix_wakeup[index]); 452 + pss_ltr_blkd[index]); 453 453 } 454 454 455 455 seq_puts(s, "\n--------------------------------------\n"); ··· 459 459 for (index = 0; index < debugfs_conf->pss_wakeup_evts; index++) { 460 460 seq_printf(s, "%-32s\t%u\n", 461 461 debugfs_conf->pss_wakeup[index].name, 462 - pss_ltr_blkd[index]); 462 + pss_s0ix_wakeup[index]); 463 463 } 464 464 465 465 return 0;
+1 -1
drivers/platform/x86/intel/telemetry/pltdrv.c
··· 610 610 /* Get telemetry Info */ 611 611 events = (read_buf & TELEM_INFO_SRAMEVTS_MASK) >> 612 612 TELEM_INFO_SRAMEVTS_SHIFT; 613 - event_regs = read_buf & TELEM_INFO_SRAMEVTS_MASK; 613 + event_regs = read_buf & TELEM_INFO_NENABLES_MASK; 614 614 if ((events < TELEM_MAX_EVENTS_SRAM) || 615 615 (event_regs < TELEM_MAX_EVENTS_SRAM)) { 616 616 dev_err(&pdev->dev, "PSS:Insufficient Space for SRAM Trace\n");
+2
drivers/platform/x86/intel/vsec.c
··· 766 766 #define PCI_DEVICE_ID_INTEL_VSEC_LNL_M 0x647d 767 767 #define PCI_DEVICE_ID_INTEL_VSEC_PTL 0xb07d 768 768 #define PCI_DEVICE_ID_INTEL_VSEC_WCL 0xfd7d 769 + #define PCI_DEVICE_ID_INTEL_VSEC_NVL 0xd70d 769 770 static const struct pci_device_id intel_vsec_pci_ids[] = { 770 771 { PCI_DEVICE_DATA(INTEL, VSEC_ADL, &tgl_info) }, 771 772 { PCI_DEVICE_DATA(INTEL, VSEC_DG1, &dg1_info) }, ··· 779 778 { PCI_DEVICE_DATA(INTEL, VSEC_LNL_M, &lnl_info) }, 780 779 { PCI_DEVICE_DATA(INTEL, VSEC_PTL, &mtl_info) }, 781 780 { PCI_DEVICE_DATA(INTEL, VSEC_WCL, &mtl_info) }, 781 + { PCI_DEVICE_DATA(INTEL, VSEC_NVL, &mtl_info) }, 782 782 { } 783 783 }; 784 784 MODULE_DEVICE_TABLE(pci, intel_vsec_pci_ids);
+10 -1
drivers/platform/x86/lg-laptop.c
··· 838 838 case 'P': 839 839 year = 2021; 840 840 break; 841 - default: 841 + case 'Q': 842 842 year = 2022; 843 + break; 844 + case 'R': 845 + year = 2023; 846 + break; 847 + case 'S': 848 + year = 2024; 849 + break; 850 + default: 851 + year = 2025; 843 852 } 844 853 break; 845 854 default:
+3 -1
drivers/platform/x86/panasonic-laptop.c
··· 1089 1089 PLATFORM_DEVID_NONE, NULL, 0); 1090 1090 if (IS_ERR(pcc->platform)) { 1091 1091 result = PTR_ERR(pcc->platform); 1092 - goto out_backlight; 1092 + goto out_sysfs; 1093 1093 } 1094 1094 result = device_create_file(&pcc->platform->dev, 1095 1095 &dev_attr_cdpower); ··· 1105 1105 1106 1106 out_platform: 1107 1107 platform_device_unregister(pcc->platform); 1108 + out_sysfs: 1109 + sysfs_remove_group(&device->dev.kobj, &pcc_attr_group); 1108 1110 out_backlight: 1109 1111 backlight_device_unregister(pcc->backlight); 1110 1112 out_input:
+1 -1
drivers/platform/x86/toshiba_haps.c
··· 183 183 184 184 pr_info("Toshiba HDD Active Protection Sensor device\n"); 185 185 186 - haps = kzalloc(sizeof(struct toshiba_haps_dev), GFP_KERNEL); 186 + haps = devm_kzalloc(&acpi_dev->dev, sizeof(*haps), GFP_KERNEL); 187 187 if (!haps) 188 188 return -ENOMEM; 189 189
+2 -6
drivers/pmdomain/imx/gpcv2.c
··· 165 165 #define IMX8M_VPU_HSK_PWRDNREQN BIT(5) 166 166 #define IMX8M_DISP_HSK_PWRDNREQN BIT(4) 167 167 168 - #define IMX8MM_GPUMIX_HSK_PWRDNACKN BIT(29) 169 - #define IMX8MM_GPU_HSK_PWRDNACKN (BIT(27) | BIT(28)) 168 + #define IMX8MM_GPU_HSK_PWRDNACKN GENMASK(29, 27) 170 169 #define IMX8MM_VPUMIX_HSK_PWRDNACKN BIT(26) 171 170 #define IMX8MM_DISPMIX_HSK_PWRDNACKN BIT(25) 172 171 #define IMX8MM_HSIO_HSK_PWRDNACKN (BIT(23) | BIT(24)) 173 - #define IMX8MM_GPUMIX_HSK_PWRDNREQN BIT(11) 174 - #define IMX8MM_GPU_HSK_PWRDNREQN (BIT(9) | BIT(10)) 172 + #define IMX8MM_GPU_HSK_PWRDNREQN GENMASK(11, 9) 175 173 #define IMX8MM_VPUMIX_HSK_PWRDNREQN BIT(8) 176 174 #define IMX8MM_DISPMIX_HSK_PWRDNREQN BIT(7) 177 175 #define IMX8MM_HSIO_HSK_PWRDNREQN (BIT(5) | BIT(6)) ··· 792 794 .bits = { 793 795 .pxx = IMX8MM_GPUMIX_SW_Pxx_REQ, 794 796 .map = IMX8MM_GPUMIX_A53_DOMAIN, 795 - .hskreq = IMX8MM_GPUMIX_HSK_PWRDNREQN, 796 - .hskack = IMX8MM_GPUMIX_HSK_PWRDNACKN, 797 797 }, 798 798 .pgc = BIT(IMX8MM_PGC_GPUMIX), 799 799 .keep_clocks = true,
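The mask consolidation is plain bit arithmetic: BIT(27) | BIT(28) is 0x18000000 and BIT(29) is 0x20000000, a contiguous run equal to GENMASK(29, 27) = 0x38000000; likewise BIT(9) | BIT(10) | BIT(11) = 0xe00 = GENMASK(11, 9). Folding the GPUMIX bit into the GPU handshake masks is what lets the separate IMX8MM_GPUMIX_HSK_* definitions, and their .hskreq/.hskack uses in the gpumix domain below, go away.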
+1 -1
drivers/pmdomain/imx/imx8m-blk-ctrl.c
··· 340 340 341 341 of_genpd_del_provider(pdev->dev.of_node); 342 342 343 - for (i = 0; bc->onecell_data.num_domains; i++) { 343 + for (i = 0; i < bc->onecell_data.num_domains; i++) { 344 344 struct imx8m_blk_ctrl_domain *domain = &bc->domains[i]; 345 345 346 346 pm_genpd_remove(&domain->genpd);
+30
drivers/pmdomain/imx/imx8mp-blk-ctrl.c
··· 53 53 const char * const *path_names; 54 54 int num_paths; 55 55 const char *gpc_name; 56 + const unsigned int flags; 56 57 }; 57 58 58 59 #define DOMAIN_MAX_CLKS 3 ··· 66 65 struct icc_bulk_data paths[DOMAIN_MAX_PATHS]; 67 66 struct device *power_dev; 68 67 struct imx8mp_blk_ctrl *bc; 68 + struct notifier_block power_nb; 69 69 int num_paths; 70 70 int id; 71 71 }; ··· 266 264 [IMX8MP_HSIOBLK_PD_USB_PHY1] = { 267 265 .name = "hsioblk-usb-phy1", 268 266 .gpc_name = "usb-phy1", 267 + .flags = GENPD_FLAG_ACTIVE_WAKEUP, 269 268 }, 270 269 [IMX8MP_HSIOBLK_PD_USB_PHY2] = { 271 270 .name = "hsioblk-usb-phy2", 272 271 .gpc_name = "usb-phy2", 272 + .flags = GENPD_FLAG_ACTIVE_WAKEUP, 273 273 }, 274 274 [IMX8MP_HSIOBLK_PD_PCIE] = { 275 275 .name = "hsioblk-pcie", ··· 598 594 return 0; 599 595 } 600 596 597 + static int imx8mp_blk_ctrl_gpc_notifier(struct notifier_block *nb, 598 + unsigned long action, void *data) 599 + { 600 + struct imx8mp_blk_ctrl_domain *domain = 601 + container_of(nb, struct imx8mp_blk_ctrl_domain, power_nb); 602 + 603 + if (action == GENPD_NOTIFY_PRE_OFF) { 604 + if (domain->genpd.status == GENPD_STATE_ON) 605 + return NOTIFY_BAD; 606 + } 607 + 608 + return NOTIFY_OK; 609 + } 610 + 601 611 static struct lock_class_key blk_ctrl_genpd_lock_class; 602 612 603 613 static int imx8mp_blk_ctrl_probe(struct platform_device *pdev) ··· 716 698 goto cleanup_pds; 717 699 } 718 700 701 + domain->power_nb.notifier_call = imx8mp_blk_ctrl_gpc_notifier; 702 + ret = dev_pm_genpd_add_notifier(domain->power_dev, &domain->power_nb); 703 + if (ret) { 704 + dev_err_probe(dev, ret, "failed to add power notifier\n"); 705 + dev_pm_domain_detach(domain->power_dev, true); 706 + goto cleanup_pds; 707 + } 708 + 719 709 domain->genpd.name = data->name; 720 710 domain->genpd.power_on = imx8mp_blk_ctrl_power_on; 721 711 domain->genpd.power_off = imx8mp_blk_ctrl_power_off; 712 + domain->genpd.flags = data->flags; 722 713 domain->bc = bc; 723 714 domain->id = i; 724 715 725 716 ret = pm_genpd_init(&domain->genpd, NULL, true); 726 717 if (ret) { 727 718 dev_err_probe(dev, ret, "failed to init power domain\n"); 719 + dev_pm_genpd_remove_notifier(domain->power_dev); 728 720 dev_pm_domain_detach(domain->power_dev, true); 729 721 goto cleanup_pds; 730 722 } ··· 783 755 cleanup_pds: 784 756 for (i--; i >= 0; i--) { 785 757 pm_genpd_remove(&bc->domains[i].genpd); 758 + dev_pm_genpd_remove_notifier(bc->domains[i].power_dev); 786 759 dev_pm_domain_detach(bc->domains[i].power_dev, true); 787 760 } 788 761 ··· 803 774 struct imx8mp_blk_ctrl_domain *domain = &bc->domains[i]; 804 775 805 776 pm_genpd_remove(&domain->genpd); 777 + dev_pm_genpd_remove_notifier(domain->power_dev); 806 778 dev_pm_domain_detach(domain->power_dev, true); 807 779 } 808 780
+1 -1
drivers/pmdomain/qcom/rpmpd.c
··· 1001 1001 1002 1002 /* Clamp to the highest corner/level if sync_state isn't done yet */ 1003 1003 if (!pd->state_synced) 1004 - this_active_corner = this_sleep_corner = pd->max_state - 1; 1004 + this_active_corner = this_sleep_corner = pd->max_state; 1005 1005 else 1006 1006 to_active_sleep(pd, pd->corner, &this_active_corner, &this_sleep_corner); 1007 1007
+3 -3
drivers/regulator/spacemit-p1.c
··· 87 87 } 88 88 89 89 #define P1_BUCK_DESC(_n) \ 90 - P1_REG_DESC(BUCK, buck, _n, "vin", 0x47, BUCK_MASK, 254, p1_buck_ranges) 90 + P1_REG_DESC(BUCK, buck, _n, "vin", 0x47, BUCK_MASK, 255, p1_buck_ranges) 91 91 92 92 #define P1_ALDO_DESC(_n) \ 93 - P1_REG_DESC(ALDO, aldo, _n, "vin", 0x5b, LDO_MASK, 117, p1_ldo_ranges) 93 + P1_REG_DESC(ALDO, aldo, _n, "vin", 0x5b, LDO_MASK, 128, p1_ldo_ranges) 94 94 95 95 #define P1_DLDO_DESC(_n) \ 96 - P1_REG_DESC(DLDO, dldo, _n, "buck5", 0x67, LDO_MASK, 117, p1_ldo_ranges) 96 + P1_REG_DESC(DLDO, dldo, _n, "buck5", 0x67, LDO_MASK, 128, p1_ldo_ranges) 97 97 98 98 static const struct regulator_desc p1_regulator_desc[] = { 99 99 P1_BUCK_DESC(1),
+3 -2
drivers/soc/qcom/smem.c
··· 396 396 */ 397 397 bool qcom_smem_is_available(void) 398 398 { 399 - return !!__smem; 399 + return !IS_ERR(__smem); 400 400 } 401 401 EXPORT_SYMBOL_GPL(qcom_smem_is_available); 402 402 ··· 1247 1247 { 1248 1248 platform_device_unregister(__smem->socinfo); 1249 1249 1250 - __smem = NULL; 1250 + /* Set to -EPROBE_DEFER to signal unprobed state */ 1251 + __smem = ERR_PTR(-EPROBE_DEFER); 1251 1252 } 1252 1253 1253 1254 static const struct of_device_id qcom_smem_of_match[] = {
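The sentinel change is the classic ERR_PTR idiom: instead of NULL, the global holds ERR_PTR(-EPROBE_DEFER) both before a successful probe and after remove, so qcom_smem_is_available() collapses to a single !IS_ERR() test. This assumes the variable's initializer is the same sentinel, which this hunk does not show. A sketch with hypothetical names:

#include <linux/err.h>

struct foo_ctx;				/* opaque, hypothetical */
static struct foo_ctx *foo_ctx;		/* must hold the sentinel before any
					 * consumer can observe it, because
					 * !IS_ERR(NULL) reads as "available" */

static bool foo_available(void)
{
	return !IS_ERR(foo_ctx);
}

static void foo_remove(void)
{
	foo_ctx = ERR_PTR(-EPROBE_DEFER);	/* back to "not probed" */
}

Storing -EPROBE_DEFER specifically also lets future callers propagate a meaningful error code instead of inventing one from a bare NULL.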
+3
drivers/spi/spi-tegra114.c
··· 978 978 if (spi_get_csgpiod(spi, 0)) 979 979 gpiod_set_value(spi_get_csgpiod(spi, 0), 0); 980 980 981 + /* Update default register to include CS polarity and SPI mode */ 981 982 val = tspi->def_command1_reg; 982 983 if (spi->mode & SPI_CS_HIGH) 983 984 val &= ~SPI_CS_POL_INACTIVE(spi_get_chipselect(spi, 0)); 984 985 else 985 986 val |= SPI_CS_POL_INACTIVE(spi_get_chipselect(spi, 0)); 987 + val &= ~SPI_CONTROL_MODE_MASK; 988 + val |= SPI_MODE_SEL(spi->mode & 0x3); 986 989 tspi->def_command1_reg = val; 987 990 tegra_spi_writel(tspi, tspi->def_command1_reg, SPI_COMMAND1); 988 991 spin_unlock_irqrestore(&tspi->lock, flags);
+4 -2
drivers/spi/spi-tegra20-slink.c
··· 1086 1086 reset_control_deassert(tspi->rst); 1087 1087 1088 1088 spi_irq = platform_get_irq(pdev, 0); 1089 - if (spi_irq < 0) 1090 - return spi_irq; 1089 + if (spi_irq < 0) { 1090 + ret = spi_irq; 1091 + goto exit_pm_put; 1092 + } 1091 1093 tspi->irq = spi_irq; 1092 1094 ret = request_threaded_irq(tspi->irq, tegra_slink_isr, 1093 1095 tegra_slink_isr_thread, IRQF_ONESHOT,
+52 -4
drivers/spi/spi-tegra210-quad.c
··· 839 839 u32 command1, command2, speed = t->speed_hz; 840 840 u8 bits_per_word = t->bits_per_word; 841 841 u32 tx_tap = 0, rx_tap = 0; 842 + unsigned long flags; 842 843 int req_mode; 843 844 844 845 if (!has_acpi_companion(tqspi->dev) && speed != tqspi->cur_speed) { ··· 847 846 tqspi->cur_speed = speed; 848 847 } 849 848 849 + spin_lock_irqsave(&tqspi->lock, flags); 850 850 tqspi->cur_pos = 0; 851 851 tqspi->cur_rx_pos = 0; 852 852 tqspi->cur_tx_pos = 0; 853 853 tqspi->curr_xfer = t; 854 + spin_unlock_irqrestore(&tqspi->lock, flags); 854 855 855 856 if (is_first_of_msg) { 856 857 tegra_qspi_mask_clear_irq(tqspi); ··· 1161 1158 u32 address_value = 0; 1162 1159 u32 cmd_config = 0, addr_config = 0; 1163 1160 u8 cmd_value = 0, val = 0; 1161 + unsigned long flags; 1164 1162 1165 1163 /* Enable Combined sequence mode */ 1166 1164 val = tegra_qspi_readl(tqspi, QSPI_GLOBAL_CONFIG); ··· 1265 1261 tegra_qspi_transfer_end(spi); 1266 1262 spi_transfer_delay_exec(xfer); 1267 1263 } 1264 + spin_lock_irqsave(&tqspi->lock, flags); 1268 1265 tqspi->curr_xfer = NULL; 1266 + spin_unlock_irqrestore(&tqspi->lock, flags); 1269 1267 transfer_phase++; 1270 1268 } 1271 1269 ret = 0; 1272 1270 1273 1271 exit: 1272 + spin_lock_irqsave(&tqspi->lock, flags); 1274 1273 tqspi->curr_xfer = NULL; 1274 + spin_unlock_irqrestore(&tqspi->lock, flags); 1275 1275 msg->status = ret; 1276 1276 1277 1277 return ret; ··· 1288 1280 struct spi_transfer *transfer; 1289 1281 bool is_first_msg = true; 1290 1282 int ret = 0, val = 0; 1283 + unsigned long flags; 1291 1284 1292 1285 msg->status = 0; 1293 1286 msg->actual_length = 0; ··· 1369 1360 msg->actual_length += xfer->len + dummy_bytes; 1370 1361 1371 1362 complete_xfer: 1363 + spin_lock_irqsave(&tqspi->lock, flags); 1372 1364 tqspi->curr_xfer = NULL; 1365 + spin_unlock_irqrestore(&tqspi->lock, flags); 1373 1366 1374 1367 if (ret < 0) { 1375 1368 tegra_qspi_transfer_end(spi); ··· 1451 1440 1452 1441 static irqreturn_t handle_cpu_based_xfer(struct tegra_qspi *tqspi) 1453 1442 { 1454 - struct spi_transfer *t = tqspi->curr_xfer; 1443 + struct spi_transfer *t; 1455 1444 unsigned long flags; 1456 1445 1457 1446 spin_lock_irqsave(&tqspi->lock, flags); 1447 + t = tqspi->curr_xfer; 1448 + 1449 + if (!t) { 1450 + spin_unlock_irqrestore(&tqspi->lock, flags); 1451 + return IRQ_HANDLED; 1452 + } 1458 1453 1459 1454 if (tqspi->tx_status || tqspi->rx_status) { 1460 1455 tegra_qspi_handle_error(tqspi); ··· 1491 1474 1492 1475 static irqreturn_t handle_dma_based_xfer(struct tegra_qspi *tqspi) 1493 1476 { 1494 - struct spi_transfer *t = tqspi->curr_xfer; 1477 + struct spi_transfer *t; 1495 1478 unsigned int total_fifo_words; 1496 1479 unsigned long flags; 1497 1480 long wait_status; ··· 1530 1513 } 1531 1514 1532 1515 spin_lock_irqsave(&tqspi->lock, flags); 1516 + t = tqspi->curr_xfer; 1517 + 1518 + if (!t) { 1519 + spin_unlock_irqrestore(&tqspi->lock, flags); 1520 + return IRQ_HANDLED; 1521 + } 1533 1522 1534 1523 if (num_errors) { 1535 1524 tegra_qspi_dma_unmap_xfer(tqspi, t); ··· 1575 1552 static irqreturn_t tegra_qspi_isr_thread(int irq, void *context_data) 1576 1553 { 1577 1554 struct tegra_qspi *tqspi = context_data; 1555 + unsigned long flags; 1556 + u32 status; 1557 + 1558 + /* 1559 + * Read transfer status to check if interrupt was triggered by transfer 1560 + * completion 1561 + */ 1562 + status = tegra_qspi_readl(tqspi, QSPI_TRANS_STATUS); 1578 1563 1579 1564 /* 1580 1565 * Occasionally the IRQ thread takes a long time to wake up (usually 1581 1566 * when the CPU that it's running on 
is excessively busy) and we have 1582 1567 * already reached the timeout before and cleaned up the timed out 1583 1568 * transfer. Avoid any processing in that case and bail out early. 1569 + * 1570 + * If no transfer is in progress, check if this was a real interrupt 1571 + * that the timeout handler already processed, or a spurious one. 1584 1572 */ 1585 - if (!tqspi->curr_xfer) 1586 - return IRQ_NONE; 1573 + spin_lock_irqsave(&tqspi->lock, flags); 1574 + if (!tqspi->curr_xfer) { 1575 + spin_unlock_irqrestore(&tqspi->lock, flags); 1576 + /* Spurious interrupt - transfer not ready */ 1577 + if (!(status & QSPI_RDY)) 1578 + return IRQ_NONE; 1579 + /* Real interrupt, already handled by timeout path */ 1580 + return IRQ_HANDLED; 1581 + } 1587 1582 1588 1583 tqspi->status_reg = tegra_qspi_readl(tqspi, QSPI_FIFO_STATUS); 1589 1584 ··· 1612 1571 tqspi->rx_status = tqspi->status_reg & (QSPI_RX_FIFO_OVF | QSPI_RX_FIFO_UNF); 1613 1572 1614 1573 tegra_qspi_mask_clear_irq(tqspi); 1574 + spin_unlock_irqrestore(&tqspi->lock, flags); 1615 1575 1576 + /* 1577 + * Lock is released here but handlers safely re-check curr_xfer under 1578 + * lock before dereferencing. 1579 + * DMA handler also needs to sleep in wait_for_completion_*(), which 1580 + * cannot be done while holding spinlock. 1581 + */ 1616 1582 if (!tqspi->is_curr_dma_xfer) 1617 1583 return handle_cpu_based_xfer(tqspi); 1618 1584
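The recurring theme in this hunk is that tqspi->curr_xfer can be cleared by the transfer-timeout path while the threaded ISR is still waking up, so the handlers now snapshot and validate the pointer under tqspi->lock instead of reading it unlocked. The shape of that re-check, sketched with hypothetical foo/xfer types:

#include <linux/interrupt.h>
#include <linux/spinlock.h>

struct xfer;				/* hypothetical transfer descriptor */
struct foo {
	spinlock_t lock;
	struct xfer *curr_xfer;
};

static irqreturn_t foo_isr_thread(int irq, void *data)
{
	struct foo *f = data;
	unsigned long flags;
	struct xfer *t;

	spin_lock_irqsave(&f->lock, flags);
	t = f->curr_xfer;		/* snapshot under the lock */
	if (!t) {
		/* Timeout path won the race and cleaned up already. */
		spin_unlock_irqrestore(&f->lock, flags);
		return IRQ_HANDLED;
	}
	/* ... latch status registers while still holding the lock ... */
	spin_unlock_irqrestore(&f->lock, flags);
	/* Any sleeping completion work must run unlocked and re-check. */
	return IRQ_HANDLED;
}

The DMA completion path cannot keep the spinlock across wait_for_completion_*(), which is exactly why the comment added at the end of tegra_qspi_isr_thread() spells out that both handlers re-validate curr_xfer after reacquiring the lock.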
+49 -53
drivers/usb/gadget/function/f_fs.c
··· 59 59 __attribute__((malloc)); 60 60 61 61 /* Opened counter handling. */ 62 - static void ffs_data_opened(struct ffs_data *ffs); 63 62 static void ffs_data_closed(struct ffs_data *ffs); 64 63 65 64 /* Called with ffs->mutex held; take over ownership of data. */ ··· 635 636 return ret; 636 637 } 637 638 639 + 640 + static void ffs_data_reset(struct ffs_data *ffs); 641 + 638 642 static int ffs_ep0_open(struct inode *inode, struct file *file) 639 643 { 640 644 struct ffs_data *ffs = inode->i_sb->s_fs_info; 641 - int ret; 642 645 643 - /* Acquire mutex */ 644 - ret = ffs_mutex_lock(&ffs->mutex, file->f_flags & O_NONBLOCK); 645 - if (ret < 0) 646 - return ret; 647 - 648 - ffs_data_opened(ffs); 646 + spin_lock_irq(&ffs->eps_lock); 649 647 if (ffs->state == FFS_CLOSING) { 650 - ffs_data_closed(ffs); 651 - mutex_unlock(&ffs->mutex); 648 + spin_unlock_irq(&ffs->eps_lock); 652 649 return -EBUSY; 653 650 } 654 - mutex_unlock(&ffs->mutex); 651 + if (!ffs->opened++ && ffs->state == FFS_DEACTIVATED) { 652 + ffs->state = FFS_CLOSING; 653 + spin_unlock_irq(&ffs->eps_lock); 654 + ffs_data_reset(ffs); 655 + } else { 656 + spin_unlock_irq(&ffs->eps_lock); 657 + } 655 658 file->private_data = ffs; 656 659 657 660 return stream_open(inode, file); ··· 1203 1202 { 1204 1203 struct ffs_data *ffs = inode->i_sb->s_fs_info; 1205 1204 struct ffs_epfile *epfile; 1206 - int ret; 1207 1205 1208 - /* Acquire mutex */ 1209 - ret = ffs_mutex_lock(&ffs->mutex, file->f_flags & O_NONBLOCK); 1210 - if (ret < 0) 1211 - return ret; 1212 - 1213 - if (!atomic_inc_not_zero(&ffs->opened)) { 1214 - mutex_unlock(&ffs->mutex); 1206 + spin_lock_irq(&ffs->eps_lock); 1207 + if (!ffs->opened) { 1208 + spin_unlock_irq(&ffs->eps_lock); 1215 1209 return -ENODEV; 1216 1210 } 1217 1211 /* ··· 1216 1220 */ 1217 1221 epfile = smp_load_acquire(&inode->i_private); 1218 1222 if (unlikely(ffs->state != FFS_ACTIVE || !epfile)) { 1219 - mutex_unlock(&ffs->mutex); 1220 - ffs_data_closed(ffs); 1223 + spin_unlock_irq(&ffs->eps_lock); 1221 1224 return -ENODEV; 1222 1225 } 1223 - mutex_unlock(&ffs->mutex); 1226 + ffs->opened++; 1227 + spin_unlock_irq(&ffs->eps_lock); 1224 1228 1225 1229 file->private_data = epfile; 1226 1230 return stream_open(inode, file); ··· 2088 2092 return 0; 2089 2093 } 2090 2094 2091 - static void ffs_data_reset(struct ffs_data *ffs); 2092 - 2093 2095 static void 2094 2096 ffs_fs_kill_sb(struct super_block *sb) 2095 2097 { ··· 2144 2150 refcount_inc(&ffs->ref); 2145 2151 } 2146 2152 2147 - static void ffs_data_opened(struct ffs_data *ffs) 2148 - { 2149 - if (atomic_add_return(1, &ffs->opened) == 1 && 2150 - ffs->state == FFS_DEACTIVATED) { 2151 - ffs->state = FFS_CLOSING; 2152 - ffs_data_reset(ffs); 2153 - } 2154 - } 2155 - 2156 2153 static void ffs_data_put(struct ffs_data *ffs) 2157 2154 { 2158 2155 if (refcount_dec_and_test(&ffs->ref)) { ··· 2161 2176 2162 2177 static void ffs_data_closed(struct ffs_data *ffs) 2163 2178 { 2164 - if (atomic_dec_and_test(&ffs->opened)) { 2165 - if (ffs->no_disconnect) { 2166 - struct ffs_epfile *epfiles; 2167 - unsigned long flags; 2179 + spin_lock_irq(&ffs->eps_lock); 2180 + if (--ffs->opened) { // not the last opener? 
2181 + spin_unlock_irq(&ffs->eps_lock); 2182 + return; 2183 + } 2184 + if (ffs->no_disconnect) { 2185 + struct ffs_epfile *epfiles; 2168 2186 2169 - ffs->state = FFS_DEACTIVATED; 2170 - spin_lock_irqsave(&ffs->eps_lock, flags); 2171 - epfiles = ffs->epfiles; 2172 - ffs->epfiles = NULL; 2173 - spin_unlock_irqrestore(&ffs->eps_lock, 2174 - flags); 2187 + ffs->state = FFS_DEACTIVATED; 2188 + epfiles = ffs->epfiles; 2189 + ffs->epfiles = NULL; 2190 + spin_unlock_irq(&ffs->eps_lock); 2175 2191 2176 - if (epfiles) 2177 - ffs_epfiles_destroy(ffs->sb, epfiles, 2178 - ffs->eps_count); 2192 + if (epfiles) 2193 + ffs_epfiles_destroy(ffs->sb, epfiles, 2194 + ffs->eps_count); 2179 2195 2180 - if (ffs->setup_state == FFS_SETUP_PENDING) 2181 - __ffs_ep0_stall(ffs); 2182 - } else { 2183 - ffs->state = FFS_CLOSING; 2184 - ffs_data_reset(ffs); 2185 - } 2196 + if (ffs->setup_state == FFS_SETUP_PENDING) 2197 + __ffs_ep0_stall(ffs); 2198 + } else { 2199 + ffs->state = FFS_CLOSING; 2200 + spin_unlock_irq(&ffs->eps_lock); 2201 + ffs_data_reset(ffs); 2186 2202 } 2187 2203 } 2188 2204 ··· 2200 2214 } 2201 2215 2202 2216 refcount_set(&ffs->ref, 1); 2203 - atomic_set(&ffs->opened, 0); 2217 + ffs->opened = 0; 2204 2218 ffs->state = FFS_READ_DESCRIPTORS; 2205 2219 mutex_init(&ffs->mutex); 2206 2220 spin_lock_init(&ffs->eps_lock); ··· 2252 2266 { 2253 2267 ffs_data_clear(ffs); 2254 2268 2269 + spin_lock_irq(&ffs->eps_lock); 2255 2270 ffs->raw_descs_data = NULL; 2256 2271 ffs->raw_descs = NULL; 2257 2272 ffs->raw_strings = NULL; ··· 2276 2289 ffs->ms_os_descs_ext_prop_count = 0; 2277 2290 ffs->ms_os_descs_ext_prop_name_len = 0; 2278 2291 ffs->ms_os_descs_ext_prop_data_len = 0; 2292 + spin_unlock_irq(&ffs->eps_lock); 2279 2293 } 2280 2294 2281 2295 ··· 3744 3756 { 3745 3757 struct ffs_function *func = ffs_func_from_usb(f); 3746 3758 struct ffs_data *ffs = func->ffs; 3759 + unsigned long flags; 3747 3760 int ret = 0, intf; 3748 3761 3749 3762 if (alt > MAX_ALT_SETTINGS) ··· 3757 3768 if (ffs->func) 3758 3769 ffs_func_eps_disable(ffs->func); 3759 3770 3771 + spin_lock_irqsave(&ffs->eps_lock, flags); 3760 3772 if (ffs->state == FFS_DEACTIVATED) { 3761 3773 ffs->state = FFS_CLOSING; 3774 + spin_unlock_irqrestore(&ffs->eps_lock, flags); 3762 3775 INIT_WORK(&ffs->reset_work, ffs_reset_work); 3763 3776 schedule_work(&ffs->reset_work); 3764 3777 return -ENODEV; 3765 3778 } 3779 + spin_unlock_irqrestore(&ffs->eps_lock, flags); 3766 3780 3767 3781 if (ffs->state != FFS_ACTIVE) 3768 3782 return -ENODEV; ··· 3783 3791 { 3784 3792 struct ffs_function *func = ffs_func_from_usb(f); 3785 3793 struct ffs_data *ffs = func->ffs; 3794 + unsigned long flags; 3786 3795 3787 3796 if (ffs->func) 3788 3797 ffs_func_eps_disable(ffs->func); 3789 3798 3799 + spin_lock_irqsave(&ffs->eps_lock, flags); 3790 3800 if (ffs->state == FFS_DEACTIVATED) { 3791 3801 ffs->state = FFS_CLOSING; 3802 + spin_unlock_irqrestore(&ffs->eps_lock, flags); 3792 3803 INIT_WORK(&ffs->reset_work, ffs_reset_work); 3793 3804 schedule_work(&ffs->reset_work); 3794 3805 return; 3795 3806 } 3807 + spin_unlock_irqrestore(&ffs->eps_lock, flags); 3796 3808 3797 3809 if (ffs->state == FFS_ACTIVE) { 3798 3810 ffs->func = NULL;
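Converting ffs->opened from atomic_t to a plain int works because every touch point now sits under ffs->eps_lock, and that is the point of the rework: atomics only make individual operations indivisible, not check-then-act sequences like "bump the counter only if the state allows it". A self-contained miniature of the pattern, with a hypothetical ffs_like struct:

#include <linux/spinlock.h>

/* Counter and state share one lock, so the check and the act are atomic. */
struct ffs_like {
	spinlock_t eps_lock;
	int opened;
	bool active;
};

static int ffs_like_get(struct ffs_like *ffs)
{
	int ret = -ENODEV;

	spin_lock_irq(&ffs->eps_lock);
	if (ffs->opened && ffs->active) {	/* check ... */
		ffs->opened++;			/* ... and act, indivisibly */
		ret = 0;
	}
	spin_unlock_irq(&ffs->eps_lock);
	return ret;
}

A lone atomic_inc_not_zero() cannot be paired atomically with a separate state test, which is the race the conversion closes; it also lets ffs_ep0_open() drop the sleeping ffs_mutex_lock() from the open path entirely.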
+1 -1
drivers/usb/gadget/function/u_fs.h
··· 176 176 /* reference counter */ 177 177 refcount_t ref; 178 178 /* how many files are opened (EP0 and others) */ 179 - atomic_t opened; 179 + int opened; 180 180 181 181 /* EP0 state */ 182 182 enum ffs_state state;
-30
drivers/virt/coco/tsm-core.c
··· 4 4 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 5 5 6 6 #include <linux/tsm.h> 7 - #include <linux/pci.h> 8 - #include <linux/rwsem.h> 9 7 #include <linux/device.h> 10 8 #include <linux/module.h> 11 9 #include <linux/cleanup.h> 12 10 #include <linux/pci-tsm.h> 13 - #include <linux/pci-ide.h> 14 11 15 12 static struct class *tsm_class; 16 - static DECLARE_RWSEM(tsm_rwsem); 17 13 static DEFINE_IDA(tsm_ida); 18 14 19 15 static int match_id(struct device *dev, const void *data) ··· 103 107 device_unregister(&tsm_dev->dev); 104 108 } 105 109 EXPORT_SYMBOL_GPL(tsm_unregister); 106 - 107 - /* must be invoked between tsm_register / tsm_unregister */ 108 - int tsm_ide_stream_register(struct pci_ide *ide) 109 - { 110 - struct pci_dev *pdev = ide->pdev; 111 - struct pci_tsm *tsm = pdev->tsm; 112 - struct tsm_dev *tsm_dev = tsm->tsm_dev; 113 - int rc; 114 - 115 - rc = sysfs_create_link(&tsm_dev->dev.kobj, &pdev->dev.kobj, ide->name); 116 - if (rc) 117 - return rc; 118 - 119 - ide->tsm_dev = tsm_dev; 120 - return 0; 121 - } 122 - EXPORT_SYMBOL_GPL(tsm_ide_stream_register); 123 - 124 - void tsm_ide_stream_unregister(struct pci_ide *ide) 125 - { 126 - struct tsm_dev *tsm_dev = ide->tsm_dev; 127 - 128 - ide->tsm_dev = NULL; 129 - sysfs_remove_link(&tsm_dev->dev.kobj, ide->name); 130 - } 131 - EXPORT_SYMBOL_GPL(tsm_ide_stream_unregister); 132 110 133 111 static void tsm_release(struct device *dev) 134 112 {
+1
fs/btrfs/raid56.c
··· 150 150 static void free_raid_bio_pointers(struct btrfs_raid_bio *rbio) 151 151 { 152 152 bitmap_free(rbio->error_bitmap); 153 + bitmap_free(rbio->stripe_uptodate_bitmap); 153 154 kfree(rbio->stripe_pages); 154 155 kfree(rbio->bio_paddrs); 155 156 kfree(rbio->stripe_paddrs);
+5 -4
fs/ceph/crypto.c
··· 166 166 struct ceph_vino vino = { .snap = CEPH_NOSNAP }; 167 167 char *name_end, *inode_number; 168 168 int ret = -EIO; 169 - /* NUL-terminate */ 170 - char *str __free(kfree) = kmemdup_nul(name, *name_len, GFP_KERNEL); 169 + /* Snapshot name must start with an underscore */ 170 + if (*name_len <= 0 || name[0] != '_') 171 + return ERR_PTR(-EIO); 172 + /* Skip initial '_' and NUL-terminate */ 173 + char *str __free(kfree) = kmemdup_nul(name + 1, *name_len - 1, GFP_KERNEL); 171 174 if (!str) 172 175 return ERR_PTR(-ENOMEM); 173 - /* Skip initial '_' */ 174 - str++; 175 176 name_end = strrchr(str, '_'); 176 177 if (!name_end) { 177 178 doutc(cl, "failed to parse long snapshot name: %s\n", str);
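The crypto.c change fixes a subtle interaction with the __free(kfree) cleanup annotation: the automatic kfree() runs on whatever the variable holds at scope exit, so the old "skip initial '_'" done via str++ would have handed kfree() a pointer one byte past the allocation. The fix copies from name + 1 up front, after validating the leading underscore, so str never moves. In miniature, with a hypothetical demo() function:

#include <linux/cleanup.h>
#include <linux/slab.h>
#include <linux/string.h>

static int demo(void)
{
	char *s __free(kfree) = kstrdup("_example", GFP_KERNEL);

	if (!s)
		return -ENOMEM;
	/* s++;  -- never do this: the automatic kfree() would run on s + 1 */
	return 0;	/* kfree(s) happens here, on the original pointer */
}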
+3 -2
fs/ceph/mds_client.c
··· 5671 5671 u32 caller_uid = from_kuid(&init_user_ns, cred->fsuid); 5672 5672 u32 caller_gid = from_kgid(&init_user_ns, cred->fsgid); 5673 5673 struct ceph_client *cl = mdsc->fsc->client; 5674 - const char *fs_name = mdsc->fsc->mount_options->mds_namespace; 5674 + const char *fs_name = mdsc->mdsmap->m_fs_name; 5675 5675 const char *spath = mdsc->fsc->mount_options->server_path; 5676 5676 bool gid_matched = false; 5677 5677 u32 gid, tlen, len; ··· 5679 5679 5680 5680 doutc(cl, "fsname check fs_name=%s match.fs_name=%s\n", 5681 5681 fs_name, auth->match.fs_name ? auth->match.fs_name : ""); 5682 - if (auth->match.fs_name && strcmp(auth->match.fs_name, fs_name)) { 5682 + 5683 + if (!ceph_namespace_match(auth->match.fs_name, fs_name)) { 5683 5684 /* fsname mismatch, try next one */ 5684 5685 return 0; 5685 5686 }
+19 -7
fs/ceph/mdsmap.c
··· 353 353 __decode_and_drop_type(p, end, u8, bad_ext); 354 354 } 355 355 if (mdsmap_ev >= 8) { 356 - u32 fsname_len; 356 + size_t fsname_len; 357 + 357 358 /* enabled */ 358 359 ceph_decode_8_safe(p, end, m->m_enabled, bad_ext); 360 + 359 361 /* fs_name */ 360 - ceph_decode_32_safe(p, end, fsname_len, bad_ext); 362 + m->m_fs_name = ceph_extract_encoded_string(p, end, 363 + &fsname_len, 364 + GFP_NOFS); 365 + if (IS_ERR(m->m_fs_name)) { 366 + m->m_fs_name = NULL; 367 + goto nomem; 368 + } 361 369 362 370 /* validate fsname against mds_namespace */ 363 - if (!namespace_equals(mdsc->fsc->mount_options, *p, 371 + if (!namespace_equals(mdsc->fsc->mount_options, m->m_fs_name, 364 372 fsname_len)) { 365 - pr_warn_client(cl, "fsname %*pE doesn't match mds_namespace %s\n", 366 - (int)fsname_len, (char *)*p, 373 + pr_warn_client(cl, "fsname %s doesn't match mds_namespace %s\n", 374 + m->m_fs_name, 367 375 mdsc->fsc->mount_options->mds_namespace); 368 376 goto bad; 369 377 } 370 - /* skip fsname after validation */ 371 - ceph_decode_skip_n(p, end, fsname_len, bad); 378 + } else { 379 + m->m_enabled = false; 380 + m->m_fs_name = kstrdup(CEPH_OLD_FS_NAME, GFP_NOFS); 381 + if (!m->m_fs_name) 382 + goto nomem; 372 383 } 373 384 /* damaged */ 374 385 if (mdsmap_ev >= 9) { ··· 441 430 kfree(m->m_info); 442 431 } 443 432 kfree(m->m_data_pg_pools); 433 + kfree(m->m_fs_name); 444 434 kfree(m); 445 435 } 446 436
+1
fs/ceph/mdsmap.h
··· 45 45 bool m_enabled; 46 46 bool m_damaged; 47 47 int m_num_laggy; 48 + char *m_fs_name; 48 49 }; 49 50 50 51 static inline struct ceph_entity_addr *
+14 -2
fs/ceph/super.h
··· 104 104 struct fscrypt_dummy_policy dummy_enc_policy; 105 105 }; 106 106 107 + #define CEPH_NAMESPACE_WILDCARD "*" 108 + 109 + static inline bool ceph_namespace_match(const char *pattern, 110 + const char *target) 111 + { 112 + if (!pattern || !pattern[0] || 113 + !strcmp(pattern, CEPH_NAMESPACE_WILDCARD)) 114 + return true; 115 + 116 + return !strcmp(pattern, target); 117 + } 118 + 107 119 /* 108 120 * Check if the mds namespace in ceph_mount_options matches 109 121 * the passed in namespace string. First time match (when 110 122 * ->mds_namespace is NULL) is treated specially, since 111 123 * ->mds_namespace needs to be initialized by the caller. 112 124 */ 113 - static inline int namespace_equals(struct ceph_mount_options *fsopt, 114 - const char *namespace, size_t len) 125 + static inline bool namespace_equals(struct ceph_mount_options *fsopt, 126 + const char *namespace, size_t len) 115 127 { 116 128 return !(fsopt->mds_namespace && 117 129 (strlen(fsopt->mds_namespace) != len ||
+27 -15
fs/proc/task_mmu.c
··· 656 656 struct proc_maps_locking_ctx lock_ctx = { .mm = mm }; 657 657 struct procmap_query karg; 658 658 struct vm_area_struct *vma; 659 + struct file *vm_file = NULL; 659 660 const char *name = NULL; 660 661 char build_id_buf[BUILD_ID_SIZE_MAX], *name_buf = NULL; 661 662 __u64 usize; ··· 728 727 karg.inode = 0; 729 728 } 730 729 731 - if (karg.build_id_size) { 732 - __u32 build_id_sz; 733 - 734 - err = build_id_parse(vma, build_id_buf, &build_id_sz); 735 - if (err) { 736 - karg.build_id_size = 0; 737 - } else { 738 - if (karg.build_id_size < build_id_sz) { 739 - err = -ENAMETOOLONG; 740 - goto out; 741 - } 742 - karg.build_id_size = build_id_sz; 743 - } 744 - } 745 - 746 730 if (karg.vma_name_size) { 747 731 size_t name_buf_sz = min_t(size_t, PATH_MAX, karg.vma_name_size); 748 732 const struct path *path; ··· 761 775 karg.vma_name_size = name_sz; 762 776 } 763 777 778 + if (karg.build_id_size && vma->vm_file) 779 + vm_file = get_file(vma->vm_file); 780 + 764 781 /* unlock vma or mmap_lock, and put mm_struct before copying data to user */ 765 782 query_vma_teardown(&lock_ctx); 766 783 mmput(mm); 784 + 785 + if (karg.build_id_size) { 786 + __u32 build_id_sz; 787 + 788 + if (vm_file) 789 + err = build_id_parse_file(vm_file, build_id_buf, &build_id_sz); 790 + else 791 + err = -ENOENT; 792 + if (err) { 793 + karg.build_id_size = 0; 794 + } else { 795 + if (karg.build_id_size < build_id_sz) { 796 + err = -ENAMETOOLONG; 797 + goto out; 798 + } 799 + karg.build_id_size = build_id_sz; 800 + } 801 + } 802 + 803 + if (vm_file) 804 + fput(vm_file); 767 805 768 806 if (karg.vma_name_size && copy_to_user(u64_to_user_ptr(karg.vma_name_addr), 769 807 name, karg.vma_name_size)) { ··· 808 798 out: 809 799 query_vma_teardown(&lock_ctx); 810 800 mmput(mm); 801 + if (vm_file) 802 + fput(vm_file); 811 803 kfree(name_buf); 812 804 return err; 813 805 }
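The restructuring here is a lifetime trick: take a get_file() reference on vma->vm_file while the VMA is still locked, drop the mmap/VMA lock and the mm reference, and only then run the comparatively slow build-ID parse against the pinned struct file, releasing it with fput() on every exit path. Sketched with a hypothetical unlock step:

#include <linux/fs.h>
#include <linux/mm.h>

/* Pin under the lock, work after the unlock (names illustrative). */
static void slow_work_on_vma_file(struct vm_area_struct *vma)
{
	struct file *f = NULL;

	if (vma->vm_file)
		f = get_file(vma->vm_file);	/* ref outlives the vma lock */

	/* ... drop the vma/mmap lock here; @vma must not be used below ... */

	if (f) {
		/* slow, sleepable work against @f */
		fput(f);
	}
}

This also motivates the new build_id_parse_file() declaration in the buildid.h hunk further down: it takes the pinned file directly instead of a VMA that may no longer be safe to dereference.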
+3 -1
fs/smb/client/cifstransport.c
··· 251 251 rc = cifs_send_recv(xid, ses, ses->server, 252 252 &rqst, &resp_buf_type, flags, &resp_iov); 253 253 if (rc < 0) 254 - return rc; 254 + goto out; 255 255 256 256 if (out_buf) { 257 257 *pbytes_returned = resp_iov.iov_len; 258 258 if (resp_iov.iov_len) 259 259 memcpy(out_buf, resp_iov.iov_base, resp_iov.iov_len); 260 260 } 261 + 262 + out: 261 263 free_rsp_buf(resp_buf_type, resp_iov.iov_base); 262 264 return rc; 263 265 }
+1
fs/smb/client/smb2file.c
··· 178 178 rc = SMB2_open(xid, oparms, smb2_path, &smb2_oplock, smb2_data, NULL, &err_iov, 179 179 &err_buftype); 180 180 if (rc == -EACCES && retry_without_read_attributes) { 181 + free_rsp_buf(err_buftype, err_iov.iov_base); 181 182 oparms->desired_access &= ~FILE_READ_ATTRIBUTES; 182 183 rc = SMB2_open(xid, oparms, smb2_path, &smb2_oplock, smb2_data, NULL, &err_iov, 183 184 &err_buftype);
+3
include/linux/buildid.h
··· 7 7 #define BUILD_ID_SIZE_MAX 20 8 8 9 9 struct vm_area_struct; 10 + struct file; 11 + 10 12 int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size); 13 + int build_id_parse_file(struct file *file, unsigned char *build_id, __u32 *size); 11 14 int build_id_parse_nofault(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size); 12 15 int build_id_parse_buf(const void *buf, unsigned char *build_id, u32 buf_size); 13 16
+6
include/linux/ceph/ceph_fs.h
··· 31 31 #define CEPH_INO_CEPH 2 /* hidden .ceph dir */ 32 32 #define CEPH_INO_GLOBAL_SNAPREALM 3 /* global dummy snaprealm */ 33 33 34 + /* 35 + * name for "old" CephFS file systems, 36 + * see ceph.git e2b151d009640114b2565c901d6f41f6cd5ec652 37 + */ 38 + #define CEPH_OLD_FS_NAME "cephfs" 39 + 34 40 /* arbitrary limit on max # of monitors (cluster of 3 is typical) */ 35 41 #define CEPH_MAX_MON 31 36 42
+3
include/linux/livepatch.h
··· 175 175 int klp_module_coming(struct module *mod); 176 176 void klp_module_going(struct module *mod); 177 177 178 + void *klp_find_section_by_name(const struct module *mod, const char *name, 179 + size_t *sec_size); 180 + 178 181 void klp_copy_process(struct task_struct *child); 179 182 void klp_update_patch_state(struct task_struct *task); 180 183
+1 -3
include/linux/pci-ide.h
··· 26 26 /** 27 27 * struct pci_ide_partner - Per port pair Selective IDE Stream settings 28 28 * @rid_start: Partner Port Requester ID range start 29 - * @rid_end: Partner Port Requester ID range end 29 + * @rid_end: Partner Port Requester ID range end (inclusive) 30 30 * @stream_index: Selective IDE Stream Register Block selection 31 31 * @mem_assoc: PCI bus memory address association for targeting peer partner 32 32 * @pref_assoc: PCI bus prefetchable memory address association for ··· 82 82 * @host_bridge_stream: allocated from host bridge @ide_stream_ida pool 83 83 * @stream_id: unique Stream ID (within Partner Port pairing) 84 84 * @name: name of the established Selective IDE Stream in sysfs 85 - * @tsm_dev: For TSM established IDE, the TSM device context 86 85 * 87 86 * Negative @stream_id values indicate "uninitialized" on the 88 87 * expectation that with TSM established IDE the TSM owns the stream_id ··· 93 94 u8 host_bridge_stream; 94 95 int stream_id; 95 96 const char *name; 96 - struct tsm_dev *tsm_dev; 97 97 }; 98 98 99 99 /*
+2 -4
include/linux/rseq_types.h
··· 121 121 /** 122 122 * struct mm_mm_cid - Storage for per MM CID data 123 123 * @pcpu: Per CPU storage for CIDs associated to a CPU 124 - * @percpu: Set, when CIDs are in per CPU mode 125 - * @transit: Set to MM_CID_TRANSIT during a mode change transition phase 124 + * @mode: Indicates per CPU and transition mode 126 125 * @max_cids: The exclusive maximum CID value for allocation and convergence 127 126 * @irq_work: irq_work to handle the affinity mode change case 128 127 * @work: Regular work to handle the affinity mode change case ··· 138 139 struct mm_mm_cid { 139 140 /* Hotpath read mostly members */ 140 141 struct mm_cid_pcpu __percpu *pcpu; 141 - unsigned int percpu; 142 - unsigned int transit; 142 + unsigned int mode; 143 143 unsigned int max_cids; 144 144 145 145 /* Rarely used. Moves @lock and @mutex into the second cacheline */
+12
include/linux/skbuff.h
··· 4301 4301 skb_headlen(skb), buffer); 4302 4302 } 4303 4303 4304 + /* Variant of skb_header_pointer() where @offset is user-controlled 4305 + * and potentially negative. 4306 + */ 4307 + static inline void * __must_check 4308 + skb_header_pointer_careful(const struct sk_buff *skb, int offset, 4309 + int len, void *buffer) 4310 + { 4311 + if (unlikely(offset < 0 && -offset > skb_headroom(skb))) 4312 + return NULL; 4313 + return skb_header_pointer(skb, offset, len, buffer); 4314 + } 4315 + 4304 4316 static inline void * __must_check 4305 4317 skb_pointer_if_linear(const struct sk_buff *skb, int offset, int len) 4306 4318 {
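skb_header_pointer_careful() only adds a cheap headroom check in front of skb_header_pointer(), for callers whose offset is computed from untrusted input; the cls_u32 hunk further down in this diff is the intended user. A condensed sketch of that calling pattern, with identifiers abbreviated from cls_u32:

	__be32 *data, hdata;
	/* offset derived from a user-configured filter, may go negative */
	int toff = off + key->off + (off2 & key->offmask);

	data = skb_header_pointer_careful(skb, toff, sizeof(hdata), &hdata);
	if (!data)
		goto out;	/* reaches above the headroom or past the skb */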
-3
include/linux/tsm.h
··· 123 123 struct tsm_dev *tsm_register(struct device *parent, struct pci_tsm_ops *ops); 124 124 void tsm_unregister(struct tsm_dev *tsm_dev); 125 125 struct tsm_dev *find_tsm_dev(int id); 126 - struct pci_ide; 127 - int tsm_ide_stream_register(struct pci_ide *ide); 128 - void tsm_ide_stream_unregister(struct pci_ide *ide); 129 126 #endif /* __TSM_H */
+19 -6
include/trace/events/dma.h
··· 275 275 sizeof(u64), sizeof(u64))) 276 276 ); 277 277 278 + #define DMA_TRACE_MAX_ENTRIES 128 279 + 278 280 TRACE_EVENT(dma_map_sg, 279 281 TP_PROTO(struct device *dev, struct scatterlist *sgl, int nents, 280 282 int ents, enum dma_data_direction dir, unsigned long attrs), ··· 284 282 285 283 TP_STRUCT__entry( 286 284 __string(device, dev_name(dev)) 287 - __dynamic_array(u64, phys_addrs, nents) 288 - __dynamic_array(u64, dma_addrs, ents) 289 - __dynamic_array(unsigned int, lengths, ents) 285 + __field(int, full_nents) 286 + __field(int, full_ents) 287 + __field(bool, truncated) 288 + __dynamic_array(u64, phys_addrs, min(nents, DMA_TRACE_MAX_ENTRIES)) 289 + __dynamic_array(u64, dma_addrs, min(ents, DMA_TRACE_MAX_ENTRIES)) 290 + __dynamic_array(unsigned int, lengths, min(ents, DMA_TRACE_MAX_ENTRIES)) 290 291 __field(enum dma_data_direction, dir) 291 292 __field(unsigned long, attrs) 292 293 ), ··· 297 292 TP_fast_assign( 298 293 struct scatterlist *sg; 299 294 int i; 295 + int traced_nents = min_t(int, nents, DMA_TRACE_MAX_ENTRIES); 296 + int traced_ents = min_t(int, ents, DMA_TRACE_MAX_ENTRIES); 300 297 301 298 __assign_str(device); 302 - for_each_sg(sgl, sg, nents, i) 299 + __entry->full_nents = nents; 300 + __entry->full_ents = ents; 301 + __entry->truncated = (nents > DMA_TRACE_MAX_ENTRIES) || (ents > DMA_TRACE_MAX_ENTRIES); 302 + for_each_sg(sgl, sg, traced_nents, i) 303 303 ((u64 *)__get_dynamic_array(phys_addrs))[i] = sg_phys(sg); 304 - for_each_sg(sgl, sg, ents, i) { 304 + for_each_sg(sgl, sg, traced_ents, i) { 305 305 ((u64 *)__get_dynamic_array(dma_addrs))[i] = 306 306 sg_dma_address(sg); 307 307 ((unsigned int *)__get_dynamic_array(lengths))[i] = ··· 316 306 __entry->attrs = attrs; 317 307 ), 318 308 319 - TP_printk("%s dir=%s dma_addrs=%s sizes=%s phys_addrs=%s attrs=%s", 309 + TP_printk("%s dir=%s nents=%d/%d ents=%d/%d%s dma_addrs=%s sizes=%s phys_addrs=%s attrs=%s", 320 310 __get_str(device), 321 311 decode_dma_data_direction(__entry->dir), 312 + min_t(int, __entry->full_nents, DMA_TRACE_MAX_ENTRIES), __entry->full_nents, 313 + min_t(int, __entry->full_ents, DMA_TRACE_MAX_ENTRIES), __entry->full_ents, 314 + __entry->truncated ? " [TRUNCATED]" : "", 322 315 __print_array(__get_dynamic_array(dma_addrs), 323 316 __get_dynamic_array_len(dma_addrs) / 324 317 sizeof(u64), sizeof(u64)),
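With the cap in place, the TP_printk() format above reports recorded/full counts plus a truncation marker, so a consumer can tell when the dynamic arrays are partial. Reading that format string, a mapping of 300 scatterlist entries would render roughly as follows (values and device name purely illustrative):

	dev0 dir=TO_DEVICE nents=128/300 ents=128/300 [TRUNCATED] dma_addrs={...} sizes={...} phys_addrs={...} attrs=0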
+9 -4
io_uring/fdinfo.c
··· 67 67 unsigned int cq_head = READ_ONCE(r->cq.head); 68 68 unsigned int cq_tail = READ_ONCE(r->cq.tail); 69 69 unsigned int sq_shift = 0; 70 - unsigned int sq_entries; 70 + unsigned int cq_entries, sq_entries; 71 71 int sq_pid = -1, sq_cpu = -1; 72 72 u64 sq_total_time = 0, sq_work_time = 0; 73 73 unsigned int i; ··· 146 146 } 147 147 } 148 148 seq_printf(m, "\n"); 149 + cond_resched(); 149 150 } 150 151 seq_printf(m, "CQEs:\t%u\n", cq_tail - cq_head); 151 - while (cq_head < cq_tail) { 152 + cq_entries = min(cq_tail - cq_head, ctx->cq_entries); 153 + for (i = 0; i < cq_entries; i++) { 152 154 struct io_uring_cqe *cqe; 153 155 bool cqe32 = false; 154 156 ··· 161 159 cq_head & cq_mask, cqe->user_data, cqe->res, 162 160 cqe->flags); 163 161 if (cqe32) 164 - seq_printf(m, ", extra1:%llu, extra2:%llu\n", 162 + seq_printf(m, ", extra1:%llu, extra2:%llu", 165 163 cqe->big_cqe[0], cqe->big_cqe[1]); 166 164 seq_printf(m, "\n"); 167 165 cq_head++; 168 - if (cqe32) 166 + if (cqe32) { 169 167 cq_head++; 168 + i++; 169 + } 170 + cond_resched(); 170 171 } 171 172 172 173 if (ctx->flags & IORING_SETUP_SQPOLL) {
+5 -4
io_uring/zcrx.c
··· 197 197 GFP_KERNEL_ACCOUNT); 198 198 if (ret) { 199 199 unpin_user_pages(pages, nr_pages); 200 + kvfree(pages); 200 201 return ret; 201 202 } 202 203 ··· 1069 1068 unsigned int mask = zcrx->rq_entries - 1; 1070 1069 unsigned int i; 1071 1070 1072 - guard(spinlock_bh)(&zcrx->rq_lock); 1073 - 1074 1071 nr = min(nr, io_zcrx_rqring_entries(zcrx)); 1075 1072 for (i = 0; i < nr; i++) { 1076 1073 struct io_uring_zcrx_rqe *rqe = io_zcrx_get_rqe(zcrx, mask); ··· 1113 1114 return -EINVAL; 1114 1115 1115 1116 do { 1116 - nr = zcrx_parse_rq(netmems, ZCRX_FLUSH_BATCH, zcrx); 1117 + scoped_guard(spinlock_bh, &zcrx->rq_lock) { 1118 + nr = zcrx_parse_rq(netmems, ZCRX_FLUSH_BATCH, zcrx); 1119 + zcrx_return_buffers(netmems, nr); 1120 + } 1117 1121 1118 - zcrx_return_buffers(netmems, nr); 1119 1122 total += nr; 1120 1123 1121 1124 if (fatal_signal_pending(current))
+63 -7
kernel/cgroup/dmem.c
··· 14 14 #include <linux/mutex.h>
 15 15 #include <linux/page_counter.h>
 16 16 #include <linux/parser.h>
 17 + #include <linux/refcount.h>
 17 18 #include <linux/rculist.h>
 18 19 #include <linux/slab.h>
 19 20
 ··· 72 71 struct rcu_head rcu;
 73 72
 74 73 struct page_counter cnt;
 74 + struct dmem_cgroup_pool_state *parent;
 75 75
 76 + refcount_t ref;
 76 77 bool inited;
 77 78 };
 78 79
 ··· 90 87 */
 91 88 static DEFINE_SPINLOCK(dmemcg_lock);
 92 89 static LIST_HEAD(dmem_cgroup_regions);
 90 +
 91 + static void dmemcg_free_region(struct kref *ref);
 92 + static void dmemcg_pool_free_rcu(struct rcu_head *rcu);
 93 93
 94 94 static inline struct dmemcg_state *
 95 95 css_to_dmemcs(struct cgroup_subsys_state *css)
 ··· 110 104 return cg->css.parent ? css_to_dmemcs(cg->css.parent) : NULL;
 111 105 }
 112 106
 107 + static void dmemcg_pool_get(struct dmem_cgroup_pool_state *pool)
 108 + {
 109 + refcount_inc(&pool->ref);
 110 + }
 111 +
 112 + static bool dmemcg_pool_tryget(struct dmem_cgroup_pool_state *pool)
 113 + {
 114 + return refcount_inc_not_zero(&pool->ref);
 115 + }
 116 +
 117 + static void dmemcg_pool_put(struct dmem_cgroup_pool_state *pool)
 118 + {
 119 + if (!refcount_dec_and_test(&pool->ref))
 120 + return;
 121 +
 122 + call_rcu(&pool->rcu, dmemcg_pool_free_rcu);
 123 + }
 124 +
 125 + static void dmemcg_pool_free_rcu(struct rcu_head *rcu)
 126 + {
 127 + struct dmem_cgroup_pool_state *pool = container_of(rcu, typeof(*pool), rcu);
 128 +
 129 + if (pool->parent)
 130 + dmemcg_pool_put(pool->parent);
 131 + kref_put(&pool->region->ref, dmemcg_free_region);
 132 + kfree(pool);
 133 + }
 134 +
 113 135 static void free_cg_pool(struct dmem_cgroup_pool_state *pool)
 114 136 {
 115 137 list_del(&pool->region_node);
 116 - kfree(pool);
 138 + dmemcg_pool_put(pool);
 117 139 }
 118 140
 119 141 static void
 ··· 376 342 page_counter_init(&pool->cnt,
 377 343 ppool ? &ppool->cnt : NULL, true);
 378 344 reset_all_resource_limits(pool);
 345 + refcount_set(&pool->ref, 1);
 346 + kref_get(&region->ref);
 347 + if (ppool && !pool->parent) {
 348 + pool->parent = ppool;
 349 + dmemcg_pool_get(ppool);
 350 + }
 379 351
 380 352 list_add_tail_rcu(&pool->css_node, &dmemcs->pools);
 381 353 list_add_tail(&pool->region_node, &region->pools);
 ··· 429 389
 430 390 /* Fix up parent links, mark as inited.
*/ 431 391 pool->cnt.parent = &ppool->cnt; 392 + if (ppool && !pool->parent) { 393 + pool->parent = ppool; 394 + dmemcg_pool_get(ppool); 395 + } 432 396 pool->inited = true; 433 397 434 398 pool = ppool; ··· 467 423 */ 468 424 void dmem_cgroup_unregister_region(struct dmem_cgroup_region *region) 469 425 { 470 - struct list_head *entry; 426 + struct dmem_cgroup_pool_state *pool, *next; 471 427 472 428 if (!region) 473 429 return; ··· 477 433 /* Remove from global region list */ 478 434 list_del_rcu(&region->region_node); 479 435 480 - list_for_each_rcu(entry, &region->pools) { 481 - struct dmem_cgroup_pool_state *pool = 482 - container_of(entry, typeof(*pool), region_node); 483 - 436 + list_for_each_entry_safe(pool, next, &region->pools, region_node) { 484 437 list_del_rcu(&pool->css_node); 438 + list_del(&pool->region_node); 439 + dmemcg_pool_put(pool); 485 440 } 486 441 487 442 /* ··· 561 518 */ 562 519 void dmem_cgroup_pool_state_put(struct dmem_cgroup_pool_state *pool) 563 520 { 564 - if (pool) 521 + if (pool) { 565 522 css_put(&pool->cs->css); 523 + dmemcg_pool_put(pool); 524 + } 566 525 } 567 526 EXPORT_SYMBOL_GPL(dmem_cgroup_pool_state_put); 568 527 ··· 578 533 pool = find_cg_pool_locked(cg, region); 579 534 if (pool && !READ_ONCE(pool->inited)) 580 535 pool = NULL; 536 + if (pool && !dmemcg_pool_tryget(pool)) 537 + pool = NULL; 581 538 rcu_read_unlock(); 582 539 583 540 while (!pool) { ··· 588 541 pool = get_cg_pool_locked(cg, region, &allocpool); 589 542 else 590 543 pool = ERR_PTR(-ENODEV); 544 + if (!IS_ERR(pool)) 545 + dmemcg_pool_get(pool); 591 546 spin_unlock(&dmemcg_lock); 592 547 593 548 if (pool == ERR_PTR(-ENOMEM)) { ··· 625 576 626 577 page_counter_uncharge(&pool->cnt, size); 627 578 css_put(&pool->cs->css); 579 + dmemcg_pool_put(pool); 628 580 } 629 581 EXPORT_SYMBOL_GPL(dmem_cgroup_uncharge); 630 582 ··· 677 627 if (ret_limit_pool) { 678 628 *ret_limit_pool = container_of(fail, struct dmem_cgroup_pool_state, cnt); 679 629 css_get(&(*ret_limit_pool)->cs->css); 630 + dmemcg_pool_get(*ret_limit_pool); 680 631 } 632 + dmemcg_pool_put(pool); 681 633 ret = -EAGAIN; 682 634 goto err; 683 635 } ··· 752 700 if (!region_name[0]) 753 701 continue; 754 702 703 + if (!options || !*options) 704 + return -EINVAL; 705 + 755 706 rcu_read_lock(); 756 707 region = dmemcg_get_region_by_name(region_name); 757 708 rcu_read_unlock(); ··· 774 719 775 720 /* And commit */ 776 721 apply(pool, new_limit); 722 + dmemcg_pool_put(pool); 777 723 778 724 out_put: 779 725 kref_put(&region->ref, dmemcg_free_region);
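Because pools are now freed through call_rcu(), an RCU reader can still observe a pool whose refcount has already dropped to zero; that is why the lookup path above pairs the RCU read section with dmemcg_pool_tryget() rather than a plain increment. The idiom, reduced to its core (the inited check from the real code is omitted here):

	rcu_read_lock();
	pool = find_cg_pool_locked(cg, region);
	if (pool && !dmemcg_pool_tryget(pool))
		pool = NULL;	/* raced with the final dmemcg_pool_put() */
	rcu_read_unlock();
	/* a non-NULL pool is now pinned and safe to use outside RCU */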
+6 -4
kernel/dma/contiguous.c
··· 257 257 pr_debug("%s: reserving %ld MiB for global area\n", __func__, 258 258 (unsigned long)selected_size / SZ_1M); 259 259 260 - dma_contiguous_reserve_area(selected_size, selected_base, 261 - selected_limit, 262 - &dma_contiguous_default_area, 263 - fixed); 260 + ret = dma_contiguous_reserve_area(selected_size, selected_base, 261 + selected_limit, 262 + &dma_contiguous_default_area, 263 + fixed); 264 + if (ret) 265 + return; 264 266 265 267 ret = dma_heap_cma_register_heap(dma_contiguous_default_area); 266 268 if (ret)
+19
kernel/livepatch/core.c
··· 1356 1356 mutex_unlock(&klp_mutex); 1357 1357 } 1358 1358 1359 + void *klp_find_section_by_name(const struct module *mod, const char *name, 1360 + size_t *sec_size) 1361 + { 1362 + struct klp_modinfo *info = mod->klp_info; 1363 + 1364 + for (int i = 1; i < info->hdr.e_shnum; i++) { 1365 + Elf_Shdr *shdr = &info->sechdrs[i]; 1366 + 1367 + if (!strcmp(info->secstrings + shdr->sh_name, name)) { 1368 + *sec_size = shdr->sh_size; 1369 + return (void *)shdr->sh_addr; 1370 + } 1371 + } 1372 + 1373 + *sec_size = 0; 1374 + return NULL; 1375 + } 1376 + EXPORT_SYMBOL_GPL(klp_find_section_by_name); 1377 + 1359 1378 static int __init klp_init(void) 1360 1379 { 1361 1380 klp_root_kobj = kobject_create_and_add("livepatch", kernel_kobj);
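klp_find_section_by_name() searches the copy of the section headers that livepatch keeps in mod->klp_info, so a patch module can locate its own generated sections at run time instead of relying on linker-provided start/stop symbols. The generated init code later in this diff (scripts/livepatch/init.c) uses it essentially like this:

	size_t sec_size;
	struct klp_object_ext *obj_exts;

	obj_exts = klp_find_section_by_name(THIS_MODULE, ".init.klp_objects",
					    &sec_size);
	nr_objs = sec_size / sizeof(*obj_exts);	/* sec_size is 0 when absent */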
-2
kernel/liveupdate/luo_file.c
··· 402 402 403 403 luo_file->fh->ops->unfreeze(&args); 404 404 } 405 - 406 - luo_file->serialized_data = 0; 407 405 } 408 406 409 407 static void __luo_file_unfreeze(struct luo_file_set *file_set,
+126 -58
kernel/sched/core.c
··· 10269 10269 * Serialization rules:
 10270 10270 *
 10271 10271 * mm::mm_cid::mutex: Serializes fork() and exit() and therefore
 10272 - * protects mm::mm_cid::users.
 10272 + * protects mm::mm_cid::users and mode switch
 10273 + * transitions
 10273 10274 *
 10274 10275 * mm::mm_cid::lock: Serializes mm_update_max_cids() and
 10275 10276 * mm_update_cpus_allowed(). Nests in mm_cid::mutex
 ··· 10286 10285 *
 10287 10286 * A CID is either owned by a task (stored in task_struct::mm_cid.cid) or
 10288 10287 * by a CPU (stored in mm::mm_cid.pcpu::cid). CIDs owned by CPUs have the
 10289 - * MM_CID_ONCPU bit set. During transition from CPU to task ownership mode,
 10290 - * MM_CID_TRANSIT is set on the per task CIDs. When this bit is set the
 10291 - * task needs to drop the CID into the pool when scheduling out. Both bits
 10292 - * (ONCPU and TRANSIT) are filtered out by task_cid() when the CID is
 10293 - * actually handed over to user space in the RSEQ memory.
 10288 + * MM_CID_ONCPU bit set.
 10289 + *
 10290 + * During the transition of ownership mode, the MM_CID_TRANSIT bit is set
 10291 + * on the CIDs. When this bit is set the tasks drop the CID back into the
 10292 + * pool when scheduling out.
 10293 + *
 10294 + * Both bits (ONCPU and TRANSIT) are filtered out by task_cid() when the
 10295 + * CID is actually handed over to user space in the RSEQ memory.
 10294 10296 *
 10295 10297 * Mode switching:
 10298 + *
 10299 + * The ownership mode is per process and stored in mm::mm_cid::mode with the
 10300 + * following possible states:
 10301 + *
 10302 + * 0: Per task ownership
 10303 + * 0 | MM_CID_TRANSIT: Transition from per CPU to per task
 10304 + * MM_CID_ONCPU: Per CPU ownership
 10305 + * MM_CID_ONCPU | MM_CID_TRANSIT: Transition from per task to per CPU
 10306 + *
 10307 + * All transitions of ownership mode happen in two phases:
 10308 + *
 10309 + * 1) mm::mm_cid::mode has the MM_CID_TRANSIT bit set. This is OR'ed on the
 10310 + * CIDs and denotes that the CID is only temporarily owned by a
 10311 + * task. When the task schedules out it drops the CID back into the
 10312 + * pool if this bit is set.
 10313 + *
 10314 + * 2) The initiating context walks the per CPU space or the tasks to fixup
 10315 + * or drop the CIDs and after completion it clears MM_CID_TRANSIT in
 10316 + * mm::mm_cid::mode. After that point the CIDs are strictly task or CPU
 10317 + * owned again.
 10318 + *
 10319 + * This two phase transition is required to prevent CID space exhaustion
 10320 + * during the transition as a direct transfer of ownership would fail:
 10321 + *
 10322 + * - On task to CPU mode switch if a task is scheduled in on one CPU and
 10323 + * then migrated to another CPU before the fixup freed enough per task
 10324 + * CIDs.
 10325 + *
 10326 + * - On CPU to task mode switch if two tasks are scheduled in on the same
 10327 + * CPU before the fixup freed per CPU CIDs.
 10328 + *
 10329 + * Both scenarios can result in a live lock because sched_in() is invoked
 10330 + * with runqueue lock held and loops in search of a CID and the fixup
 10331 + * thread can't make progress freeing them up because it is stuck on the
 10332 + * same runqueue lock.
 10333 + *
 10334 + * While MM_CID_TRANSIT is active during the transition phase the MM_CID
 10335 + * bitmap can be contended, but that's a temporary contention bound to the
 10336 + * transition period. After that everything goes back into steady state and
 10337 + * nothing except fork() and exit() will touch the bitmap.
This is an
 10338 + * acceptable tradeoff as it completely avoids complex serialization,
 10339 + * memory barriers and atomic operations for the common case.
 10340 + *
 10341 + * Aside from that, this mechanism also ensures RT compatibility:
 10342 + *
 10343 + * - The task which runs the fixup is fully preemptible except for the
 10344 + * short runqueue lock held sections.
 10345 + *
 10346 + * - The transient impact of the bitmap contention is only problematic
 10347 + * when there is a thundering herd scenario of tasks scheduling in and
 10348 + * out concurrently. There is not much which can be done about that
 10349 + * except for avoiding mode switching by a proper overall system
 10350 + * configuration.
 10296 10351 *
 10297 10352 * Switching to per CPU mode happens when the user count becomes greater
 10298 10353 * than the maximum number of CIDs, which is calculated by:
 ··· 10363 10306 *
 10364 10307 * At the point of switching to per CPU mode the new user is not yet
 10365 10308 * visible in the system, so the task which initiated the fork() runs the
 10366 - * fixup function: mm_cid_fixup_tasks_to_cpu() walks the thread list and
 10367 - * either transfers each tasks owned CID to the CPU the task runs on or
 10368 - * drops it into the CID pool if a task is not on a CPU at that point in
 10369 - * time. Tasks which schedule in before the task walk reaches them do the
 10370 - * handover in mm_cid_schedin(). When mm_cid_fixup_tasks_to_cpus() completes
 10371 - * it's guaranteed that no task related to that MM owns a CID anymore.
 10309 + * fixup function. mm_cid_fixup_tasks_to_cpu() walks the thread list and
 10310 + * either marks each task owned CID with MM_CID_TRANSIT if the task is
 10311 + * running on a CPU or drops it into the CID pool if a task is not on a
 10312 + * CPU. Tasks which schedule in before the task walk reaches them do the
 10313 + * handover in mm_cid_schedin(). When mm_cid_fixup_tasks_to_cpus()
 10314 + * completes it is guaranteed that no task related to that MM owns a CID
 10315 + * anymore.
 10372 10316 *
 10373 10317 * Switching back to task mode happens when the user count goes below the
 10374 10318 * threshold which was recorded on the per CPU mode switch:
 ··· 10385 10327 * run either in the deferred update function in context of a workqueue or
 10386 10328 * by a task which forks a new one or by a task which exits. Whatever
 10387 10329 * happens first. mm_cid_fixup_cpus_to_task() walks through the possible
 10388 - * CPUs and either transfers the CPU owned CIDs to a related task which
 10389 - * runs on the CPU or drops it into the pool. Tasks which schedule in on a
 10390 - * CPU which the walk did not cover yet do the handover themself.
 10391 - *
 10392 - * This transition from CPU to per task ownership happens in two phases:
 10393 - *
 10394 - * 1) mm:mm_cid.transit contains MM_CID_TRANSIT This is OR'ed on the task
 10395 - * CID and denotes that the CID is only temporarily owned by the
 10396 - * task. When it schedules out the task drops the CID back into the
 10397 - * pool if this bit is set.
 10398 - *
 10399 - * 2) The initiating context walks the per CPU space and after completion
 10400 - * clears mm:mm_cid.transit. So after that point the CIDs are strictly
 10401 - * task owned again.
 10402 - *
 10403 - * This two phase transition is required to prevent CID space exhaustion
 10404 - * during the transition as a direct transfer of ownership would fail if
 10405 - * two tasks are scheduled in on the same CPU before the fixup freed per
 10406 - * CPU CIDs.
10407 - *
 10408 - * When mm_cid_fixup_cpus_to_tasks() completes it's guaranteed that no CID
 10409 - * related to that MM is owned by a CPU anymore.
 10330 + * CPUs and either marks the CPU owned CIDs with MM_CID_TRANSIT if a
 10331 + * related task is running on the CPU or drops it into the pool. Tasks
 10332 + * which are scheduled in before the fixup covered them do the handover
 10333 + * themselves. When mm_cid_fixup_cpus_to_tasks() completes it is guaranteed
 10334 + * that no CID related to that MM is owned by a CPU anymore.
 10410 10335 */
 10411 10336
 10412 10337 /*
 ··· 10420 10379 static bool mm_update_max_cids(struct mm_struct *mm)
 10421 10380 {
 10422 10381 struct mm_mm_cid *mc = &mm->mm_cid;
 10382 + bool percpu = cid_on_cpu(mc->mode);
 10423 10383
 10424 10384 lockdep_assert_held(&mm->mm_cid.lock);
 ··· 10429 10387 __mm_update_max_cids(mc);
 10430 10388
 10431 10389 /* Check whether owner mode must be changed */
 10432 - if (!mc->percpu) {
 10390 + if (!percpu) {
 10433 10391 /* Enable per CPU mode when the number of users is above max_cids */
 10434 10392 if (mc->users > mc->max_cids)
 10435 10393 mc->pcpu_thrs = mm_cid_calc_pcpu_thrs(mc);
 ··· 10440 10398 }
 10441 10399
 10442 10400 /* Mode change required? */
 10443 - if (!!mc->percpu == !!mc->pcpu_thrs)
 10401 + if (percpu == !!mc->pcpu_thrs)
 10444 10402 return false;
 10445 - /* When switching back to per TASK mode, set the transition flag */
 10446 - if (!mc->pcpu_thrs)
 10447 - WRITE_ONCE(mc->transit, MM_CID_TRANSIT);
 10448 - WRITE_ONCE(mc->percpu, !!mc->pcpu_thrs);
 10403 +
 10404 + /* Flip the mode and set the transition flag to bridge the transfer */
 10405 + WRITE_ONCE(mc->mode, mc->mode ^ (MM_CID_TRANSIT | MM_CID_ONCPU));
 10406 + /*
 10407 + * Order the store against the subsequent fixups so that
 10408 + * acquire(rq::lock) cannot be reordered by the CPU before the
 10409 + * store.
 10410 + */
 10411 + smp_mb();
 10449 10412 return true;
 10450 10413 }
 ··· 10475 10428
 10476 10429 WRITE_ONCE(mc->nr_cpus_allowed, weight);
 10477 10430 __mm_update_max_cids(mc);
 10478 - if (!mc->percpu)
 10431 + if (!cid_on_cpu(mc->mode))
 10479 10432 return;
 10480 10433
 10481 10434 /* Adjust the threshold to the wider set */
 ··· 10491 10444 /* Queue the irq work, which schedules the real work */
 10492 10445 mc->update_deferred = true;
 10493 10446 irq_work_queue(&mc->irq_work);
 10447 + }
 10448 +
 10449 + static inline void mm_cid_complete_transit(struct mm_struct *mm, unsigned int mode)
 10450 + {
 10451 + /*
 10452 + * Ensure that the store removing the TRANSIT bit cannot be
 10453 + * reordered by the CPU before the fixups have been completed.
10454 + */
 10455 + smp_mb();
 10456 + WRITE_ONCE(mm->mm_cid.mode, mode);
 10494 10457 }
 10495 10458
 10496 10459 static inline void mm_cid_transit_to_task(struct task_struct *t, struct mm_cid_pcpu *pcp)
 ··· 10546 10489 }
 10547 10490 }
 10548 10491 }
 10549 - /* Clear the transition bit */
 10550 - WRITE_ONCE(mm->mm_cid.transit, 0);
 10492 + mm_cid_complete_transit(mm, 0);
 10551 10493 }
 10552 10494
 10553 - static inline void mm_cid_transfer_to_cpu(struct task_struct *t, struct mm_cid_pcpu *pcp)
 10495 + static inline void mm_cid_transit_to_cpu(struct task_struct *t, struct mm_cid_pcpu *pcp)
 10554 10496 {
 10555 10497 if (cid_on_task(t->mm_cid.cid)) {
 10556 - t->mm_cid.cid = cid_to_cpu_cid(t->mm_cid.cid);
 10498 + t->mm_cid.cid = cid_to_transit_cid(t->mm_cid.cid);
 10557 10499 pcp->cid = t->mm_cid.cid;
 10558 10500 }
 10559 10501 }
 ··· 10565 10509 if (!t->mm_cid.active)
 10566 10510 return false;
 10567 10511 if (cid_on_task(t->mm_cid.cid)) {
 10568 - /* If running on the CPU, transfer the CID, otherwise drop it */
 10512 + /* If running on the CPU, put the CID in transit mode, otherwise drop it */
 10569 10513 if (task_rq(t)->curr == t)
 10570 - mm_cid_transfer_to_cpu(t, per_cpu_ptr(mm->mm_cid.pcpu, task_cpu(t)));
 10514 + mm_cid_transit_to_cpu(t, per_cpu_ptr(mm->mm_cid.pcpu, task_cpu(t)));
 10571 10515 else
 10572 10516 mm_unset_cid_on_task(t);
 10573 10517 }
 10574 10518 return true;
 10575 10519 }
 10576 10520
 10577 - static void mm_cid_fixup_tasks_to_cpus(void)
 10521 + static void mm_cid_do_fixup_tasks_to_cpus(struct mm_struct *mm)
 10578 10522 {
 10579 - struct mm_struct *mm = current->mm;
 10580 10523 struct task_struct *p, *t;
 10581 10524 unsigned int users;
 ··· 10613 10558 }
 10614 10559 }
 10615 10560
 10561 + static void mm_cid_fixup_tasks_to_cpus(void)
 10562 + {
 10563 + struct mm_struct *mm = current->mm;
 10564 +
 10565 + mm_cid_do_fixup_tasks_to_cpus(mm);
 10566 + mm_cid_complete_transit(mm, MM_CID_ONCPU);
 10567 + }
 10568 +
 10616 10569 static bool sched_mm_cid_add_user(struct task_struct *t, struct mm_struct *mm)
 10617 10570 {
 10618 10571 t->mm_cid.active = 1;
 ··· 10649 10586 }
 10650 10587
 10651 10588 if (!sched_mm_cid_add_user(t, mm)) {
 10652 - if (!mm->mm_cid.percpu)
 10589 + if (!cid_on_cpu(mm->mm_cid.mode))
 10653 10590 t->mm_cid.cid = mm_get_cid(mm);
 10654 10591 return;
 10655 10592 }
 10656 10593
 10657 10594 /* Handle the mode change and transfer current's CID */
 10658 - percpu = !!mm->mm_cid.percpu;
 10595 + percpu = cid_on_cpu(mm->mm_cid.mode);
 10659 10596 if (!percpu)
 10660 10597 mm_cid_transit_to_task(current, pcp);
 10661 10598 else
 10662 - mm_cid_transfer_to_cpu(current, pcp);
 10599 + mm_cid_transit_to_cpu(current, pcp);
 10663 10600 }
 10664 10601
 10665 10602 if (percpu) {
 ··· 10694 10631 * affinity change increased the number of allowed CPUs and the
 10695 10632 * deferred fixup did not run yet.
 10696 10633 */
 10697 - if (WARN_ON_ONCE(mm->mm_cid.percpu))
 10634 + if (WARN_ON_ONCE(cid_on_cpu(mm->mm_cid.mode)))
 10698 10635 return false;
 10699 10636 /*
 10700 10637 * A failed fork(2) cleanup never gets here, so @current must have
 ··· 10727 10664 scoped_guard(raw_spinlock_irq, &mm->mm_cid.lock) {
 10728 10665 if (!__sched_mm_cid_exit(t))
 10729 10666 return;
 10730 - /* Mode change required. Transfer currents CID */
 10731 - mm_cid_transit_to_task(current, this_cpu_ptr(mm->mm_cid.pcpu));
 10667 + /*
 10668 + * Mode change. The task has the CID unset
 10669 + * already.
The CPU CID is still valid and 10670 + * does not have MM_CID_TRANSIT set as the 10671 + * mode change has just taken effect under 10672 + * mm::mm_cid::lock. Drop it. 10673 + */ 10674 + mm_drop_cid_on_cpu(mm, this_cpu_ptr(mm->mm_cid.pcpu)); 10732 10675 } 10733 10676 mm_cid_fixup_cpus_to_tasks(mm); 10734 10677 return; ··· 10791 10722 if (!mm_update_max_cids(mm)) 10792 10723 return; 10793 10724 /* Affinity changes can only switch back to task mode */ 10794 - if (WARN_ON_ONCE(mm->mm_cid.percpu)) 10725 + if (WARN_ON_ONCE(cid_on_cpu(mm->mm_cid.mode))) 10795 10726 return; 10796 10727 } 10797 10728 mm_cid_fixup_cpus_to_tasks(mm); ··· 10812 10743 void mm_init_cid(struct mm_struct *mm, struct task_struct *p) 10813 10744 { 10814 10745 mm->mm_cid.max_cids = 0; 10815 - mm->mm_cid.percpu = 0; 10816 - mm->mm_cid.transit = 0; 10746 + mm->mm_cid.mode = 0; 10817 10747 mm->mm_cid.nr_cpus_allowed = p->nr_cpus_allowed; 10818 10748 mm->mm_cid.users = 0; 10819 10749 mm->mm_cid.pcpu_thrs = 0;
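Because the four mode encodings differ only in the two flag bits, the single XOR in mm_update_max_cids() always lands on the transition state that bridges the two steady states. Walking one full cycle, as a worked sketch of the values used above:

	/* per task steady state:  mode == 0 */
	mode ^= MM_CID_TRANSIT | MM_CID_ONCPU;     /* -> MM_CID_ONCPU|MM_CID_TRANSIT */
	mm_cid_complete_transit(mm, MM_CID_ONCPU); /* fixup finished */
	/* per CPU steady state:   mode == MM_CID_ONCPU */
	mode ^= MM_CID_TRANSIT | MM_CID_ONCPU;     /* -> MM_CID_TRANSIT */
	mm_cid_complete_transit(mm, 0);            /* back to per task ownership */

Note that cid_on_cpu(mode) already reports the target mode during a transition, while cid_in_transit(mode) flags the bridge phase.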
+48
kernel/sched/ext.c
··· 194 194 #include <trace/events/sched_ext.h> 195 195 196 196 static void process_ddsp_deferred_locals(struct rq *rq); 197 + static bool task_dead_and_done(struct task_struct *p); 197 198 static u32 reenq_local(struct rq *rq); 198 199 static void scx_kick_cpu(struct scx_sched *sch, s32 cpu, u64 flags); 199 200 static bool scx_vexit(struct scx_sched *sch, enum scx_exit_kind kind, ··· 2620 2619 2621 2620 set_cpus_allowed_common(p, ac); 2622 2621 2622 + if (task_dead_and_done(p)) 2623 + return; 2624 + 2623 2625 /* 2624 2626 * The effective cpumask is stored in @p->cpus_ptr which may temporarily 2625 2627 * differ from the configured one in @p->cpus_mask. Always tell the bpf ··· 3038 3034 percpu_up_read(&scx_fork_rwsem); 3039 3035 } 3040 3036 3037 + /** 3038 + * task_dead_and_done - Is a task dead and done running? 3039 + * @p: target task 3040 + * 3041 + * Once sched_ext_dead() removes the dead task from scx_tasks and exits it, the 3042 + * task no longer exists from SCX's POV. However, certain sched_class ops may be 3043 + * invoked on these dead tasks leading to failures - e.g. sched_setscheduler() 3044 + * may try to switch a task which finished sched_ext_dead() back into SCX 3045 + * triggering invalid SCX task state transitions and worse. 3046 + * 3047 + * Once a task has finished the final switch, sched_ext_dead() is the only thing 3048 + * that needs to happen on the task. Use this test to short-circuit sched_class 3049 + * operations which may be called on dead tasks. 3050 + */ 3051 + static bool task_dead_and_done(struct task_struct *p) 3052 + { 3053 + struct rq *rq = task_rq(p); 3054 + 3055 + lockdep_assert_rq_held(rq); 3056 + 3057 + /* 3058 + * In do_task_dead(), a dying task sets %TASK_DEAD with preemption 3059 + * disabled and __schedule(). If @p has %TASK_DEAD set and off CPU, @p 3060 + * won't ever run again. 3061 + */ 3062 + return unlikely(READ_ONCE(p->__state) == TASK_DEAD) && 3063 + !task_on_cpu(rq, p); 3064 + } 3065 + 3041 3066 void sched_ext_dead(struct task_struct *p) 3042 3067 { 3043 3068 unsigned long flags; 3044 3069 3070 + /* 3071 + * By the time control reaches here, @p has %TASK_DEAD set, switched out 3072 + * for the last time and then dropped the rq lock - task_dead_and_done() 3073 + * should be returning %true nullifying the straggling sched_class ops. 3074 + * Remove from scx_tasks and exit @p. 3075 + */ 3045 3076 raw_spin_lock_irqsave(&scx_tasks_lock, flags); 3046 3077 list_del_init(&p->scx.tasks_node); 3047 3078 raw_spin_unlock_irqrestore(&scx_tasks_lock, flags); ··· 3102 3063 3103 3064 lockdep_assert_rq_held(task_rq(p)); 3104 3065 3066 + if (task_dead_and_done(p)) 3067 + return; 3068 + 3105 3069 p->scx.weight = sched_weight_to_cgroup(scale_load_down(lw->weight)); 3106 3070 if (SCX_HAS_OP(sch, set_weight)) 3107 3071 SCX_CALL_OP_TASK(sch, SCX_KF_REST, set_weight, rq, ··· 3119 3077 { 3120 3078 struct scx_sched *sch = scx_root; 3121 3079 3080 + if (task_dead_and_done(p)) 3081 + return; 3082 + 3122 3083 scx_enable_task(p); 3123 3084 3124 3085 /* ··· 3135 3090 3136 3091 static void switched_from_scx(struct rq *rq, struct task_struct *p) 3137 3092 { 3093 + if (task_dead_and_done(p)) 3094 + return; 3095 + 3138 3096 scx_disable_task(p); 3139 3097 } 3140 3098
+35 -9
kernel/sched/sched.h
··· 3816 3816 __this_cpu_write(mm->mm_cid.pcpu->cid, cid); 3817 3817 } 3818 3818 3819 - static __always_inline void mm_cid_from_cpu(struct task_struct *t, unsigned int cpu_cid) 3819 + static __always_inline void mm_cid_from_cpu(struct task_struct *t, unsigned int cpu_cid, 3820 + unsigned int mode) 3820 3821 { 3821 3822 unsigned int max_cids, tcid = t->mm_cid.cid; 3822 3823 struct mm_struct *mm = t->mm; ··· 3842 3841 /* Still nothing, allocate a new one */ 3843 3842 if (!cid_on_cpu(cpu_cid)) 3844 3843 cpu_cid = cid_to_cpu_cid(mm_get_cid(mm)); 3844 + 3845 + /* Handle the transition mode flag if required */ 3846 + if (mode & MM_CID_TRANSIT) 3847 + cpu_cid = cpu_cid_to_cid(cpu_cid) | MM_CID_TRANSIT; 3845 3848 } 3846 3849 mm_cid_update_pcpu_cid(mm, cpu_cid); 3847 3850 mm_cid_update_task_cid(t, cpu_cid); 3848 3851 } 3849 3852 3850 - static __always_inline void mm_cid_from_task(struct task_struct *t, unsigned int cpu_cid) 3853 + static __always_inline void mm_cid_from_task(struct task_struct *t, unsigned int cpu_cid, 3854 + unsigned int mode) 3851 3855 { 3852 3856 unsigned int max_cids, tcid = t->mm_cid.cid; 3853 3857 struct mm_struct *mm = t->mm; ··· 3878 3872 if (!cid_on_task(tcid)) 3879 3873 tcid = mm_get_cid(mm); 3880 3874 /* Set the transition mode flag if required */ 3881 - tcid |= READ_ONCE(mm->mm_cid.transit); 3875 + tcid |= mode & MM_CID_TRANSIT; 3882 3876 } 3883 3877 mm_cid_update_pcpu_cid(mm, tcid); 3884 3878 mm_cid_update_task_cid(t, tcid); ··· 3887 3881 static __always_inline void mm_cid_schedin(struct task_struct *next) 3888 3882 { 3889 3883 struct mm_struct *mm = next->mm; 3890 - unsigned int cpu_cid; 3884 + unsigned int cpu_cid, mode; 3891 3885 3892 3886 if (!next->mm_cid.active) 3893 3887 return; 3894 3888 3895 3889 cpu_cid = __this_cpu_read(mm->mm_cid.pcpu->cid); 3896 - if (likely(!READ_ONCE(mm->mm_cid.percpu))) 3897 - mm_cid_from_task(next, cpu_cid); 3890 + mode = READ_ONCE(mm->mm_cid.mode); 3891 + if (likely(!cid_on_cpu(mode))) 3892 + mm_cid_from_task(next, cpu_cid, mode); 3898 3893 else 3899 - mm_cid_from_cpu(next, cpu_cid); 3894 + mm_cid_from_cpu(next, cpu_cid, mode); 3900 3895 } 3901 3896 3902 3897 static __always_inline void mm_cid_schedout(struct task_struct *prev) 3903 3898 { 3899 + struct mm_struct *mm = prev->mm; 3900 + unsigned int mode, cid; 3901 + 3904 3902 /* During mode transitions CIDs are temporary and need to be dropped */ 3905 3903 if (likely(!cid_in_transit(prev->mm_cid.cid))) 3906 3904 return; 3907 3905 3908 - mm_drop_cid(prev->mm, cid_from_transit_cid(prev->mm_cid.cid)); 3909 - prev->mm_cid.cid = MM_CID_UNSET; 3906 + mode = READ_ONCE(mm->mm_cid.mode); 3907 + cid = cid_from_transit_cid(prev->mm_cid.cid); 3908 + 3909 + /* 3910 + * If transition mode is done, transfer ownership when the CID is 3911 + * within the convergence range to optimize the next schedule in. 3912 + */ 3913 + if (!cid_in_transit(mode) && cid < READ_ONCE(mm->mm_cid.max_cids)) { 3914 + if (cid_on_cpu(mode)) 3915 + cid = cid_to_cpu_cid(cid); 3916 + 3917 + /* Update both so that the next schedule in goes into the fast path */ 3918 + mm_cid_update_pcpu_cid(mm, cid); 3919 + prev->mm_cid.cid = cid; 3920 + } else { 3921 + mm_drop_cid(mm, cid); 3922 + prev->mm_cid.cid = MM_CID_UNSET; 3923 + } 3910 3924 } 3911 3925 3912 3926 static inline void mm_cid_switch_to(struct task_struct *prev, struct task_struct *next)
+5 -2
kernel/trace/trace.h
··· 68 68 #undef __field_fn 69 69 #define __field_fn(type, item) type item; 70 70 71 + #undef __field_packed 72 + #define __field_packed(type, item) type item; 73 + 71 74 #undef __field_struct 72 75 #define __field_struct(type, item) __field(type, item) 73 76 74 77 #undef __field_desc 75 78 #define __field_desc(type, container, item) 76 79 77 - #undef __field_packed 78 - #define __field_packed(type, container, item) 80 + #undef __field_desc_packed 81 + #define __field_desc_packed(type, container, item) 79 82 80 83 #undef __array 81 84 #define __array(type, item, size) type item[size];
+16 -16
kernel/trace/trace_entries.h
··· 79 79 80 80 F_STRUCT( 81 81 __field_struct( struct ftrace_graph_ent, graph_ent ) 82 - __field_packed( unsigned long, graph_ent, func ) 83 - __field_packed( unsigned long, graph_ent, depth ) 82 + __field_desc_packed(unsigned long, graph_ent, func ) 83 + __field_desc_packed(unsigned long, graph_ent, depth ) 84 84 __dynamic_array(unsigned long, args ) 85 85 ), 86 86 ··· 96 96 97 97 F_STRUCT( 98 98 __field_struct( struct fgraph_retaddr_ent, graph_rent ) 99 - __field_packed( unsigned long, graph_rent.ent, func ) 100 - __field_packed( unsigned long, graph_rent.ent, depth ) 101 - __field_packed( unsigned long, graph_rent, retaddr ) 99 + __field_desc_packed( unsigned long, graph_rent.ent, func ) 100 + __field_desc_packed( unsigned long, graph_rent.ent, depth ) 101 + __field_desc_packed( unsigned long, graph_rent, retaddr ) 102 102 __dynamic_array(unsigned long, args ) 103 103 ), 104 104 ··· 123 123 124 124 F_STRUCT( 125 125 __field_struct( struct ftrace_graph_ret, ret ) 126 - __field_packed( unsigned long, ret, func ) 127 - __field_packed( unsigned long, ret, retval ) 128 - __field_packed( unsigned int, ret, depth ) 129 - __field_packed( unsigned int, ret, overrun ) 130 - __field(unsigned long long, calltime ) 131 - __field(unsigned long long, rettime ) 126 + __field_desc_packed( unsigned long, ret, func ) 127 + __field_desc_packed( unsigned long, ret, retval ) 128 + __field_desc_packed( unsigned int, ret, depth ) 129 + __field_desc_packed( unsigned int, ret, overrun ) 130 + __field_packed(unsigned long long, calltime) 131 + __field_packed(unsigned long long, rettime ) 132 132 ), 133 133 134 134 F_printk("<-- %ps (%u) (start: %llx end: %llx) over: %u retval: %lx", ··· 146 146 147 147 F_STRUCT( 148 148 __field_struct( struct ftrace_graph_ret, ret ) 149 - __field_packed( unsigned long, ret, func ) 150 - __field_packed( unsigned int, ret, depth ) 151 - __field_packed( unsigned int, ret, overrun ) 152 - __field(unsigned long long, calltime ) 153 - __field(unsigned long long, rettime ) 149 + __field_desc_packed( unsigned long, ret, func ) 150 + __field_desc_packed( unsigned int, ret, depth ) 151 + __field_desc_packed( unsigned int, ret, overrun ) 152 + __field_packed(unsigned long long, calltime ) 153 + __field_packed(unsigned long long, rettime ) 154 154 ), 155 155 156 156 F_printk("<-- %ps (%u) (start: %llx end: %llx) over: %u",
+15 -6
kernel/trace/trace_export.c
··· 42 42 #undef __field_fn 43 43 #define __field_fn(type, item) type item; 44 44 45 + #undef __field_packed 46 + #define __field_packed(type, item) type item; 47 + 45 48 #undef __field_desc 46 49 #define __field_desc(type, container, item) type item; 47 50 48 - #undef __field_packed 49 - #define __field_packed(type, container, item) type item; 51 + #undef __field_desc_packed 52 + #define __field_desc_packed(type, container, item) type item; 50 53 51 54 #undef __array 52 55 #define __array(type, item, size) type item[size]; ··· 107 104 #undef __field_fn 108 105 #define __field_fn(_type, _item) __field_ext(_type, _item, FILTER_TRACE_FN) 109 106 107 + #undef __field_packed 108 + #define __field_packed(_type, _item) __field_ext_packed(_type, _item, FILTER_OTHER) 109 + 110 110 #undef __field_desc 111 111 #define __field_desc(_type, _container, _item) __field_ext(_type, _item, FILTER_OTHER) 112 112 113 - #undef __field_packed 114 - #define __field_packed(_type, _container, _item) __field_ext_packed(_type, _item, FILTER_OTHER) 113 + #undef __field_desc_packed 114 + #define __field_desc_packed(_type, _container, _item) __field_ext_packed(_type, _item, FILTER_OTHER) 115 115 116 116 #undef __array 117 117 #define __array(_type, _item, _len) { \ ··· 152 146 #undef __field_fn 153 147 #define __field_fn(type, item) 154 148 149 + #undef __field_packed 150 + #define __field_packed(type, item) 151 + 155 152 #undef __field_desc 156 153 #define __field_desc(type, container, item) 157 154 158 - #undef __field_packed 159 - #define __field_packed(type, container, item) 155 + #undef __field_desc_packed 156 + #define __field_desc_packed(type, container, item) 160 157 161 158 #undef __array 162 159 #define __array(type, item, len)
+30 -12
lib/buildid.c
··· 279 279 /* enough for Elf64_Ehdr, Elf64_Phdr, and all the smaller requests */ 280 280 #define MAX_FREADER_BUF_SZ 64 281 281 282 - static int __build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, 282 + static int __build_id_parse(struct file *file, unsigned char *build_id, 283 283 __u32 *size, bool may_fault) 284 284 { 285 285 const Elf32_Ehdr *ehdr; ··· 287 287 char buf[MAX_FREADER_BUF_SZ]; 288 288 int ret; 289 289 290 - /* only works for page backed storage */ 291 - if (!vma->vm_file) 292 - return -EINVAL; 293 - 294 - freader_init_from_file(&r, buf, sizeof(buf), vma->vm_file, may_fault); 290 + freader_init_from_file(&r, buf, sizeof(buf), file, may_fault); 295 291 296 292 /* fetch first 18 bytes of ELF header for checks */ 297 293 ehdr = freader_fetch(&r, 0, offsetofend(Elf32_Ehdr, e_type)); ··· 315 319 return ret; 316 320 } 317 321 318 - /* 319 - * Parse build ID of ELF file mapped to vma 322 + /** 323 + * build_id_parse_nofault() - Parse build ID of ELF file mapped to vma 320 324 * @vma: vma object 321 325 * @build_id: buffer to store build id, at least BUILD_ID_SIZE long 322 326 * @size: returns actual build id size in case of success ··· 328 332 */ 329 333 int build_id_parse_nofault(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size) 330 334 { 331 - return __build_id_parse(vma, build_id, size, false /* !may_fault */); 335 + if (!vma->vm_file) 336 + return -EINVAL; 337 + 338 + return __build_id_parse(vma->vm_file, build_id, size, false /* !may_fault */); 332 339 } 333 340 334 - /* 335 - * Parse build ID of ELF file mapped to VMA 341 + /** 342 + * build_id_parse() - Parse build ID of ELF file mapped to VMA 336 343 * @vma: vma object 337 344 * @build_id: buffer to store build id, at least BUILD_ID_SIZE long 338 345 * @size: returns actual build id size in case of success ··· 347 348 */ 348 349 int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size) 349 350 { 350 - return __build_id_parse(vma, build_id, size, true /* may_fault */); 351 + if (!vma->vm_file) 352 + return -EINVAL; 353 + 354 + return __build_id_parse(vma->vm_file, build_id, size, true /* may_fault */); 355 + } 356 + 357 + /** 358 + * build_id_parse_file() - Parse build ID of ELF file 359 + * @file: file object 360 + * @build_id: buffer to store build id, at least BUILD_ID_SIZE long 361 + * @size: returns actual build id size in case of success 362 + * 363 + * Assumes faultable context and can cause page faults to bring in file data 364 + * into page cache. 365 + * 366 + * Return: 0 on success; negative error, otherwise 367 + */ 368 + int build_id_parse_file(struct file *file, unsigned char *build_id, __u32 *size) 369 + { 370 + return __build_id_parse(file, build_id, size, true /* may_fault */); 351 371 } 352 372 353 373 /**
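All three entry points now share __build_id_parse() and differ only in what they take and whether they may fault, so the choice is dictated by calling context. A summary sketch rather than new API:

	err = build_id_parse(vma, buf, &sz);		/* sleepable, vma in hand */
	err = build_id_parse_nofault(vma, buf, &sz);	/* may not fault, e.g. tracing paths */
	err = build_id_parse_file(file, buf, &sz);	/* sleepable, only a file reference */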
+20 -22
mm/memory-failure.c
··· 2411 2411 * In fact it's dangerous to directly bump up page count from 0, 2412 2412 * that may make page_ref_freeze()/page_ref_unfreeze() mismatch. 2413 2413 */ 2414 - if (!(flags & MF_COUNT_INCREASED)) { 2415 - res = get_hwpoison_page(p, flags); 2416 - if (!res) { 2417 - if (is_free_buddy_page(p)) { 2418 - if (take_page_off_buddy(p)) { 2419 - page_ref_inc(p); 2420 - res = MF_RECOVERED; 2421 - } else { 2422 - /* We lost the race, try again */ 2423 - if (retry) { 2424 - ClearPageHWPoison(p); 2425 - retry = false; 2426 - goto try_again; 2427 - } 2428 - res = MF_FAILED; 2429 - } 2430 - res = action_result(pfn, MF_MSG_BUDDY, res); 2414 + res = get_hwpoison_page(p, flags); 2415 + if (!res) { 2416 + if (is_free_buddy_page(p)) { 2417 + if (take_page_off_buddy(p)) { 2418 + page_ref_inc(p); 2419 + res = MF_RECOVERED; 2431 2420 } else { 2432 - res = action_result(pfn, MF_MSG_KERNEL_HIGH_ORDER, MF_IGNORED); 2421 + /* We lost the race, try again */ 2422 + if (retry) { 2423 + ClearPageHWPoison(p); 2424 + retry = false; 2425 + goto try_again; 2426 + } 2427 + res = MF_FAILED; 2433 2428 } 2434 - goto unlock_mutex; 2435 - } else if (res < 0) { 2436 - res = action_result(pfn, MF_MSG_GET_HWPOISON, MF_IGNORED); 2437 - goto unlock_mutex; 2429 + res = action_result(pfn, MF_MSG_BUDDY, res); 2430 + } else { 2431 + res = action_result(pfn, MF_MSG_KERNEL_HIGH_ORDER, MF_IGNORED); 2438 2432 } 2433 + goto unlock_mutex; 2434 + } else if (res < 0) { 2435 + res = action_result(pfn, MF_MSG_GET_HWPOISON, MF_IGNORED); 2436 + goto unlock_mutex; 2439 2437 } 2440 2438 2441 2439 folio = page_folio(p);
+14 -9
mm/shmem.c
··· 1211 1211 swaps_freed = shmem_free_swap(mapping, indices[i],
 1212 1212 end - 1, folio);
 1213 1213 if (!swaps_freed) {
 1214 - /*
 1215 - * If found a large swap entry cross the end border,
 1216 - * skip it as the truncate_inode_partial_folio above
 1217 - * should have at least zerod its content once.
 1218 - */
 1214 + pgoff_t base = indices[i];
 1215 +
 1219 1216 order = shmem_confirm_swap(mapping, indices[i],
 1220 1217 radix_to_swp_entry(folio));
 1221 - if (order > 0 && indices[i] + (1 << order) > end)
 1222 - continue;
 1223 - /* Swap was replaced by page: retry */
 1224 - index = indices[i];
 1218 + /*
 1219 + * If we found a large swap entry crossing the end or
 1220 + * start border, skip it as the truncate_inode_partial_folio
 1221 + * above should have at least zeroed its content once.
 1222 + */
 1223 + if (order > 0) {
 1224 + base = round_down(base, 1 << order);
 1225 + if (base < start || base + (1 << order) > end)
 1226 + continue;
 1227 + }
 1228 + /* Swap was replaced by page or extended, retry */
 1229 + index = base;
 1225 1230 break;
 1226 1231 }
 1227 1232 nr_swaps_freed += swaps_freed;
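A quick worked case for the new bounds check, with illustrative numbers: truncating the range [start = 10, end = 20) and finding a swap entry of order 4 recorded at index 12:

	base = round_down(12, 1 << 4);			/* base = 0 */
	if (base < start || base + (1 << 4) > end)	/* 0 < 10, so skip */
		continue;

The entry straddles the start boundary, so it is skipped; truncate_inode_partial_folio() above is expected to have zeroed the truncated part already.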
+5 -1
mm/slub.c
··· 6689 6689 static noinline 6690 6690 void memcg_alloc_abort_single(struct kmem_cache *s, void *object) 6691 6691 { 6692 + struct slab *slab = virt_to_slab(object); 6693 + 6694 + alloc_tagging_slab_free_hook(s, slab, &object, 1); 6695 + 6692 6696 if (likely(slab_free_hook(s, object, slab_want_init_on_free(s), false))) 6693 - do_slab_free(s, virt_to_slab(object), object, object, 1, _RET_IP_); 6697 + do_slab_free(s, slab, object, object, 1, _RET_IP_); 6694 6698 } 6695 6699 #endif 6696 6700
+4 -4
net/core/filter.c
··· 2289 2289 2290 2290 err = bpf_out_neigh_v6(net, skb, dev, nh); 2291 2291 if (unlikely(net_xmit_eval(err))) 2292 - DEV_STATS_INC(dev, tx_errors); 2292 + dev_core_stats_tx_dropped_inc(dev); 2293 2293 else 2294 2294 ret = NET_XMIT_SUCCESS; 2295 2295 goto out_xmit; 2296 2296 out_drop: 2297 - DEV_STATS_INC(dev, tx_errors); 2297 + dev_core_stats_tx_dropped_inc(dev); 2298 2298 kfree_skb(skb); 2299 2299 out_xmit: 2300 2300 return ret; ··· 2396 2396 2397 2397 err = bpf_out_neigh_v4(net, skb, dev, nh); 2398 2398 if (unlikely(net_xmit_eval(err))) 2399 - DEV_STATS_INC(dev, tx_errors); 2399 + dev_core_stats_tx_dropped_inc(dev); 2400 2400 else 2401 2401 ret = NET_XMIT_SUCCESS; 2402 2402 goto out_xmit; 2403 2403 out_drop: 2404 - DEV_STATS_INC(dev, tx_errors); 2404 + dev_core_stats_tx_dropped_inc(dev); 2405 2405 kfree_skb(skb); 2406 2406 out_xmit: 2407 2407 return ret;
+2
net/core/gro.c
··· 265 265 goto out; 266 266 } 267 267 268 + /* NICs can feed encapsulated packets into GRO */ 269 + skb->encapsulation = 0; 268 270 rcu_read_lock(); 269 271 list_for_each_entry_rcu(ptype, head, list) { 270 272 if (ptype->type != type || !ptype->callbacks.gro_complete)
+34 -16
net/core/net-procfs.c
··· 170 170 .show = softnet_seq_show,
 171 171 };
 172 172
 173 + struct ptype_iter_state {
 174 + struct seq_net_private p;
 175 + struct net_device *dev;
 176 + };
 177 +
 173 178 static void *ptype_get_idx(struct seq_file *seq, loff_t pos)
 174 179 {
 180 + struct ptype_iter_state *iter = seq->private;
 175 181 struct list_head *ptype_list = NULL;
 176 182 struct packet_type *pt = NULL;
 177 183 struct net_device *dev;
 ··· 187 181 for_each_netdev_rcu(seq_file_net(seq), dev) {
 188 182 ptype_list = &dev->ptype_all;
 189 183 list_for_each_entry_rcu(pt, ptype_list, list) {
 190 - if (i == pos)
 184 + if (i == pos) {
 185 + iter->dev = dev;
 191 186 return pt;
 187 + }
 192 188 ++i;
 193 189 }
 194 190 }
 191 +
 192 + iter->dev = NULL;
 195 193
 196 194 list_for_each_entry_rcu(pt, &seq_file_net(seq)->ptype_all, list) {
 197 195 if (i == pos)
 ··· 228 218
 229 219 static void *ptype_seq_next(struct seq_file *seq, void *v, loff_t *pos)
 230 220 {
 221 + struct ptype_iter_state *iter = seq->private;
 231 222 struct net *net = seq_file_net(seq);
 232 223 struct net_device *dev;
 233 224 struct packet_type *pt;
 ··· 240 229 return ptype_get_idx(seq, 0);
 241 230
 242 231 pt = v;
 243 - nxt = pt->list.next;
 244 - if (pt->dev) {
 245 - if (nxt != &pt->dev->ptype_all)
 232 + nxt = READ_ONCE(pt->list.next);
 233 + dev = iter->dev;
 234 + if (dev) {
 235 + if (nxt != &dev->ptype_all)
 246 236 goto found;
 247 237
 248 - dev = pt->dev;
 249 238 for_each_netdev_continue_rcu(seq_file_net(seq), dev) {
 250 - if (!list_empty(&dev->ptype_all)) {
 251 - nxt = dev->ptype_all.next;
 239 + nxt = READ_ONCE(dev->ptype_all.next);
 240 + if (nxt != &dev->ptype_all) {
 241 + iter->dev = dev;
 252 242 goto found;
 253 243 }
 254 244 }
 255 - nxt = net->ptype_all.next;
 245 + iter->dev = NULL;
 246 + nxt = READ_ONCE(net->ptype_all.next);
 256 247 goto net_ptype_all;
 257 248 }
 ··· 265 252
 266 253 if (nxt == &net->ptype_all) {
 267 254 /* continue with ->ptype_specific if it's not empty */
 268 - nxt = net->ptype_specific.next;
 255 + nxt = READ_ONCE(net->ptype_specific.next);
 269 256 if (nxt != &net->ptype_specific)
 270 257 goto found;
 271 258 }
 272 259
 273 260 hash = 0;
 274 - nxt = ptype_base[0].next;
 261 + nxt = READ_ONCE(ptype_base[0].next);
 275 262 } else
 276 263 hash = ntohs(pt->type) & PTYPE_HASH_MASK;
 277 264
 278 265 while (nxt == &ptype_base[hash]) {
 279 266 if (++hash >= PTYPE_HASH_SIZE)
 280 267 return NULL;
 281 - nxt = ptype_base[hash].next;
 268 + nxt = READ_ONCE(ptype_base[hash].next);
 282 269 }
 283 270 found:
 284 271 return list_entry(nxt, struct packet_type, list);
 ··· 292 279
 293 280 static int ptype_seq_show(struct seq_file *seq, void *v)
 294 281 {
 282 + struct ptype_iter_state *iter = seq->private;
 295 283 struct packet_type *pt = v;
 284 + struct net_device *dev;
 296 285
 297 286 if (v == SEQ_START_TOKEN) {
 298 287 seq_puts(seq, "Type Device Function\n");
 299 - else if ((!pt->af_packet_net || net_eq(pt->af_packet_net, seq_file_net(seq))) &&
 300 - (!pt->dev || net_eq(dev_net(pt->dev), seq_file_net(seq)))) {
 288 + return 0;
 289 + }
 290 + dev = iter->dev;
 291 + if ((!pt->af_packet_net || net_eq(pt->af_packet_net, seq_file_net(seq))) &&
 292 + (!dev || net_eq(dev_net(dev), seq_file_net(seq)))) {
 301 293 if (pt->type == htons(ETH_P_ALL))
 302 294 seq_puts(seq, "ALL ");
 303 295 else
 304 296 seq_printf(seq, "%04x", ntohs(pt->type));
 305 297
 306 298 seq_printf(seq, " %-8s %ps\n",
 307 299 pt->dev ?
dev->name : "", pt->func); 308 300 } 309 301 310 302 return 0; ··· 333 315 &softnet_seq_ops)) 334 316 goto out_dev; 335 317 if (!proc_create_net("ptype", 0444, net->proc_net, &ptype_seq_ops, 336 - sizeof(struct seq_net_private))) 318 + sizeof(struct ptype_iter_state))) 337 319 goto out_softnet; 338 320 339 321 if (wext_proc_init(net))
-3
net/ethtool/common.c
··· 862 862 ctx->key_off = key_off; 863 863 ctx->priv_size = ops->rxfh_priv_size; 864 864 865 - ctx->hfunc = ETH_RSS_HASH_NO_CHANGE; 866 - ctx->input_xfrm = RXH_XFRM_NO_CHANGE; 867 - 868 865 return ctx; 869 866 } 870 867
+2 -7
net/ethtool/rss.c
··· 824 824 static int 825 825 ethnl_rss_set(struct ethnl_req_info *req_info, struct genl_info *info) 826 826 { 827 - bool indir_reset = false, indir_mod, xfrm_sym = false; 828 827 struct rss_req_info *request = RSS_REQINFO(req_info); 828 + bool indir_reset = false, indir_mod, xfrm_sym; 829 829 struct ethtool_rxfh_context *ctx = NULL; 830 830 struct net_device *dev = req_info->dev; 831 831 bool mod = false, fields_mod = false; ··· 860 860 861 861 rxfh.input_xfrm = data.input_xfrm; 862 862 ethnl_update_u8(&rxfh.input_xfrm, tb[ETHTOOL_A_RSS_INPUT_XFRM], &mod); 863 - /* For drivers which don't support input_xfrm it will be set to 0xff 864 - * in the RSS context info. In all other case input_xfrm != 0 means 865 - * symmetric hashing is requested. 866 - */ 867 - if (!request->rss_context || ops->rxfh_per_ctx_key) 868 - xfrm_sym = rxfh.input_xfrm || data.input_xfrm; 863 + xfrm_sym = rxfh.input_xfrm || data.input_xfrm; 869 864 if (rxfh.input_xfrm == data.input_xfrm) 870 865 rxfh.input_xfrm = RXH_XFRM_NO_CHANGE; 871 866
+2 -1
net/ipv6/ip6_fib.c
··· 1138 1138 fib6_set_expires(iter, rt->expires); 1139 1139 fib6_add_gc_list(iter); 1140 1140 } 1141 - if (!(rt->fib6_flags & (RTF_ADDRCONF | RTF_PREFIX_RT))) { 1141 + if (!(rt->fib6_flags & (RTF_ADDRCONF | RTF_PREFIX_RT)) && 1142 + !iter->fib6_nh->fib_nh_gw_family) { 1142 1143 iter->fib6_flags &= ~RTF_ADDRCONF; 1143 1144 iter->fib6_flags &= ~RTF_PREFIX_RT; 1144 1145 }
+1 -1
net/netfilter/nf_tables_api.c
··· 5914 5914 5915 5915 list_for_each_entry(catchall, &set->catchall_list, list) { 5916 5916 ext = nft_set_elem_ext(set, catchall->elem); 5917 - if (!nft_set_elem_active(ext, genmask)) 5917 + if (nft_set_elem_active(ext, genmask)) 5918 5918 continue; 5919 5919 5920 5920 nft_clear(ctx->net, ext);
+6 -7
net/sched/cls_u32.c
··· 161 161 int toff = off + key->off + (off2 & key->offmask); 162 162 __be32 *data, hdata; 163 163 164 - if (skb_headroom(skb) + toff > INT_MAX) 165 - goto out; 166 - 167 - data = skb_header_pointer(skb, toff, 4, &hdata); 164 + data = skb_header_pointer_careful(skb, toff, 4, 165 + &hdata); 168 166 if (!data) 169 167 goto out; 170 168 if ((*data ^ key->val) & key->mask) { ··· 212 214 if (ht->divisor) { 213 215 __be32 *data, hdata; 214 216 215 - data = skb_header_pointer(skb, off + n->sel.hoff, 4, 216 - &hdata); 217 + data = skb_header_pointer_careful(skb, 218 + off + n->sel.hoff, 219 + 4, &hdata); 217 220 if (!data) 218 221 goto out; 219 222 sel = ht->divisor & u32_hash_fold(*data, &n->sel, ··· 228 229 if (n->sel.flags & TC_U32_VAROFFSET) { 229 230 __be16 *data, hdata; 230 231 231 - data = skb_header_pointer(skb, 232 + data = skb_header_pointer_careful(skb, 232 233 off + n->sel.offoff, 233 234 2, &hdata); 234 235 if (!data)
+2 -2
net/tipc/crypto.c
··· 1219 1219 rx = c; 1220 1220 tx = tipc_net(rx->net)->crypto_tx; 1221 1221 if (cancel_delayed_work(&rx->work)) { 1222 - kfree(rx->skey); 1222 + kfree_sensitive(rx->skey); 1223 1223 rx->skey = NULL; 1224 1224 atomic_xchg(&rx->key_distr, 0); 1225 1225 tipc_node_put(rx->node); ··· 2394 2394 break; 2395 2395 default: 2396 2396 synchronize_rcu(); 2397 - kfree(rx->skey); 2397 + kfree_sensitive(rx->skey); 2398 2398 rx->skey = NULL; 2399 2399 break; 2400 2400 }
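kfree_sensitive() differs from kfree() in that it scrubs the allocation before returning it to the allocator, using memzero_explicit() so the clear cannot be optimized away, which is the right tool for key material such as rx->skey. Roughly, and only as an illustration of the semantics:

	/* approximately what kfree_sensitive(rx->skey) does */
	memzero_explicit(rx->skey, ksize(rx->skey));
	kfree(rx->skey);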
+9 -11
scripts/livepatch/init.c
··· 9 9 #include <linux/slab.h> 10 10 #include <linux/livepatch.h> 11 11 12 - extern struct klp_object_ext __start_klp_objects[]; 13 - extern struct klp_object_ext __stop_klp_objects[]; 14 - 15 12 static struct klp_patch *patch; 16 13 17 14 static int __init livepatch_mod_init(void) 18 15 { 16 + struct klp_object_ext *obj_exts; 17 + size_t obj_exts_sec_size; 19 18 struct klp_object *objs; 20 19 unsigned int nr_objs; 21 20 int ret; 22 21 23 - nr_objs = __stop_klp_objects - __start_klp_objects; 24 - 22 + obj_exts = klp_find_section_by_name(THIS_MODULE, ".init.klp_objects", 23 + &obj_exts_sec_size); 24 + nr_objs = obj_exts_sec_size / sizeof(*obj_exts); 25 25 if (!nr_objs) { 26 26 pr_err("nothing to patch!\n"); 27 27 ret = -EINVAL; ··· 41 41 } 42 42 43 43 for (int i = 0; i < nr_objs; i++) { 44 - struct klp_object_ext *obj_ext = __start_klp_objects + i; 44 + struct klp_object_ext *obj_ext = obj_exts + i; 45 45 struct klp_func_ext *funcs_ext = obj_ext->funcs; 46 46 unsigned int nr_funcs = obj_ext->nr_funcs; 47 47 struct klp_func *funcs = objs[i].funcs; ··· 90 90 91 91 static void __exit livepatch_mod_exit(void) 92 92 { 93 - unsigned int nr_objs; 93 + struct klp_object *obj; 94 94 95 - nr_objs = __stop_klp_objects - __start_klp_objects; 96 - 97 - for (int i = 0; i < nr_objs; i++) 98 - kfree(patch->objs[i].funcs); 95 + klp_for_each_object_static(patch, obj) 96 + kfree(obj->funcs); 99 97 100 98 kfree(patch->objs); 101 99 kfree(patch);
+4
scripts/livepatch/klp-build
··· 249 249 [[ -v CONFIG_GCC_PLUGIN_RANDSTRUCT ]] && \ 250 250 die "kernel option 'CONFIG_GCC_PLUGIN_RANDSTRUCT' not supported" 251 251 252 + [[ -v CONFIG_AS_IS_LLVM ]] && \ 253 + [[ "$CONFIG_AS_VERSION" -lt 200000 ]] && \ 254 + die "Clang assembler version < 20 not supported" 255 + 252 256 return 0 253 257 } 254 258
+2 -7
scripts/module.lds.S
··· 34 34 35 35 __patchable_function_entries : { *(__patchable_function_entries) } 36 36 37 - __klp_funcs 0: ALIGN(8) { KEEP(*(__klp_funcs)) } 38 - 39 - __klp_objects 0: ALIGN(8) { 40 - __start_klp_objects = .; 41 - KEEP(*(__klp_objects)) 42 - __stop_klp_objects = .; 43 - } 37 + .init.klp_funcs 0 : ALIGN(8) { KEEP(*(.init.klp_funcs)) } 38 + .init.klp_objects 0 : ALIGN(8) { KEEP(*(.init.klp_objects)) } 44 39 45 40 #ifdef CONFIG_ARCH_USES_CFI_TRAPS 46 41 __kcfi_traps : { KEEP(*(.kcfi_traps)) }
-9
security/lsm.h
··· 37 37 38 38 /* LSM framework initializers */ 39 39 40 - #ifdef CONFIG_MMU 41 - int min_addr_init(void); 42 - #else 43 - static inline int min_addr_init(void) 44 - { 45 - return 0; 46 - } 47 - #endif /* CONFIG_MMU */ 48 - 49 40 #ifdef CONFIG_SECURITYFS 50 41 int securityfs_init(void); 51 42 #else
+1 -6
security/lsm_init.c
··· 489 489 */ 490 490 static int __init security_initcall_pure(void) 491 491 { 492 - int rc_adr, rc_lsm; 493 - 494 - rc_adr = min_addr_init(); 495 - rc_lsm = lsm_initcall(pure); 496 - 497 - return (rc_adr ? rc_adr : rc_lsm); 492 + return lsm_initcall(pure); 498 493 } 499 494 pure_initcall(security_initcall_pure); 500 495
+2 -3
security/min_addr.c
··· 5 5 #include <linux/sysctl.h> 6 6 #include <linux/minmax.h> 7 7 8 - #include "lsm.h" 9 - 10 8 /* amount of vm to protect from userspace access by both DAC and the LSM*/ 11 9 unsigned long mmap_min_addr; 12 10 /* amount of vm to protect from userspace using CAP_SYS_RAWIO (DAC) */ ··· 52 54 }, 53 55 }; 54 56 55 - int __init min_addr_init(void) 57 + static int __init mmap_min_addr_init(void) 56 58 { 57 59 register_sysctl_init("vm", min_addr_sysctl_table); 58 60 update_mmap_min_addr(); 59 61 60 62 return 0; 61 63 } 64 + pure_initcall(mmap_min_addr_init);
+16
sound/soc/amd/acp/acp-sdw-legacy-mach.c
··· 95 95 }, 96 96 .driver_data = (void *)(ASOC_SDW_CODEC_SPKR), 97 97 }, 98 + { 99 + .callback = soc_sdw_quirk_cb, 100 + .matches = { 101 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 102 + DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "21YW"), 103 + }, 104 + .driver_data = (void *)(ASOC_SDW_CODEC_SPKR), 105 + }, 106 + { 107 + .callback = soc_sdw_quirk_cb, 108 + .matches = { 109 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 110 + DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "21YX"), 111 + }, 112 + .driver_data = (void *)(ASOC_SDW_CODEC_SPKR), 113 + }, 98 114 {} 99 115 }; 100 116
+37
sound/soc/amd/acp/amd-acp70-acpi-match.c
··· 531 531 {} 532 532 }; 533 533 534 + static const struct snd_soc_acpi_adr_device rt1320_0_single_adr[] = { 535 + { 536 + .adr = 0x000030025D132001ull, 537 + .num_endpoints = 1, 538 + .endpoints = &single_endpoint, 539 + .name_prefix = "rt1320-1" 540 + } 541 + }; 542 + 543 + static const struct snd_soc_acpi_adr_device rt722_1_single_adr[] = { 544 + { 545 + .adr = 0x000130025d072201ull, 546 + .num_endpoints = ARRAY_SIZE(rt722_endpoints), 547 + .endpoints = rt722_endpoints, 548 + .name_prefix = "rt722" 549 + } 550 + }; 551 + 552 + static const struct snd_soc_acpi_link_adr acp70_rt1320_l0_rt722_l1[] = { 553 + { 554 + .mask = BIT(0), 555 + .num_adr = ARRAY_SIZE(rt1320_0_single_adr), 556 + .adr_d = rt1320_0_single_adr, 557 + }, 558 + { 559 + .mask = BIT(1), 560 + .num_adr = ARRAY_SIZE(rt722_1_single_adr), 561 + .adr_d = rt722_1_single_adr, 562 + }, 563 + {} 564 + }; 565 + 534 566 struct snd_soc_acpi_mach snd_soc_acpi_amd_acp70_sdw_machines[] = { 567 + { 568 + .link_mask = BIT(0) | BIT(1), 569 + .links = acp70_rt1320_l0_rt722_l1, 570 + .drv_name = "amd_sdw", 571 + }, 535 572 { 536 573 .link_mask = BIT(0) | BIT(1), 537 574 .links = acp70_rt722_l0_rt1320_l1,
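Each 64-bit .adr above packs a SoundWire DisCo address. As a rough aid for reading these tables, here is a small decoder; the bit layout (link ID in bits 51:48, unique ID in 43:40, manufacturer ID in 39:24, part ID in 23:8) is the one used by include/linux/soundwire/sdw.h and should be treated as an assumption here:

	#include <stdio.h>
	#include <stdint.h>

	/* decode the DisCo _ADR fields relevant to these match tables */
	static void decode_adr(uint64_t adr)
	{
		printf("link %u, mfg 0x%04x, part 0x%04x, unique %u\n",
		       (unsigned int)((adr >> 48) & 0xf),
		       (unsigned int)((adr >> 24) & 0xffff),
		       (unsigned int)((adr >> 8) & 0xffff),
		       (unsigned int)((adr >> 40) & 0xf));
	}

	int main(void)
	{
		decode_adr(0x000030025D132001ull);	/* rt1320 amp, link 0 */
		decode_adr(0x000130025d072201ull);	/* rt722 codec, link 1 */
		return 0;
	}

Decoded this way, the new acp70_rt1320_l0_rt722_l1 table is the mirror of the existing acp70_rt722_l0_rt1320_l1 one: the same two Realtek parts (manufacturer ID 0x025d), with the amplifier and the codec swapped between links 0 and 1, which is why both machine entries carry the same link_mask and drv_name.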
+7 -1
sound/soc/amd/yc/acp6x-mach.c
··· 696 696 DMI_MATCH(DMI_BOARD_NAME, "XyloD5_RBU"), 697 697 } 698 698 }, 699 - 699 + { 700 + .driver_data = &acp6x_card, 701 + .matches = { 702 + DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC."), 703 + DMI_MATCH(DMI_PRODUCT_NAME, "Vivobook_ASUSLaptop M6501RR_M6501RR"), 704 + } 705 + }, 700 706 {} 701 707 }; 702 708
+2 -1
sound/soc/codecs/aw88261.c
··· 424 424 if (ret) 425 425 break; 426 426 427 + /* keep all three bits from current hw status */ 427 428 read_val &= (~AW88261_AMPPD_MASK) | (~AW88261_PWDN_MASK) | 428 429 (~AW88261_HMUTE_MASK); 429 - reg_val &= (AW88261_AMPPD_MASK | AW88261_PWDN_MASK | AW88261_HMUTE_MASK); 430 + reg_val &= (AW88261_AMPPD_MASK & AW88261_PWDN_MASK & AW88261_HMUTE_MASK); 430 431 reg_val |= read_val; 431 432 432 433 /* enable uls hmute */
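The one-character | to & change is easy to misread without the aw88xx mask convention: each *_MASK appears to be the complement of its field bits, so ~MASK recovers the field itself. OR-ing the masks together gives ~(field1 & field2 & field3), which is all-ones for disjoint fields, so the old code never cleared the stale field bits from reg_val; AND-ing them gives ~(field1 | field2 | field3), which clears all three fields before the bits preserved in read_val are merged back in. A self-contained sketch with made-up bit positions (not the real register layout):

	#include <stdio.h>

	#define FIELD_AMPPD	(1u << 1)
	#define FIELD_PWDN	(1u << 0)
	#define FIELD_HMUTE	(1u << 8)
	/* aw88xx convention: a mask is the complement of its field */
	#define MASK_AMPPD	(~FIELD_AMPPD)
	#define MASK_PWDN	(~FIELD_PWDN)
	#define MASK_HMUTE	(~FIELD_HMUTE)

	int main(void)
	{
		unsigned int hw = 0x0000;	/* current hw: all three fields off */
		unsigned int reg_val = 0xffff;	/* new value about to be written */

		/* keep only the three field bits of the current hw value */
		unsigned int read_val =
			hw & ((~MASK_AMPPD) | (~MASK_PWDN) | (~MASK_HMUTE));

		/* buggy: M1 | M2 | M3 == ~(f1 & f2 & f3) == ~0 for disjoint
		 * fields, so nothing is cleared before merging read_val */
		unsigned int bad =
			(reg_val & (MASK_AMPPD | MASK_PWDN | MASK_HMUTE)) | read_val;

		/* fixed: M1 & M2 & M3 == ~(f1 | f2 | f3) clears all three
		 * fields, so the values preserved from hw win */
		unsigned int good =
			(reg_val & (MASK_AMPPD & MASK_PWDN & MASK_HMUTE)) | read_val;

		printf("bad  = 0x%04x (stale field bits survive)\n", bad);
		printf("good = 0x%04x (fields taken from hw)\n", good);
		return 0;
	}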
+2 -2
sound/soc/codecs/rt721-sdca.c
··· 245 245 regmap_write(rt721->mbq_regmap, 0x5b10007, 0x2000); 246 246 regmap_write(rt721->mbq_regmap, 0x5B10017, 0x1b0f); 247 247 rt_sdca_index_write(rt721->mbq_regmap, RT721_CBJ_CTRL, 248 - RT721_CBJ_A0_GAT_CTRL1, 0x2a02); 248 + RT721_CBJ_A0_GAT_CTRL1, 0x2205); 249 249 rt_sdca_index_write(rt721->mbq_regmap, RT721_CAP_PORT_CTRL, 250 250 RT721_HP_AMP_2CH_CAL4, 0xa105); 251 251 rt_sdca_index_write(rt721->mbq_regmap, RT721_VENDOR_ANA_CTL, 252 252 RT721_UAJ_TOP_TCON14, 0x3b33); 253 - regmap_write(rt721->mbq_regmap, 0x310400, 0x3023); 253 + regmap_write(rt721->mbq_regmap, 0x310400, 0x3043); 254 254 rt_sdca_index_write(rt721->mbq_regmap, RT721_VENDOR_ANA_CTL, 255 255 RT721_UAJ_TOP_TCON14, 0x3f33); 256 256 rt_sdca_index_write(rt721->mbq_regmap, RT721_VENDOR_ANA_CTL,
+43
sound/soc/codecs/tas2783-sdw.c
··· 1216 1216 return tas_io_init(&slave->dev, slave); 1217 1217 } 1218 1218 1219 + /* 1220 + * TAS2783 requires explicit port prepare during playback stream 1221 + * setup even when simple_ch_prep_sm is enabled. Without this, 1222 + * the port fails to enter the prepared state resulting in no audio output. 1223 + */ 1224 + static int tas_port_prep(struct sdw_slave *slave, struct sdw_prepare_ch *prep_ch, 1225 + enum sdw_port_prep_ops pre_ops) 1226 + { 1227 + struct device *dev = &slave->dev; 1228 + struct sdw_dpn_prop *dpn_prop; 1229 + u32 addr; 1230 + int ret; 1231 + 1232 + dpn_prop = slave->prop.sink_dpn_prop; 1233 + if (!dpn_prop || !dpn_prop->simple_ch_prep_sm) 1234 + return 0; 1235 + 1236 + addr = SDW_DPN_PREPARECTRL(prep_ch->num); 1237 + switch (pre_ops) { 1238 + case SDW_OPS_PORT_PRE_PREP: 1239 + ret = sdw_write_no_pm(slave, addr, prep_ch->ch_mask); 1240 + if (ret) 1241 + dev_err(dev, "prep failed for port %d, err=%d\n", 1242 + prep_ch->num, ret); 1243 + return ret; 1244 + 1245 + case SDW_OPS_PORT_PRE_DEPREP: 1246 + ret = sdw_write_no_pm(slave, addr, 0x00); 1247 + if (ret) 1248 + dev_err(dev, "de-prep failed for port %d, err=%d\n", 1249 + prep_ch->num, ret); 1250 + return ret; 1251 + 1252 + case SDW_OPS_PORT_POST_PREP: 1253 + case SDW_OPS_PORT_POST_DEPREP: 1254 + /* No POST handling required for TAS2783 */ 1255 + return 0; 1256 + } 1257 + 1258 + return 0; 1259 + } 1260 + 1219 1261 static const struct sdw_slave_ops tas_sdw_ops = { 1220 1262 .update_status = tas_update_status, 1263 + .port_prep = tas_port_prep, 1221 1264 }; 1222 1265 1223 1266 static void tas_remove(struct tas2783_prv *tas_dev)
-3
sound/soc/fsl/fsl_xcvr.c
··· 223 223 224 224 xcvr->mode = snd_soc_enum_item_to_val(e, item[0]); 225 225 226 - down_read(&card->snd_card->controls_rwsem); 227 226 fsl_xcvr_activate_ctl(dai, fsl_xcvr_arc_mode_kctl.name, 228 227 (xcvr->mode == FSL_XCVR_MODE_ARC)); 229 228 fsl_xcvr_activate_ctl(dai, fsl_xcvr_earc_capds_kctl.name, 230 229 (xcvr->mode == FSL_XCVR_MODE_EARC)); 231 - up_read(&card->snd_card->controls_rwsem); 232 - 233 230 /* Allow playback for SPDIF only */ 234 231 rtd = snd_soc_get_pcm_runtime(card, card->dai_link); 235 232 rtd->pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream_count =
+8
sound/soc/qcom/sm8250.c
··· 104 104 snd_soc_dai_set_fmt(cpu_dai, fmt); 105 105 snd_soc_dai_set_fmt(codec_dai, codec_dai_fmt); 106 106 break; 107 + case QUINARY_MI2S_RX: 108 + codec_dai_fmt |= SND_SOC_DAIFMT_NB_NF | SND_SOC_DAIFMT_I2S; 109 + snd_soc_dai_set_sysclk(cpu_dai, 110 + Q6AFE_LPASS_CLK_ID_QUI_MI2S_IBIT, 111 + MI2S_BCLK_RATE, SNDRV_PCM_STREAM_PLAYBACK); 112 + snd_soc_dai_set_fmt(cpu_dai, fmt); 113 + snd_soc_dai_set_fmt(codec_dai, codec_dai_fmt); 114 + break; 107 115 default: 108 116 break; 109 117 }
+1 -1
sound/soc/renesas/rz-ssi.c
··· 180 180 static inline struct rz_ssi_stream * 181 181 rz_ssi_stream_get(struct rz_ssi_priv *ssi, struct snd_pcm_substream *substream) 182 182 { 183 - return (ssi->playback.substream == substream) ? &ssi->playback : &ssi->capture; 183 + return (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) ? &ssi->playback : &ssi->capture; 184 184 } 185 185 186 186 static inline bool rz_ssi_is_dma_enabled(struct rz_ssi_priv *ssi)
+2 -2
tools/objtool/check.c
··· 683 683 684 684 key_sym = find_symbol_by_name(file->elf, tmp); 685 685 if (!key_sym) { 686 - if (!opts.module || file->klp) { 686 + if (!opts.module) { 687 687 ERROR("static_call: can't find static_call_key symbol: %s", tmp); 688 688 return -1; 689 689 } ··· 4762 4762 !strcmp(sec->name, "__bug_table") || 4763 4763 !strcmp(sec->name, "__ex_table") || 4764 4764 !strcmp(sec->name, "__jump_table") || 4765 - !strcmp(sec->name, "__klp_funcs") || 4765 + !strcmp(sec->name, ".init.klp_funcs") || 4766 4766 !strcmp(sec->name, "__mcount_loc") || 4767 4767 !strcmp(sec->name, ".llvm.call-graph-profile") || 4768 4768 !strcmp(sec->name, ".llvm_bb_addr_map") ||
+5 -5
tools/objtool/include/objtool/klp.h
··· 6 6 #define SHN_LIVEPATCH 0xff20
 7 7 
 8 8 /*
 9 - * __klp_objects and __klp_funcs are created by klp diff and used by the patch
 10 - * module init code to build the klp_patch, klp_object and klp_func structs
 11 - * needed by the livepatch API.
 9 + * .init.klp_objects and .init.klp_funcs are created by klp diff and used by the
 10 + * patch module init code to build the klp_patch, klp_object and klp_func
 11 + * structs needed by the livepatch API.
 12 12 */
 13 - #define KLP_OBJECTS_SEC "__klp_objects"
 14 - #define KLP_FUNCS_SEC "__klp_funcs"
 13 + #define KLP_OBJECTS_SEC ".init.klp_objects"
 14 + #define KLP_FUNCS_SEC ".init.klp_funcs"
 15 15 
 16 16 /*
 17 17 * __klp_relocs is an intermediate section which is created by klp diff and
+35 -6
tools/objtool/klp-diff.c
··· 364 364 struct symbol *file1_sym, *file2_sym; 365 365 struct symbol *sym1, *sym2; 366 366 367 - /* Correlate locals */ 368 - for (file1_sym = first_file_symbol(e->orig), 369 - file2_sym = first_file_symbol(e->patched); ; 370 - file1_sym = next_file_symbol(e->orig, file1_sym), 371 - file2_sym = next_file_symbol(e->patched, file2_sym)) { 367 + file1_sym = first_file_symbol(e->orig); 368 + file2_sym = first_file_symbol(e->patched); 369 + 370 + /* 371 + * Correlate any locals before the first FILE symbol. This has been 372 + * seen when LTO inexplicably strips the initramfs_data.o FILE symbol 373 + * due to the file only containing data and no code. 374 + */ 375 + for_each_sym(e->orig, sym1) { 376 + if (sym1 == file1_sym || !is_local_sym(sym1)) 377 + break; 378 + 379 + if (dont_correlate(sym1)) 380 + continue; 381 + 382 + for_each_sym(e->patched, sym2) { 383 + if (sym2 == file2_sym || !is_local_sym(sym2)) 384 + break; 385 + 386 + if (sym2->twin || dont_correlate(sym2)) 387 + continue; 388 + 389 + if (strcmp(sym1->demangled_name, sym2->demangled_name)) 390 + continue; 391 + 392 + sym1->twin = sym2; 393 + sym2->twin = sym1; 394 + break; 395 + } 396 + } 397 + 398 + /* Correlate locals after the first FILE symbol */ 399 + for (; ; file1_sym = next_file_symbol(e->orig, file1_sym), 400 + file2_sym = next_file_symbol(e->patched, file2_sym)) { 372 401 373 402 if (!file1_sym && file2_sym) { 374 403 ERROR("FILE symbol mismatch: NULL != %s", file2_sym->name); ··· 1465 1436 } 1466 1437 1467 1438 /* 1468 - * Create __klp_objects and __klp_funcs sections which are intermediate 1439 + * Create .init.klp_objects and .init.klp_funcs sections which are intermediate 1469 1440 * sections provided as input to the patch module's init code for building the 1470 1441 * klp_patch, klp_object and klp_func structs for the livepatch API. 1471 1442 */
+1
tools/testing/selftests/kvm/Makefile.kvm
··· 251 251 LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/$(ARCH)/include 252 252 CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \ 253 253 -Wno-gnu-variable-sized-type-not-at-end -MD -MP -DCONFIG_64BIT \ 254 + -U_FORTIFY_SOURCE \ 254 255 -fno-builtin-memcmp -fno-builtin-memcpy \ 255 256 -fno-builtin-memset -fno-builtin-strnlen \ 256 257 -fno-stack-protector -fno-PIE -fno-strict-aliasing \
+64
tools/testing/selftests/net/udpgro_fwd.sh
··· 162 162 echo " ok"
 163 163 }
 164 164 
 165 + run_test_csum() {
 166 + local -r msg="$1"
 167 + local -r dst="$2"
 168 + local csum_error_filter=UdpInCsumErrors
 169 + local csum_errors
 170 + 
 171 + printf "%-40s" "$msg"
 172 + 
 173 + is_ipv6 "$dst" && csum_error_filter=Udp6InCsumErrors
 174 + 
 175 + ip netns exec "$NS_DST" iperf3 -s -1 >/dev/null &
 176 + wait_local_port_listen "$NS_DST" 5201 tcp
 177 + local spid="$!"
 178 + ip netns exec "$NS_SRC" iperf3 -c "$dst" -t 2 >/dev/null
 179 + local retc="$?"
 180 + wait "$spid"
 181 + local rets="$?"
 182 + if [ "$rets" -ne 0 ] || [ "$retc" -ne 0 ]; then
 183 + echo " fail client exit code $retc, server $rets"
 184 + ret=1
 185 + return
 186 + fi
 187 + 
 188 + csum_errors=$(ip netns exec "$NS_DST" nstat -as "$csum_error_filter" |
 189 + grep "$csum_error_filter" | awk '{print $2}')
 190 + if [ -n "$csum_errors" ] && [ "$csum_errors" -gt 0 ]; then
 191 + echo " fail - csum error on receive $csum_errors, expected 0"
 192 + ret=1
 193 + return
 194 + fi
 195 + echo " ok"
 196 + }
 197 + 
 165 198 run_bench() {
 166 199 local -r msg=$1
 167 200 local -r dst=$2
··· 292 259 # stray traffic on top of the UDP tunnel
 293 260 ip netns exec $NS_SRC $PING -q -c 1 $OL_NET$DST_NAT >/dev/null
 294 261 run_test "GRO fwd over UDP tunnel" $OL_NET$DST_NAT 10 10 $OL_NET$DST
 262 + cleanup
 263 + 
 264 + # force segmentation and re-aggregation
 265 + create_vxlan_pair
 266 + ip netns exec "$NS_DST" ethtool -K veth"$DST" generic-receive-offload on
 267 + ip netns exec "$NS_SRC" ethtool -K veth"$SRC" tso off
 268 + ip -n "$NS_SRC" link set dev veth"$SRC" mtu 1430
 269 + 
 270 + # forward to a 2nd veth pair
 271 + ip -n "$NS_DST" link add br0 type bridge
 272 + ip -n "$NS_DST" link set dev veth"$DST" master br0
 273 + 
 274 + # segment the aggregated TSO packet, without csum offload
 275 + ip -n "$NS_DST" link add veth_segment type veth peer veth_rx
 276 + for FEATURE in tso tx-udp-segmentation tx-checksumming; do
 277 + ip netns exec "$NS_DST" ethtool -K veth_segment "$FEATURE" off
 278 + done
 279 + ip -n "$NS_DST" link set dev veth_segment master br0 up
 280 + ip -n "$NS_DST" link set dev br0 up
 281 + ip -n "$NS_DST" link set dev veth_rx up
 282 + 
 283 + # move the lower layer IP to the last added veth
 284 + for ADDR in "$BM_NET_V4$DST/24" "$BM_NET_V6$DST/64"; do
 285 + # the nodad argument makes iproute emit a harmless warning
 286 + # with ipv4 addresses
 287 + ip -n "$NS_DST" addr del dev veth"$DST" "$ADDR"
 288 + ip -n "$NS_DST" addr add dev veth_rx "$ADDR" \
 289 + nodad 2>/dev/null
 290 + done
 291 + 
 292 + run_test_csum "GSO after GRO" "$OL_NET$DST"
 295 293 cleanup
 296 294 done
+24 -20
virt/kvm/eventfd.c
··· 157 157 }
 158 158 
 159 159 
 160 - /* assumes kvm->irqfds.lock is held */
 161 - static bool
 162 - irqfd_is_active(struct kvm_kernel_irqfd *irqfd)
 160 + static bool irqfd_is_active(struct kvm_kernel_irqfd *irqfd)
 163 161 {
 162 + /*
 163 + * Assert that either irqfds.lock or SRCU is held. Holding irqfds.lock
 164 + * is required to prevent false positives on the irqfd being active.
 165 + * False negatives are impossible, as an irqfd is never re-added to the
 166 + * list once deactivated, but a caller that holds only SRCU must still
 167 + * guard against routing changes if the irqfd has been deactivated.
 168 + */
 169 + lockdep_assert_once(lockdep_is_held(&irqfd->kvm->irqfds.lock) ||
 170 + srcu_read_lock_held(&irqfd->kvm->irq_srcu));
 171 + 
 164 172 return list_empty(&irqfd->list) ? false : true;
 165 173 }
 166 174 
 167 175 /*
 168 176 * Mark the irqfd as inactive and schedule it for removal
 169 - *
 170 - * assumes kvm->irqfds.lock is held
 171 177 */
 172 - static void
 173 - irqfd_deactivate(struct kvm_kernel_irqfd *irqfd)
 178 + static void irqfd_deactivate(struct kvm_kernel_irqfd *irqfd)
 174 179 {
 180 + lockdep_assert_held(&irqfd->kvm->irqfds.lock);
 181 + 
 175 182 BUG_ON(!irqfd_is_active(irqfd));
 176 183 
 177 184 list_del_init(&irqfd->list);
··· 224 217 seq = read_seqcount_begin(&irqfd->irq_entry_sc);
 225 218 irq = irqfd->irq_entry;
 226 219 } while (read_seqcount_retry(&irqfd->irq_entry_sc, seq));
 227 - /* An event has been signaled, inject an interrupt */
 228 - if (kvm_arch_set_irq_inatomic(&irq, kvm,
 220 + 
 221 + /*
 222 + * An event has been signaled, inject an interrupt unless the
 223 + * irqfd is being deassigned (isn't active), in which case the
 224 + * routing information may be stale (once the irqfd is removed
 225 + * from the list, it will stop receiving routing updates).
 226 + */
 227 + if (unlikely(!irqfd_is_active(irqfd)) ||
 228 + kvm_arch_set_irq_inatomic(&irq, kvm,
 229 229 KVM_USERSPACE_IRQ_SOURCE_ID, 1,
 230 230 false) == -EWOULDBLOCK)
 231 231 schedule_work(&irqfd->inject);
··· 599 585 spin_lock_irq(&kvm->irqfds.lock);
 600 586 
 601 587 list_for_each_entry_safe(irqfd, tmp, &kvm->irqfds.items, list) {
 602 - if (irqfd->eventfd == eventfd && irqfd->gsi == args->gsi) {
 603 - /*
 604 - * This clearing of irq_entry.type is needed for when
 605 - * another thread calls kvm_irq_routing_update before
 606 - * we flush workqueue below (we synchronize with
 607 - * kvm_irq_routing_update using irqfds.lock).
 608 - */
 609 - write_seqcount_begin(&irqfd->irq_entry_sc);
 610 - irqfd->irq_entry.type = 0;
 611 - write_seqcount_end(&irqfd->irq_entry_sc);
 588 + if (irqfd->eventfd == eventfd && irqfd->gsi == args->gsi)
 612 589 irqfd_deactivate(irqfd);
 613 - }
 614 590 
 615 591 spin_unlock_irq(&kvm->irqfds.lock);
 616 592 
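The deassign path can drop the irq_entry.type clearing because the wakeup path snapshots irq_entry under a seqcount and now bails out via irqfd_is_active() once the irqfd has been removed from the list. For readers unfamiliar with the seqcount idiom in that snapshot loop, here is a much-simplified userspace illustration built on C11 atomics; the kernel's seqcount_t adds the memory barriers that make the non-atomic copy genuinely race-free, so treat this as a sketch of the control flow only:

	#include <stdatomic.h>
	#include <stdio.h>

	struct route { int gsi; int type; };

	static _Atomic unsigned int seq;	/* even: stable, odd: write in progress */
	static struct route entry;

	static void writer_update(struct route new_entry)
	{
		atomic_fetch_add_explicit(&seq, 1, memory_order_acq_rel);	/* -> odd */
		entry = new_entry;
		atomic_fetch_add_explicit(&seq, 1, memory_order_release);	/* -> even */
	}

	static struct route reader_snapshot(void)
	{
		struct route snap;
		unsigned int s;

		do {
			/* wait out an in-progress writer (odd count) */
			do {
				s = atomic_load_explicit(&seq, memory_order_acquire);
			} while (s & 1);

			snap = entry;

			/* retry if a writer raced with the copy */
		} while (atomic_load_explicit(&seq, memory_order_acquire) != s);

		return snap;
	}

	int main(void)
	{
		struct route r;

		writer_update((struct route){ .gsi = 5, .type = 1 });
		r = reader_snapshot();
		printf("gsi=%d type=%d\n", r.gsi, r.type);
		return 0;
	}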