Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.19-rc9).

No adjacent changes.

Conflicts:

drivers/net/ethernet/spacemit/k1_emac.c
3125fc1701694 ("net: spacemit: k1-emac: fix jumbo frame support")
f66086798f91f ("net: spacemit: Remove broken flow control support")
https://lore.kernel.org/aYIysFIE9ooavWia@sirena.org.uk

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
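The conflict note above records a routine cross-merge: two branches touched the same file, git stopped on the conflicting hunk, and the maintainer resolved it by hand before committing the merge. A minimal sketch of that workflow in a throwaway repository (file name and branch names are illustrative only, not the actual kernel trees):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name you

# Base commit, then two branches that edit the same file.
echo base > k1_emac.c
git add k1_emac.c
git commit -qm "base"

git checkout -qb net-fixes
echo "fix jumbo frames" > k1_emac.c
git commit -qam "net: fix jumbo frame support"

git checkout -q -            # back to the original branch
echo "remove flow control" > k1_emac.c
git commit -qam "net: remove broken flow control"

# The merge stops on the conflicting file, as in the commit message above.
if ! git merge net-fixes >/dev/null 2>&1; then
    git status --short | grep -q '^UU k1_emac.c'
    # Resolve by hand, then conclude the merge.
    echo "fix jumbo frames; remove flow control" > k1_emac.c
    git add k1_emac.c
    git commit -qm "Merge branch 'net-fixes'"
fi
```

`git merge` leaves conflict markers in the file and `git status` reports it as `UU` (both modified) until the resolution is staged and committed.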

+1545 -815
+4 -1
.mailmap
···
 Alexander Lobakin <alobakin@pm.me> <bloodyreaper@yandex.ru>
 Alexander Mikhalitsyn <alexander@mihalicyn.com> <alexander.mikhalitsyn@virtuozzo.com>
 Alexander Mikhalitsyn <alexander@mihalicyn.com> <aleksandr.mikhalitsyn@canonical.com>
+Alexander Mikhalitsyn <alexander@mihalicyn.com> <aleksandr.mikhalitsyn@futurfusion.io>
 Alexander Sverdlin <alexander.sverdlin@gmail.com> <alexander.sverdlin.ext@nsn.com>
 Alexander Sverdlin <alexander.sverdlin@gmail.com> <alexander.sverdlin@gmx.de>
 Alexander Sverdlin <alexander.sverdlin@gmail.com> <alexander.sverdlin@nokia.com>
···
 Subbaraman Narayanamurthy <quic_subbaram@quicinc.com> <subbaram@codeaurora.org>
 Subhash Jadavani <subhashj@codeaurora.org>
 Sudarshan Rajagopalan <quic_sudaraja@quicinc.com> <sudaraja@codeaurora.org>
-Sudeep Holla <sudeep.holla@arm.com> Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
+Sudeep Holla <sudeep.holla@kernel.org> Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
+Sudeep Holla <sudeep.holla@kernel.org> <sudeep.holla@arm.com>
 Sumit Garg <sumit.garg@kernel.org> <sumit.garg@linaro.org>
 Sumit Semwal <sumit.semwal@ti.com>
 Surabhi Vishnoi <quic_svishnoi@quicinc.com> <svishnoi@codeaurora.org>
···
 Veera Sundaram Sankaran <quic_veeras@quicinc.com> <veeras@codeaurora.org>
 Veerabhadrarao Badiganti <quic_vbadigan@quicinc.com> <vbadigan@codeaurora.org>
 Venkateswara Naralasetty <quic_vnaralas@quicinc.com> <vnaralas@codeaurora.org>
+Viacheslav Bocharov <v@baodeep.com> <adeep@lexina.in>
 Vikash Garodia <vikash.garodia@oss.qualcomm.com> <vgarodia@codeaurora.org>
 Vikash Garodia <vikash.garodia@oss.qualcomm.com> <quic_vgarodia@quicinc.com>
 Vincent Mailhol <mailhol@kernel.org> <mailhol.vincent@wanadoo.fr>
-10
Documentation/ABI/testing/sysfs-class-tsm
···
 		signals when the PCI layer is able to support establishment of
 		link encryption and other device-security features coordinated
 		through a platform tsm.
-
-What:		/sys/class/tsm/tsmN/streamH.R.E
-Contact:	linux-pci@vger.kernel.org
-Description:
-		(RO) When a host bridge has established a secure connection via
-		the platform TSM, symlink appears. The primary function of this
-		is have a system global review of TSM resource consumption
-		across host bridges. The link points to the endpoint PCI device
-		and matches the same link published by the host bridge. See
-		Documentation/ABI/testing/sysfs-devices-pci-host-bridge.
+5
Documentation/admin-guide/kernel-parameters.txt
···
 			If there are multiple matching configurations changing
 			the same attribute, the last one is used.
 
+	liveupdate=	[KNL,EARLY]
+			Format: <bool>
+			Enable Live Update Orchestrator (LUO).
+			Default: off.
+
 	load_ramdisk=	[RAM] [Deprecated]
 
 	lockd.nlm_grace_period=P  [NFS] Assign grace period.
+1
Documentation/devicetree/bindings/sound/fsl,sai.yaml
···
       - items:
           - enum:
               - fsl,imx94-sai
+              - fsl,imx952-sai
           - const: fsl,imx95-sai
 
   reg:
+35 -18
MAINTAINERS
···
 ACPI FOR ARM64 (ACPI/arm64)
 M:	Lorenzo Pieralisi <lpieralisi@kernel.org>
 M:	Hanjun Guo <guohanjun@huawei.com>
-M:	Sudeep Holla <sudeep.holla@arm.com>
+M:	Sudeep Holla <sudeep.holla@kernel.org>
 L:	linux-acpi@vger.kernel.org
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
···
 F:	include/linux/acpi_rimt.h
 
 ACPI PCC(Platform Communication Channel) MAILBOX DRIVER
-M:	Sudeep Holla <sudeep.holla@arm.com>
+M:	Sudeep Holla <sudeep.holla@kernel.org>
 L:	linux-acpi@vger.kernel.org
 S:	Supported
 F:	drivers/mailbox/pcc.c
···
 F:	arch/arm/mach-footbridge/
 
 ARM/FREESCALE IMX / MXC ARM ARCHITECTURE
-M:	Shawn Guo <shawnguo@kernel.org>
+M:	Frank Li <Frank.Li@nxp.com>
 M:	Sascha Hauer <s.hauer@pengutronix.de>
 R:	Pengutronix Kernel Team <kernel@pengutronix.de>
 R:	Fabio Estevam <festevam@gmail.com>
 L:	imx@lists.linux.dev
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/frank.li/linux.git
 F:	Documentation/devicetree/bindings/firmware/fsl*
 F:	Documentation/devicetree/bindings/firmware/nxp*
 F:	arch/arm/boot/dts/nxp/imx/
···
 N:	\bmxc[^\d]
 
 ARM/FREESCALE LAYERSCAPE ARM ARCHITECTURE
-M:	Shawn Guo <shawnguo@kernel.org>
+M:	Frank Li <Frank.Li@nxp.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/frank.li/linux.git
 F:	arch/arm/boot/dts/nxp/ls/
 F:	arch/arm64/boot/dts/freescale/fsl-*
 F:	arch/arm64/boot/dts/freescale/qoriq-*
 
 ARM/FREESCALE VYBRID ARM ARCHITECTURE
-M:	Shawn Guo <shawnguo@kernel.org>
+M:	Frank Li <Frank.Li@nxp.com>
 M:	Sascha Hauer <s.hauer@pengutronix.de>
 R:	Pengutronix Kernel Team <kernel@pengutronix.de>
 R:	Stefan Agner <stefan@agner.ch>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/frank.li/linux.git
 F:	arch/arm/boot/dts/nxp/vf/
 F:	arch/arm/mach-imx/*vf610*
···
 
 ARM/VERSATILE EXPRESS PLATFORM
 M:	Liviu Dudau <liviu.dudau@arm.com>
-M:	Sudeep Holla <sudeep.holla@arm.com>
+M:	Sudeep Holla <sudeep.holla@kernel.org>
 M:	Lorenzo Pieralisi <lpieralisi@kernel.org>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
···
 
 CPU FREQUENCY DRIVERS - VEXPRESS SPC ARM BIG LITTLE
 M:	Viresh Kumar <viresh.kumar@linaro.org>
-M:	Sudeep Holla <sudeep.holla@arm.com>
+M:	Sudeep Holla <sudeep.holla@kernel.org>
 L:	linux-pm@vger.kernel.org
 S:	Maintained
 W:	http://www.arm.com/products/processors/technologies/biglittleprocessing.php
···
 
 CPUIDLE DRIVER - ARM PSCI
 M:	Lorenzo Pieralisi <lpieralisi@kernel.org>
-M:	Sudeep Holla <sudeep.holla@arm.com>
+M:	Sudeep Holla <sudeep.holla@kernel.org>
 M:	Ulf Hansson <ulf.hansson@linaro.org>
 L:	linux-pm@vger.kernel.org
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
···
 F:	tools/firewire/
 
 FIRMWARE FRAMEWORK FOR ARMV8-A
-M:	Sudeep Holla <sudeep.holla@arm.com>
+M:	Sudeep Holla <sudeep.holla@kernel.org>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	drivers/firmware/arm_ffa/
···
 F:	scripts/gendwarfksyms/
 
 GENERIC ARCHITECTURE TOPOLOGY
-M:	Sudeep Holla <sudeep.holla@arm.com>
+M:	Sudeep Holla <sudeep.holla@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 F:	drivers/base/arch_topology.c
···
 F:	Documentation/ABI/testing/sysfs-devices-platform-kunpeng_hccs
 F:	drivers/soc/hisilicon/kunpeng_hccs.c
 F:	drivers/soc/hisilicon/kunpeng_hccs.h
+
+HISILICON SOC HHA DRIVER
+M:	Yushan Wang <wangyushan12@huawei.com>
+S:	Maintained
+F:	drivers/cache/hisi_soc_hha.c
 
 HISILICON LPC BUS DRIVER
 M:	Jay Fang <f.fangjian@huawei.com>
···
 F:	include/linux/mailbox/arm_mhuv2_message.h
 
 MAILBOX ARM MHUv3
-M:	Sudeep Holla <sudeep.holla@arm.com>
+M:	Sudeep Holla <sudeep.holla@kernel.org>
 M:	Cristian Marussi <cristian.marussi@arm.com>
 L:	linux-kernel@vger.kernel.org
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
···
 PIN CONTROLLER - FREESCALE
 M:	Dong Aisheng <aisheng.dong@nxp.com>
 M:	Fabio Estevam <festevam@gmail.com>
-M:	Shawn Guo <shawnguo@kernel.org>
+M:	Frank Li <Frank.Li@nxp.com>
 M:	Jacky Bai <ping.bai@nxp.com>
 R:	Pengutronix Kernel Team <kernel@pengutronix.de>
 R:	NXP S32 Linux Team <s32@nxp.com>
···
 F:	Documentation/devicetree/bindings/net/pse-pd/
 F:	drivers/net/pse-pd/
 F:	net/ethtool/pse-pd.c
+
+PSP SECURITY PROTOCOL
+M:	Daniel Zahka <daniel.zahka@gmail.com>
+M:	Jakub Kicinski <kuba@kernel.org>
+M:	Willem de Bruijn <willemdebruijn.kernel@gmail.com>
+F:	Documentation/netlink/specs/psp.yaml
+F:	Documentation/networking/psp.rst
+F:	include/net/psp/
+F:	include/net/psp.h
+F:	include/uapi/linux/psp.h
+F:	net/psp/
+K:	struct\ psp(_assoc|_dev|hdr)\b
 
 PSTORE FILESYSTEM
 M:	Kees Cook <kees@kernel.org>
···
 SECURE MONITOR CALL(SMC) CALLING CONVENTION (SMCCC)
 M:	Mark Rutland <mark.rutland@arm.com>
 M:	Lorenzo Pieralisi <lpieralisi@kernel.org>
-M:	Sudeep Holla <sudeep.holla@arm.com>
+M:	Sudeep Holla <sudeep.holla@kernel.org>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	drivers/firmware/smccc/
···
 F:	drivers/mfd/syscon.c
 
 SYSTEM CONTROL & POWER/MANAGEMENT INTERFACE (SCPI/SCMI) Message Protocol drivers
-M:	Sudeep Holla <sudeep.holla@arm.com>
+M:	Sudeep Holla <sudeep.holla@kernel.org>
 R:	Cristian Marussi <cristian.marussi@arm.com>
 L:	arm-scmi@vger.kernel.org
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
···
 
 TRUSTED SERVICES TEE DRIVER
 M:	Balint Dobszay <balint.dobszay@arm.com>
-M:	Sudeep Holla <sudeep.holla@arm.com>
+M:	Sudeep Holla <sudeep.holla@kernel.org>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	trusted-services@lists.trustedfirmware.org
 S:	Maintained
+3 -2
Makefile
···
 VERSION = 6
 PATCHLEVEL = 19
 SUBLEVEL = 0
-EXTRAVERSION = -rc7
+EXTRAVERSION = -rc8
 NAME = Baby Opossum Posse
 
 # *DOCUMENTATION*
···
 	certs/x509.genkey \
 	vmlinux-gdb.py \
 	rpmbuild \
-	rust/libmacros.so rust/libmacros.dylib
+	rust/libmacros.so rust/libmacros.dylib \
+	rust/libpin_init_internal.so rust/libpin_init_internal.dylib
 
 # clean - Delete most, but leave enough to build external modules
 #
+1 -1
arch/powerpc/kvm/book3s_hv_uvmem.c
···
 
 	dpage = pfn_to_page(uvmem_pfn);
 	dpage->zone_device_data = pvt;
-	zone_device_page_init(dpage, 0);
+	zone_device_page_init(dpage, &kvmppc_uvmem_pgmap, 0);
 	return dpage;
 out_clear:
 	spin_lock(&kvmppc_uvmem_bitmap_lock);
-18
arch/riscv/errata/sifive/errata.c
···
 	return cpu_req_errata;
 }
 
-static void __init_or_module warn_miss_errata(u32 miss_errata)
-{
-	int i;
-
-	pr_warn("----------------------------------------------------------------\n");
-	pr_warn("WARNING: Missing the following errata may cause potential issues\n");
-	for (i = 0; i < ERRATA_SIFIVE_NUMBER; i++)
-		if (miss_errata & 0x1 << i)
-			pr_warn("\tSiFive Errata[%d]:%s\n", i, errata_list[i].name);
-	pr_warn("Please enable the corresponding Kconfig to apply them\n");
-	pr_warn("----------------------------------------------------------------\n");
-}
-
 void sifive_errata_patch_func(struct alt_entry *begin, struct alt_entry *end,
 			      unsigned long archid, unsigned long impid,
 			      unsigned int stage)
 {
 	struct alt_entry *alt;
 	u32 cpu_req_errata;
-	u32 cpu_apply_errata = 0;
 	u32 tmp;
 
 	BUILD_BUG_ON(ERRATA_SIFIVE_NUMBER >= RISCV_VENDOR_EXT_ALTERNATIVES_BASE);
···
 			patch_text_nosync(ALT_OLD_PTR(alt), ALT_ALT_PTR(alt),
 					  alt->alt_len);
 			mutex_unlock(&text_mutex);
-			cpu_apply_errata |= tmp;
 		}
 	}
-	if (stage != RISCV_ALTERNATIVES_MODULE &&
-	    cpu_apply_errata != cpu_req_errata)
-		warn_miss_errata(cpu_req_errata - cpu_apply_errata);
 }
+1 -1
arch/riscv/include/asm/compat.h
···
 #ifndef __ASM_COMPAT_H
 #define __ASM_COMPAT_H
 
-#define COMPAT_UTS_MACHINE	"riscv\0\0"
+#define COMPAT_UTS_MACHINE	"riscv32\0\0"
 
 /*
  * Architecture specific compatibility types
+1 -1
arch/riscv/include/asm/syscall.h
···
 extern void * const compat_sys_call_table[];
 
 /*
- * Only the low 32 bits of orig_r0 are meaningful, so we return int.
+ * Only the low 32 bits of orig_a0 are meaningful, so we return int.
  * This importantly ignores the high bits on 64-bit, so comparisons
  * sign-extend the low 32 bits.
  */
+3 -3
arch/riscv/kernel/signal.c
···
 	long (*save)(struct pt_regs *regs, void __user *sc_vec);
 };
 
-struct arch_ext_priv arch_ext_list[] = {
+static struct arch_ext_priv arch_ext_list[] = {
 	{
 		.magic = RISCV_V_MAGIC,
 		.save = &save_v_state,
 	},
 };
 
-const size_t nr_arch_exts = ARRAY_SIZE(arch_ext_list);
+static const size_t nr_arch_exts = ARRAY_SIZE(arch_ext_list);
 
 static long restore_sigcontext(struct pt_regs *regs,
 			       struct sigcontext __user *sc)
···
 	} else {
 		err |= __put_user(arch_ext->magic, &sc_ext_ptr->magic);
 		err |= __put_user(ext_size, &sc_ext_ptr->size);
-		sc_ext_ptr = (void *)sc_ext_ptr + ext_size;
+		sc_ext_ptr = (void __user *)sc_ext_ptr + ext_size;
 	}
 }
 /* Write zero to fp-reserved space and check it on restore_sigcontext */
+4 -3
arch/x86/include/asm/kfence.h
···
 {
 	unsigned int level;
 	pte_t *pte = lookup_address(addr, &level);
-	pteval_t val;
+	pteval_t val, new;
 
 	if (WARN_ON(!pte || level != PG_LEVEL_4K))
 		return false;
···
 		return true;
 
 	/*
-	 * Otherwise, invert the entire PTE. This avoids writing out an
+	 * Otherwise, flip the Present bit, taking care to avoid writing an
 	 * L1TF-vulnerable PTE (not present, without the high address bits
 	 * set).
 	 */
-	set_pte(pte, __pte(~val));
+	new = val ^ _PAGE_PRESENT;
+	set_pte(pte, __pte(flip_protnone_guard(val, new, PTE_PFN_MASK)));
 
 	/*
 	 * If the page was protected (non-present) and we're making it
+2 -1
arch/x86/kvm/irq.c
···
 	 */
 	spin_lock_irq(&kvm->irqfds.lock);
 
-	if (irqfd->irq_entry.type == KVM_IRQ_ROUTING_MSI) {
+	if (irqfd->irq_entry.type == KVM_IRQ_ROUTING_MSI ||
+	    WARN_ON_ONCE(irqfd->irq_bypass_vcpu)) {
 		ret = kvm_pi_update_irte(irqfd, NULL);
 		if (ret)
 			pr_info("irq bypass consumer (eventfd %p) unregistration fails: %d\n",
+2 -2
arch/x86/kvm/svm/avic.c
···
 
 static int avic_init_backing_page(struct kvm_vcpu *vcpu)
 {
+	u32 max_id = x2avic_enabled ? x2avic_max_physical_id : AVIC_MAX_PHYSICAL_ID;
 	struct kvm_svm *kvm_svm = to_kvm_svm(vcpu->kvm);
 	struct vcpu_svm *svm = to_svm(vcpu);
 	u32 id = vcpu->vcpu_id;
···
 	 * avic_vcpu_load() expects to be called if and only if the vCPU has
 	 * fully initialized AVIC.
 	 */
-	if ((!x2avic_enabled && id > AVIC_MAX_PHYSICAL_ID) ||
-	    (id > x2avic_max_physical_id)) {
+	if (id > max_id) {
 		kvm_set_apicv_inhibit(vcpu->kvm, APICV_INHIBIT_REASON_PHYSICAL_ID_TOO_BIG);
 		vcpu->arch.apic->apicv_active = false;
 		return 0;
+2
arch/x86/kvm/svm/svm.c
···
 	 */
 	kvm_cpu_cap_clear(X86_FEATURE_BUS_LOCK_DETECT);
 	kvm_cpu_cap_clear(X86_FEATURE_MSR_IMM);
+
+	kvm_setup_xss_caps();
 }
 
 static __init int svm_hardware_setup(void)
+2
arch/x86/kvm/vmx/vmx.c
···
 		kvm_cpu_cap_clear(X86_FEATURE_SHSTK);
 		kvm_cpu_cap_clear(X86_FEATURE_IBT);
 	}
+
+	kvm_setup_xss_caps();
 }
 
 static bool vmx_is_io_intercepted(struct kvm_vcpu *vcpu,
+17 -13
arch/x86/kvm/x86.c
···
 };
 #endif
 
+void kvm_setup_xss_caps(void)
+{
+	if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
+		kvm_caps.supported_xss = 0;
+
+	if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK) &&
+	    !kvm_cpu_cap_has(X86_FEATURE_IBT))
+		kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
+
+	if ((kvm_caps.supported_xss & XFEATURE_MASK_CET_ALL) != XFEATURE_MASK_CET_ALL) {
+		kvm_cpu_cap_clear(X86_FEATURE_SHSTK);
+		kvm_cpu_cap_clear(X86_FEATURE_IBT);
+		kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
+	}
+}
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_setup_xss_caps);
+
 static inline void kvm_ops_update(struct kvm_x86_init_ops *ops)
 {
 	memcpy(&kvm_x86_ops, ops->runtime_ops, sizeof(kvm_x86_ops));
···
 	/* KVM always ignores guest PAT for shadow paging. */
 	if (!tdp_enabled)
 		kvm_caps.supported_quirks &= ~KVM_X86_QUIRK_IGNORE_GUEST_PAT;
-
-	if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
-		kvm_caps.supported_xss = 0;
-
-	if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK) &&
-	    !kvm_cpu_cap_has(X86_FEATURE_IBT))
-		kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
-
-	if ((kvm_caps.supported_xss & XFEATURE_MASK_CET_ALL) != XFEATURE_MASK_CET_ALL) {
-		kvm_cpu_cap_clear(X86_FEATURE_SHSTK);
-		kvm_cpu_cap_clear(X86_FEATURE_IBT);
-		kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
-	}
 
 	if (kvm_caps.has_tsc_control) {
 		/*
+2
arch/x86/kvm/x86.h
···
 
 extern bool enable_pmu;
 
+void kvm_setup_xss_caps(void);
+
 /*
  * Get a filtered version of KVM's supported XCR0 that strips out dynamic
  * features for which the current process doesn't (yet) have permission to use.
+1
drivers/block/rnbd/rnbd-clt.c
···
 		/* To avoid deadlock firstly remove itself */
 		sysfs_remove_file_self(&dev->kobj, sysfs_self);
 		kobject_del(&dev->kobj);
+		kobject_put(&dev->kobj);
 	}
 }
 
+6
drivers/bus/simple-pm-bus.c
···
 	{ .compatible = "simple-mfd", .data = ONLY_BUS },
 	{ .compatible = "isa", .data = ONLY_BUS },
 	{ .compatible = "arm,amba-bus", .data = ONLY_BUS },
+	{ .compatible = "fsl,ls1021a-scfg", },
+	{ .compatible = "fsl,ls1043a-scfg", },
+	{ .compatible = "fsl,ls1046a-scfg", },
+	{ .compatible = "fsl,ls1088a-isc", },
+	{ .compatible = "fsl,ls2080a-isc", },
+	{ .compatible = "fsl,lx2160a-isc", },
 	{ /* sentinel */ }
 };
 MODULE_DEVICE_TABLE(of, simple_pm_bus_of_match);
+1
drivers/cpufreq/qcom-cpufreq-nvmem.c
···
 	{ .compatible = "qcom,ipq8066", .data = (const void *)QCOM_ID_IPQ8066 },
 	{ .compatible = "qcom,ipq8068", .data = (const void *)QCOM_ID_IPQ8068 },
 	{ .compatible = "qcom,ipq8069", .data = (const void *)QCOM_ID_IPQ8069 },
+	{ /* sentinel */ }
 };
 
 static int qcom_cpufreq_ipq8064_name_version(struct device *cpu_dev,
+1 -14
drivers/crypto/ccp/sev-dev-tsm.c
···
 
 MODULE_IMPORT_NS("PCI_IDE");
 
-#define TIO_DEFAULT_NR_IDE_STREAMS	1
-
-static uint nr_ide_streams = TIO_DEFAULT_NR_IDE_STREAMS;
-module_param_named(ide_nr, nr_ide_streams, uint, 0644);
-MODULE_PARM_DESC(ide_nr, "Set the maximum number of IDE streams per PHB");
-
 #define dev_to_sp(dev) ((struct sp_device *)dev_get_drvdata(dev))
 #define dev_to_psp(dev) ((struct psp_device *)(dev_to_sp(dev)->psp_data))
 #define dev_to_sev(dev) ((struct sev_device *)(dev_to_psp(dev)->sev_data))
···
 static int stream_alloc(struct pci_dev *pdev, struct pci_ide **ide,
 			unsigned int tc)
 {
-	struct pci_dev *rp = pcie_find_root_port(pdev);
 	struct pci_ide *ide1;
 
 	if (ide[tc]) {
···
 		return -EBUSY;
 	}
 
-	/* FIXME: find a better way */
-	if (nr_ide_streams != TIO_DEFAULT_NR_IDE_STREAMS)
-		pci_notice(pdev, "Enable non-default %d streams", nr_ide_streams);
-	pci_ide_set_nr_streams(to_pci_host_bridge(rp->bus->bridge), nr_ide_streams);
-
 	ide1 = pci_ide_stream_alloc(pdev);
 	if (!ide1)
 		return -EFAULT;
 
-	/* Blindly assign streamid=0 to TC=0, and so on */
-	ide1->stream_id = tc;
+	ide1->stream_id = ide1->host_bridge_stream;
 
 	ide[tc] = ide1;
 
+10 -9
drivers/firewire/core-transaction.c
···
 	}
 }
 
-static void start_split_transaction_timeout(struct fw_transaction *t,
-					    struct fw_card *card)
+// card->transactions.lock should be acquired in advance for the linked list.
+static void start_split_transaction_timeout(struct fw_transaction *t, unsigned int delta)
 {
-	unsigned long delta;
-
 	if (list_empty(&t->link) || WARN_ON(t->is_split_transaction))
 		return;
 
 	t->is_split_transaction = true;
 
-	// NOTE: This can be without irqsave when we can guarantee that __fw_send_request() for
-	// local destination never runs in any type of IRQ context.
-	scoped_guard(spinlock_irqsave, &card->split_timeout.lock)
-		delta = card->split_timeout.jiffies;
 	mod_timer(&t->split_timeout_timer, jiffies + delta);
 }
 
···
 		break;
 	case ACK_PENDING:
 	{
+		unsigned int delta;
+
 		// NOTE: This can be without irqsave when we can guarantee that __fw_send_request() for
 		// local destination never runs in any type of IRQ context.
 		scoped_guard(spinlock_irqsave, &card->split_timeout.lock) {
 			t->split_timeout_cycle =
 				compute_split_timeout_timestamp(card, packet->timestamp) & 0xffff;
+			delta = card->split_timeout.jiffies;
 		}
+
+		// NOTE: This can be without irqsave when we can guarantee that __fw_send_request() for
+		// local destination never runs in any type of IRQ context.
+		scoped_guard(spinlock_irqsave, &card->transactions.lock)
+			start_split_transaction_timeout(t, delta);
-		start_split_transaction_timeout(t, card);
 		break;
 	}
 	case ACK_BUSY_X:
+3 -5
drivers/gpio/gpio-brcmstb.c
···
 	struct brcmstb_gpio_priv *priv, irq_hw_number_t hwirq)
 {
 	struct brcmstb_gpio_bank *bank;
-	int i = 0;
 
-	/* banks are in descending order */
-	list_for_each_entry_reverse(bank, &priv->bank_list, node) {
-		i += bank->chip.gc.ngpio;
-		if (hwirq < i)
+	list_for_each_entry(bank, &priv->bank_list, node) {
+		if (hwirq >= bank->chip.gc.offset &&
+		    hwirq < (bank->chip.gc.offset + bank->chip.gc.ngpio))
 			return bank;
 	}
 	return NULL;
+18 -4
drivers/gpio/gpio-omap.c
···
 
 static inline void omap_mpuio_init(struct gpio_bank *bank)
 {
-	platform_set_drvdata(&omap_mpuio_device, bank);
+	static bool registered;
 
-	if (platform_driver_register(&omap_mpuio_driver) == 0)
-		(void) platform_device_register(&omap_mpuio_device);
+	platform_set_drvdata(&omap_mpuio_device, bank);
+	if (!registered) {
+		(void)platform_device_register(&omap_mpuio_device);
+		registered = true;
+	}
 }
 
 /*---------------------------------------------------------------------*/
···
  */
 static int __init omap_gpio_drv_reg(void)
 {
-	return platform_driver_register(&omap_gpio_driver);
+	int ret;
+
+	ret = platform_driver_register(&omap_mpuio_driver);
+	if (ret)
+		return ret;
+
+	ret = platform_driver_register(&omap_gpio_driver);
+	if (ret)
+		platform_driver_unregister(&omap_mpuio_driver);
+
+	return ret;
 }
 postcore_initcall(omap_gpio_drv_reg);
 
 static void __exit omap_gpio_exit(void)
 {
 	platform_driver_unregister(&omap_gpio_driver);
+	platform_driver_unregister(&omap_mpuio_driver);
 }
 module_exit(omap_gpio_exit);
 
+2
drivers/gpio/gpio-pca953x.c
···
 	clear_bit(hwirq, chip->irq_trig_fall);
 	clear_bit(hwirq, chip->irq_trig_level_low);
 	clear_bit(hwirq, chip->irq_trig_level_high);
+
+	pca953x_irq_mask(d);
 }
 
 static void pca953x_irq_print_chip(struct irq_data *data, struct seq_file *p)
-8
drivers/gpio/gpio-rockchip.c
···
 #include <linux/of.h>
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
-#include <linux/pinctrl/consumer.h>
 #include <linux/pinctrl/pinconf-generic.h>
 #include <linux/platform_device.h>
 #include <linux/regmap.h>
···
 	struct rockchip_pin_bank *bank = gpiochip_get_data(chip);
 	unsigned long flags;
 	u32 data = input ? 0 : 1;
-
-
-	if (input)
-		pinctrl_gpio_direction_input(chip, offset);
-	else
-		pinctrl_gpio_direction_output(chip, offset);
 
 	raw_spin_lock_irqsave(&bank->slock, flags);
 	rockchip_gpio_writel_bit(bank, offset, data, bank->gpio_regs->port_ddr);
···
 	gc->ngpio = bank->nr_pins;
 	gc->label = bank->name;
 	gc->parent = bank->dev;
-	gc->can_sleep = true;
 
 	ret = gpiochip_add_data(gc, bank);
 	if (ret) {
+4 -4
drivers/gpio/gpio-sprd.c
···
 struct sprd_gpio {
 	struct gpio_chip chip;
 	void __iomem *base;
-	spinlock_t lock;
+	raw_spinlock_t lock;
 	int irq;
 };
 
···
 	unsigned long flags;
 	u32 tmp;
 
-	spin_lock_irqsave(&sprd_gpio->lock, flags);
+	raw_spin_lock_irqsave(&sprd_gpio->lock, flags);
 	tmp = readl_relaxed(base + reg);
 
 	if (val)
···
 		tmp &= ~BIT(SPRD_GPIO_BIT(offset));
 
 	writel_relaxed(tmp, base + reg);
-	spin_unlock_irqrestore(&sprd_gpio->lock, flags);
+	raw_spin_unlock_irqrestore(&sprd_gpio->lock, flags);
 }
 
 static int sprd_gpio_read(struct gpio_chip *chip, unsigned int offset, u16 reg)
···
 	if (IS_ERR(sprd_gpio->base))
 		return PTR_ERR(sprd_gpio->base);
 
-	spin_lock_init(&sprd_gpio->lock);
+	raw_spin_lock_init(&sprd_gpio->lock);
 
 	sprd_gpio->chip.label = dev_name(&pdev->dev);
 	sprd_gpio->chip.ngpio = SPRD_GPIO_NR;
+4 -4
drivers/gpio/gpio-virtuser.c
···
 {
 	struct gpio_virtuser_device *dev = to_gpio_virtuser_device(item);
 
-	guard(mutex)(&dev->lock);
-
-	if (gpio_virtuser_device_is_live(dev))
-		gpio_virtuser_device_deactivate(dev);
+	scoped_guard(mutex, &dev->lock) {
+		if (gpio_virtuser_device_is_live(dev))
+			gpio_virtuser_device_deactivate(dev);
+	}
 
 	mutex_destroy(&dev->lock);
 	ida_free(&gpio_virtuser_ida, dev->id);
+17 -4
drivers/gpio/gpiolib-acpi-core.c
···
 		unsigned int pin = agpio->pin_table[i];
 		struct acpi_gpio_connection *conn;
 		struct gpio_desc *desc;
+		u16 word, shift;
 		bool found;
 
 		mutex_lock(&achip->conn_lock);
···
 
 		mutex_unlock(&achip->conn_lock);
 
-		if (function == ACPI_WRITE)
-			gpiod_set_raw_value_cansleep(desc, !!(*value & BIT(i)));
-		else
-			*value |= (u64)gpiod_get_raw_value_cansleep(desc) << i;
+		/*
+		 * For the cases when OperationRegion() consists of more than
+		 * 64 bits calculate the word and bit shift to use that one to
+		 * access the value.
+		 */
+		word = i / 64;
+		shift = i % 64;
+
+		if (function == ACPI_WRITE) {
+			gpiod_set_raw_value_cansleep(desc, value[word] & BIT_ULL(shift));
+		} else {
+			if (gpiod_get_raw_value_cansleep(desc))
+				value[word] |= BIT_ULL(shift);
+			else
+				value[word] &= ~BIT_ULL(shift);
+		}
 	}
 
 out:
+6 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
···
 
 	if (adev->irq.retry_cam_enabled)
 		return;
+	else if (adev->irq.ih1.ring_size)
+		ih = &adev->irq.ih1;
+	else if (adev->irq.ih_soft.enabled)
+		ih = &adev->irq.ih_soft;
+	else
+		return;
 
-	ih = &adev->irq.ih1;
 	/* Get the WPTR of the last entry in IH ring */
 	last_wptr = amdgpu_ih_get_wptr(adev, ih);
 	/* Order wptr with ring data. */
+3 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
···
 
 	amdgpu_ring_ib_begin(ring);
 
-	if (ring->funcs->emit_gfx_shadow)
+	if (ring->funcs->emit_gfx_shadow && adev->gfx.cp_gfx_shadow)
 		amdgpu_ring_emit_gfx_shadow(ring, shadow_va, csa_va, gds_va,
 					    init_shadow, vmid);
 
···
 				       fence_flags | AMDGPU_FENCE_FLAG_64BIT);
 	}
 
-	if (ring->funcs->emit_gfx_shadow && ring->funcs->init_cond_exec) {
+	if (ring->funcs->emit_gfx_shadow && ring->funcs->init_cond_exec &&
+	    adev->gfx.cp_gfx_shadow) {
 		amdgpu_ring_emit_gfx_shadow(ring, 0, 0, 0, false, 0);
 		amdgpu_ring_init_cond_exec(ring, ring->cond_exe_gpu_addr);
 	}
+1 -1
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
···
 		memcpy_toio(mqd, adev->gfx.me.mqd_backup[mqd_idx], sizeof(*mqd));
 		/* reset the ring */
 		ring->wptr = 0;
-		*ring->wptr_cpu_addr = 0;
+		atomic64_set((atomic64_t *)ring->wptr_cpu_addr, 0);
 		amdgpu_ring_clear_ring(ring);
 	}
 
+14 -11
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
···
 		memcpy_toio(mqd, adev->gfx.me.mqd_backup[mqd_idx], sizeof(*mqd));
 		/* reset the ring */
 		ring->wptr = 0;
-		*ring->wptr_cpu_addr = 0;
+		atomic64_set((atomic64_t *)ring->wptr_cpu_addr, 0);
 		amdgpu_ring_clear_ring(ring);
 	}
 
···
 			      struct amdgpu_fence *timedout_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
+	bool use_mmio = false;
 	int r;
 
 	amdgpu_ring_reset_helper_begin(ring, timedout_fence);
 
-	r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, false);
+	r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, use_mmio);
 	if (r) {
 
 		dev_warn(adev->dev, "reset via MES failed and try pipe reset %d\n", r);
···
 		return r;
 	}
 
-	r = gfx_v11_0_kgq_init_queue(ring, true);
-	if (r) {
-		dev_err(adev->dev, "failed to init kgq\n");
-		return r;
-	}
+	if (use_mmio) {
+		r = gfx_v11_0_kgq_init_queue(ring, true);
+		if (r) {
+			dev_err(adev->dev, "failed to init kgq\n");
+			return r;
+		}
 
-	r = amdgpu_mes_map_legacy_queue(adev, ring);
-	if (r) {
-		dev_err(adev->dev, "failed to remap kgq\n");
-		return r;
+		r = amdgpu_mes_map_legacy_queue(adev, ring);
+		if (r) {
+			dev_err(adev->dev, "failed to remap kgq\n");
+			return r;
+		}
 	}
 
 	return amdgpu_ring_reset_helper_end(ring, timedout_fence);
+14 -11
drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
··· 3079 3079 memcpy_toio(mqd, adev->gfx.me.mqd_backup[mqd_idx], sizeof(*mqd)); 3080 3080 /* reset the ring */ 3081 3081 ring->wptr = 0; 3082 - *ring->wptr_cpu_addr = 0; 3082 + atomic64_set((atomic64_t *)ring->wptr_cpu_addr, 0); 3083 3083 amdgpu_ring_clear_ring(ring); 3084 3084 } 3085 3085 ··· 5297 5297 struct amdgpu_fence *timedout_fence) 5298 5298 { 5299 5299 struct amdgpu_device *adev = ring->adev; 5300 + bool use_mmio = false; 5300 5301 int r; 5301 5302 5302 5303 amdgpu_ring_reset_helper_begin(ring, timedout_fence); 5303 5304 5304 - r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, false); 5305 + r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, use_mmio); 5305 5306 if (r) { 5306 5307 dev_warn(adev->dev, "reset via MES failed and try pipe reset %d\n", r); 5307 5308 r = gfx_v12_reset_gfx_pipe(ring); ··· 5310 5309 return r; 5311 5310 } 5312 5311 5313 - r = gfx_v12_0_kgq_init_queue(ring, true); 5314 - if (r) { 5315 - dev_err(adev->dev, "failed to init kgq\n"); 5316 - return r; 5317 - } 5312 + if (use_mmio) { 5313 + r = gfx_v12_0_kgq_init_queue(ring, true); 5314 + if (r) { 5315 + dev_err(adev->dev, "failed to init kgq\n"); 5316 + return r; 5317 + } 5318 5318 5319 - r = amdgpu_mes_map_legacy_queue(adev, ring); 5320 - if (r) { 5321 - dev_err(adev->dev, "failed to remap kgq\n"); 5322 - return r; 5319 + r = amdgpu_mes_map_legacy_queue(adev, ring); 5320 + if (r) { 5321 + dev_err(adev->dev, "failed to remap kgq\n"); 5322 + return r; 5323 + } 5323 5324 } 5324 5325 5325 5326 return amdgpu_ring_reset_helper_end(ring, timedout_fence);
+7 -1
drivers/gpu/drm/amd/amdgpu/soc21.c
··· 225 225 226 226 static u32 soc21_get_xclk(struct amdgpu_device *adev) 227 227 { 228 - return adev->clock.spll.reference_freq; 228 + u32 reference_clock = adev->clock.spll.reference_freq; 229 + 230 + /* reference clock is actually 99.81 Mhz rather than 100 Mhz */ 231 + if ((adev->flags & AMD_IS_APU) && reference_clock == 10000) 232 + return 9981; 233 + 234 + return reference_clock; 229 235 } 230 236 231 237
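The soc21 change special-cases APUs whose SPLL reference clock reads back as 100 MHz but actually runs at 99.81 MHz. A minimal sketch of the quirk as a pure function (the 10 kHz unit convention is an assumption inferred from the 10000/9981 constants in the hunk):

```c
#include <stdint.h>
#include <stdbool.h>

/* Reference clock in 10 kHz units: 10000 == 100 MHz, 9981 == 99.81 MHz. */
static uint32_t soc21_xclk_quirk(uint32_t reference_clock, bool is_apu)
{
	/* On APUs the nominal 100 MHz reference is really 99.81 MHz. */
	if (is_apu && reference_clock == 10000)
		return 9981;

	return reference_clock;
}
```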
+1 -1
drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
··· 217 217 page = pfn_to_page(pfn); 218 218 svm_range_bo_ref(prange->svm_bo); 219 219 page->zone_device_data = prange->svm_bo; 220 - zone_device_page_init(page, 0); 220 + zone_device_page_init(page, page_pgmap(page), 0); 221 221 } 222 222 223 223 static void
+6 -4
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 7754 7754 drm_dp_mst_topology_mgr_destroy(&aconnector->mst_mgr); 7755 7755 7756 7756 /* Cancel and flush any pending HDMI HPD debounce work */ 7757 - cancel_delayed_work_sync(&aconnector->hdmi_hpd_debounce_work); 7758 - if (aconnector->hdmi_prev_sink) { 7759 - dc_sink_release(aconnector->hdmi_prev_sink); 7760 - aconnector->hdmi_prev_sink = NULL; 7757 + if (aconnector->hdmi_hpd_debounce_delay_ms) { 7758 + cancel_delayed_work_sync(&aconnector->hdmi_hpd_debounce_work); 7759 + if (aconnector->hdmi_prev_sink) { 7760 + dc_sink_release(aconnector->hdmi_prev_sink); 7761 + aconnector->hdmi_prev_sink = NULL; 7762 + } 7761 7763 } 7762 7764 7763 7765 if (aconnector->bl_idx != -1) {
+4 -3
drivers/gpu/drm/amd/pm/amdgpu_dpm.c
··· 80 80 enum ip_power_state pwr_state = gate ? POWER_STATE_OFF : POWER_STATE_ON; 81 81 bool is_vcn = block_type == AMD_IP_BLOCK_TYPE_VCN; 82 82 83 + mutex_lock(&adev->pm.mutex); 84 + 83 85 if (atomic_read(&adev->pm.pwr_state[block_type]) == pwr_state && 84 86 (!is_vcn || adev->vcn.num_vcn_inst == 1)) { 85 87 dev_dbg(adev->dev, "IP block%d already in the target %s state!", 86 88 block_type, gate ? "gate" : "ungate"); 87 - return 0; 89 + goto out_unlock; 88 90 } 89 - 90 - mutex_lock(&adev->pm.mutex); 91 91 92 92 switch (block_type) { 93 93 case AMD_IP_BLOCK_TYPE_UVD: ··· 115 115 if (!ret) 116 116 atomic_set(&adev->pm.pwr_state[block_type], pwr_state); 117 117 118 + out_unlock: 118 119 mutex_unlock(&adev->pm.mutex); 119 120 120 121 return ret;
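The amdgpu_dpm hunk moves the "already in the target state" early return under `pm.mutex`, closing a window where two callers could both pass the check and race the transition. A hedged pthreads sketch of that pattern (check and update under one lock, exiting through an unlock label); the variable names are illustrative, not the driver's:

```c
#include <pthread.h>

static pthread_mutex_t pm_lock = PTHREAD_MUTEX_INITIALIZER;
static int pwr_state;   /* 0 = off, 1 = on */
static int transitions; /* counts real state changes */

/* Skips the transition if already in the target state. The check runs
 * under the same lock as the update, as in the patch, so two callers
 * can never both decide the transition is needed. */
static int set_power_state(int target)
{
	int ret = 0;

	pthread_mutex_lock(&pm_lock);
	if (pwr_state == target)
		goto out_unlock; /* already there: nothing to do */

	pwr_state = target;      /* the "hardware" transition */
	transitions++;
out_unlock:
	pthread_mutex_unlock(&pm_lock);
	return ret;
}
```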
+1
drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
··· 56 56 #define SMUQ10_TO_UINT(x) ((x) >> 10) 57 57 #define SMUQ10_FRAC(x) ((x) & 0x3ff) 58 58 #define SMUQ10_ROUND(x) ((SMUQ10_TO_UINT(x)) + ((SMUQ10_FRAC(x)) >= 0x200)) 59 + #define SMU_V13_SOFT_FREQ_ROUND(x) ((x) + 1) 59 60 60 61 extern const int pmfw_decoded_link_speed[5]; 61 62 extern const int pmfw_decoded_link_width[7];
+1
drivers/gpu/drm/amd/pm/swsmu/inc/smu_v14_0.h
··· 57 57 58 58 #define DECODE_GEN_SPEED(gen_speed_idx) (decoded_link_speed[gen_speed_idx]) 59 59 #define DECODE_LANE_WIDTH(lane_width_idx) (decoded_link_width[lane_width_idx]) 60 + #define SMU_V14_SOFT_FREQ_ROUND(x) ((x) + 1) 60 61 61 62 struct smu_14_0_max_sustainable_clocks { 62 63 uint32_t display_clock;
+1
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
··· 1555 1555 return clk_id; 1556 1556 1557 1557 if (max > 0) { 1558 + max = SMU_V13_SOFT_FREQ_ROUND(max); 1558 1559 if (automatic) 1559 1560 param = (uint32_t)((clk_id << 16) | 0xffff); 1560 1561 else
+1
drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c
··· 1178 1178 return clk_id; 1179 1179 1180 1180 if (max > 0) { 1181 + max = SMU_V14_SOFT_FREQ_ROUND(max); 1181 1182 if (automatic) 1182 1183 param = (uint32_t)((clk_id << 16) | 0xffff); 1183 1184 else
+12 -6
drivers/gpu/drm/drm_gem.c
··· 960 960 { 961 961 struct drm_gem_change_handle *args = data; 962 962 struct drm_gem_object *obj; 963 - int ret; 963 + int handle, ret; 964 964 965 965 if (!drm_core_check_feature(dev, DRIVER_GEM)) 966 966 return -EOPNOTSUPP; 967 + 968 + /* idr_alloc() limitation. */ 969 + if (args->new_handle > INT_MAX) 970 + return -EINVAL; 971 + handle = args->new_handle; 967 972 968 973 obj = drm_gem_object_lookup(file_priv, args->handle); 969 974 if (!obj) 970 975 return -ENOENT; 971 976 972 - if (args->handle == args->new_handle) { 977 + if (args->handle == handle) { 973 978 ret = 0; 974 979 goto out; 975 980 } ··· 982 977 mutex_lock(&file_priv->prime.lock); 983 978 984 979 spin_lock(&file_priv->table_lock); 985 - ret = idr_alloc(&file_priv->object_idr, obj, 986 - args->new_handle, args->new_handle + 1, GFP_NOWAIT); 980 + ret = idr_alloc(&file_priv->object_idr, obj, handle, handle + 1, 981 + GFP_NOWAIT); 987 982 spin_unlock(&file_priv->table_lock); 988 983 989 984 if (ret < 0) 990 985 goto out_unlock; 991 986 992 987 if (obj->dma_buf) { 993 - ret = drm_prime_add_buf_handle(&file_priv->prime, obj->dma_buf, args->new_handle); 988 + ret = drm_prime_add_buf_handle(&file_priv->prime, obj->dma_buf, 989 + handle); 994 990 if (ret < 0) { 995 991 spin_lock(&file_priv->table_lock); 996 - idr_remove(&file_priv->object_idr, args->new_handle); 992 + idr_remove(&file_priv->object_idr, handle); 997 993 spin_unlock(&file_priv->table_lock); 998 994 goto out_unlock; 999 995 }
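The drm_gem change validates the user-supplied `new_handle` (a u32) against INT_MAX before passing it to `idr_alloc()`, whose start/end parameters are `int`. A small sketch of the guard against the sign/truncation hazard:

```c
#include <stdint.h>
#include <limits.h>

/* Returns 0 if the user-supplied handle is usable as an idr index,
 * -22 (-EINVAL) if it would not fit in a non-negative int. */
static int check_handle(uint32_t user_handle)
{
	if (user_handle > INT_MAX)
		return -22; /* -EINVAL: idr indices are ints */

	return 0;
}
```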
+1 -1
drivers/gpu/drm/drm_pagemap.c
··· 197 197 struct drm_pagemap_zdd *zdd) 198 198 { 199 199 page->zone_device_data = drm_pagemap_zdd_get(zdd); 200 - zone_device_page_init(page, 0); 200 + zone_device_page_init(page, page_pgmap(page), 0); 201 201 } 202 202 203 203 /**
+13
drivers/gpu/drm/imx/ipuv3/imx-tve.c
··· 528 528 .bind = imx_tve_bind, 529 529 }; 530 530 531 + static void imx_tve_put_device(void *_dev) 532 + { 533 + struct device *dev = _dev; 534 + 535 + put_device(dev); 536 + } 537 + 531 538 static int imx_tve_probe(struct platform_device *pdev) 532 539 { 533 540 struct device *dev = &pdev->dev; ··· 556 549 if (ddc_node) { 557 550 tve->ddc = of_find_i2c_adapter_by_node(ddc_node); 558 551 of_node_put(ddc_node); 552 + if (tve->ddc) { 553 + ret = devm_add_action_or_reset(dev, imx_tve_put_device, 554 + &tve->ddc->dev); 555 + if (ret) 556 + return ret; 557 + } 559 558 } 560 559 561 560 tve->mode = of_get_tve_mode(np);
-2
drivers/gpu/drm/msm/adreno/a6xx_catalog.c
··· 501 501 {REG_A6XX_RBBM_CLOCK_CNTL_GMU_GX, 0x00000222}, 502 502 {REG_A6XX_RBBM_CLOCK_DELAY_GMU_GX, 0x00000111}, 503 503 {REG_A6XX_RBBM_CLOCK_HYST_GMU_GX, 0x00000555}, 504 - {REG_A6XX_GPU_GMU_AO_GMU_CGC_DELAY_CNTL, 0x10111}, 505 - {REG_A6XX_GPU_GMU_AO_GMU_CGC_HYST_CNTL, 0x5555}, 506 504 {} 507 505 }; 508 506
+1 -1
drivers/gpu/drm/nouveau/nouveau_dmem.c
··· 425 425 order = ilog2(DMEM_CHUNK_NPAGES); 426 426 } 427 427 428 - zone_device_folio_init(folio, order); 428 + zone_device_folio_init(folio, page_pgmap(folio_page(folio, 0)), order); 429 429 return page; 430 430 } 431 431
+1
drivers/gpu/drm/tyr/Kconfig
··· 6 6 depends on RUST 7 7 depends on ARM || ARM64 || COMPILE_TEST 8 8 depends on !GENERIC_ATOMIC64 # for IOMMU_IO_PGTABLE_LPAE 9 + depends on COMMON_CLK 9 10 default n 10 11 help 11 12 Rust DRM driver for ARM Mali CSF-based GPUs.
+1 -2
drivers/gpu/drm/xe/xe_configfs.c
··· 347 347 return false; 348 348 349 349 ret = pci_get_drvdata(pdev); 350 - pci_dev_put(pdev); 351 - 352 350 if (ret) 353 351 pci_dbg(pdev, "Already bound to driver\n"); 354 352 353 + pci_dev_put(pdev); 355 354 return ret; 356 355 } 357 356
-2
drivers/gpu/drm/xe/xe_device.c
··· 984 984 { 985 985 xe_display_unregister(xe); 986 986 987 - xe_nvm_fini(xe); 988 - 989 987 drm_dev_unplug(&xe->drm); 990 988 991 989 xe_bo_pci_dev_remove_all(xe);
+3 -3
drivers/gpu/drm/xe/xe_exec.c
··· 190 190 goto err_syncs; 191 191 } 192 192 193 - if (xe_exec_queue_is_parallel(q)) { 194 - err = copy_from_user(addresses, addresses_user, sizeof(u64) * 195 - q->width); 193 + if (args->num_batch_buffer && xe_exec_queue_is_parallel(q)) { 194 + err = copy_from_user(addresses, addresses_user, 195 + sizeof(u64) * q->width); 196 196 if (err) { 197 197 err = -EFAULT; 198 198 goto err_syncs;
+1 -1
drivers/gpu/drm/xe/xe_lrc.c
··· 1185 1185 return -ENOSPC; 1186 1186 1187 1187 *cmd++ = MI_LOAD_REGISTER_IMM | MI_LRI_NUM_REGS(1); 1188 - *cmd++ = CS_DEBUG_MODE1(0).addr; 1188 + *cmd++ = CS_DEBUG_MODE2(0).addr; 1189 1189 *cmd++ = _MASKED_BIT_ENABLE(INSTRUCTION_STATE_CACHE_INVALIDATE); 1190 1190 1191 1191 return cmd - batch;
+27 -28
drivers/gpu/drm/xe/xe_nvm.c
··· 83 83 return writable_override; 84 84 } 85 85 86 + static void xe_nvm_fini(void *arg) 87 + { 88 + struct xe_device *xe = arg; 89 + struct intel_dg_nvm_dev *nvm = xe->nvm; 90 + 91 + if (!xe->info.has_gsc_nvm) 92 + return; 93 + 94 + /* No access to internal NVM from VFs */ 95 + if (IS_SRIOV_VF(xe)) 96 + return; 97 + 98 + /* Nvm pointer should not be NULL here */ 99 + if (WARN_ON(!nvm)) 100 + return; 101 + 102 + auxiliary_device_delete(&nvm->aux_dev); 103 + auxiliary_device_uninit(&nvm->aux_dev); 104 + xe->nvm = NULL; 105 + } 106 + 86 107 int xe_nvm_init(struct xe_device *xe) 87 108 { 88 109 struct pci_dev *pdev = to_pci_dev(xe->drm.dev); ··· 153 132 ret = auxiliary_device_init(aux_dev); 154 133 if (ret) { 155 134 drm_err(&xe->drm, "xe-nvm aux init failed %d\n", ret); 156 - goto err; 135 + kfree(nvm); 136 + xe->nvm = NULL; 137 + return ret; 157 138 } 158 139 159 140 ret = auxiliary_device_add(aux_dev); 160 141 if (ret) { 161 142 drm_err(&xe->drm, "xe-nvm aux add failed %d\n", ret); 162 143 auxiliary_device_uninit(aux_dev); 163 - goto err; 144 + xe->nvm = NULL; 145 + return ret; 164 146 } 165 - return 0; 166 - 167 - err: 168 - kfree(nvm); 169 - xe->nvm = NULL; 170 - return ret; 171 - } 172 - 173 - void xe_nvm_fini(struct xe_device *xe) 174 - { 175 - struct intel_dg_nvm_dev *nvm = xe->nvm; 176 - 177 - if (!xe->info.has_gsc_nvm) 178 - return; 179 - 180 - /* No access to internal NVM from VFs */ 181 - if (IS_SRIOV_VF(xe)) 182 - return; 183 - 184 - /* Nvm pointer should not be NULL here */ 185 - if (WARN_ON(!nvm)) 186 - return; 187 - 188 - auxiliary_device_delete(&nvm->aux_dev); 189 - auxiliary_device_uninit(&nvm->aux_dev); 190 - xe->nvm = NULL; 147 + return devm_add_action_or_reset(xe->drm.dev, xe_nvm_fini, xe); 191 148 }
-2
drivers/gpu/drm/xe/xe_nvm.h
··· 10 10 11 11 int xe_nvm_init(struct xe_device *xe); 12 12 13 - void xe_nvm_fini(struct xe_device *xe); 14 - 15 13 #endif
+1 -5
drivers/gpu/drm/xe/xe_pci.c
··· 342 342 .has_display = true, 343 343 .has_flat_ccs = 1, 344 344 .has_pxp = true, 345 - .has_mem_copy_instr = true, 346 345 .max_gt_per_tile = 2, 347 346 .needs_scratch = true, 348 347 .va_bits = 48, ··· 362 363 .has_heci_cscfi = 1, 363 364 .has_late_bind = true, 364 365 .has_sriov = true, 365 - .has_mem_copy_instr = true, 366 366 .max_gt_per_tile = 2, 367 367 .needs_scratch = true, 368 368 .subplatforms = (const struct xe_subplatform_desc[]) { ··· 378 380 .has_display = true, 379 381 .has_flat_ccs = 1, 380 382 .has_sriov = true, 381 - .has_mem_copy_instr = true, 382 383 .max_gt_per_tile = 2, 383 384 .needs_scratch = true, 384 385 .needs_shared_vf_gt_wq = true, ··· 390 393 .dma_mask_size = 46, 391 394 .has_display = true, 392 395 .has_flat_ccs = 1, 393 - .has_mem_copy_instr = true, 394 396 .max_gt_per_tile = 2, 395 397 .require_force_probe = true, 396 398 .va_bits = 48, ··· 671 675 xe->info.has_pxp = desc->has_pxp; 672 676 xe->info.has_sriov = xe_configfs_primary_gt_allowed(to_pci_dev(xe->drm.dev)) && 673 677 desc->has_sriov; 674 - xe->info.has_mem_copy_instr = desc->has_mem_copy_instr; 675 678 xe->info.skip_guc_pc = desc->skip_guc_pc; 676 679 xe->info.skip_mtcfg = desc->skip_mtcfg; 677 680 xe->info.skip_pcode = desc->skip_pcode; ··· 859 864 xe->info.has_range_tlb_inval = graphics_desc->has_range_tlb_inval; 860 865 xe->info.has_usm = graphics_desc->has_usm; 861 866 xe->info.has_64bit_timestamp = graphics_desc->has_64bit_timestamp; 867 + xe->info.has_mem_copy_instr = GRAPHICS_VER(xe) >= 20; 862 868 863 869 xe_info_probe_tile_count(xe); 864 870
-1
drivers/gpu/drm/xe/xe_pci_types.h
··· 46 46 u8 has_late_bind:1; 47 47 u8 has_llc:1; 48 48 u8 has_mbx_power_limits:1; 49 - u8 has_mem_copy_instr:1; 50 49 u8 has_pxp:1; 51 50 u8 has_sriov:1; 52 51 u8 needs_scratch:1;
+3
drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
··· 1078 1078 { 1079 1079 char header[64]; 1080 1080 1081 + /* Reset VCMDQ */ 1082 + tegra241_vcmdq_hw_deinit(vcmdq); 1083 + 1081 1084 /* Configure the vcmdq only; User space does the enabling */ 1082 1085 writeq_relaxed(vcmdq->cmdq.q.q_base, REG_VCMDQ_PAGE1(vcmdq, BASE)); 1083 1086
+10 -1
drivers/iommu/generic_pt/iommu_pt.h
··· 931 931 struct pt_table_p *table) 932 932 { 933 933 struct pt_state pts = pt_init(range, level, table); 934 + unsigned int flush_start_index = UINT_MAX; 935 + unsigned int flush_end_index = UINT_MAX; 934 936 struct pt_unmap_args *unmap = arg; 935 937 unsigned int num_oas = 0; 936 938 unsigned int start_index; ··· 988 986 iommu_pages_list_add(&unmap->free_list, 989 987 pts.table_lower); 990 988 pt_clear_entries(&pts, ilog2(1)); 989 + if (pts.index < flush_start_index) 990 + flush_start_index = pts.index; 991 + flush_end_index = pts.index + 1; 991 992 } 992 993 pts.index++; 993 994 } else { ··· 1004 999 num_contig_lg2 = pt_entry_num_contig_lg2(&pts); 1005 1000 pt_clear_entries(&pts, num_contig_lg2); 1006 1001 num_oas += log2_to_int(num_contig_lg2); 1002 + if (pts.index < flush_start_index) 1003 + flush_start_index = pts.index; 1007 1004 pts.index += log2_to_int(num_contig_lg2); 1005 + flush_end_index = pts.index; 1008 1006 } 1009 1007 if (pts.index >= pts.end_index) 1010 1008 break; ··· 1015 1007 } while (true); 1016 1008 1017 1009 unmap->unmapped += log2_mul(num_oas, pt_table_item_lg2sz(&pts)); 1018 - flush_writes_range(&pts, start_index, pts.index); 1010 + if (flush_start_index != flush_end_index) 1011 + flush_writes_range(&pts, flush_start_index, flush_end_index); 1019 1012 1020 1013 return ret; 1021 1014 }
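The generic_pt hunk stops flushing a stale `start_index..pts.index` span and instead records the first and last entries actually cleared, skipping the flush entirely when nothing was cleared. A toy sketch of that min/max bookkeeping (names hypothetical; it assumes, like the walk above, that indices are visited in ascending order):

```c
#include <limits.h>

struct flush_range {
	unsigned int start; /* UINT_MAX until something is recorded */
	unsigned int end;
};

static void flush_range_init(struct flush_range *f)
{
	f->start = UINT_MAX;
	f->end = UINT_MAX;
}

/* Record that entries [index, index + count) were cleared. */
static void flush_range_note(struct flush_range *f, unsigned int index,
			     unsigned int count)
{
	if (index < f->start)
		f->start = index;
	f->end = index + count;
}

/* A flush is needed only if the range was ever extended. */
static int flush_range_needed(const struct flush_range *f)
{
	return f->start != f->end;
}
```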
+1
drivers/iommu/iommufd/pages.c
··· 289 289 batch->end = 0; 290 290 batch->pfns[0] = 0; 291 291 batch->npfns[0] = 0; 292 + batch->kind = 0; 292 293 } 293 294 294 295 /*
+36 -39
drivers/irqchip/irq-ls-extirq.c
··· 168 168 return 0;
 169 169 }
 170 170
 171 - static int __init
 172 - ls_extirq_of_init(struct device_node *node, struct device_node *parent)
 171 + static int ls_extirq_probe(struct platform_device *pdev)
 173 172 {
 174 173 struct irq_domain *domain, *parent_domain;
 174 + struct device_node *node, *parent;
 175 + struct device *dev = &pdev->dev;
 175 176 struct ls_extirq_data *priv;
 176 177 int ret;
 177 178
 179 + node = dev->of_node;
 180 + parent = of_irq_find_parent(node);
 181 + if (!parent)
 182 + return dev_err_probe(dev, -ENODEV, "Failed to get IRQ parent node\n");
 183 +
 178 184 parent_domain = irq_find_host(parent);
 179 - if (!parent_domain) {
 180 - pr_err("Cannot find parent domain\n");
 181 - ret = -ENODEV;
 182 - goto err_irq_find_host;
 183 - }
 185 + if (!parent_domain)
 186 + return dev_err_probe(dev, -EPROBE_DEFER, "Cannot find parent domain\n");
 184 187
 185 - priv = kzalloc(sizeof(*priv), GFP_KERNEL);
 186 - if (!priv) {
 187 - ret = -ENOMEM;
 188 - goto err_alloc_priv;
 189 - }
 188 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
 189 + if (!priv)
 190 + return dev_err_probe(dev, -ENOMEM, "Failed to allocate memory\n");
 190 191
 191 - /*
 192 - * All extirq OF nodes are under a scfg/syscon node with
 193 - * the 'ranges' property
 194 - */
 195 - priv->intpcr = of_iomap(node, 0);
 196 - if (!priv->intpcr) {
 197 - pr_err("Cannot ioremap OF node %pOF\n", node);
 198 - ret = -ENOMEM;
 199 - goto err_iomap;
 200 - }
 192 + priv->intpcr = devm_of_iomap(dev, node, 0, NULL);
 193 + if (!priv->intpcr)
 194 + return dev_err_probe(dev, -ENOMEM, "Cannot ioremap OF node %pOF\n", node);
 201 195
 202 196 ret = ls_extirq_parse_map(priv, node);
 203 197 if (ret)
 204 - goto err_parse_map;
 198 + return dev_err_probe(dev, ret, "Failed to parse IRQ map\n");
 205 199
 206 200 priv->big_endian = of_device_is_big_endian(node->parent);
 207 201 priv->is_ls1021a_or_ls1043a = of_device_is_compatible(node, "fsl,ls1021a-extirq") ||
 ··· 204 210
 205 211 domain = irq_domain_create_hierarchy(parent_domain, 0, priv->nirq, of_fwnode_handle(node),
 206 212 &extirq_domain_ops, priv);
 207 - if (!domain) {
 208 - ret = -ENOMEM;
 209 - goto err_add_hierarchy;
 210 - }
 213 + if (!domain)
 214 + return dev_err_probe(dev, -ENOMEM, "Failed to add IRQ domain\n");
 211 215
 212 216 return 0;
 213 -
 214 - err_add_hierarchy:
 215 - err_parse_map:
 216 - iounmap(priv->intpcr);
 217 - err_iomap:
 218 - kfree(priv);
 219 - err_alloc_priv:
 220 - err_irq_find_host:
 221 - return ret;
 222 217 }
 223 218
 224 - IRQCHIP_DECLARE(ls1021a_extirq, "fsl,ls1021a-extirq", ls_extirq_of_init);
 225 - IRQCHIP_DECLARE(ls1043a_extirq, "fsl,ls1043a-extirq", ls_extirq_of_init);
 226 - IRQCHIP_DECLARE(ls1088a_extirq, "fsl,ls1088a-extirq", ls_extirq_of_init);
 219 + static const struct of_device_id ls_extirq_dt_ids[] = {
 220 + { .compatible = "fsl,ls1021a-extirq" },
 221 + { .compatible = "fsl,ls1043a-extirq" },
 222 + { .compatible = "fsl,ls1088a-extirq" },
 223 + {}
 224 + };
 225 + MODULE_DEVICE_TABLE(of, ls_extirq_dt_ids);
 226 +
 227 + static struct platform_driver ls_extirq_driver = {
 228 + .probe = ls_extirq_probe,
 229 + .driver = {
 230 + .name = "ls-extirq",
 231 + .of_match_table = ls_extirq_dt_ids,
 232 + }
 233 + };
 234 +
 235 + builtin_platform_driver(ls_extirq_driver);
0, priv->nirq, of_fwnode_handle(node), 206 212 &extirq_domain_ops, priv); 207 - if (!domain) { 208 - ret = -ENOMEM; 209 - goto err_add_hierarchy; 210 - } 213 + if (!domain) 214 + return dev_err_probe(dev, -ENOMEM, "Failed to add IRQ domain\n"); 211 215 212 216 return 0; 213 - 214 - err_add_hierarchy: 215 - err_parse_map: 216 - iounmap(priv->intpcr); 217 - err_iomap: 218 - kfree(priv); 219 - err_alloc_priv: 220 - err_irq_find_host: 221 - return ret; 222 217 } 223 218 224 - IRQCHIP_DECLARE(ls1021a_extirq, "fsl,ls1021a-extirq", ls_extirq_of_init); 225 - IRQCHIP_DECLARE(ls1043a_extirq, "fsl,ls1043a-extirq", ls_extirq_of_init); 226 - IRQCHIP_DECLARE(ls1088a_extirq, "fsl,ls1088a-extirq", ls_extirq_of_init); 219 + static const struct of_device_id ls_extirq_dt_ids[] = { 220 + { .compatible = "fsl,ls1021a-extirq" }, 221 + { .compatible = "fsl,ls1043a-extirq" }, 222 + { .compatible = "fsl,ls1088a-extirq" }, 223 + {} 224 + }; 225 + MODULE_DEVICE_TABLE(of, ls_extirq_dt_ids); 226 + 227 + static struct platform_driver ls_extirq_driver = { 228 + .probe = ls_extirq_probe, 229 + .driver = { 230 + .name = "ls-extirq", 231 + .of_match_table = ls_extirq_dt_ids, 232 + } 233 + }; 234 + 235 + builtin_platform_driver(ls_extirq_driver);
+1 -5
drivers/md/bcache/request.c
··· 1107 1107 1108 1108 if (bio_op(orig_bio) == REQ_OP_DISCARD && 1109 1109 !bdev_max_discard_sectors(dc->bdev)) { 1110 + bio_end_io_acct(orig_bio, start_time); 1110 1111 bio_endio(orig_bio); 1111 1112 return; 1112 1113 } 1113 1114 1114 1115 clone_bio = bio_alloc_clone(dc->bdev, orig_bio, GFP_NOIO, 1115 1116 &d->bio_detached); 1116 - if (!clone_bio) { 1117 - orig_bio->bi_status = BLK_STS_RESOURCE; 1118 - bio_endio(orig_bio); 1119 - return; 1120 - } 1121 1117 1122 1118 ddip = container_of(clone_bio, struct detached_dev_io_private, bio); 1123 1119 /* Count on the bcache device */
+1 -1
drivers/mtd/nand/spi/esmt.c
··· 215 215 SPINAND_FACT_OTP_INFO(2, 0, &f50l1g41lb_fact_otp_ops)), 216 216 SPINAND_INFO("F50D1G41LB", 217 217 SPINAND_ID(SPINAND_READID_METHOD_OPCODE_ADDR, 0x11, 0x7f, 218 - 0x7f), 218 + 0x7f, 0x7f), 219 219 NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1), 220 220 NAND_ECCREQ(1, 512), 221 221 SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+3
drivers/net/ethernet/adi/adin1110.c
··· 1089 1089 1090 1090 reset_gpio = devm_gpiod_get_optional(&priv->spidev->dev, "reset", 1091 1091 GPIOD_OUT_LOW); 1092 + if (IS_ERR(reset_gpio)) 1093 + return dev_err_probe(&priv->spidev->dev, PTR_ERR(reset_gpio), 1094 + "failed to get reset gpio\n"); 1092 1095 if (reset_gpio) { 1093 1096 /* MISO pin is used for internal configuration, can't have 1094 1097 * anyone else disturbing the SDO line.
+20 -19
drivers/net/ethernet/cavium/liquidio/lio_main.c
··· 3505 3505 */
 3506 3506 netdev->netdev_ops = &lionetdevops;
 3507 3507
 3508 + lio = GET_LIO(netdev);
 3509 +
 3510 + memset(lio, 0, sizeof(struct lio));
 3511 +
 3512 + lio->ifidx = ifidx_or_pfnum;
 3513 +
 3514 + props = &octeon_dev->props[i];
 3515 + props->gmxport = resp->cfg_info.linfo.gmxport;
 3516 + props->netdev = netdev;
 3517 +
 3518 + /* Point to the properties for octeon device to which this
 3519 + * interface belongs.
 3520 + */
 3521 + lio->oct_dev = octeon_dev;
 3522 + lio->octprops = props;
 3523 + lio->netdev = netdev;
 3524 +
 3508 3525 retval = netif_set_real_num_rx_queues(netdev, num_oqueues);
 3509 3526 if (retval) {
 3510 3527 dev_err(&octeon_dev->pci_dev->dev,
 ··· 3537 3520 WRITE_ONCE(sc->caller_is_done, true);
 3538 3521 goto setup_nic_dev_free;
 3539 3522 }
 3540 -
 3541 - lio = GET_LIO(netdev);
 3542 -
 3543 - memset(lio, 0, sizeof(struct lio));
 3544 -
 3545 - lio->ifidx = ifidx_or_pfnum;
 3546 -
 3547 - props = &octeon_dev->props[i];
 3548 - props->gmxport = resp->cfg_info.linfo.gmxport;
 3549 - props->netdev = netdev;
 3550 3523
 3551 3524 lio->linfo.num_rxpciq = num_oqueues;
 3552 3525 lio->linfo.num_txpciq = num_iqueues;
 ··· 3602 3595 /* MTU range: 68 - 16000 */
 3603 3596 netdev->min_mtu = LIO_MIN_MTU_SIZE;
 3604 3597 netdev->max_mtu = LIO_MAX_MTU_SIZE;
 3605 -
 3606 - /* Point to the properties for octeon device to which this
 3607 - * interface belongs.
 3608 - */
 3609 - lio->oct_dev = octeon_dev;
 3610 - lio->octprops = props;
 3611 - lio->netdev = netdev;
 3612 3598
 3613 3599 dev_dbg(&octeon_dev->pci_dev->dev,
 3614 3600 "if%d gmx: %d hw_addr: 0x%llx\n", i,
 ··· 3750 3750 if (!devlink) {
 3751 3751 device_unlock(&octeon_dev->pci_dev->dev);
 3752 3752 dev_err(&octeon_dev->pci_dev->dev, "devlink alloc failed\n");
 3753 + i--;
 3753 3754 goto setup_nic_dev_free;
 3754 3755 }
 ··· 3766 3765
 3767 3766 setup_nic_dev_free:
 3768 3767
 3769 - while (i--) {
 3768 + do {
 3770 3769 dev_err(&octeon_dev->pci_dev->dev,
 3771 3770 "NIC ifidx:%d Setup failed\n", i);
 3772 3771 liquidio_destroy_nic_device(octeon_dev, i);
 3773 - }
 3772 + } while (i--);
 3774 3773
 3775 3774 setup_nic_dev_done:
 3776 3775
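The liquidio error-path fix changes the cleanup loop from `while (i--)` to `do { } while (i--)` (decrementing `i` before one goto to compensate), so the interface at index `i` that just failed is torn down too. A toy comparison of how many indices each loop shape visits:

```c
/* Count how many indices each cleanup style visits when setup of
 * interface 'i' failed after interfaces 0..i-1 were created. */
static int cleanup_while(int i)
{
	int visited = 0;

	while (i--)          /* never visits index i itself */
		visited++;
	return visited;
}

static int cleanup_do_while(int i)
{
	int visited = 0;

	do {                 /* also visits index i */
		visited++;
	} while (i--);
	return visited;
}
```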
+2 -2
drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
··· 2212 2212 2213 2213 setup_nic_dev_free: 2214 2214 2215 - while (i--) { 2215 + do { 2216 2216 dev_err(&octeon_dev->pci_dev->dev, 2217 2217 "NIC ifidx:%d Setup failed\n", i); 2218 2218 liquidio_destroy_nic_device(octeon_dev, i); 2219 - } 2219 + } while (i--); 2220 2220 2221 2221 setup_nic_dev_done: 2222 2222
+10
drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
··· 1531 1531 } 1532 1532 1533 1533 if_id = (status & 0xFFFF0000) >> 16; 1534 + if (if_id >= ethsw->sw_attr.num_ifs) { 1535 + dev_err(dev, "Invalid if_id %d in IRQ status\n", if_id); 1536 + goto out; 1537 + } 1534 1538 port_priv = ethsw->ports[if_id]; 1535 1539 1536 1540 if (status & DPSW_IRQ_EVENT_LINK_CHANGED) ··· 3025 3021 &ethsw->sw_attr); 3026 3022 if (err) { 3027 3023 dev_err(dev, "dpsw_get_attributes err %d\n", err); 3024 + goto err_close; 3025 + } 3026 + 3027 + if (!ethsw->sw_attr.num_ifs) { 3028 + dev_err(dev, "DPSW device has no interfaces\n"); 3029 + err = -ENODEV; 3028 3030 goto err_close; 3029 3031 } 3030 3032
+7 -4
drivers/net/ethernet/freescale/enetc/enetc.c
··· 2512 2512 struct enetc_hw *hw = &si->hw; 2513 2513 int err; 2514 2514 2515 - /* set SI cache attributes */ 2516 - enetc_wr(hw, ENETC_SICAR0, 2517 - ENETC_SICAR_RD_COHERENT | ENETC_SICAR_WR_COHERENT); 2518 - enetc_wr(hw, ENETC_SICAR1, ENETC_SICAR_MSI); 2515 + if (is_enetc_rev1(si)) { 2516 + /* set SI cache attributes */ 2517 + enetc_wr(hw, ENETC_SICAR0, 2518 + ENETC_SICAR_RD_COHERENT | ENETC_SICAR_WR_COHERENT); 2519 + enetc_wr(hw, ENETC_SICAR1, ENETC_SICAR_MSI); 2520 + } 2521 + 2519 2522 /* enable SI */ 2520 2523 enetc_wr(hw, ENETC_SIMR, ENETC_SIMR_EN); 2521 2524
+3 -3
drivers/net/ethernet/freescale/enetc/enetc4_pf.c
··· 59 59 60 60 if (si != 0) { 61 61 __raw_writel(upper, hw->port + ENETC4_PSIPMAR0(si)); 62 - __raw_writew(lower, hw->port + ENETC4_PSIPMAR1(si)); 62 + __raw_writel(lower, hw->port + ENETC4_PSIPMAR1(si)); 63 63 } else { 64 64 __raw_writel(upper, hw->port + ENETC4_PMAR0); 65 - __raw_writew(lower, hw->port + ENETC4_PMAR1); 65 + __raw_writel(lower, hw->port + ENETC4_PMAR1); 66 66 } 67 67 } 68 68 ··· 73 73 u16 lower; 74 74 75 75 upper = __raw_readl(hw->port + ENETC4_PSIPMAR0(si)); 76 - lower = __raw_readw(hw->port + ENETC4_PSIPMAR1(si)); 76 + lower = __raw_readl(hw->port + ENETC4_PSIPMAR1(si)); 77 77 78 78 put_unaligned_le32(upper, addr); 79 79 put_unaligned_le16(lower, addr + 4);
-4
drivers/net/ethernet/freescale/enetc/enetc_cbdr.c
··· 74 74 if (!user->ring) 75 75 return -ENOMEM; 76 76 77 - /* set CBDR cache attributes */ 78 - enetc_wr(hw, ENETC_SICAR2, 79 - ENETC_SICAR_RD_COHERENT | ENETC_SICAR_WR_COHERENT); 80 - 81 77 regs.pir = hw->reg + ENETC_SICBDRPIR; 82 78 regs.cir = hw->reg + ENETC_SICBDRCIR; 83 79 regs.mr = hw->reg + ENETC_SICBDRMR;
+14 -3
drivers/net/ethernet/freescale/enetc/enetc_hw.h
··· 708 708 #define ENETC_RFSE_EN BIT(15) 709 709 #define ENETC_RFSE_MODE_BD 2 710 710 711 + static inline void enetc_get_primary_mac_addr(struct enetc_hw *hw, u8 *addr) 712 + { 713 + u32 upper; 714 + u16 lower; 715 + 716 + upper = __raw_readl(hw->reg + ENETC_SIPMAR0); 717 + lower = __raw_readl(hw->reg + ENETC_SIPMAR1); 718 + 719 + put_unaligned_le32(upper, addr); 720 + put_unaligned_le16(lower, addr + 4); 721 + } 722 + 711 723 static inline void enetc_load_primary_mac_addr(struct enetc_hw *hw, 712 724 struct net_device *ndev) 713 725 { 714 - u8 addr[ETH_ALEN] __aligned(4); 726 + u8 addr[ETH_ALEN]; 715 727 716 - *(u32 *)addr = __raw_readl(hw->reg + ENETC_SIPMAR0); 717 - *(u16 *)(addr + 4) = __raw_readw(hw->reg + ENETC_SIPMAR1); 728 + enetc_get_primary_mac_addr(hw, addr); 718 729 eth_hw_addr_set(ndev, addr); 719 730 } 720 731
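The enetc_hw.h rework drops the `__aligned(4)` buffer plus `*(u32 *)addr` casts in favor of `put_unaligned_le32()/put_unaligned_le16()`, which are legal at any alignment and make the byte order explicit. A hedged userspace sketch of such helpers built on `memcpy` (the register values below are made up for illustration):

```c
#include <stdint.h>
#include <string.h>

/* Byte-wise little-endian stores: safe at any alignment, unlike
 * casting an arbitrary byte pointer to u32/u16. */
static void put_le32(uint32_t v, uint8_t *p)
{
	uint8_t b[4] = { v & 0xff, (v >> 8) & 0xff, (v >> 16) & 0xff, v >> 24 };

	memcpy(p, b, 4);
}

static void put_le16(uint16_t v, uint8_t *p)
{
	uint8_t b[2] = { v & 0xff, v >> 8 };

	memcpy(p, b, 2);
}

/* Assemble a 6-byte MAC from the two register halves, mirroring what
 * enetc_get_primary_mac_addr() does with SIPMAR0/SIPMAR1. */
static void mac_from_regs(uint32_t upper, uint16_t lower, uint8_t addr[6])
{
	put_le32(upper, addr);
	put_le16(lower, addr + 4);
}
```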
+51 -26
drivers/net/ethernet/google/gve/gve_ethtool.c
··· 152 152 u64 tmp_rx_pkts, tmp_rx_hsplit_pkt, tmp_rx_bytes, tmp_rx_hsplit_bytes,
 153 153 tmp_rx_skb_alloc_fail, tmp_rx_buf_alloc_fail,
 154 154 tmp_rx_desc_err_dropped_pkt, tmp_rx_hsplit_unsplit_pkt,
 155 - tmp_tx_pkts, tmp_tx_bytes;
 155 + tmp_tx_pkts, tmp_tx_bytes,
 156 + tmp_xdp_tx_errors, tmp_xdp_redirect_errors;
 156 157 u64 rx_buf_alloc_fail, rx_desc_err_dropped_pkt, rx_hsplit_unsplit_pkt,
 157 158 rx_pkts, rx_hsplit_pkt, rx_skb_alloc_fail, rx_bytes, tx_pkts, tx_bytes,
 158 - tx_dropped;
 159 - int stats_idx, base_stats_idx, max_stats_idx;
 159 + tx_dropped, xdp_tx_errors, xdp_redirect_errors;
 160 + int rx_base_stats_idx, max_rx_stats_idx, max_tx_stats_idx;
 161 + int stats_idx, stats_region_len, nic_stats_len;
 160 162 struct stats *report_stats;
 161 163 int *rx_qid_to_stats_idx;
 162 164 int *tx_qid_to_stats_idx;
 ··· 200 198 for (rx_pkts = 0, rx_bytes = 0, rx_hsplit_pkt = 0,
 201 199 rx_skb_alloc_fail = 0, rx_buf_alloc_fail = 0,
 202 200 rx_desc_err_dropped_pkt = 0, rx_hsplit_unsplit_pkt = 0,
 201 + xdp_tx_errors = 0, xdp_redirect_errors = 0,
 203 202 ring = 0;
 204 203 ring < priv->rx_cfg.num_queues; ring++) {
 205 204 if (priv->rx) {
 ··· 218 215 rx->rx_desc_err_dropped_pkt;
 219 216 tmp_rx_hsplit_unsplit_pkt =
 220 217 rx->rx_hsplit_unsplit_pkt;
 218 + tmp_xdp_tx_errors = rx->xdp_tx_errors;
 219 + tmp_xdp_redirect_errors =
 220 + rx->xdp_redirect_errors;
 221 221 } while (u64_stats_fetch_retry(&priv->rx[ring].statss,
 222 222 start));
 223 223 rx_pkts += tmp_rx_pkts;
 ··· 230 224 rx_buf_alloc_fail += tmp_rx_buf_alloc_fail;
 231 225 rx_desc_err_dropped_pkt += tmp_rx_desc_err_dropped_pkt;
 232 226 rx_hsplit_unsplit_pkt += tmp_rx_hsplit_unsplit_pkt;
 227 + xdp_tx_errors += tmp_xdp_tx_errors;
 228 + xdp_redirect_errors += tmp_xdp_redirect_errors;
 233 229 }
 234 230 }
 235 231 for (tx_pkts = 0, tx_bytes = 0, tx_dropped = 0, ring = 0;
 ··· 257 249 data[i++] = rx_bytes;
 258 250 data[i++] = tx_bytes;
 259 251 /* total rx dropped packets */
 260 - data[i++] = rx_skb_alloc_fail + rx_buf_alloc_fail +
 261 - rx_desc_err_dropped_pkt;
 252 + data[i++] = rx_skb_alloc_fail + rx_desc_err_dropped_pkt +
 253 + xdp_tx_errors + xdp_redirect_errors;
 262 254 data[i++] = tx_dropped;
 263 255 data[i++] = priv->tx_timeo_cnt;
 264 256 data[i++] = rx_skb_alloc_fail;
 ··· 273 265 data[i++] = priv->stats_report_trigger_cnt;
 274 266 i = GVE_MAIN_STATS_LEN;
 275 267
 276 - /* For rx cross-reporting stats, start from nic rx stats in report */
 277 - base_stats_idx = GVE_TX_STATS_REPORT_NUM * num_tx_queues +
 278 - GVE_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues;
 279 - /* The boundary between driver stats and NIC stats shifts if there are
 280 - * stopped queues.
 281 - */
 282 - base_stats_idx += NIC_RX_STATS_REPORT_NUM * num_stopped_rxqs +
 283 - NIC_TX_STATS_REPORT_NUM * num_stopped_txqs;
 284 - max_stats_idx = NIC_RX_STATS_REPORT_NUM *
 285 - (priv->rx_cfg.num_queues - num_stopped_rxqs) +
 286 - base_stats_idx;
 268 + rx_base_stats_idx = 0;
 269 + max_rx_stats_idx = 0;
 270 + max_tx_stats_idx = 0;
 271 + stats_region_len = priv->stats_report_len -
 272 + sizeof(struct gve_stats_report);
 273 + nic_stats_len = (NIC_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues +
 274 + NIC_TX_STATS_REPORT_NUM * num_tx_queues) * sizeof(struct stats);
 275 + if (unlikely((stats_region_len -
 276 + nic_stats_len) % sizeof(struct stats))) {
 277 + net_err_ratelimited("Starting index of NIC stats should be multiple of stats size");
 278 + } else {
 279 + /* For rx cross-reporting stats,
 280 + * start from nic rx stats in report
 281 + */
 282 + rx_base_stats_idx = (stats_region_len - nic_stats_len) /
 283 + sizeof(struct stats);
 284 + /* The boundary between driver stats and NIC stats
 285 + * shifts if there are stopped queues
 286 + */
 287 + rx_base_stats_idx += NIC_RX_STATS_REPORT_NUM *
 288 + num_stopped_rxqs + NIC_TX_STATS_REPORT_NUM *
 289 + num_stopped_txqs;
 290 + max_rx_stats_idx = NIC_RX_STATS_REPORT_NUM *
 291 + (priv->rx_cfg.num_queues - num_stopped_rxqs) +
 292 + rx_base_stats_idx;
 293 + max_tx_stats_idx = NIC_TX_STATS_REPORT_NUM *
 294 + (num_tx_queues - num_stopped_txqs) +
 295 + max_rx_stats_idx;
 296 + }
 287 297 /* Preprocess the stats report for rx, map queue id to start index */
 288 298 skip_nic_stats = false;
 289 - for (stats_idx = base_stats_idx; stats_idx < max_stats_idx;
 299 + for (stats_idx = rx_base_stats_idx; stats_idx < max_rx_stats_idx;
 290 300 stats_idx += NIC_RX_STATS_REPORT_NUM) {
 291 301 u32 stat_name = be32_to_cpu(report_stats[stats_idx].stat_name);
 292 302 u32 queue_id = be32_to_cpu(report_stats[stats_idx].queue_id);
 ··· 337 311 tmp_rx_buf_alloc_fail = rx->rx_buf_alloc_fail;
 338 312 tmp_rx_desc_err_dropped_pkt =
 339 313 rx->rx_desc_err_dropped_pkt;
 314 + tmp_xdp_tx_errors = rx->xdp_tx_errors;
 315 + tmp_xdp_redirect_errors =
 316 + rx->xdp_redirect_errors;
 340 317 } while (u64_stats_fetch_retry(&priv->rx[ring].statss,
 341 318 start));
 342 319 data[i++] = tmp_rx_bytes;
 ··· 350 321 data[i++] = rx->rx_frag_alloc_cnt;
 351 322 /* rx dropped packets */
 352 323 data[i++] = tmp_rx_skb_alloc_fail +
 353 - tmp_rx_buf_alloc_fail +
 354 - tmp_rx_desc_err_dropped_pkt;
 324 + tmp_rx_desc_err_dropped_pkt +
 325 + tmp_xdp_tx_errors +
 326 + tmp_xdp_redirect_errors;
 355 327 data[i++] = rx->rx_copybreak_pkt;
 356 328 data[i++] = rx->rx_copied_pkt;
 357 329 /* stats from NIC */
 ··· 384 354 i += priv->rx_cfg.num_queues * NUM_GVE_RX_CNTS;
 385 355 }
 386 356
 387 - /* For tx cross-reporting stats, start from nic tx stats in report */
 388 - base_stats_idx = max_stats_idx;
 389 - max_stats_idx = NIC_TX_STATS_REPORT_NUM *
 390 - (num_tx_queues - num_stopped_txqs) +
 391 - max_stats_idx;
 392 - /* Preprocess the stats report for tx, map queue id to start index */
 393 357 skip_nic_stats = false;
 394 - for (stats_idx = base_stats_idx; stats_idx < max_stats_idx;
 358 + /* NIC TX stats start right after NIC RX stats */
 359 + for (stats_idx = max_rx_stats_idx; stats_idx < max_tx_stats_idx;
 395 360 stats_idx += NIC_TX_STATS_REPORT_NUM) {
 396 361 u32 stat_name = be32_to_cpu(report_stats[stats_idx].stat_name);
 397 362 u32 queue_id = be32_to_cpu(report_stats[stats_idx].queue_id);
+2 -2
drivers/net/ethernet/google/gve/gve_main.c
··· 283 283 int tx_stats_num, rx_stats_num; 284 284 285 285 tx_stats_num = (GVE_TX_STATS_REPORT_NUM + NIC_TX_STATS_REPORT_NUM) * 286 - gve_num_tx_queues(priv); 286 + priv->tx_cfg.max_queues; 287 287 rx_stats_num = (GVE_RX_STATS_REPORT_NUM + NIC_RX_STATS_REPORT_NUM) * 288 - priv->rx_cfg.num_queues; 288 + priv->rx_cfg.max_queues; 289 289 priv->stats_report_len = struct_size(priv->stats_report, stats, 290 290 size_add(tx_stats_num, rx_stats_num)); 291 291 priv->stats_report =
-1
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 9030 9030 TCP_FLAG_FIN | 9031 9031 TCP_FLAG_CWR) >> 16); 9032 9032 wr32(&pf->hw, I40E_GLLAN_TSOMSK_L, be32_to_cpu(TCP_FLAG_CWR) >> 16); 9033 - udp_tunnel_get_rx_info(netdev); 9034 9033 9035 9034 return 0; 9036 9035 }
+14 -12
drivers/net/ethernet/intel/ice/ice_main.c
··· 3314 3314 if (ice_is_reset_in_progress(pf->state)) 3315 3315 goto skip_irq; 3316 3316 3317 - if (test_and_clear_bit(ICE_MISC_THREAD_TX_TSTAMP, pf->misc_thread)) { 3318 - /* Process outstanding Tx timestamps. If there is more work, 3319 - * re-arm the interrupt to trigger again. 3320 - */ 3321 - if (ice_ptp_process_ts(pf) == ICE_TX_TSTAMP_WORK_PENDING) { 3322 - wr32(hw, PFINT_OICR, PFINT_OICR_TSYN_TX_M); 3323 - ice_flush(hw); 3324 - } 3325 - } 3317 + if (test_and_clear_bit(ICE_MISC_THREAD_TX_TSTAMP, pf->misc_thread)) 3318 + ice_ptp_process_ts(pf); 3326 3319 3327 3320 skip_irq: 3328 3321 ice_irq_dynamic_ena(hw, NULL, NULL); 3322 + ice_flush(hw); 3323 + 3324 + if (ice_ptp_tx_tstamps_pending(pf)) { 3325 + /* If any new Tx timestamps happened while in interrupt, 3326 + * re-arm the interrupt to trigger it again. 3327 + */ 3328 + wr32(hw, PFINT_OICR, PFINT_OICR_TSYN_TX_M); 3329 + ice_flush(hw); 3330 + } 3329 3331 3330 3332 return IRQ_HANDLED; 3331 3333 } ··· 7865 7863 7866 7864 /* Restore timestamp mode settings after VSI rebuild */ 7867 7865 ice_ptp_restore_timestamp_mode(pf); 7866 + 7867 + /* Start PTP periodic work after VSI is fully rebuilt */ 7868 + ice_ptp_queue_work(pf); 7868 7869 return; 7869 7870 7870 7871 err_vsi_rebuild: ··· 9717 9712 if (err) 9718 9713 netdev_err(netdev, "Failed to open VSI 0x%04X on switch 0x%04X\n", 9719 9714 vsi->vsi_num, vsi->vsw->sw_id); 9720 - 9721 - /* Update existing tunnels information */ 9722 - udp_tunnel_get_rx_info(netdev); 9723 9715 9724 9716 return err; 9725 9717 }
+109 -70
drivers/net/ethernet/intel/ice/ice_ptp.c
··· 573 573 pf = ptp_port_to_pf(ptp_port); 574 574 hw = &pf->hw; 575 575 576 + if (!tx->init) 577 + return; 578 + 576 579 /* Read the Tx ready status first */ 577 580 if (tx->has_ready_bitmap) { 578 581 err = ice_get_phy_tx_tstamp_ready(hw, tx->block, &tstamp_ready); ··· 677 674 pf->ptp.tx_hwtstamp_good += tstamp_good; 678 675 } 679 676 680 - /** 681 - * ice_ptp_tx_tstamp_owner - Process Tx timestamps for all ports on the device 682 - * @pf: Board private structure 683 - */ 684 - static enum ice_tx_tstamp_work ice_ptp_tx_tstamp_owner(struct ice_pf *pf) 677 + static void ice_ptp_tx_tstamp_owner(struct ice_pf *pf) 685 678 { 686 679 struct ice_ptp_port *port; 687 - unsigned int i; 688 680 689 681 mutex_lock(&pf->adapter->ports.lock); 690 682 list_for_each_entry(port, &pf->adapter->ports.ports, list_node) { ··· 691 693 ice_ptp_process_tx_tstamp(tx); 692 694 } 693 695 mutex_unlock(&pf->adapter->ports.lock); 694 - 695 - for (i = 0; i < ICE_GET_QUAD_NUM(pf->hw.ptp.num_lports); i++) { 696 - u64 tstamp_ready; 697 - int err; 698 - 699 - /* Read the Tx ready status first */ 700 - err = ice_get_phy_tx_tstamp_ready(&pf->hw, i, &tstamp_ready); 701 - if (err) 702 - break; 703 - else if (tstamp_ready) 704 - return ICE_TX_TSTAMP_WORK_PENDING; 705 - } 706 - 707 - return ICE_TX_TSTAMP_WORK_DONE; 708 - } 709 - 710 - /** 711 - * ice_ptp_tx_tstamp - Process Tx timestamps for this function. 712 - * @tx: Tx tracking structure to initialize 713 - * 714 - * Returns: ICE_TX_TSTAMP_WORK_PENDING if there are any outstanding incomplete 715 - * Tx timestamps, or ICE_TX_TSTAMP_WORK_DONE otherwise. 
716 - */ 717 - static enum ice_tx_tstamp_work ice_ptp_tx_tstamp(struct ice_ptp_tx *tx) 718 - { 719 - bool more_timestamps; 720 - unsigned long flags; 721 - 722 - if (!tx->init) 723 - return ICE_TX_TSTAMP_WORK_DONE; 724 - 725 - /* Process the Tx timestamp tracker */ 726 - ice_ptp_process_tx_tstamp(tx); 727 - 728 - /* Check if there are outstanding Tx timestamps */ 729 - spin_lock_irqsave(&tx->lock, flags); 730 - more_timestamps = tx->init && !bitmap_empty(tx->in_use, tx->len); 731 - spin_unlock_irqrestore(&tx->lock, flags); 732 - 733 - if (more_timestamps) 734 - return ICE_TX_TSTAMP_WORK_PENDING; 735 - 736 - return ICE_TX_TSTAMP_WORK_DONE; 737 696 } 738 697 739 698 /** ··· 1334 1379 /* Do not reconfigure E810 or E830 PHY */ 1335 1380 return; 1336 1381 case ICE_MAC_GENERIC: 1337 - case ICE_MAC_GENERIC_3K_E825: 1338 1382 ice_ptp_port_phy_restart(ptp_port); 1383 + return; 1384 + case ICE_MAC_GENERIC_3K_E825: 1385 + if (linkup) 1386 + ice_ptp_port_phy_restart(ptp_port); 1339 1387 return; 1340 1388 default: 1341 1389 dev_warn(ice_pf_to_dev(pf), "%s: Unknown PHY type\n", __func__); ··· 2653 2695 return idx + tx->offset; 2654 2696 } 2655 2697 2656 - /** 2657 - * ice_ptp_process_ts - Process the PTP Tx timestamps 2658 - * @pf: Board private structure 2659 - * 2660 - * Returns: ICE_TX_TSTAMP_WORK_PENDING if there are any outstanding Tx 2661 - * timestamps that need processing, and ICE_TX_TSTAMP_WORK_DONE otherwise. 
2662 - */ 2663 - enum ice_tx_tstamp_work ice_ptp_process_ts(struct ice_pf *pf) 2698 + void ice_ptp_process_ts(struct ice_pf *pf) 2664 2699 { 2665 2700 switch (pf->ptp.tx_interrupt_mode) { 2666 2701 case ICE_PTP_TX_INTERRUPT_NONE: 2667 2702 /* This device has the clock owner handle timestamps for it */ 2668 - return ICE_TX_TSTAMP_WORK_DONE; 2703 + return; 2669 2704 case ICE_PTP_TX_INTERRUPT_SELF: 2670 2705 /* This device handles its own timestamps */ 2671 - return ice_ptp_tx_tstamp(&pf->ptp.port.tx); 2706 + ice_ptp_process_tx_tstamp(&pf->ptp.port.tx); 2707 + return; 2672 2708 case ICE_PTP_TX_INTERRUPT_ALL: 2673 2709 /* This device handles timestamps for all ports */ 2674 - return ice_ptp_tx_tstamp_owner(pf); 2710 + ice_ptp_tx_tstamp_owner(pf); 2711 + return; 2675 2712 default: 2676 2713 WARN_ONCE(1, "Unexpected Tx timestamp interrupt mode %u\n", 2677 2714 pf->ptp.tx_interrupt_mode); 2678 - return ICE_TX_TSTAMP_WORK_DONE; 2715 + return; 2679 2716 } 2717 + } 2718 + 2719 + static bool ice_port_has_timestamps(struct ice_ptp_tx *tx) 2720 + { 2721 + bool more_timestamps; 2722 + 2723 + scoped_guard(spinlock_irqsave, &tx->lock) { 2724 + if (!tx->init) 2725 + return false; 2726 + 2727 + more_timestamps = !bitmap_empty(tx->in_use, tx->len); 2728 + } 2729 + 2730 + return more_timestamps; 2731 + } 2732 + 2733 + static bool ice_any_port_has_timestamps(struct ice_pf *pf) 2734 + { 2735 + struct ice_ptp_port *port; 2736 + 2737 + scoped_guard(mutex, &pf->adapter->ports.lock) { 2738 + list_for_each_entry(port, &pf->adapter->ports.ports, 2739 + list_node) { 2740 + struct ice_ptp_tx *tx = &port->tx; 2741 + 2742 + if (ice_port_has_timestamps(tx)) 2743 + return true; 2744 + } 2745 + } 2746 + 2747 + return false; 2748 + } 2749 + 2750 + bool ice_ptp_tx_tstamps_pending(struct ice_pf *pf) 2751 + { 2752 + struct ice_hw *hw = &pf->hw; 2753 + unsigned int i; 2754 + 2755 + /* Check software indicator */ 2756 + switch (pf->ptp.tx_interrupt_mode) { 2757 + case ICE_PTP_TX_INTERRUPT_NONE: 2758 + 
return false; 2759 + case ICE_PTP_TX_INTERRUPT_SELF: 2760 + if (ice_port_has_timestamps(&pf->ptp.port.tx)) 2761 + return true; 2762 + break; 2763 + case ICE_PTP_TX_INTERRUPT_ALL: 2764 + if (ice_any_port_has_timestamps(pf)) 2765 + return true; 2766 + break; 2767 + default: 2768 + WARN_ONCE(1, "Unexpected Tx timestamp interrupt mode %u\n", 2769 + pf->ptp.tx_interrupt_mode); 2770 + break; 2771 + } 2772 + 2773 + /* Check hardware indicator */ 2774 + for (i = 0; i < ICE_GET_QUAD_NUM(hw->ptp.num_lports); i++) { 2775 + u64 tstamp_ready = 0; 2776 + int err; 2777 + 2778 + err = ice_get_phy_tx_tstamp_ready(&pf->hw, i, &tstamp_ready); 2779 + if (err || tstamp_ready) 2780 + return true; 2781 + } 2782 + 2783 + return false; 2680 2784 } 2681 2785 2682 2786 /** ··· 2790 2770 return IRQ_WAKE_THREAD; 2791 2771 case ICE_MAC_E830: 2792 2772 /* E830 can read timestamps in the top half using rd32() */ 2793 - if (ice_ptp_process_ts(pf) == ICE_TX_TSTAMP_WORK_PENDING) { 2773 + ice_ptp_process_ts(pf); 2774 + 2775 + if (ice_ptp_tx_tstamps_pending(pf)) { 2794 2776 /* Process outstanding Tx timestamps. If there 2795 2777 * is more work, re-arm the interrupt to trigger again. 2796 2778 */ ··· 2872 2850 } 2873 2851 2874 2852 /** 2853 + * ice_ptp_queue_work - Queue PTP periodic work for a PF 2854 + * @pf: Board private structure 2855 + * 2856 + * Helper function to queue PTP periodic work after VSI rebuild completes. 2857 + * This ensures that PTP work only runs when VSI structures are ready. 
2858 + */ 2859 + void ice_ptp_queue_work(struct ice_pf *pf) 2860 + { 2861 + if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags) && 2862 + pf->ptp.state == ICE_PTP_READY) 2863 + kthread_queue_delayed_work(pf->ptp.kworker, &pf->ptp.work, 0); 2864 + } 2865 + 2866 + /** 2875 2867 * ice_ptp_prepare_rebuild_sec - Prepare second NAC for PTP reset or rebuild 2876 2868 * @pf: Board private structure 2877 2869 * @rebuild: rebuild if true, prepare if false ··· 2903 2867 struct ice_pf *peer_pf = ptp_port_to_pf(port); 2904 2868 2905 2869 if (!ice_is_primary(&peer_pf->hw)) { 2906 - if (rebuild) 2870 + if (rebuild) { 2871 + /* TODO: When implementing rebuild=true: 2872 + * 1. Ensure secondary PFs' VSIs are rebuilt 2873 + * 2. Call ice_ptp_queue_work(peer_pf) after VSI rebuild 2874 + */ 2907 2875 ice_ptp_rebuild(peer_pf, reset_type); 2908 - else 2876 + } else { 2909 2877 ice_ptp_prepare_for_reset(peer_pf, reset_type); 2878 + } 2910 2879 } 2911 2880 } 2912 2881 } ··· 3056 3015 } 3057 3016 3058 3017 ptp->state = ICE_PTP_READY; 3059 - 3060 - /* Start periodic work going */ 3061 - kthread_queue_delayed_work(ptp->kworker, &ptp->work, 0); 3062 3018 3063 3019 dev_info(ice_pf_to_dev(pf), "PTP reset successful\n"); 3064 3020 return; ··· 3261 3223 { 3262 3224 switch (pf->hw.mac_type) { 3263 3225 case ICE_MAC_GENERIC: 3264 - /* E822 based PHY has the clock owner process the interrupt 3265 - * for all ports. 3226 + case ICE_MAC_GENERIC_3K_E825: 3227 + /* E82x hardware has the clock owner process timestamps for 3228 + * all ports. 3266 3229 */ 3267 3230 if (ice_pf_src_tmr_owned(pf)) 3268 3231 pf->ptp.tx_interrupt_mode = ICE_PTP_TX_INTERRUPT_ALL;
+13 -5
drivers/net/ethernet/intel/ice/ice_ptp.h
··· 304 304 s8 ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb); 305 305 void ice_ptp_req_tx_single_tstamp(struct ice_ptp_tx *tx, u8 idx); 306 306 void ice_ptp_complete_tx_single_tstamp(struct ice_ptp_tx *tx); 307 - enum ice_tx_tstamp_work ice_ptp_process_ts(struct ice_pf *pf); 307 + void ice_ptp_process_ts(struct ice_pf *pf); 308 308 irqreturn_t ice_ptp_ts_irq(struct ice_pf *pf); 309 + bool ice_ptp_tx_tstamps_pending(struct ice_pf *pf); 309 310 u64 ice_ptp_read_src_clk_reg(struct ice_pf *pf, 310 311 struct ptp_system_timestamp *sts); 311 312 ··· 318 317 void ice_ptp_init(struct ice_pf *pf); 319 318 void ice_ptp_release(struct ice_pf *pf); 320 319 void ice_ptp_link_change(struct ice_pf *pf, bool linkup); 320 + void ice_ptp_queue_work(struct ice_pf *pf); 321 321 #else /* IS_ENABLED(CONFIG_PTP_1588_CLOCK) */ 322 322 323 323 static inline int ice_ptp_hwtstamp_get(struct net_device *netdev, ··· 347 345 348 346 static inline void ice_ptp_complete_tx_single_tstamp(struct ice_ptp_tx *tx) { } 349 347 350 - static inline bool ice_ptp_process_ts(struct ice_pf *pf) 351 - { 352 - return true; 353 - } 348 + static inline void ice_ptp_process_ts(struct ice_pf *pf) { } 354 349 355 350 static inline irqreturn_t ice_ptp_ts_irq(struct ice_pf *pf) 356 351 { 357 352 return IRQ_HANDLED; 353 + } 354 + 355 + static inline bool ice_ptp_tx_tstamps_pending(struct ice_pf *pf) 356 + { 357 + return false; 358 358 } 359 359 360 360 static inline u64 ice_ptp_read_src_clk_reg(struct ice_pf *pf, ··· 384 380 static inline void ice_ptp_init(struct ice_pf *pf) { } 385 381 static inline void ice_ptp_release(struct ice_pf *pf) { } 386 382 static inline void ice_ptp_link_change(struct ice_pf *pf, bool linkup) 383 + { 384 + } 385 + 386 + static inline void ice_ptp_queue_work(struct ice_pf *pf) 387 387 { 388 388 } 389 389
+15 -5
drivers/net/ethernet/spacemit/k1_emac.c
··· 12 12 #include <linux/dma-mapping.h> 13 13 #include <linux/etherdevice.h> 14 14 #include <linux/ethtool.h> 15 + #include <linux/if_vlan.h> 15 16 #include <linux/interrupt.h> 16 17 #include <linux/io.h> 17 18 #include <linux/iopoll.h> ··· 39 38 40 39 #define EMAC_DEFAULT_BUFSIZE 1536 41 40 #define EMAC_RX_BUF_2K 2048 42 - #define EMAC_RX_BUF_4K 4096 41 + #define EMAC_RX_BUF_MAX FIELD_MAX(RX_DESC_1_BUFFER_SIZE_1_MASK) 43 42 44 43 /* Tuning parameters from SpacemiT */ 45 44 #define EMAC_TX_FRAMES 64 ··· 194 193 195 194 static void emac_init_hw(struct emac_priv *priv) 196 195 { 197 - u32 rxirq = 0, dma = 0; 196 + u32 rxirq = 0, dma = 0, frame_sz; 198 197 199 198 regmap_set_bits(priv->regmap_apmu, 200 199 priv->regmap_apmu_offset + APMU_EMAC_CTRL_REG, ··· 218 217 emac_wr(priv, MAC_TRANSMIT_PACKET_START_THRESHOLD, 219 218 DEFAULT_TX_THRESHOLD); 220 219 emac_wr(priv, MAC_RECEIVE_PACKET_START_THRESHOLD, DEFAULT_RX_THRESHOLD); 220 + 221 + /* Set maximum frame size and jabber size based on configured MTU, 222 + * accounting for Ethernet header, double VLAN tags, and FCS. 
223 + */ 224 + frame_sz = priv->ndev->mtu + ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN; 225 + 226 + emac_wr(priv, MAC_MAXIMUM_FRAME_SIZE, frame_sz); 227 + emac_wr(priv, MAC_TRANSMIT_JABBER_SIZE, frame_sz); 228 + emac_wr(priv, MAC_RECEIVE_JABBER_SIZE, frame_sz); 221 229 222 230 /* RX IRQ mitigation */ 223 231 rxirq = FIELD_PREP(MREGBIT_RECEIVE_IRQ_FRAME_COUNTER_MASK, ··· 918 908 return -EBUSY; 919 909 } 920 910 921 - frame_len = mtu + ETH_HLEN + ETH_FCS_LEN; 911 + frame_len = mtu + ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN; 922 912 923 913 if (frame_len <= EMAC_DEFAULT_BUFSIZE) 924 914 priv->dma_buf_sz = EMAC_DEFAULT_BUFSIZE; 925 915 else if (frame_len <= EMAC_RX_BUF_2K) 926 916 priv->dma_buf_sz = EMAC_RX_BUF_2K; 927 917 else 928 - priv->dma_buf_sz = EMAC_RX_BUF_4K; 918 + priv->dma_buf_sz = EMAC_RX_BUF_MAX; 929 919 930 920 ndev->mtu = mtu; 931 921 ··· 1927 1917 ndev->hw_features = NETIF_F_SG; 1928 1918 ndev->features |= ndev->hw_features; 1929 1919 1930 - ndev->max_mtu = EMAC_RX_BUF_4K - (ETH_HLEN + ETH_FCS_LEN); 1920 + ndev->max_mtu = EMAC_RX_BUF_MAX - (ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN); 1931 1921 ndev->pcpu_stat_type = NETDEV_PCPU_STAT_DSTATS; 1932 1922 1933 1923 priv = netdev_priv(ndev);
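The jumbo-frame hunks above size the RX buffers and jabber limits from the full on-wire frame length rather than the MTU alone. A small sketch of that arithmetic, restating the standard overhead constants (values as defined in `<linux/if_ether.h>` and `<linux/if_vlan.h>`) for illustration:

```c
#include <assert.h>

/* Standard on-wire overheads; the values mirror the kernel's
 * ETH_HLEN, VLAN_HLEN, and ETH_FCS_LEN definitions. */
#define ETH_HLEN_	14U	/* dst MAC + src MAC + EtherType */
#define VLAN_HLEN_	 4U	/* one 802.1Q tag */
#define ETH_FCS_LEN_	 4U	/* trailing frame check sequence */

/* Worst-case frame length for a given MTU: header, two stacked
 * VLAN tags (QinQ), payload, and FCS -- the formula the k1-emac
 * fix now uses consistently for buffer sizing, MTU limits, and
 * the jabber registers. */
static unsigned int frame_len_for_mtu(unsigned int mtu)
{
	return mtu + ETH_HLEN_ + 2 * VLAN_HLEN_ + ETH_FCS_LEN_;
}
```

Deriving every limit from one formula (and `max_mtu` from its inverse) keeps the buffer size, the hardware maximum-frame register, and the jabber cutoffs from drifting apart, which is how the old `EMAC_RX_BUF_4K` constant broke jumbo frames.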
+2 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 8093 8093 u32 chan; 8094 8094 8095 8095 if (!ndev || !netif_running(ndev)) 8096 - return 0; 8096 + goto suspend_bsp; 8097 8097 8098 8098 mutex_lock(&priv->lock); 8099 8099 ··· 8132 8132 if (stmmac_fpe_supported(priv)) 8133 8133 ethtool_mmsv_stop(&priv->fpe_cfg.mmsv); 8134 8134 8135 + suspend_bsp: 8135 8136 if (priv->plat->suspend) 8136 8137 return priv->plat->suspend(dev, priv->plat->bsp_priv); 8137 8138
+35 -6
drivers/net/ethernet/ti/cpsw.c
··· 305 305 return 0; 306 306 } 307 307 308 - static void cpsw_ndo_set_rx_mode(struct net_device *ndev) 308 + static void cpsw_ndo_set_rx_mode_work(struct work_struct *work) 309 309 { 310 - struct cpsw_priv *priv = netdev_priv(ndev); 310 + struct cpsw_priv *priv = container_of(work, struct cpsw_priv, rx_mode_work); 311 311 struct cpsw_common *cpsw = priv->cpsw; 312 + struct net_device *ndev = priv->ndev; 312 313 int slave_port = -1; 314 + 315 + rtnl_lock(); 316 + if (!netif_running(ndev)) 317 + goto unlock_rtnl; 318 + 319 + netif_addr_lock_bh(ndev); 313 320 314 321 if (cpsw->data.dual_emac) 315 322 slave_port = priv->emac_port + 1; ··· 325 318 /* Enable promiscuous mode */ 326 319 cpsw_set_promiscious(ndev, true); 327 320 cpsw_ale_set_allmulti(cpsw->ale, IFF_ALLMULTI, slave_port); 328 - return; 321 + goto unlock_addr; 329 322 } else { 330 323 /* Disable promiscuous mode */ 331 324 cpsw_set_promiscious(ndev, false); ··· 338 331 /* add/remove mcast address either for real netdev or for vlan */ 339 332 __hw_addr_ref_sync_dev(&ndev->mc, ndev, cpsw_add_mc_addr, 340 333 cpsw_del_mc_addr); 334 + 335 + unlock_addr: 336 + netif_addr_unlock_bh(ndev); 337 + unlock_rtnl: 338 + rtnl_unlock(); 339 + } 340 + 341 + static void cpsw_ndo_set_rx_mode(struct net_device *ndev) 342 + { 343 + struct cpsw_priv *priv = netdev_priv(ndev); 344 + 345 + schedule_work(&priv->rx_mode_work); 341 346 } 342 347 343 348 static unsigned int cpsw_rxbuf_total_len(unsigned int len) ··· 1491 1472 priv_sl2->ndev = ndev; 1492 1473 priv_sl2->dev = &ndev->dev; 1493 1474 priv_sl2->msg_enable = netif_msg_init(debug_level, CPSW_DEBUG); 1475 + INIT_WORK(&priv_sl2->rx_mode_work, cpsw_ndo_set_rx_mode_work); 1494 1476 1495 1477 if (is_valid_ether_addr(data->slave_data[1].mac_addr)) { 1496 1478 memcpy(priv_sl2->mac_addr, data->slave_data[1].mac_addr, ··· 1673 1653 priv->dev = dev; 1674 1654 priv->msg_enable = netif_msg_init(debug_level, CPSW_DEBUG); 1675 1655 priv->emac_port = 0; 1656 + 
INIT_WORK(&priv->rx_mode_work, cpsw_ndo_set_rx_mode_work); 1676 1657 1677 1658 if (is_valid_ether_addr(data->slave_data[0].mac_addr)) { 1678 1659 memcpy(priv->mac_addr, data->slave_data[0].mac_addr, ETH_ALEN); ··· 1779 1758 static void cpsw_remove(struct platform_device *pdev) 1780 1759 { 1781 1760 struct cpsw_common *cpsw = platform_get_drvdata(pdev); 1761 + struct net_device *ndev; 1762 + struct cpsw_priv *priv; 1782 1763 int i, ret; 1783 1764 1784 1765 ret = pm_runtime_resume_and_get(&pdev->dev); ··· 1793 1770 return; 1794 1771 } 1795 1772 1796 - for (i = 0; i < cpsw->data.slaves; i++) 1797 - if (cpsw->slaves[i].ndev) 1798 - unregister_netdev(cpsw->slaves[i].ndev); 1773 + for (i = 0; i < cpsw->data.slaves; i++) { 1774 + ndev = cpsw->slaves[i].ndev; 1775 + if (!ndev) 1776 + continue; 1777 + 1778 + priv = netdev_priv(ndev); 1779 + unregister_netdev(ndev); 1780 + disable_work_sync(&priv->rx_mode_work); 1781 + } 1799 1782 1800 1783 cpts_release(cpsw->cpts); 1801 1784 cpdma_ctlr_destroy(cpsw->dma);
+29 -5
drivers/net/ethernet/ti/cpsw_new.c
··· 248 248 return 0; 249 249 } 250 250 251 - static void cpsw_ndo_set_rx_mode(struct net_device *ndev) 251 + static void cpsw_ndo_set_rx_mode_work(struct work_struct *work) 252 252 { 253 - struct cpsw_priv *priv = netdev_priv(ndev); 253 + struct cpsw_priv *priv = container_of(work, struct cpsw_priv, rx_mode_work); 254 254 struct cpsw_common *cpsw = priv->cpsw; 255 + struct net_device *ndev = priv->ndev; 255 256 257 + rtnl_lock(); 258 + if (!netif_running(ndev)) 259 + goto unlock_rtnl; 260 + 261 + netif_addr_lock_bh(ndev); 256 262 if (ndev->flags & IFF_PROMISC) { 257 263 /* Enable promiscuous mode */ 258 264 cpsw_set_promiscious(ndev, true); 259 265 cpsw_ale_set_allmulti(cpsw->ale, IFF_ALLMULTI, priv->emac_port); 260 - return; 266 + goto unlock_addr; 261 267 } 262 268 263 269 /* Disable promiscuous mode */ ··· 276 270 /* add/remove mcast address either for real netdev or for vlan */ 277 271 __hw_addr_ref_sync_dev(&ndev->mc, ndev, cpsw_add_mc_addr, 278 272 cpsw_del_mc_addr); 273 + 274 + unlock_addr: 275 + netif_addr_unlock_bh(ndev); 276 + unlock_rtnl: 277 + rtnl_unlock(); 278 + } 279 + 280 + static void cpsw_ndo_set_rx_mode(struct net_device *ndev) 281 + { 282 + struct cpsw_priv *priv = netdev_priv(ndev); 283 + 284 + schedule_work(&priv->rx_mode_work); 279 285 } 280 286 281 287 static unsigned int cpsw_rxbuf_total_len(unsigned int len) ··· 1416 1398 priv->msg_enable = netif_msg_init(debug_level, CPSW_DEBUG); 1417 1399 priv->emac_port = i + 1; 1418 1400 priv->tx_packet_min = CPSW_MIN_PACKET_SIZE; 1401 + INIT_WORK(&priv->rx_mode_work, cpsw_ndo_set_rx_mode_work); 1419 1402 1420 1403 if (is_valid_ether_addr(slave_data->mac_addr)) { 1421 1404 ether_addr_copy(priv->mac_addr, slave_data->mac_addr); ··· 1466 1447 1467 1448 static void cpsw_unregister_ports(struct cpsw_common *cpsw) 1468 1449 { 1450 + struct net_device *ndev; 1451 + struct cpsw_priv *priv; 1469 1452 int i = 0; 1470 1453 1471 1454 for (i = 0; i < cpsw->data.slaves; i++) { 1472 - if (!cpsw->slaves[i].ndev) 
1455 + ndev = cpsw->slaves[i].ndev; 1456 + if (!ndev) 1473 1457 continue; 1474 1458 1475 - unregister_netdev(cpsw->slaves[i].ndev); 1459 + priv = netdev_priv(ndev); 1460 + unregister_netdev(ndev); 1461 + disable_work_sync(&priv->rx_mode_work); 1476 1462 } 1477 1463 } 1478 1464
+1
drivers/net/ethernet/ti/cpsw_priv.h
··· 391 391 u32 tx_packet_min; 392 392 struct cpsw_ale_ratelimit ale_bc_ratelimit; 393 393 struct cpsw_ale_ratelimit ale_mc_ratelimit; 394 + struct work_struct rx_mode_work; 394 395 }; 395 396 396 397 #define ndev_to_cpsw(ndev) (((struct cpsw_priv *)netdev_priv(ndev))->cpsw)
+3 -2
drivers/net/macvlan.c
··· 1567 1567 /* the macvlan port may be freed by macvlan_uninit when fail to register. 1568 1568 * so we destroy the macvlan port only when it's valid. 1569 1569 */ 1570 - if (create && macvlan_port_get_rtnl(lowerdev)) { 1570 + if (macvlan_port_get_rtnl(lowerdev)) { 1571 1571 macvlan_flush_sources(port, vlan); 1572 - macvlan_port_destroy(port->dev); 1572 + if (create) 1573 + macvlan_port_destroy(port->dev); 1573 1574 } 1574 1575 return err; 1575 1576 }
+2
drivers/net/phy/sfp.c
··· 479 479 linkmode_zero(caps->link_modes); 480 480 linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseX_Full_BIT, 481 481 caps->link_modes); 482 + phy_interface_zero(caps->interfaces); 483 + __set_bit(PHY_INTERFACE_MODE_1000BASEX, caps->interfaces); 482 484 } 483 485 484 486 #define SFP_QUIRK(_v, _p, _s, _f) \
+15 -14
drivers/net/usb/r8152.c
··· 8530 8530 usb_submit_urb(tp->intr_urb, GFP_NOIO); 8531 8531 } 8532 8532 8533 - /* If the device is RTL8152_INACCESSIBLE here then we should do a 8534 - * reset. This is important because the usb_lock_device_for_reset() 8535 - * that happens as a result of usb_queue_reset_device() will silently 8536 - * fail if the device was suspended or if too much time passed. 8537 - * 8538 - * NOTE: The device is locked here so we can directly do the reset. 8539 - * We don't need usb_lock_device_for_reset() because that's just a 8540 - * wrapper over device_lock() and device_resume() (which calls us) 8541 - * does that for us. 8542 - */ 8543 - if (test_bit(RTL8152_INACCESSIBLE, &tp->flags)) 8544 - usb_reset_device(tp->udev); 8545 - 8546 8533 return 0; 8547 8534 } 8548 8535 ··· 8640 8653 static int rtl8152_resume(struct usb_interface *intf) 8641 8654 { 8642 8655 struct r8152 *tp = usb_get_intfdata(intf); 8656 + bool runtime_resume = test_bit(SELECTIVE_SUSPEND, &tp->flags); 8643 8657 int ret; 8644 8658 8645 8659 mutex_lock(&tp->control); 8646 8660 8647 8661 rtl_reset_ocp_base(tp); 8648 8662 8649 - if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) 8663 + if (runtime_resume) 8650 8664 ret = rtl8152_runtime_resume(tp); 8651 8665 else 8652 8666 ret = rtl8152_system_resume(tp); 8653 8667 8654 8668 mutex_unlock(&tp->control); 8669 + 8670 + /* If the device is RTL8152_INACCESSIBLE here then we should do a 8671 + * reset. This is important because the usb_lock_device_for_reset() 8672 + * that happens as a result of usb_queue_reset_device() will silently 8673 + * fail if the device was suspended or if too much time passed. 8674 + * 8675 + * NOTE: The device is locked here so we can directly do the reset. 8676 + * We don't need usb_lock_device_for_reset() because that's just a 8677 + * wrapper over device_lock() and device_resume() (which calls us) 8678 + * does that for us. 
8679 + */ 8680 + if (!runtime_resume && test_bit(RTL8152_INACCESSIBLE, &tp->flags)) 8681 + usb_reset_device(tp->udev); 8655 8682 8656 8683 return ret; 8657 8684 }
-2
drivers/net/wireless/intel/iwlwifi/mld/iface.c
··· 55 55 56 56 ieee80211_iter_keys(mld->hw, vif, iwl_mld_cleanup_keys_iter, NULL); 57 57 58 - wiphy_delayed_work_cancel(mld->wiphy, &mld_vif->mlo_scan_start_wk); 59 - 60 58 CLEANUP_STRUCT(mld_vif); 61 59 } 62 60
+2
drivers/net/wireless/intel/iwlwifi/mld/mac80211.c
··· 1840 1840 wiphy_work_cancel(mld->wiphy, &mld_vif->emlsr.unblock_tpt_wk); 1841 1841 wiphy_delayed_work_cancel(mld->wiphy, 1842 1842 &mld_vif->emlsr.check_tpt_wk); 1843 + wiphy_delayed_work_cancel(mld->wiphy, 1844 + &mld_vif->mlo_scan_start_wk); 1843 1845 1844 1846 iwl_mld_reset_cca_40mhz_workaround(mld, vif); 1845 1847 iwl_mld_smps_workaround(mld, vif, true);
+5 -1
drivers/net/wireless/intel/iwlwifi/mvm/d3.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 /* 3 - * Copyright (C) 2012-2014, 2018-2025 Intel Corporation 3 + * Copyright (C) 2012-2014, 2018-2026 Intel Corporation 4 4 * Copyright (C) 2013-2015 Intel Mobile Communications GmbH 5 5 * Copyright (C) 2016-2017 Intel Deutschland GmbH 6 6 */ ··· 3214 3214 3215 3215 IWL_DEBUG_WOWLAN(mvm, "Starting fast suspend flow\n"); 3216 3216 3217 + iwl_mvm_pause_tcm(mvm, true); 3218 + 3217 3219 mvm->fast_resume = true; 3218 3220 set_bit(IWL_MVM_STATUS_IN_D3, &mvm->status); 3219 3221 ··· 3271 3269 IWL_ERR(mvm, "Couldn't get the d3 notif %d\n", ret); 3272 3270 mvm->trans->state = IWL_TRANS_NO_FW; 3273 3271 } 3272 + 3273 + iwl_mvm_resume_tcm(mvm); 3274 3274 3275 3275 out: 3276 3276 clear_bit(IWL_MVM_STATUS_IN_D3, &mvm->status);
+2 -2
drivers/nvme/host/pci.c
··· 806 806 if (!blk_rq_dma_unmap(req, dma_dev, &iod->dma_state, iod->total_len, 807 807 map)) { 808 808 if (nvme_pci_cmd_use_sgl(&iod->cmd)) 809 - nvme_free_sgls(req, iod->descriptors[0], 810 - &iod->cmd.common.dptr.sgl, attrs); 809 + nvme_free_sgls(req, &iod->cmd.common.dptr.sgl, 810 + iod->descriptors[0], attrs); 811 811 else 812 812 nvme_free_prps(req, attrs); 813 813 }
+2 -1
drivers/nvme/target/io-cmd-bdev.c
··· 180 180 static void nvmet_bio_done(struct bio *bio) 181 181 { 182 182 struct nvmet_req *req = bio->bi_private; 183 + blk_status_t blk_status = bio->bi_status; 183 184 184 - nvmet_req_complete(req, blk_to_nvme_status(req, bio->bi_status)); 185 185 nvmet_req_bio_put(req, bio); 186 + nvmet_req_complete(req, blk_to_nvme_status(req, blk_status)); 186 187 } 187 188 188 189 #ifdef CONFIG_BLK_DEV_INTEGRITY
+17 -2
drivers/of/of_reserved_mem.c
··· 157 157 phys_addr_t base, size; 158 158 int i, len; 159 159 const __be32 *prop; 160 - bool nomap; 160 + bool nomap, default_cma; 161 161 162 162 prop = of_flat_dt_get_addr_size_prop(node, "reg", &len); 163 163 if (!prop) 164 164 return -ENOENT; 165 165 166 166 nomap = of_get_flat_dt_prop(node, "no-map", NULL) != NULL; 167 + default_cma = of_get_flat_dt_prop(node, "linux,cma-default", NULL); 168 + 169 + if (default_cma && cma_skip_dt_default_reserved_mem()) { 170 + pr_err("Skipping dt linux,cma-default for \"cma=\" kernel param.\n"); 171 + return -EINVAL; 172 + } 167 173 168 174 for (i = 0; i < len; i++) { 169 175 u64 b, s; ··· 254 248 255 249 fdt_for_each_subnode(child, fdt, node) { 256 250 const char *uname; 251 + bool default_cma = of_get_flat_dt_prop(child, "linux,cma-default", NULL); 257 252 u64 b, s; 258 253 259 254 if (!of_fdt_device_is_available(fdt, child)) 255 + continue; 256 + if (default_cma && cma_skip_dt_default_reserved_mem()) 260 257 continue; 261 258 262 259 if (!of_flat_dt_get_addr_size(child, "reg", &b, &s)) ··· 398 389 phys_addr_t base = 0, align = 0, size; 399 390 int i, len; 400 391 const __be32 *prop; 401 - bool nomap; 392 + bool nomap, default_cma; 402 393 int ret; 403 394 404 395 prop = of_get_flat_dt_prop(node, "size", &len); ··· 422 413 } 423 414 424 415 nomap = of_get_flat_dt_prop(node, "no-map", NULL) != NULL; 416 + default_cma = of_get_flat_dt_prop(node, "linux,cma-default", NULL); 417 + 418 + if (default_cma && cma_skip_dt_default_reserved_mem()) { 419 + pr_err("Skipping dt linux,cma-default for \"cma=\" kernel param.\n"); 420 + return -EINVAL; 421 + } 425 422 426 423 /* Need adjust the alignment to satisfy the CMA requirement */ 427 424 if (IS_ENABLED(CONFIG_CMA)
+3 -7
drivers/pci/ide.c
··· 11 11 #include <linux/pci_regs.h> 12 12 #include <linux/slab.h> 13 13 #include <linux/sysfs.h> 14 - #include <linux/tsm.h> 15 14 16 15 #include "pci.h" 17 16 ··· 167 168 for (u16 i = 0; i < nr_streams; i++) { 168 169 int pos = __sel_ide_offset(ide_cap, nr_link_ide, i, nr_ide_mem); 169 170 170 - pci_read_config_dword(pdev, pos + PCI_IDE_SEL_CAP, &val); 171 + pci_read_config_dword(pdev, pos + PCI_IDE_SEL_CTL, &val); 171 172 if (val & PCI_IDE_SEL_CTL_EN) 172 173 continue; 173 174 val &= ~PCI_IDE_SEL_CTL_ID; ··· 282 283 /* for SR-IOV case, cover all VFs */ 283 284 num_vf = pci_num_vf(pdev); 284 285 if (num_vf) 285 - rid_end = PCI_DEVID(pci_iov_virtfn_bus(pdev, num_vf), 286 - pci_iov_virtfn_devfn(pdev, num_vf)); 286 + rid_end = PCI_DEVID(pci_iov_virtfn_bus(pdev, num_vf - 1), 287 + pci_iov_virtfn_devfn(pdev, num_vf - 1)); 287 288 else 288 289 rid_end = pci_dev_id(pdev); 289 290 ··· 371 372 372 373 if (ide->partner[PCI_IDE_EP].enable) 373 374 pci_ide_stream_disable(pdev, ide); 374 - 375 - if (ide->tsm_dev) 376 - tsm_ide_stream_unregister(ide); 377 375 378 376 if (ide->partner[PCI_IDE_RP].setup) 379 377 pci_ide_stream_teardown(rp, ide);
+4 -5
drivers/pinctrl/pinctrl-rockchip.c
··· 3545 3545 return 0; 3546 3546 } 3547 3547 3548 - static int rockchip_pmx_gpio_set_direction(struct pinctrl_dev *pctldev, 3549 - struct pinctrl_gpio_range *range, 3550 - unsigned offset, 3551 - bool input) 3548 + static int rockchip_pmx_gpio_request_enable(struct pinctrl_dev *pctldev, 3549 + struct pinctrl_gpio_range *range, 3550 + unsigned int offset) 3552 3551 { 3553 3552 struct rockchip_pinctrl *info = pinctrl_dev_get_drvdata(pctldev); 3554 3553 struct rockchip_pin_bank *bank; ··· 3561 3562 .get_function_name = rockchip_pmx_get_func_name, 3562 3563 .get_function_groups = rockchip_pmx_get_groups, 3563 3564 .set_mux = rockchip_pmx_set, 3564 - .gpio_set_direction = rockchip_pmx_gpio_set_direction, 3565 + .gpio_request_enable = rockchip_pmx_gpio_request_enable, 3565 3566 }; 3566 3567 3567 3568 /*
+7
drivers/platform/x86/amd/pmc/pmc-quirks.c
··· 302 302 DMI_MATCH(DMI_BOARD_NAME, "XxKK4NAx_XxSP4NAx"), 303 303 } 304 304 }, 305 + { 306 + .ident = "MECHREVO Wujie 15X Pro", 307 + .driver_data = &quirk_spurious_8042, 308 + .matches = { 309 + DMI_MATCH(DMI_BOARD_NAME, "WUJIE Series-X5SP4NAG"), 310 + } 311 + }, 305 312 {} 306 313 }; 307 314
+32
drivers/platform/x86/classmate-laptop.c
··· 207 207 208 208 acpi = to_acpi_device(dev); 209 209 inputdev = dev_get_drvdata(&acpi->dev); 210 + if (!inputdev) 211 + return -ENXIO; 212 + 210 213 accel = dev_get_drvdata(&inputdev->dev); 214 + if (!accel) 215 + return -ENXIO; 211 216 212 217 return sysfs_emit(buf, "%d\n", accel->sensitivity); 213 218 } ··· 229 224 230 225 acpi = to_acpi_device(dev); 231 226 inputdev = dev_get_drvdata(&acpi->dev); 227 + if (!inputdev) 228 + return -ENXIO; 229 + 232 230 accel = dev_get_drvdata(&inputdev->dev); 231 + if (!accel) 232 + return -ENXIO; 233 233 234 234 r = kstrtoul(buf, 0, &sensitivity); 235 235 if (r) ··· 266 256 267 257 acpi = to_acpi_device(dev); 268 258 inputdev = dev_get_drvdata(&acpi->dev); 259 + if (!inputdev) 260 + return -ENXIO; 261 + 269 262 accel = dev_get_drvdata(&inputdev->dev); 263 + if (!accel) 264 + return -ENXIO; 270 265 271 266 return sysfs_emit(buf, "%d\n", accel->g_select); 272 267 } ··· 288 273 289 274 acpi = to_acpi_device(dev); 290 275 inputdev = dev_get_drvdata(&acpi->dev); 276 + if (!inputdev) 277 + return -ENXIO; 278 + 291 279 accel = dev_get_drvdata(&inputdev->dev); 280 + if (!accel) 281 + return -ENXIO; 292 282 293 283 r = kstrtoul(buf, 0, &g_select); 294 284 if (r) ··· 322 302 323 303 acpi = to_acpi_device(input->dev.parent); 324 304 accel = dev_get_drvdata(&input->dev); 305 + if (!accel) 306 + return -ENXIO; 325 307 326 308 cmpc_accel_set_sensitivity_v4(acpi->handle, accel->sensitivity); 327 309 cmpc_accel_set_g_select_v4(acpi->handle, accel->g_select); ··· 571 549 572 550 acpi = to_acpi_device(dev); 573 551 inputdev = dev_get_drvdata(&acpi->dev); 552 + if (!inputdev) 553 + return -ENXIO; 554 + 574 555 accel = dev_get_drvdata(&inputdev->dev); 556 + if (!accel) 557 + return -ENXIO; 575 558 576 559 return sysfs_emit(buf, "%d\n", accel->sensitivity); 577 560 } ··· 593 566 594 567 acpi = to_acpi_device(dev); 595 568 inputdev = dev_get_drvdata(&acpi->dev); 569 + if (!inputdev) 570 + return -ENXIO; 571 + 596 572 accel = dev_get_drvdata(&inputdev->dev); 573 + if (!accel) 574 + return -ENXIO; 597 575 598 576 r = kstrtoul(buf, 0, &sensitivity); 599 577 if (r)
+5
drivers/platform/x86/hp/hp-bioscfg/bioscfg.c
··· 696 696 return ret; 697 697 } 698 698 699 + if (!str_value || !str_value[0]) { 700 + pr_debug("Ignoring attribute with empty name\n"); 701 + goto pack_attr_exit; 702 + } 703 + 699 704 /* All duplicate attributes found are ignored */ 700 705 duplicate = kset_find_obj(temp_kset, str_value); 701 706 if (duplicate) {
+1 -1
drivers/platform/x86/intel/plr_tpmi.c
··· 316 316 snprintf(name, sizeof(name), "domain%d", i); 317 317 318 318 dentry = debugfs_create_dir(name, plr->dbgfs_dir); 319 - debugfs_create_file("status", 0444, dentry, &plr->die_info[i], 319 + debugfs_create_file("status", 0644, dentry, &plr->die_info[i], 320 320 &plr_status_fops); 321 321 } 322 322
+2 -2
drivers/platform/x86/intel/telemetry/debugfs.c
··· 449 449 for (index = 0; index < debugfs_conf->pss_ltr_evts; index++) { 450 450 seq_printf(s, "%-32s\t%u\n", 451 451 debugfs_conf->pss_ltr_data[index].name, 452 - pss_s0ix_wakeup[index]); 452 + pss_ltr_blkd[index]); 453 453 } 454 454 455 455 seq_puts(s, "\n--------------------------------------\n"); ··· 459 459 for (index = 0; index < debugfs_conf->pss_wakeup_evts; index++) { 460 460 seq_printf(s, "%-32s\t%u\n", 461 461 debugfs_conf->pss_wakeup[index].name, 462 - pss_ltr_blkd[index]); 462 + pss_s0ix_wakeup[index]); 463 463 } 464 464 465 465 return 0;
+1 -1
drivers/platform/x86/intel/telemetry/pltdrv.c
··· 610 610 /* Get telemetry Info */ 611 611 events = (read_buf & TELEM_INFO_SRAMEVTS_MASK) >> 612 612 TELEM_INFO_SRAMEVTS_SHIFT; 613 - event_regs = read_buf & TELEM_INFO_SRAMEVTS_MASK; 613 + event_regs = read_buf & TELEM_INFO_NENABLES_MASK; 614 614 if ((events < TELEM_MAX_EVENTS_SRAM) || 615 615 (event_regs < TELEM_MAX_EVENTS_SRAM)) { 616 616 dev_err(&pdev->dev, "PSS:Insufficient Space for SRAM Trace\n");
+2
drivers/platform/x86/intel/vsec.c
··· 766 766 #define PCI_DEVICE_ID_INTEL_VSEC_LNL_M 0x647d 767 767 #define PCI_DEVICE_ID_INTEL_VSEC_PTL 0xb07d 768 768 #define PCI_DEVICE_ID_INTEL_VSEC_WCL 0xfd7d 769 + #define PCI_DEVICE_ID_INTEL_VSEC_NVL 0xd70d 769 770 static const struct pci_device_id intel_vsec_pci_ids[] = { 770 771 { PCI_DEVICE_DATA(INTEL, VSEC_ADL, &tgl_info) }, 771 772 { PCI_DEVICE_DATA(INTEL, VSEC_DG1, &dg1_info) }, ··· 779 778 { PCI_DEVICE_DATA(INTEL, VSEC_LNL_M, &lnl_info) }, 780 779 { PCI_DEVICE_DATA(INTEL, VSEC_PTL, &mtl_info) }, 781 780 { PCI_DEVICE_DATA(INTEL, VSEC_WCL, &mtl_info) }, 781 + { PCI_DEVICE_DATA(INTEL, VSEC_NVL, &mtl_info) }, 782 782 { } 783 783 }; 784 784 MODULE_DEVICE_TABLE(pci, intel_vsec_pci_ids);
+10 -1
drivers/platform/x86/lg-laptop.c
··· 838 838 case 'P': 839 839 year = 2021; 840 840 break; 841 - default: 841 + case 'Q': 842 842 year = 2022; 843 + break; 844 + case 'R': 845 + year = 2023; 846 + break; 847 + case 'S': 848 + year = 2024; 849 + break; 850 + default: 851 + year = 2025; 843 852 } 844 853 break; 845 854 default:
+3 -1
drivers/platform/x86/panasonic-laptop.c
··· 1089 1089 PLATFORM_DEVID_NONE, NULL, 0); 1090 1090 if (IS_ERR(pcc->platform)) { 1091 1091 result = PTR_ERR(pcc->platform); 1092 - goto out_backlight; 1092 + goto out_sysfs; 1093 1093 } 1094 1094 result = device_create_file(&pcc->platform->dev, 1095 1095 &dev_attr_cdpower); ··· 1105 1105 1106 1106 out_platform: 1107 1107 platform_device_unregister(pcc->platform); 1108 + out_sysfs: 1109 + sysfs_remove_group(&device->dev.kobj, &pcc_attr_group); 1108 1110 out_backlight: 1109 1111 backlight_device_unregister(pcc->backlight); 1110 1112 out_input:
+1 -1
drivers/platform/x86/toshiba_haps.c
··· 183 183 184 184 pr_info("Toshiba HDD Active Protection Sensor device\n"); 185 185 186 - haps = kzalloc(sizeof(struct toshiba_haps_dev), GFP_KERNEL); 186 + haps = devm_kzalloc(&acpi_dev->dev, sizeof(*haps), GFP_KERNEL); 187 187 if (!haps) 188 188 return -ENOMEM; 189 189
+1
drivers/scsi/be2iscsi/be_mgmt.c
··· 1025 1025 &nonemb_cmd->dma, 1026 1026 GFP_KERNEL); 1027 1027 if (!nonemb_cmd->va) { 1028 + free_mcc_wrb(ctrl, tag); 1028 1029 mutex_unlock(&ctrl->mbox_lock); 1029 1030 return 0; 1030 1031 }
+1 -1
drivers/scsi/qla2xxx/qla_os.c
··· 4489 4489 fail_elsrej: 4490 4490 dma_pool_destroy(ha->purex_dma_pool); 4491 4491 fail_flt: 4492 - dma_free_coherent(&ha->pdev->dev, SFP_DEV_SIZE, 4492 + dma_free_coherent(&ha->pdev->dev, sizeof(struct qla_flt_header) + FLT_REGIONS_SIZE, 4493 4493 ha->flt, ha->flt_dma); 4494 4494 4495 4495 fail_flt_buffer:
+3 -2
drivers/soc/qcom/smem.c
··· 396 396 */ 397 397 bool qcom_smem_is_available(void) 398 398 { 399 - return !!__smem; 399 + return !IS_ERR(__smem); 400 400 } 401 401 EXPORT_SYMBOL_GPL(qcom_smem_is_available); 402 402 ··· 1247 1247 { 1248 1248 platform_device_unregister(__smem->socinfo); 1249 1249 1250 - __smem = NULL; 1250 + /* Set to -EPROBE_DEFER to signal unprobed state */ 1251 + __smem = ERR_PTR(-EPROBE_DEFER); 1251 1252 } 1252 1253 1253 1254 static const struct of_device_id qcom_smem_of_match[] = {
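The smem hunk above changes the "not yet probed" sentinel from NULL to ERR_PTR(-EPROBE_DEFER), which is why qcom_smem_is_available() must now test !IS_ERR(__smem) rather than !!__smem. The encoding is the kernel's standard ERR_PTR scheme; below is a minimal userspace re-implementation for illustration only (these macros are a sketch, not the actual <linux/err.h> definitions):

```c
#define MAX_ERRNO    4095L
#define EPROBE_DEFER 517  /* kernel-internal errno value */

/* The last 4095 values of the address space never hold a valid object,
 * so a small negative errno can be smuggled through a pointer. */
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

Keeping __smem as ERR_PTR(-EPROBE_DEFER) while unbound (and resetting it on remove, as the hunk does) lets lookups report "try again later" instead of treating an unbound driver the same as a hard failure.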
+2 -2
drivers/target/sbp/sbp_target.c
··· 1960 1960 container_of(wwn, struct sbp_tport, tport_wwn); 1961 1961 1962 1962 struct sbp_tpg *tpg; 1963 - unsigned long tpgt; 1963 + u16 tpgt; 1964 1964 int ret; 1965 1965 1966 1966 if (strstr(name, "tpgt_") != name) 1967 1967 return ERR_PTR(-EINVAL); 1968 - if (kstrtoul(name + 5, 10, &tpgt) || tpgt > UINT_MAX) 1968 + if (kstrtou16(name + 5, 10, &tpgt)) 1969 1969 return ERR_PTR(-EINVAL); 1970 1970 1971 1971 if (tport->tpg) {
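The sbp_target fix above replaces kstrtoul() plus a range check against UINT_MAX (too wide for a value that ends up in a 16-bit TPGT field) with kstrtou16(), which enforces the width at parse time. A userspace approximation of that bounded parse; parse_u16() is a hypothetical stand-in, not a kernel API:

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Parse a base-10 string into a uint16_t. Returns 0 on success,
 * -ERANGE on overflow, -EINVAL on malformed or trailing input --
 * roughly the contract of the kernel's kstrtou16(). */
static int parse_u16(const char *s, uint16_t *out)
{
	char *end;
	unsigned long v;

	errno = 0;
	v = strtoul(s, &end, 10);
	if (errno == ERANGE || v > UINT16_MAX)
		return -ERANGE;
	if (end == s || *end != '\0')
		return -EINVAL;
	*out = (uint16_t)v;
	return 0;
}
```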
+1 -1
drivers/ufs/host/ufs-amd-versal2.c
··· 367 367 { 368 368 int ret = 0; 369 369 370 - if (status == PRE_CHANGE) { 370 + if (status == POST_CHANGE) { 371 371 ret = ufs_versal2_phy_init(hba); 372 372 if (ret) 373 373 dev_err(hba->dev, "Phy init failed (%d)\n", ret);
-30
drivers/virt/coco/tsm-core.c
··· 4 4 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 5 5 6 6 #include <linux/tsm.h> 7 - #include <linux/pci.h> 8 - #include <linux/rwsem.h> 9 7 #include <linux/device.h> 10 8 #include <linux/module.h> 11 9 #include <linux/cleanup.h> 12 10 #include <linux/pci-tsm.h> 13 - #include <linux/pci-ide.h> 14 11 15 12 static struct class *tsm_class; 16 - static DECLARE_RWSEM(tsm_rwsem); 17 13 static DEFINE_IDA(tsm_ida); 18 14 19 15 static int match_id(struct device *dev, const void *data) ··· 103 107 device_unregister(&tsm_dev->dev); 104 108 } 105 109 EXPORT_SYMBOL_GPL(tsm_unregister); 106 - 107 - /* must be invoked between tsm_register / tsm_unregister */ 108 - int tsm_ide_stream_register(struct pci_ide *ide) 109 - { 110 - struct pci_dev *pdev = ide->pdev; 111 - struct pci_tsm *tsm = pdev->tsm; 112 - struct tsm_dev *tsm_dev = tsm->tsm_dev; 113 - int rc; 114 - 115 - rc = sysfs_create_link(&tsm_dev->dev.kobj, &pdev->dev.kobj, ide->name); 116 - if (rc) 117 - return rc; 118 - 119 - ide->tsm_dev = tsm_dev; 120 - return 0; 121 - } 122 - EXPORT_SYMBOL_GPL(tsm_ide_stream_register); 123 - 124 - void tsm_ide_stream_unregister(struct pci_ide *ide) 125 - { 126 - struct tsm_dev *tsm_dev = ide->tsm_dev; 127 - 128 - ide->tsm_dev = NULL; 129 - sysfs_remove_link(&tsm_dev->dev.kobj, ide->name); 130 - } 131 - EXPORT_SYMBOL_GPL(tsm_ide_stream_unregister); 132 110 133 111 static void tsm_release(struct device *dev) 134 112 {
+1
fs/btrfs/raid56.c
··· 150 150 static void free_raid_bio_pointers(struct btrfs_raid_bio *rbio) 151 151 { 152 152 bitmap_free(rbio->error_bitmap); 153 + bitmap_free(rbio->stripe_uptodate_bitmap); 153 154 kfree(rbio->stripe_pages); 154 155 kfree(rbio->bio_paddrs); 155 156 kfree(rbio->stripe_paddrs);
+1 -1
fs/efivarfs/vars.c
··· 552 552 err = __efivar_entry_get(entry, attributes, size, data); 553 553 efivar_unlock(); 554 554 555 - return 0; 555 + return err; 556 556 } 557 557 558 558 /**
+3 -1
fs/smb/client/cifstransport.c
··· 251 251 rc = cifs_send_recv(xid, ses, ses->server, 252 252 &rqst, &resp_buf_type, flags, &resp_iov); 253 253 if (rc < 0) 254 - return rc; 254 + goto out; 255 255 256 256 if (out_buf) { 257 257 *pbytes_returned = resp_iov.iov_len; 258 258 if (resp_iov.iov_len) 259 259 memcpy(out_buf, resp_iov.iov_base, resp_iov.iov_len); 260 260 } 261 + 262 + out: 261 263 free_rsp_buf(resp_buf_type, resp_iov.iov_base); 262 264 return rc; 263 265 }
+1
fs/smb/client/smb2file.c
··· 178 178 rc = SMB2_open(xid, oparms, smb2_path, &smb2_oplock, smb2_data, NULL, &err_iov, 179 179 &err_buftype); 180 180 if (rc == -EACCES && retry_without_read_attributes) { 181 + free_rsp_buf(err_buftype, err_iov.iov_base); 181 182 oparms->desired_access &= ~FILE_READ_ATTRIBUTES; 182 183 rc = SMB2_open(xid, oparms, smb2_path, &smb2_oplock, smb2_data, NULL, &err_iov, 183 184 &err_buftype);
+9
include/linux/cma.h
··· 57 57 58 58 extern void cma_reserve_pages_on_error(struct cma *cma); 59 59 60 + #ifdef CONFIG_DMA_CMA 61 + extern bool cma_skip_dt_default_reserved_mem(void); 62 + #else 63 + static inline bool cma_skip_dt_default_reserved_mem(void) 64 + { 65 + return false; 66 + } 67 + #endif 68 + 60 69 #ifdef CONFIG_CMA 61 70 struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp); 62 71 bool cma_free_folio(struct cma *cma, const struct folio *folio);
+14
include/linux/kasan.h
··· 641 641 __kasan_unpoison_vmap_areas(vms, nr_vms, flags); 642 642 } 643 643 644 + void __kasan_vrealloc(const void *start, unsigned long old_size, 645 + unsigned long new_size); 646 + 647 + static __always_inline void kasan_vrealloc(const void *start, 648 + unsigned long old_size, 649 + unsigned long new_size) 650 + { 651 + if (kasan_enabled()) 652 + __kasan_vrealloc(start, old_size, new_size); 653 + } 654 + 644 655 #else /* CONFIG_KASAN_VMALLOC */ 645 656 646 657 static inline void kasan_populate_early_vm_area_shadow(void *start, ··· 680 669 kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms, 681 670 kasan_vmalloc_flags_t flags) 682 671 { } 672 + 673 + static inline void kasan_vrealloc(const void *start, unsigned long old_size, 674 + unsigned long new_size) { } 683 675 684 676 #endif /* CONFIG_KASAN_VMALLOC */ 685 677
+6
include/linux/memfd.h
··· 17 17 * to by vm_flags_ptr. 18 18 */ 19 19 int memfd_check_seals_mmap(struct file *file, vm_flags_t *vm_flags_ptr); 20 + struct file *memfd_alloc_file(const char *name, unsigned int flags); 20 21 #else 21 22 static inline long memfd_fcntl(struct file *f, unsigned int c, unsigned int a) 22 23 { ··· 31 30 vm_flags_t *vm_flags_ptr) 32 31 { 33 32 return 0; 33 + } 34 + 35 + static inline struct file *memfd_alloc_file(const char *name, unsigned int flags) 36 + { 37 + return ERR_PTR(-EINVAL); 34 38 } 35 39 #endif 36 40
+6 -3
include/linux/memremap.h
··· 224 224 } 225 225 226 226 #ifdef CONFIG_ZONE_DEVICE 227 - void zone_device_page_init(struct page *page, unsigned int order); 227 + void zone_device_page_init(struct page *page, struct dev_pagemap *pgmap, 228 + unsigned int order); 228 229 void *memremap_pages(struct dev_pagemap *pgmap, int nid); 229 230 void memunmap_pages(struct dev_pagemap *pgmap); 230 231 void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap); ··· 235 234 236 235 unsigned long memremap_compat_align(void); 237 236 238 - static inline void zone_device_folio_init(struct folio *folio, unsigned int order) 237 + static inline void zone_device_folio_init(struct folio *folio, 238 + struct dev_pagemap *pgmap, 239 + unsigned int order) 239 240 { 240 - zone_device_page_init(&folio->page, order); 241 + zone_device_page_init(&folio->page, pgmap, order); 241 242 if (order) 242 243 folio_set_large_rmappable(folio); 243 244 }
+1 -3
include/linux/pci-ide.h
··· 26 26 /** 27 27 * struct pci_ide_partner - Per port pair Selective IDE Stream settings 28 28 * @rid_start: Partner Port Requester ID range start 29 - * @rid_end: Partner Port Requester ID range end 29 + * @rid_end: Partner Port Requester ID range end (inclusive) 30 30 * @stream_index: Selective IDE Stream Register Block selection 31 31 * @mem_assoc: PCI bus memory address association for targeting peer partner 32 32 * @pref_assoc: PCI bus prefetchable memory address association for ··· 82 82 * @host_bridge_stream: allocated from host bridge @ide_stream_ida pool 83 83 * @stream_id: unique Stream ID (within Partner Port pairing) 84 84 * @name: name of the established Selective IDE Stream in sysfs 85 - * @tsm_dev: For TSM established IDE, the TSM device context 86 85 * 87 86 * Negative @stream_id values indicate "uninitialized" on the 88 87 * expectation that with TSM established IDE the TSM owns the stream_id ··· 93 94 u8 host_bridge_stream; 94 95 int stream_id; 95 96 const char *name; 96 - struct tsm_dev *tsm_dev; 97 97 }; 98 98 99 99 /*
+5
include/linux/sched.h
··· 1776 1776 (current->nr_cpus_allowed == 1); 1777 1777 } 1778 1778 1779 + static __always_inline bool is_user_task(struct task_struct *task) 1780 + { 1781 + return task->mm && !(task->flags & (PF_KTHREAD | PF_USER_WORKER)); 1782 + } 1783 + 1779 1784 /* Per-process atomic flags. */ 1780 1785 #define PFA_NO_NEW_PRIVS 0 /* May not gain new privileges. */ 1781 1786 #define PFA_SPREAD_PAGE 1 /* Spread page cache over cpuset */
+12
include/linux/skbuff.h
··· 4302 4302 skb_headlen(skb), buffer); 4303 4303 } 4304 4304 4305 + /* Variant of skb_header_pointer() where @offset is user-controlled 4306 + * and potentially negative. 4307 + */ 4308 + static inline void * __must_check 4309 + skb_header_pointer_careful(const struct sk_buff *skb, int offset, 4310 + int len, void *buffer) 4311 + { 4312 + if (unlikely(offset < 0 && -offset > skb_headroom(skb))) 4313 + return NULL; 4314 + return skb_header_pointer(skb, offset, len, buffer); 4315 + } 4316 + 4305 4317 static inline void * __must_check 4306 4318 skb_pointer_if_linear(const struct sk_buff *skb, int offset, int len) 4307 4319 {
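The new skb_header_pointer_careful() adds a single guard in front of skb_header_pointer(): a negative, user-controlled offset is only acceptable while it stays within the skb's headroom. The guard in isolation, as a standalone sketch with illustrative names:

```c
#include <stdbool.h>

/* Mirror of the new guard: a negative offset may only reach back as far
 * as the headroom available in front of the buffer. */
static bool offset_within_headroom(int offset, unsigned int headroom)
{
	if (offset < 0 && (unsigned int)-offset > headroom)
		return false;
	return true;
}
```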
-3
include/linux/tsm.h
··· 123 123 struct tsm_dev *tsm_register(struct device *parent, struct pci_tsm_ops *ops); 124 124 void tsm_unregister(struct tsm_dev *tsm_dev); 125 125 struct tsm_dev *find_tsm_dev(int id); 126 - struct pci_ide; 127 - int tsm_ide_stream_register(struct pci_ide *ide); 128 - void tsm_ide_stream_unregister(struct pci_ide *ide); 129 126 #endif /* __TSM_H */
+63 -7
kernel/cgroup/dmem.c
··· 14 14 #include <linux/mutex.h> 15 15 #include <linux/page_counter.h> 16 16 #include <linux/parser.h> 17 + #include <linux/refcount.h> 17 18 #include <linux/rculist.h> 18 19 #include <linux/slab.h> 19 20 ··· 72 71 struct rcu_head rcu; 73 72 74 73 struct page_counter cnt; 74 + struct dmem_cgroup_pool_state *parent; 75 75 76 + refcount_t ref; 76 77 bool inited; 77 78 }; 78 79 ··· 90 87 */ 91 88 static DEFINE_SPINLOCK(dmemcg_lock); 92 89 static LIST_HEAD(dmem_cgroup_regions); 90 + 91 + static void dmemcg_free_region(struct kref *ref); 92 + static void dmemcg_pool_free_rcu(struct rcu_head *rcu); 93 93 94 94 static inline struct dmemcg_state * 95 95 css_to_dmemcs(struct cgroup_subsys_state *css) ··· 110 104 return cg->css.parent ? css_to_dmemcs(cg->css.parent) : NULL; 111 105 } 112 106 107 + static void dmemcg_pool_get(struct dmem_cgroup_pool_state *pool) 108 + { 109 + refcount_inc(&pool->ref); 110 + } 111 + 112 + static bool dmemcg_pool_tryget(struct dmem_cgroup_pool_state *pool) 113 + { 114 + return refcount_inc_not_zero(&pool->ref); 115 + } 116 + 117 + static void dmemcg_pool_put(struct dmem_cgroup_pool_state *pool) 118 + { 119 + if (!refcount_dec_and_test(&pool->ref)) 120 + return; 121 + 122 + call_rcu(&pool->rcu, dmemcg_pool_free_rcu); 123 + } 124 + 125 + static void dmemcg_pool_free_rcu(struct rcu_head *rcu) 126 + { 127 + struct dmem_cgroup_pool_state *pool = container_of(rcu, typeof(*pool), rcu); 128 + 129 + if (pool->parent) 130 + dmemcg_pool_put(pool->parent); 131 + kref_put(&pool->region->ref, dmemcg_free_region); 132 + kfree(pool); 133 + } 134 + 113 135 static void free_cg_pool(struct dmem_cgroup_pool_state *pool) 114 136 { 115 137 list_del(&pool->region_node); 116 - kfree(pool); 138 + dmemcg_pool_put(pool); 117 139 } 118 140 119 141 static void ··· 376 342 page_counter_init(&pool->cnt, 377 343 &ppool->cnt : NULL, true); 378 344 reset_all_resource_limits(pool); 345 + refcount_set(&pool->ref, 1); 346 + kref_get(&region->ref); 347 + if (ppool && !pool->parent) { 348 + pool->parent = ppool; 349 + dmemcg_pool_get(ppool); 350 + } 379 351 380 352 list_add_tail_rcu(&pool->css_node, &dmemcs->pools); 381 353 list_add_tail(&pool->region_node, &region->pools); ··· 429 389 430 390 /* Fix up parent links, mark as inited. */ 431 391 pool->cnt.parent = &ppool->cnt; 392 + if (ppool && !pool->parent) { 393 + pool->parent = ppool; 394 + dmemcg_pool_get(ppool); 395 + } 432 396 pool->inited = true; 433 397 434 398 pool = ppool; ··· 467 423 */ 468 424 void dmem_cgroup_unregister_region(struct dmem_cgroup_region *region) 469 425 { 470 - struct list_head *entry; 426 + struct dmem_cgroup_pool_state *pool, *next; 471 427 472 428 if (!region) 473 429 return; ··· 477 433 /* Remove from global region list */ 478 434 list_del_rcu(&region->region_node); 479 435 480 - list_for_each_rcu(entry, &region->pools) { 481 - struct dmem_cgroup_pool_state *pool = 482 - container_of(entry, typeof(*pool), region_node); 483 - 436 + list_for_each_entry_safe(pool, next, &region->pools, region_node) { 484 437 list_del_rcu(&pool->css_node); 438 + list_del(&pool->region_node); 439 + dmemcg_pool_put(pool); 485 440 } 486 441 487 442 /* ··· 561 518 */ 562 519 void dmem_cgroup_pool_state_put(struct dmem_cgroup_pool_state *pool) 563 520 { 564 - if (pool) 521 + if (pool) { 565 522 css_put(&pool->cs->css); 523 + dmemcg_pool_put(pool); 524 + } 566 525 } 567 526 EXPORT_SYMBOL_GPL(dmem_cgroup_pool_state_put); 568 527 ··· 578 533 pool = find_cg_pool_locked(cg, region); 579 534 if (pool && !READ_ONCE(pool->inited)) 580 535 pool = NULL; 536 + if (pool && !dmemcg_pool_tryget(pool)) 537 + pool = NULL; 581 538 rcu_read_unlock(); 582 539 583 540 while (!pool) { ··· 588 541 589 542 pool = get_cg_pool_locked(cg, region, &allocpool); 590 543 else 591 544 pool = ERR_PTR(-ENODEV); 544 + if (!IS_ERR(pool)) 545 + dmemcg_pool_get(pool); 591 546 spin_unlock(&dmemcg_lock); 592 547 593 548 if (pool == ERR_PTR(-ENOMEM)) { ··· 625 576 626 577 page_counter_uncharge(&pool->cnt, size); 627 578 css_put(&pool->cs->css); 579 + dmemcg_pool_put(pool); 628 580 } 629 581 EXPORT_SYMBOL_GPL(dmem_cgroup_uncharge); 630 582 ··· 677 627 if (ret_limit_pool) { 678 628 *ret_limit_pool = container_of(fail, struct dmem_cgroup_pool_state, cnt); 679 629 css_get(&(*ret_limit_pool)->cs->css); 630 + dmemcg_pool_get(*ret_limit_pool); 680 631 } 632 + dmemcg_pool_put(pool); 681 633 ret = -EAGAIN; 682 634 goto err; 683 635 } ··· 752 700 if (!region_name[0]) 753 701 continue; 754 702 703 + if (!options || !*options) 704 + return -EINVAL; 705 + 755 706 rcu_read_lock(); 756 707 region = dmemcg_get_region_by_name(region_name); 757 708 rcu_read_unlock(); ··· 774 719 775 720 /* And commit */ 776 721 apply(pool, new_limit); 722 + dmemcg_pool_put(pool); 777 723 778 724 out_put: 779 725 kref_put(&region->ref, dmemcg_free_region);
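The dmem changes tie pool lifetime to a refcount_t, with dmemcg_pool_tryget() (built on refcount_inc_not_zero()) used after the RCU lookup so a pool that has concurrently dropped to zero references cannot be revived. A standalone C11 sketch of that inc-not-zero pattern (the struct and function names here are illustrative, not the kernel's):

```c
#include <stdatomic.h>
#include <stdbool.h>

struct pool { atomic_int ref; };

/* Like refcount_inc_not_zero(): take a reference only while at least
 * one other reference still exists, so a racing final put wins. */
static bool pool_tryget(struct pool *p)
{
	int old = atomic_load(&p->ref);

	while (old != 0) {
		if (atomic_compare_exchange_weak(&p->ref, &old, old + 1))
			return true;
	}
	return false;
}

/* Returns true when the caller dropped the final reference and must
 * free the object (the kernel patch defers that via call_rcu()). */
static bool pool_put(struct pool *p)
{
	return atomic_fetch_sub(&p->ref, 1) == 1;
}
```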
+10 -6
kernel/dma/contiguous.c
··· 91 91 } 92 92 early_param("cma", early_cma); 93 93 94 + /* 95 + * cma_skip_dt_default_reserved_mem - This is called from the 96 + * reserved_mem framework to detect if the default cma region is being 97 + * set by the "cma=" kernel parameter. 98 + */ 99 + bool __init cma_skip_dt_default_reserved_mem(void) 100 + { 101 + return size_cmdline != -1; 102 + } 103 + 94 104 #ifdef CONFIG_DMA_NUMA_CMA 95 105 96 106 static struct cma *dma_contiguous_numa_area[MAX_NUMNODES]; ··· 479 469 bool default_cma = of_get_flat_dt_prop(node, "linux,cma-default", NULL); 480 470 struct cma *cma; 481 471 int err; 482 - 483 - if (size_cmdline != -1 && default_cma) { 484 - pr_info("Reserved memory: bypass %s node, using cmdline CMA params instead\n", 485 - rmem->name); 486 - return -EBUSY; 487 - } 488 472 489 473 if (!of_get_flat_dt_prop(node, "reusable", NULL) || 490 474 of_get_flat_dt_prop(node, "no-map", NULL))
+6 -1
kernel/dma/pool.c
··· 277 277 { 278 278 struct gen_pool *pool = NULL; 279 279 struct page *page; 280 + bool pool_found = false; 280 281 281 282 while ((pool = dma_guess_pool(pool, gfp))) { 283 + pool_found = true; 282 284 page = __dma_alloc_from_pool(dev, size, pool, cpu_addr, 283 285 phys_addr_ok); 284 286 if (page) 285 287 return page; 286 288 } 287 289 288 - WARN(1, "Failed to get suitable pool for %s\n", dev_name(dev)); 290 + if (pool_found) 291 + WARN(!(gfp & __GFP_NOWARN), "DMA pool exhausted for %s\n", dev_name(dev)); 292 + else 293 + WARN(1, "Failed to get suitable pool for %s\n", dev_name(dev)); 289 294 return NULL; 290 295 } 291 296
+1 -1
kernel/events/callchain.c
··· 246 246 247 247 if (user && !crosstask) { 248 248 if (!user_mode(regs)) { 249 - if (current->flags & (PF_KTHREAD | PF_USER_WORKER)) 249 + if (!is_user_task(current)) 250 250 goto exit_put; 251 251 regs = task_pt_regs(current); 252 252 }
+3 -3
kernel/events/core.c
··· 7460 7460 if (user_mode(regs)) { 7461 7461 regs_user->abi = perf_reg_abi(current); 7462 7462 regs_user->regs = regs; 7463 - } else if (!(current->flags & (PF_KTHREAD | PF_USER_WORKER))) { 7463 + } else if (is_user_task(current)) { 7464 7464 perf_get_regs_user(regs_user, regs); 7465 7465 } else { 7466 7466 regs_user->abi = PERF_SAMPLE_REGS_ABI_NONE; ··· 8100 8100 * Try IRQ-safe get_user_page_fast_only first. 8101 8101 * If failed, leave phys_addr as 0. 8102 8102 */ 8103 - if (!(current->flags & (PF_KTHREAD | PF_USER_WORKER))) { 8103 + if (is_user_task(current)) { 8104 8104 struct page *p; 8105 8105 8106 8106 pagefault_disable(); ··· 8215 8215 { 8216 8216 bool kernel = !event->attr.exclude_callchain_kernel; 8217 8217 bool user = !event->attr.exclude_callchain_user && 8218 - !(current->flags & (PF_KTHREAD | PF_USER_WORKER)); 8218 + is_user_task(current); 8219 8219 /* Disallow cross-task user callchains. */ 8220 8220 bool crosstask = event->ctx->task && event->ctx->task != current; 8221 8221 bool defer_user = IS_ENABLED(CONFIG_UNWIND_USER) && user &&
+11 -1
kernel/liveupdate/kexec_handover.c
··· 255 255 if (is_folio && info.order) 256 256 prep_compound_page(page, info.order); 257 257 258 + /* Always mark headpage's codetag as empty to avoid accounting mismatch */ 259 + clear_page_tag_ref(page); 260 + if (!is_folio) { 261 + /* Also do that for the non-compound tail pages */ 262 + for (unsigned int i = 1; i < nr_pages; i++) 263 + clear_page_tag_ref(page + i); 264 + } 265 + 258 266 adjust_managed_page_count(page, nr_pages); 259 267 return page; 260 268 } ··· 1014 1006 chunk->phys[idx++] = phys; 1015 1007 if (idx == ARRAY_SIZE(chunk->phys)) { 1016 1008 chunk = new_vmalloc_chunk(chunk); 1017 - if (!chunk) 1009 + if (!chunk) { 1010 + err = -ENOMEM; 1018 1011 goto err_free; 1012 + } 1019 1013 idx = 0; 1020 1014 } 1021 1015 }
-2
kernel/liveupdate/luo_file.c
··· 402 402 403 403 luo_file->fh->ops->unfreeze(&args); 404 404 } 405 - 406 - luo_file->serialized_data = 0; 407 405 } 408 406 409 407 static void __luo_file_unfreeze(struct luo_file_set *file_set,
+12
kernel/sched/deadline.c
··· 1034 1034 return; 1035 1035 } 1036 1036 1037 + /* 1038 + * When [4] D->A is followed by [1] A->B, dl_defer_running 1039 + * needs to be cleared, otherwise it will fail to properly 1040 + * start the zero-laxity timer. 1041 + */ 1042 + dl_se->dl_defer_running = 0; 1037 1043 replenish_dl_new_period(dl_se, rq); 1038 1044 } else if (dl_server(dl_se) && dl_se->dl_defer) { 1039 1045 /* ··· 1661 1655 * dl_server_active = 1; 1662 1656 * enqueue_dl_entity() 1663 1657 * update_dl_entity(WAKEUP) 1658 + * if (dl_time_before() || dl_entity_overflow()) 1659 + * dl_defer_running = 0; 1660 + * replenish_dl_new_period(); 1661 + * // fwd period 1662 + * dl_throttled = 1; 1663 + * dl_defer_armed = 1; 1664 1664 * if (!dl_defer_running) 1665 1665 * dl_defer_armed = 1; 1666 1666 * dl_throttled = 1;
+48
kernel/sched/ext.c
··· 194 194 #include <trace/events/sched_ext.h> 195 195 196 196 static void process_ddsp_deferred_locals(struct rq *rq); 197 + static bool task_dead_and_done(struct task_struct *p); 197 198 static u32 reenq_local(struct rq *rq); 198 199 static void scx_kick_cpu(struct scx_sched *sch, s32 cpu, u64 flags); 199 200 static bool scx_vexit(struct scx_sched *sch, enum scx_exit_kind kind, ··· 2620 2619 2621 2620 set_cpus_allowed_common(p, ac); 2622 2621 2622 + if (task_dead_and_done(p)) 2623 + return; 2624 + 2623 2625 /* 2624 2626 * The effective cpumask is stored in @p->cpus_ptr which may temporarily 2625 2627 * differ from the configured one in @p->cpus_mask. Always tell the bpf ··· 3038 3034 percpu_up_read(&scx_fork_rwsem); 3039 3035 } 3040 3036 3037 + /** 3038 + * task_dead_and_done - Is a task dead and done running? 3039 + * @p: target task 3040 + * 3041 + * Once sched_ext_dead() removes the dead task from scx_tasks and exits it, the 3042 + * task no longer exists from SCX's POV. However, certain sched_class ops may be 3043 + * invoked on these dead tasks leading to failures - e.g. sched_setscheduler() 3044 + * may try to switch a task which finished sched_ext_dead() back into SCX 3045 + * triggering invalid SCX task state transitions and worse. 3046 + * 3047 + * Once a task has finished the final switch, sched_ext_dead() is the only thing 3048 + * that needs to happen on the task. Use this test to short-circuit sched_class 3049 + * operations which may be called on dead tasks. 3050 + */ 3051 + static bool task_dead_and_done(struct task_struct *p) 3052 + { 3053 + struct rq *rq = task_rq(p); 3054 + 3055 + lockdep_assert_rq_held(rq); 3056 + 3057 + /* 3058 + * In do_task_dead(), a dying task sets %TASK_DEAD with preemption 3059 + * disabled and __schedule(). If @p has %TASK_DEAD set and off CPU, @p 3060 + * won't ever run again. 3061 + */ 3062 + return unlikely(READ_ONCE(p->__state) == TASK_DEAD) && 3063 + !task_on_cpu(rq, p); 3064 + } 3065 + 3041 3066 void sched_ext_dead(struct task_struct *p) 3042 3067 { 3043 3068 unsigned long flags; 3044 3069 3070 + /* 3071 + * By the time control reaches here, @p has %TASK_DEAD set, switched out 3072 + * for the last time and then dropped the rq lock - task_dead_and_done() 3073 + * should be returning %true nullifying the straggling sched_class ops. 3074 + * Remove from scx_tasks and exit @p. 3075 + */ 3045 3076 raw_spin_lock_irqsave(&scx_tasks_lock, flags); 3046 3077 list_del_init(&p->scx.tasks_node); 3047 3078 raw_spin_unlock_irqrestore(&scx_tasks_lock, flags); ··· 3102 3063 3103 3064 lockdep_assert_rq_held(task_rq(p)); 3104 3065 3066 + if (task_dead_and_done(p)) 3067 + return; 3068 + 3105 3069 p->scx.weight = sched_weight_to_cgroup(scale_load_down(lw->weight)); 3106 3070 if (SCX_HAS_OP(sch, set_weight)) 3107 3071 SCX_CALL_OP_TASK(sch, SCX_KF_REST, set_weight, rq, ··· 3119 3077 { 3120 3078 struct scx_sched *sch = scx_root; 3121 3079 3080 + if (task_dead_and_done(p)) 3081 + return; 3082 + 3122 3083 scx_enable_task(p); 3123 3084 3124 3085 /* ··· 3135 3090 3136 3091 static void switched_from_scx(struct rq *rq, struct task_struct *p) 3137 3092 { 3093 + if (task_dead_and_done(p)) 3094 + return; 3095 + 3138 3096 scx_disable_task(p); 3139 3097 } 3140 3098
+5 -1
kernel/vmcore_info.c
··· 36 36 time64_t timestamp; 37 37 }; 38 38 39 - static struct hwerr_info hwerr_data[HWERR_RECOV_MAX]; 39 + /* 40 + * The hwerr_data[] array is declared with global scope so that it remains 41 + * accessible to vmcoreinfo even when Link Time Optimization (LTO) is enabled. 42 + */ 43 + struct hwerr_info hwerr_data[HWERR_RECOV_MAX]; 40 44 41 45 Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type, 42 46 void *data, size_t data_len)
+3 -2
lib/flex_proportions.c
··· 64 64 bool fprop_new_period(struct fprop_global *p, int periods) 65 65 { 66 66 s64 events = percpu_counter_sum(&p->events); 67 + unsigned long flags; 67 68 68 69 /* 69 70 * Don't do anything if there are no events. 70 71 */ 71 72 if (events <= 1) 72 73 return false; 73 - preempt_disable_nested(); 74 + local_irq_save(flags); 74 75 write_seqcount_begin(&p->sequence); 75 76 if (periods < 64) 76 77 events -= events >> periods; ··· 79 78 percpu_counter_add(&p->events, -events); 80 79 p->period += periods; 81 80 write_seqcount_end(&p->sequence); 82 - preempt_enable_nested(); 81 + local_irq_restore(flags); 83 82 84 83 return true; 85 84 }
+3 -1
lib/test_hmm.c
··· 662 662 goto error; 663 663 } 664 664 665 - zone_device_folio_init(page_folio(dpage), order); 665 + zone_device_folio_init(page_folio(dpage), 666 + page_pgmap(folio_page(page_folio(dpage), 0)), 667 + order); 666 668 dpage->zone_device_data = rpage; 667 669 return dpage; 668 670
+21
mm/kasan/common.c
··· 606 606 __kasan_unpoison_vmalloc(addr, size, flags | KASAN_VMALLOC_KEEP_TAG); 607 607 } 608 608 } 609 + 610 + void __kasan_vrealloc(const void *addr, unsigned long old_size, 611 + unsigned long new_size) 612 + { 613 + if (new_size < old_size) { 614 + kasan_poison_last_granule(addr, new_size); 615 + 616 + new_size = round_up(new_size, KASAN_GRANULE_SIZE); 617 + old_size = round_up(old_size, KASAN_GRANULE_SIZE); 618 + if (new_size < old_size) 619 + __kasan_poison_vmalloc(addr + new_size, 620 + old_size - new_size); 621 + } else if (new_size > old_size) { 622 + old_size = round_down(old_size, KASAN_GRANULE_SIZE); 623 + __kasan_unpoison_vmalloc(addr + old_size, 624 + new_size - old_size, 625 + KASAN_VMALLOC_PROT_NORMAL | 626 + KASAN_VMALLOC_VM_ALLOC | 627 + KASAN_VMALLOC_KEEP_TAG); 628 + } 629 + } 609 630 #endif
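The new __kasan_vrealloc() works in whole KASAN granules: a shrink poisons from round_up(new_size) to round_up(old_size) after fixing up the partial last granule, and a grow unpoisons from round_down(old_size) up to new_size. The rounding arithmetic on its own, assuming an 8-byte granule (a sketch, not the kernel's macros):

```c
#define GRANULE 8UL  /* KASAN_GRANULE_SIZE on typical configs */

static unsigned long round_up_g(unsigned long x)
{
	return (x + GRANULE - 1) & ~(GRANULE - 1);
}

static unsigned long round_down_g(unsigned long x)
{
	return x & ~(GRANULE - 1);
}

/* Length of the region to poison when shrinking old_size -> new_size:
 * only the whole granules past round_up(new_size) can be poisoned. */
static unsigned long shrink_poison_len(unsigned long old_size,
				       unsigned long new_size)
{
	unsigned long n = round_up_g(new_size);
	unsigned long o = round_up_g(old_size);

	return n < o ? o - n : 0;
}

/* Length of the region to unpoison when growing old_size -> new_size:
 * start at round_down(old_size) so the old partial granule is redone. */
static unsigned long grow_unpoison_len(unsigned long old_size,
				       unsigned long new_size)
{
	return new_size - round_down_g(old_size);
}
```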
+19 -4
mm/kfence/core.c
··· 596 596 static unsigned long kfence_init_pool(void) 597 597 { 598 598 unsigned long addr, start_pfn; 599 - int i; 599 + int i, rand; 600 600 601 601 if (!arch_kfence_init_pool()) 602 602 return (unsigned long)__kfence_pool; ··· 647 647 INIT_LIST_HEAD(&meta->list); 648 648 raw_spin_lock_init(&meta->lock); 649 649 meta->state = KFENCE_OBJECT_UNUSED; 650 - meta->addr = addr; /* Initialize for validation in metadata_to_pageaddr(). */ 651 - list_add_tail(&meta->list, &kfence_freelist); 650 + /* Use addr to randomize the freelist. */ 651 + meta->addr = i; 652 652 653 653 /* Protect the right redzone. */ 654 - if (unlikely(!kfence_protect(addr + PAGE_SIZE))) 654 + if (unlikely(!kfence_protect(addr + 2 * i * PAGE_SIZE + PAGE_SIZE))) 655 655 goto reset_slab; 656 + } 656 657 658 + for (i = CONFIG_KFENCE_NUM_OBJECTS; i > 0; i--) { 659 + rand = get_random_u32_below(i); 660 + swap(kfence_metadata_init[i - 1].addr, kfence_metadata_init[rand].addr); 661 + } 662 + 663 + for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) { 664 + struct kfence_metadata *meta_1 = &kfence_metadata_init[i]; 665 + struct kfence_metadata *meta_2 = &kfence_metadata_init[meta_1->addr]; 666 + 667 + list_add_tail(&meta_2->list, &kfence_freelist); 668 + } 669 + for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) { 670 + kfence_metadata_init[i].addr = addr; 657 671 addr += 2 * PAGE_SIZE; 658 672 } 659 673 ··· 680 666 return 0; 681 667 682 668 reset_slab: 669 + addr += 2 * i * PAGE_SIZE; 683 670 for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) { 684 671 struct page *page; 685 672
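The kfence init change above stops linking metadata into kfence_freelist in address order: it stashes an index in meta->addr, shuffles those indices with get_random_u32_below(), and links the objects in shuffled order so guarded-allocation addresses become unpredictable. The shuffle itself is a plain Fisher-Yates pass; a userspace sketch with rand() standing in for the kernel RNG:

```c
#include <stdlib.h>

/* In-place Fisher-Yates shuffle: walk down from the end, swapping each
 * slot with a uniformly chosen slot at or below it, as the kfence init
 * loop does over the metadata indices. */
static void shuffle(unsigned int *a, unsigned int n)
{
	for (unsigned int i = n; i > 0; i--) {
		unsigned int r = (unsigned int)rand() % i; /* get_random_u32_below(i) */
		unsigned int tmp = a[i - 1];

		a[i - 1] = a[r];
		a[r] = tmp;
	}
}
```

Shuffling indices rather than the metadata entries themselves matches the hunk's approach: the final pass can still assign meta->addr sequentially while the freelist order is randomized.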
+2 -2
mm/memfd.c
··· 456 456 return ERR_PTR(error); 457 457 } 458 458 459 - static struct file *alloc_file(const char *name, unsigned int flags) 459 + struct file *memfd_alloc_file(const char *name, unsigned int flags) 460 460 { 461 461 unsigned int *file_seals; 462 462 struct file *file; ··· 520 520 return PTR_ERR(name); 521 521 522 522 fd_flags = (flags & MFD_CLOEXEC) ? O_CLOEXEC : 0; 523 - return FD_ADD(fd_flags, alloc_file(name, flags)); 523 + return FD_ADD(fd_flags, memfd_alloc_file(name, flags)); 524 524 }
+6 -4
mm/memfd_luo.c
··· 78 78 #include <linux/liveupdate.h> 79 79 #include <linux/shmem_fs.h> 80 80 #include <linux/vmalloc.h> 81 + #include <linux/memfd.h> 81 82 #include "internal.h" 82 83 83 84 static int memfd_luo_preserve_folios(struct file *file, ··· 444 443 if (!ser) 445 444 return -EINVAL; 446 445 447 - file = shmem_file_setup("", 0, VM_NORESERVE); 448 - 446 + file = memfd_alloc_file("", 0); 449 447 if (IS_ERR(file)) { 450 448 pr_err("failed to setup file: %pe\n", file); 451 - return PTR_ERR(file); 449 + err = PTR_ERR(file); 450 + goto free_ser; 452 451 } 453 452 454 453 vfs_setpos(file, ser->pos, MAX_LFS_FILESIZE); ··· 474 473 475 474 put_file: 476 475 fput(file); 477 - 476 + free_ser: 477 + kho_restore_free(ser); 478 478 return err; 479 479 } 480 480
+60 -39
mm/memory-failure.c
··· 692 692 unsigned long poisoned_pfn, struct to_kill *tk) 693 693 { 694 694 unsigned long pfn = 0; 695 + unsigned long hwpoison_vaddr; 696 + unsigned long mask; 695 697 696 698 if (pte_present(pte)) { 697 699 pfn = pte_pfn(pte); ··· 704 702 pfn = softleaf_to_pfn(entry); 705 703 } 706 704 707 - if (!pfn || pfn != poisoned_pfn) 705 + mask = ~((1UL << (shift - PAGE_SHIFT)) - 1); 706 + if (!pfn || pfn != (poisoned_pfn & mask)) 708 707 return 0; 709 708 710 - set_to_kill(tk, addr, shift); 709 + hwpoison_vaddr = addr + ((poisoned_pfn - pfn) << PAGE_SHIFT); 710 + set_to_kill(tk, hwpoison_vaddr, shift); 711 711 return 1; 712 712 } 713 713 ··· 1887 1883 return count; 1888 1884 } 1889 1885 1890 - static int folio_set_hugetlb_hwpoison(struct folio *folio, struct page *page) 1886 + #define MF_HUGETLB_FREED 0 /* freed hugepage */ 1887 + #define MF_HUGETLB_IN_USED 1 /* in-use hugepage */ 1888 + #define MF_HUGETLB_NON_HUGEPAGE 2 /* not a hugepage */ 1889 + #define MF_HUGETLB_FOLIO_PRE_POISONED 3 /* folio already poisoned */ 1890 + #define MF_HUGETLB_PAGE_PRE_POISONED 4 /* exact page already poisoned */ 1891 + #define MF_HUGETLB_RETRY 5 /* hugepage is busy, retry */ 1892 + /* 1893 + * Set hugetlb folio as hwpoisoned, update folio private raw hwpoison list 1894 + * to keep track of the poisoned pages. 1895 + */ 1896 + static int hugetlb_update_hwpoison(struct folio *folio, struct page *page) 1891 1897 { 1892 1898 struct llist_head *head; 1893 1899 struct raw_hwp_page *raw_hwp; 1894 1900 struct raw_hwp_page *p; 1895 - int ret = folio_test_set_hwpoison(folio) ? -EHWPOISON : 0; 1901 + int ret = folio_test_set_hwpoison(folio) ? MF_HUGETLB_FOLIO_PRE_POISONED : 0; 1896 1902 1897 1903 /* 1898 1904 * Once the hwpoison hugepage has lost reliable raw error info, ··· 1910 1896 * so skip to add additional raw error info. 
1911 1897 */ 1912 1898 if (folio_test_hugetlb_raw_hwp_unreliable(folio)) 1913 - return -EHWPOISON; 1899 + return MF_HUGETLB_FOLIO_PRE_POISONED; 1914 1900 head = raw_hwp_list_head(folio); 1915 1901 llist_for_each_entry(p, head->first, node) { 1916 1902 if (p->page == page) 1917 - return -EHWPOISON; 1903 + return MF_HUGETLB_PAGE_PRE_POISONED; 1918 1904 } 1919 1905 1920 1906 raw_hwp = kmalloc(sizeof(struct raw_hwp_page), GFP_ATOMIC); 1921 1907 if (raw_hwp) { 1922 1908 raw_hwp->page = page; 1923 1909 llist_add(&raw_hwp->node, head); 1924 - /* the first error event will be counted in action_result(). */ 1925 - if (ret) 1926 - num_poisoned_pages_inc(page_to_pfn(page)); 1927 1910 } else { 1928 1911 /* 1929 1912 * Failed to save raw error info. We no longer trace all ··· 1968 1957 1969 1958 /* 1970 1959 * Called from hugetlb code with hugetlb_lock held. 1971 - * 1972 - * Return values: 1973 - * 0 - free hugepage 1974 - * 1 - in-use hugepage 1975 - * 2 - not a hugepage 1976 - * -EBUSY - the hugepage is busy (try to retry) 1977 - * -EHWPOISON - the hugepage is already hwpoisoned 1978 1960 */ 1979 1961 int __get_huge_page_for_hwpoison(unsigned long pfn, int flags, 1980 1962 bool *migratable_cleared) 1981 1963 { 1982 1964 struct page *page = pfn_to_page(pfn); 1983 1965 struct folio *folio = page_folio(page); 1984 - int ret = 2; /* fallback to normal page handling */ 1985 1966 bool count_increased = false; 1967 + int ret, rc; 1986 1968 1987 - if (!folio_test_hugetlb(folio)) 1969 + if (!folio_test_hugetlb(folio)) { 1970 + ret = MF_HUGETLB_NON_HUGEPAGE; 1988 1971 goto out; 1989 - 1990 - if (flags & MF_COUNT_INCREASED) { 1991 - ret = 1; 1972 + } else if (flags & MF_COUNT_INCREASED) { 1973 + ret = MF_HUGETLB_IN_USED; 1992 1974 count_increased = true; 1993 1975 } else if (folio_test_hugetlb_freed(folio)) { 1994 - ret = 0; 1976 + ret = MF_HUGETLB_FREED; 1995 1977 } else if (folio_test_hugetlb_migratable(folio)) { 1996 - ret = folio_try_get(folio); 1997 - if (ret) 1978 + if 
(folio_try_get(folio)) { 1979 + ret = MF_HUGETLB_IN_USED; 1998 1980 count_increased = true; 1981 + } else { 1982 + ret = MF_HUGETLB_FREED; 1983 + } 1999 1984 } else { 2000 - ret = -EBUSY; 1985 + ret = MF_HUGETLB_RETRY; 2001 1986 if (!(flags & MF_NO_RETRY)) 2002 1987 goto out; 2003 1988 } 2004 1989 2005 - if (folio_set_hugetlb_hwpoison(folio, page)) { 2006 - ret = -EHWPOISON; 1990 + rc = hugetlb_update_hwpoison(folio, page); 1991 + if (rc >= MF_HUGETLB_FOLIO_PRE_POISONED) { 1992 + ret = rc; 2007 1993 goto out; 2008 1994 } 2009 1995 ··· 2025 2017 * with basic operations like hugepage allocation/free/demotion. 2026 2018 * So some of prechecks for hwpoison (pinning, and testing/setting 2027 2019 * PageHWPoison) should be done in single hugetlb_lock range. 2020 + * Returns: 2021 + * 0 - not hugetlb, or recovered 2022 + * -EBUSY - not recovered 2023 + * -EOPNOTSUPP - hwpoison_filter'ed 2024 + * -EHWPOISON - folio or exact page already poisoned 2025 + * -EFAULT - kill_accessing_process finds current->mm null 2028 2026 */ 2029 2027 static int try_memory_failure_hugetlb(unsigned long pfn, int flags, int *hugetlb) 2030 2028 { 2031 - int res; 2029 + int res, rv; 2032 2030 struct page *p = pfn_to_page(pfn); 2033 2031 struct folio *folio; 2034 2032 unsigned long page_flags; ··· 2043 2029 *hugetlb = 1; 2044 2030 retry: 2045 2031 res = get_huge_page_for_hwpoison(pfn, flags, &migratable_cleared); 2046 - if (res == 2) { /* fallback to normal page handling */ 2032 + switch (res) { 2033 + case MF_HUGETLB_NON_HUGEPAGE: /* fallback to normal page handling */ 2047 2034 *hugetlb = 0; 2048 2035 return 0; 2049 - } else if (res == -EHWPOISON) { 2050 - if (flags & MF_ACTION_REQUIRED) { 2051 - folio = page_folio(p); 2052 - res = kill_accessing_process(current, folio_pfn(folio), flags); 2053 - } 2054 - action_result(pfn, MF_MSG_ALREADY_POISONED, MF_FAILED); 2055 - return res; 2056 - } else if (res == -EBUSY) { 2036 + case MF_HUGETLB_RETRY: 2057 2037 if (!(flags & MF_NO_RETRY)) { 2058 2038 
flags |= MF_NO_RETRY; 2059 2039 goto retry; 2060 2040 } 2061 2041 return action_result(pfn, MF_MSG_GET_HWPOISON, MF_IGNORED); 2042 + case MF_HUGETLB_FOLIO_PRE_POISONED: 2043 + case MF_HUGETLB_PAGE_PRE_POISONED: 2044 + rv = -EHWPOISON; 2045 + if (flags & MF_ACTION_REQUIRED) 2046 + rv = kill_accessing_process(current, pfn, flags); 2047 + if (res == MF_HUGETLB_PAGE_PRE_POISONED) 2048 + action_result(pfn, MF_MSG_ALREADY_POISONED, MF_FAILED); 2049 + else 2050 + action_result(pfn, MF_MSG_HUGE, MF_FAILED); 2051 + return rv; 2052 + default: 2053 + WARN_ON((res != MF_HUGETLB_FREED) && (res != MF_HUGETLB_IN_USED)); 2054 + break; 2062 2055 } 2063 2056 2064 2057 folio = page_folio(p); ··· 2076 2055 if (migratable_cleared) 2077 2056 folio_set_hugetlb_migratable(folio); 2078 2057 folio_unlock(folio); 2079 - if (res == 1) 2058 + if (res == MF_HUGETLB_IN_USED) 2080 2059 folio_put(folio); 2081 2060 return -EOPNOTSUPP; 2082 2061 } ··· 2085 2064 * Handling free hugepage. The possible race with hugepage allocation 2086 2065 * or demotion can be prevented by PageHWPoison flag. 2087 2066 */ 2088 - if (res == 0) { 2067 + if (res == MF_HUGETLB_FREED) { 2089 2068 folio_unlock(folio); 2090 2069 if (__page_handle_poison(p) > 0) { 2091 2070 page_ref_inc(p);
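The check_hwpoisoned_entry() change at the top of the memory-failure.c hunk matches a poisoned pfn against a huge mapping by masking it down to the mapping's page-table shift, then reconstructs the exact faulting virtual address from the offset. A small sketch of that arithmetic under the usual PAGE_SHIFT of 12; the function name and standalone form are illustrative, not the kernel helper:

```c
#include <stdint.h>

#define PAGE_SHIFT 12

/*
 * Given the base pfn of a mapping at vaddr with page-table shift 'shift',
 * return the virtual address covering poisoned_pfn inside that mapping,
 * or 0 if the pfn falls outside it (the hunk's mask test).
 */
static uint64_t poison_vaddr(uint64_t map_pfn, uint64_t vaddr,
			     unsigned int shift, uint64_t poisoned_pfn)
{
	uint64_t mask = ~((1ULL << (shift - PAGE_SHIFT)) - 1);

	if (!map_pfn || map_pfn != (poisoned_pfn & mask))
		return 0;
	return vaddr + ((poisoned_pfn - map_pfn) << PAGE_SHIFT);
}
```

For a 2 MiB mapping (shift 21), the mask drops the low 9 pfn bits, so any of the 512 subpages resolves back to its own 4 KiB virtual address.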
+34 -1
mm/memremap.c
··· 477 477 } 478 478 } 479 479 480 - void zone_device_page_init(struct page *page, unsigned int order) 480 + void zone_device_page_init(struct page *page, struct dev_pagemap *pgmap, 481 + unsigned int order) 481 482 { 483 + struct page *new_page = page; 484 + unsigned int i; 485 + 482 486 VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES); 487 + 488 + for (i = 0; i < (1UL << order); ++i, ++new_page) { 489 + struct folio *new_folio = (struct folio *)new_page; 490 + 491 + /* 492 + * new_page could have been part of a previous higher order folio 493 + * which encodes the order, in page + 1, in the flags bits. We 494 + * blindly clear bits which could have set the order field here, 495 + * including page head. 496 + */ 497 + new_page->flags.f &= ~0xffUL; /* Clear possible order, page head */ 498 + 499 + #ifdef NR_PAGES_IN_LARGE_FOLIO 500 + /* 501 + * This pointer math looks odd, but new_page could have been 502 + * part of a previous higher order folio, which sets _nr_pages 503 + * in page + 1 (new_page). Therefore, we use pointer casting to 504 + * correctly locate the _nr_pages bits within new_page which 505 + * could have been modified by a previous higher order folio. 506 + */ 507 + ((struct folio *)(new_page - 1))->_nr_pages = 0; 508 + #endif 509 + 510 + new_folio->mapping = NULL; 511 + new_folio->pgmap = pgmap; /* Also clear compound head */ 512 + new_folio->share = 0; /* fsdax only, unused for device private */ 513 + VM_WARN_ON_FOLIO(folio_ref_count(new_folio), new_folio); 514 + VM_WARN_ON_FOLIO(!folio_is_zone_device(new_folio), new_folio); 515 + } 483 516 484 517 /* 485 518 * Drivers shouldn't be allocating pages after calling
+6 -6
mm/mm_init.c
··· 2059 2059 */ 2060 2060 static unsigned long __init 2061 2061 deferred_init_memmap_chunk(unsigned long start_pfn, unsigned long end_pfn, 2062 - struct zone *zone) 2062 + struct zone *zone, bool can_resched) 2063 2063 { 2064 2064 int nid = zone_to_nid(zone); 2065 2065 unsigned long nr_pages = 0; ··· 2085 2085 2086 2086 spfn = chunk_end; 2087 2087 2088 - if (irqs_disabled()) 2089 - touch_nmi_watchdog(); 2090 - else 2088 + if (can_resched) 2091 2089 cond_resched(); 2090 + else 2091 + touch_nmi_watchdog(); 2092 2092 } 2093 2093 } 2094 2094 ··· 2101 2101 { 2102 2102 struct zone *zone = arg; 2103 2103 2104 - deferred_init_memmap_chunk(start_pfn, end_pfn, zone); 2104 + deferred_init_memmap_chunk(start_pfn, end_pfn, zone, true); 2105 2105 } 2106 2106 2107 2107 static unsigned int __init ··· 2216 2216 for (spfn = first_deferred_pfn, epfn = SECTION_ALIGN_UP(spfn + 1); 2217 2217 nr_pages < nr_pages_needed && spfn < zone_end_pfn(zone); 2218 2218 spfn = epfn, epfn += PAGES_PER_SECTION) { 2219 - nr_pages += deferred_init_memmap_chunk(spfn, epfn, zone); 2219 + nr_pages += deferred_init_memmap_chunk(spfn, epfn, zone, false); 2220 2220 } 2221 2221 2222 2222 /*
+41 -13
mm/shmem.c
··· 962 962 * being freed). 963 963 */ 964 964 static long shmem_free_swap(struct address_space *mapping, 965 - pgoff_t index, void *radswap) 965 + pgoff_t index, pgoff_t end, void *radswap) 966 966 { 967 - int order = xa_get_order(&mapping->i_pages, index); 968 - void *old; 967 + XA_STATE(xas, &mapping->i_pages, index); 968 + unsigned int nr_pages = 0; 969 + pgoff_t base; 970 + void *entry; 969 971 970 - old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0); 971 - if (old != radswap) 972 - return 0; 973 - free_swap_and_cache_nr(radix_to_swp_entry(radswap), 1 << order); 972 + xas_lock_irq(&xas); 973 + entry = xas_load(&xas); 974 + if (entry == radswap) { 975 + nr_pages = 1 << xas_get_order(&xas); 976 + base = round_down(xas.xa_index, nr_pages); 977 + if (base < index || base + nr_pages - 1 > end) 978 + nr_pages = 0; 979 + else 980 + xas_store(&xas, NULL); 981 + } 982 + xas_unlock_irq(&xas); 974 983 975 - return 1 << order; 984 + if (nr_pages) 985 + free_swap_and_cache_nr(radix_to_swp_entry(radswap), nr_pages); 986 + 987 + return nr_pages; 976 988 } 977 989 978 990 /* ··· 1136 1124 if (xa_is_value(folio)) { 1137 1125 if (unfalloc) 1138 1126 continue; 1139 - nr_swaps_freed += shmem_free_swap(mapping, 1140 - indices[i], folio); 1127 + nr_swaps_freed += shmem_free_swap(mapping, indices[i], 1128 + end - 1, folio); 1141 1129 continue; 1142 1130 } 1143 1131 ··· 1203 1191 folio = fbatch.folios[i]; 1204 1192 1205 1193 if (xa_is_value(folio)) { 1194 + int order; 1206 1195 long swaps_freed; 1207 1196 1208 1197 if (unfalloc) 1209 1198 continue; 1210 - swaps_freed = shmem_free_swap(mapping, indices[i], folio); 1199 + swaps_freed = shmem_free_swap(mapping, indices[i], 1200 + end - 1, folio); 1211 1201 if (!swaps_freed) { 1212 - /* Swap was replaced by page: retry */ 1213 - index = indices[i]; 1202 + pgoff_t base = indices[i]; 1203 + 1204 + order = shmem_confirm_swap(mapping, indices[i], 1205 + radix_to_swp_entry(folio)); 1206 + /* 1207 + * If found a large swap entry 
crossing the end or start 1208 + * border, skip it as the truncate_inode_partial_folio 1209 + * above should have at least zeroed its content once. 1210 + */ 1211 + if (order > 0) { 1212 + base = round_down(base, 1 << order); 1213 + if (base < start || base + (1 << order) > end) 1214 + continue; 1215 + } 1216 + /* Swap was replaced by page or extended, retry */ 1217 + index = base; 1214 1218 break; 1215 1219 } 1216 1220 nr_swaps_freed += swaps_freed;
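The shmem.c hunk rounds a large swap entry's index down to its order-aligned base and only acts on it when the whole entry sits inside the truncation range, skipping entries that cross either border. The containment test, sketched standalone in C (the function name is illustrative, not the kernel's):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Decide whether a large swap entry of the given order, found at 'index',
 * lies entirely within the truncation range [start, end) -- the condition
 * the hunk checks before retrying, skipping entries that cross a border.
 */
static bool entry_within(uint64_t index, unsigned int order,
			 uint64_t start, uint64_t end)
{
	uint64_t nr = 1ULL << order;
	uint64_t base = index & ~(nr - 1);	/* round_down(index, nr) */

	return base >= start && base + nr <= end;
}
```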
+1 -1
mm/swap.h
··· 198 198 void __swap_writepage(struct folio *folio, struct swap_iocb **swap_plug); 199 199 200 200 /* linux/mm/swap_state.c */ 201 - extern struct address_space swap_space __ro_after_init; 201 + extern struct address_space swap_space __read_mostly; 202 202 static inline struct address_space *swap_address_space(swp_entry_t entry) 203 203 { 204 204 return &swap_space;
+1 -2
mm/swap_state.c
··· 37 37 #endif 38 38 }; 39 39 40 - /* Set swap_space as read only as swap cache is handled by swap table */ 41 - struct address_space swap_space __ro_after_init = { 40 + struct address_space swap_space __read_mostly = { 42 41 .a_ops = &swap_aops, 43 42 }; 44 43
+2 -5
mm/vmalloc.c
··· 4322 4322 if (want_init_on_free() || want_init_on_alloc(flags)) 4323 4323 memset((void *)p + size, 0, old_size - size); 4324 4324 vm->requested_size = size; 4325 - kasan_poison_vmalloc(p + size, old_size - size); 4325 + kasan_vrealloc(p, old_size, size); 4326 4326 return (void *)p; 4327 4327 } 4328 4328 ··· 4330 4330 * We already have the bytes available in the allocation; use them. 4331 4331 */ 4332 4332 if (size <= alloced_size) { 4333 - kasan_unpoison_vmalloc(p + old_size, size - old_size, 4334 - KASAN_VMALLOC_PROT_NORMAL | 4335 - KASAN_VMALLOC_VM_ALLOC | 4336 - KASAN_VMALLOC_KEEP_TAG); 4337 4333 /* 4338 4334 * No need to zero memory here, as unused memory will have 4339 4335 * already been zeroed at initial allocation time or during 4340 4336 * realloc shrink time. 4341 4337 */ 4342 4338 vm->requested_size = size; 4339 + kasan_vrealloc(p, old_size, size); 4343 4340 return (void *)p; 4344 4341 } 4345 4342
+4 -4
net/core/filter.c
··· 2289 2289 2290 2290 err = bpf_out_neigh_v6(net, skb, dev, nh); 2291 2291 if (unlikely(net_xmit_eval(err))) 2292 - DEV_STATS_INC(dev, tx_errors); 2292 + dev_core_stats_tx_dropped_inc(dev); 2293 2293 else 2294 2294 ret = NET_XMIT_SUCCESS; 2295 2295 goto out_xmit; 2296 2296 out_drop: 2297 - DEV_STATS_INC(dev, tx_errors); 2297 + dev_core_stats_tx_dropped_inc(dev); 2298 2298 kfree_skb(skb); 2299 2299 out_xmit: 2300 2300 return ret; ··· 2396 2396 2397 2397 err = bpf_out_neigh_v4(net, skb, dev, nh); 2398 2398 if (unlikely(net_xmit_eval(err))) 2399 - DEV_STATS_INC(dev, tx_errors); 2399 + dev_core_stats_tx_dropped_inc(dev); 2400 2400 else 2401 2401 ret = NET_XMIT_SUCCESS; 2402 2402 goto out_xmit; 2403 2403 out_drop: 2404 - DEV_STATS_INC(dev, tx_errors); 2404 + dev_core_stats_tx_dropped_inc(dev); 2405 2405 kfree_skb(skb); 2406 2406 out_xmit: 2407 2407 return ret;
+2
net/core/gro.c
··· 265 265 goto out; 266 266 } 267 267 268 + /* NICs can feed encapsulated packets into GRO */ 269 + skb->encapsulation = 0; 268 270 rcu_read_lock(); 269 271 list_for_each_entry_rcu(ptype, head, list) { 270 272 if (ptype->type != type || !ptype->callbacks.gro_complete)
+34 -16
net/core/net-procfs.c
··· 170 170 .show = softnet_seq_show, 171 171 }; 172 172 173 + struct ptype_iter_state { 174 + struct seq_net_private p; 175 + struct net_device *dev; 176 + }; 177 + 173 178 static void *ptype_get_idx(struct seq_file *seq, loff_t pos) 174 179 { 180 + struct ptype_iter_state *iter = seq->private; 175 181 struct list_head *ptype_list = NULL; 176 182 struct packet_type *pt = NULL; 177 183 struct net_device *dev; ··· 187 181 for_each_netdev_rcu(seq_file_net(seq), dev) { 188 182 ptype_list = &dev->ptype_all; 189 183 list_for_each_entry_rcu(pt, ptype_list, list) { 190 - if (i == pos) 184 + if (i == pos) { 185 + iter->dev = dev; 191 186 return pt; 187 + } 192 188 ++i; 193 189 } 194 190 } 191 + 192 + iter->dev = NULL; 195 193 196 194 list_for_each_entry_rcu(pt, &seq_file_net(seq)->ptype_all, list) { 197 195 if (i == pos) ··· 228 218 229 219 static void *ptype_seq_next(struct seq_file *seq, void *v, loff_t *pos) 230 220 { 221 + struct ptype_iter_state *iter = seq->private; 231 222 struct net *net = seq_file_net(seq); 232 223 struct net_device *dev; 233 224 struct packet_type *pt; ··· 240 229 return ptype_get_idx(seq, 0); 241 230 242 231 pt = v; 243 - nxt = pt->list.next; 244 - if (pt->dev) { 245 - if (nxt != &pt->dev->ptype_all) 232 + nxt = READ_ONCE(pt->list.next); 233 + dev = iter->dev; 234 + if (dev) { 235 + if (nxt != &dev->ptype_all) 246 236 goto found; 247 237 248 - dev = pt->dev; 249 238 for_each_netdev_continue_rcu(seq_file_net(seq), dev) { 250 - if (!list_empty(&dev->ptype_all)) { 251 - nxt = dev->ptype_all.next; 239 + nxt = READ_ONCE(dev->ptype_all.next); 240 + if (nxt != &dev->ptype_all) { 241 + iter->dev = dev; 252 242 goto found; 253 243 } 254 244 } 255 - nxt = net->ptype_all.next; 245 + iter->dev = NULL; 246 + nxt = READ_ONCE(net->ptype_all.next); 256 247 goto net_ptype_all; 257 248 } 258 249 ··· 265 252 266 253 if (nxt == &net->ptype_all) { 267 254 /* continue with ->ptype_specific if it's not empty */ 268 - nxt = net->ptype_specific.next; 255 + nxt = 
READ_ONCE(net->ptype_specific.next); 269 256 if (nxt != &net->ptype_specific) 270 257 goto found; 271 258 } 272 259 273 260 hash = 0; 274 - nxt = ptype_base[0].next; 261 + nxt = READ_ONCE(ptype_base[0].next); 275 262 } else 276 263 hash = ntohs(pt->type) & PTYPE_HASH_MASK; 277 264 278 265 while (nxt == &ptype_base[hash]) { 279 266 if (++hash >= PTYPE_HASH_SIZE) 280 267 return NULL; 281 - nxt = ptype_base[hash].next; 268 + nxt = READ_ONCE(ptype_base[hash].next); 282 269 } 283 270 found: 284 271 return list_entry(nxt, struct packet_type, list); ··· 292 279 293 280 static int ptype_seq_show(struct seq_file *seq, void *v) 294 281 { 282 + struct ptype_iter_state *iter = seq->private; 295 283 struct packet_type *pt = v; 284 + struct net_device *dev; 296 285 297 - if (v == SEQ_START_TOKEN) 286 + if (v == SEQ_START_TOKEN) { 298 287 seq_puts(seq, "Type Device Function\n"); 299 - else if ((!pt->af_packet_net || net_eq(pt->af_packet_net, seq_file_net(seq))) && 300 - (!pt->dev || net_eq(dev_net(pt->dev), seq_file_net(seq)))) { 288 + return 0; 289 + } 290 + dev = iter->dev; 291 + if ((!pt->af_packet_net || net_eq(pt->af_packet_net, seq_file_net(seq))) && 292 + (!dev || net_eq(dev_net(dev), seq_file_net(seq)))) { 301 293 if (pt->type == htons(ETH_P_ALL)) 302 294 seq_puts(seq, "ALL "); 303 295 else 304 296 seq_printf(seq, "%04x", ntohs(pt->type)); 305 297 306 298 seq_printf(seq, " %-8s %ps\n", 307 - pt->dev ? pt->dev->name : "", pt->func); 299 + dev ? dev->name : "", pt->func); 308 300 } 309 301 310 302 return 0; ··· 333 315 &softnet_seq_ops)) 334 316 goto out_dev; 335 317 if (!proc_create_net("ptype", 0444, net->proc_net, &ptype_seq_ops, 336 - sizeof(struct seq_net_private))) 318 + sizeof(struct ptype_iter_state))) 337 319 goto out_softnet; 338 320 339 321 if (wext_proc_init(net))
-3
net/ethtool/common.c
··· 901 901 ctx->key_off = key_off; 902 902 ctx->priv_size = ops->rxfh_priv_size; 903 903 904 - ctx->hfunc = ETH_RSS_HASH_NO_CHANGE; 905 - ctx->input_xfrm = RXH_XFRM_NO_CHANGE; 906 - 907 904 return ctx; 908 905 } 909 906
+2 -7
net/ethtool/rss.c
··· 824 824 static int 825 825 ethnl_rss_set(struct ethnl_req_info *req_info, struct genl_info *info) 826 826 { 827 - bool indir_reset = false, indir_mod, xfrm_sym = false; 828 827 struct rss_req_info *request = RSS_REQINFO(req_info); 828 + bool indir_reset = false, indir_mod, xfrm_sym; 829 829 struct ethtool_rxfh_context *ctx = NULL; 830 830 struct net_device *dev = req_info->dev; 831 831 bool mod = false, fields_mod = false; ··· 860 860 861 861 rxfh.input_xfrm = data.input_xfrm; 862 862 ethnl_update_u8(&rxfh.input_xfrm, tb[ETHTOOL_A_RSS_INPUT_XFRM], &mod); 863 - /* For drivers which don't support input_xfrm it will be set to 0xff 864 - * in the RSS context info. In all other case input_xfrm != 0 means 865 - * symmetric hashing is requested. 866 - */ 867 - if (!request->rss_context || ops->rxfh_per_ctx_key) 868 - xfrm_sym = rxfh.input_xfrm || data.input_xfrm; 863 + xfrm_sym = rxfh.input_xfrm || data.input_xfrm; 869 864 if (rxfh.input_xfrm == data.input_xfrm) 870 865 rxfh.input_xfrm = RXH_XFRM_NO_CHANGE; 871 866
+2 -1
net/ipv6/ip6_fib.c
··· 1138 1138 fib6_set_expires(iter, rt->expires); 1139 1139 fib6_add_gc_list(iter); 1140 1140 } 1141 - if (!(rt->fib6_flags & (RTF_ADDRCONF | RTF_PREFIX_RT))) { 1141 + if (!(rt->fib6_flags & (RTF_ADDRCONF | RTF_PREFIX_RT)) && 1142 + !iter->fib6_nh->fib_nh_gw_family) { 1142 1143 iter->fib6_flags &= ~RTF_ADDRCONF; 1143 1144 iter->fib6_flags &= ~RTF_PREFIX_RT; 1144 1145 }
+1 -1
net/netfilter/nf_tables_api.c
··· 5915 5915 5916 5916 list_for_each_entry(catchall, &set->catchall_list, list) { 5917 5917 ext = nft_set_elem_ext(set, catchall->elem); 5918 - if (!nft_set_elem_active(ext, genmask)) 5918 + if (nft_set_elem_active(ext, genmask)) 5919 5919 continue; 5920 5920 5921 5921 nft_clear(ctx->net, ext);
+6 -7
net/sched/cls_u32.c
··· 161 161 int toff = off + key->off + (off2 & key->offmask); 162 162 __be32 *data, hdata; 163 163 164 - if (skb_headroom(skb) + toff > INT_MAX) 165 - goto out; 166 - 167 - data = skb_header_pointer(skb, toff, 4, &hdata); 164 + data = skb_header_pointer_careful(skb, toff, 4, 165 + &hdata); 168 166 if (!data) 169 167 goto out; 170 168 if ((*data ^ key->val) & key->mask) { ··· 212 214 if (ht->divisor) { 213 215 __be32 *data, hdata; 214 216 215 - data = skb_header_pointer(skb, off + n->sel.hoff, 4, 216 - &hdata); 217 + data = skb_header_pointer_careful(skb, 218 + off + n->sel.hoff, 219 + 4, &hdata); 217 220 if (!data) 218 221 goto out; 219 222 sel = ht->divisor & u32_hash_fold(*data, &n->sel, ··· 228 229 if (n->sel.flags & TC_U32_VAROFFSET) { 229 230 __be16 *data, hdata; 230 231 231 - data = skb_header_pointer(skb, 232 + data = skb_header_pointer_careful(skb, 232 233 off + n->sel.offoff, 233 234 2, &hdata); 234 235 if (!data)
+2 -2
net/tipc/crypto.c
··· 1219 1219 rx = c; 1220 1220 tx = tipc_net(rx->net)->crypto_tx; 1221 1221 if (cancel_delayed_work(&rx->work)) { 1222 - kfree(rx->skey); 1222 + kfree_sensitive(rx->skey); 1223 1223 rx->skey = NULL; 1224 1224 atomic_xchg(&rx->key_distr, 0); 1225 1225 tipc_node_put(rx->node); ··· 2394 2394 break; 2395 2395 default: 2396 2396 synchronize_rcu(); 2397 - kfree(rx->skey); 2397 + kfree_sensitive(rx->skey); 2398 2398 rx->skey = NULL; 2399 2399 break; 2400 2400 }
+1
rust/Makefile
··· 383 383 -fno-inline-functions-called-once -fsanitize=bounds-strict \ 384 384 -fstrict-flex-arrays=% -fmin-function-alignment=% \ 385 385 -fzero-init-padding-bits=% -mno-fdpic \ 386 + -fdiagnostics-show-context -fdiagnostics-show-context=% \ 386 387 --param=% --param asan-% -fno-isolate-erroneous-paths-dereference 387 388 388 389 # Derived from `scripts/Makefile.clang`.
+4 -2
rust/kernel/bits.rs
··· 27 27 /// 28 28 /// This version is the default and should be used if `n` is known at 29 29 /// compile time. 30 - #[inline] 30 + // Always inline to optimize out error path of `build_assert`. 31 + #[inline(always)] 31 32 pub const fn [<bit_ $ty>](n: u32) -> $ty { 32 33 build_assert!(n < <$ty>::BITS); 33 34 (1 as $ty) << n ··· 76 75 /// This version is the default and should be used if the range is known 77 76 /// at compile time. 78 77 $(#[$genmask_ex])* 79 - #[inline] 78 + // Always inline to optimize out error path of `build_assert`. 79 + #[inline(always)] 80 80 pub const fn [<genmask_ $ty>](range: RangeInclusive<u32>) -> $ty { 81 81 let start = *range.start(); 82 82 let end = *range.end();
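The bits.rs hunk marks the `bit_*` and `genmask_*` helpers `#[inline(always)]` so the `build_assert!` error path is optimized out; the mask itself is the usual inclusive bit-range construction. The same bit math in C, assuming start <= end < 64 (the name `genmask64` is illustrative — the kernel's C side spells this GENMASK() with the high bit first):

```c
#include <stdint.h>

/*
 * Build a mask with bits [start, end] set, inclusive. The caller must
 * ensure start <= end < 64; the Rust genmask_* helpers enforce that at
 * compile time with build_assert!() instead.
 */
static uint64_t genmask64(unsigned int start, unsigned int end)
{
	return (~0ULL >> (63 - end)) & (~0ULL << start);
}
```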
+1 -1
rust/kernel/fmt.rs
··· 6 6 7 7 pub use core::fmt::{Arguments, Debug, Error, Formatter, Result, Write}; 8 8 9 - /// Internal adapter used to route allow implementations of formatting traits for foreign types. 9 + /// Internal adapter used to route and allow implementations of formatting traits for foreign types. 10 10 /// 11 11 /// It is inserted automatically by the [`fmt!`] macro and is not meant to be used directly. 12 12 ///
+26 -23
rust/kernel/num/bounded.rs
··· 40 40 fits_within!(value, T, num_bits) 41 41 } 42 42 43 - /// An integer value that requires only the `N` less significant bits of the wrapped type to be 43 + /// An integer value that requires only the `N` least significant bits of the wrapped type to be 44 44 /// encoded. 45 45 /// 46 46 /// This limits the number of usable bits in the wrapped integer type, and thus the stored value to 47 - /// a narrower range, which provides guarantees that can be useful when working with in e.g. 47 + /// a narrower range, which provides guarantees that can be useful when working within e.g. 48 48 /// bitfields. 49 49 /// 50 50 /// # Invariants ··· 56 56 /// # Examples 57 57 /// 58 58 /// The preferred way to create values is through constants and the [`Bounded::new`] family of 59 - /// constructors, as they trigger a build error if the type invariants cannot be withheld. 59 + /// constructors, as they trigger a build error if the type invariants cannot be upheld. 60 60 /// 61 61 /// ``` 62 62 /// use kernel::num::Bounded; ··· 82 82 /// ``` 83 83 /// use kernel::num::Bounded; 84 84 /// 85 - /// // This succeeds because `15` can be represented with 4 unsigned bits. 85 + /// // This succeeds because `15` can be represented with 4 unsigned bits. 86 86 /// assert!(Bounded::<u8, 4>::try_new(15).is_some()); 87 87 /// 88 88 /// // This fails because `16` cannot be represented with 4 unsigned bits. ··· 221 221 /// let v: Option<Bounded<u16, 8>> = 128u32.try_into_bounded(); 222 222 /// assert_eq!(v.as_deref().copied(), Some(128)); 223 223 /// 224 - /// // Fails because `128` doesn't fits into 6 bits. 224 + /// // Fails because `128` doesn't fit into 6 bits. 
225 225 /// let v: Option<Bounded<u16, 6>> = 128u32.try_into_bounded(); 226 226 /// assert_eq!(v, None); 227 227 /// ``` ··· 259 259 assert!(fits_within!(VALUE, $type, N)); 260 260 } 261 261 262 - // INVARIANT: `fits_within` confirmed that `VALUE` can be represented within 262 + // SAFETY: `fits_within` confirmed that `VALUE` can be represented within 263 263 // `N` bits. 264 - Self::__new(VALUE) 264 + unsafe { Self::__new(VALUE) } 265 265 } 266 266 } 267 267 )* ··· 282 282 /// All instances of [`Bounded`] must be created through this method as it enforces most of the 283 283 /// type invariants. 284 284 /// 285 - /// The caller remains responsible for checking, either statically or dynamically, that `value` 286 - /// can be represented as a `T` using at most `N` bits. 287 - const fn __new(value: T) -> Self { 285 + /// # Safety 286 + /// 287 + /// The caller must ensure that `value` can be represented within `N` bits. 288 + const unsafe fn __new(value: T) -> Self { 288 289 // Enforce the type invariants. 289 290 const { 290 291 // `N` cannot be zero. ··· 294 293 assert!(N <= T::BITS); 295 294 } 296 295 296 + // INVARIANT: The caller ensures `value` fits within `N` bits. 297 297 Self(value) 298 298 } 299 299 ··· 330 328 /// ``` 331 329 pub fn try_new(value: T) -> Option<Self> { 332 330 fits_within(value, N).then(|| { 333 - // INVARIANT: `fits_within` confirmed that `value` can be represented within `N` bits. 334 - Self::__new(value) 331 + // SAFETY: `fits_within` confirmed that `value` can be represented within `N` bits. 332 + unsafe { Self::__new(value) } 335 333 }) 336 334 } 337 335 ··· 365 363 /// assert_eq!(Bounded::<u8, 1>::from_expr(1).get(), 1); 366 364 /// assert_eq!(Bounded::<u16, 8>::from_expr(0xff).get(), 0xff); 367 365 /// ``` 366 + // Always inline to optimize out error path of `build_assert`. 
368 367 #[inline(always)] 369 368 pub fn from_expr(expr: T) -> Self { 370 369 crate::build_assert!( ··· 373 370 "Requested value larger than maximal representable value." 374 371 ); 375 372 376 - // INVARIANT: `fits_within` confirmed that `expr` can be represented within `N` bits. 377 - Self::__new(expr) 373 + // SAFETY: `fits_within` confirmed that `expr` can be represented within `N` bits. 374 + unsafe { Self::__new(expr) } 378 375 } 379 376 380 377 /// Returns the wrapped value as the backing type. ··· 413 410 ); 414 411 } 415 412 416 - // INVARIANT: The value did fit within `N` bits, so it will all the more fit within 413 + // SAFETY: The value did fit within `N` bits, so it will all the more fit within 417 414 // the larger `M` bits. 418 - Bounded::__new(self.0) 415 + unsafe { Bounded::__new(self.0) } 419 416 } 420 417 421 418 /// Attempts to shrink the number of bits usable for `self`. ··· 469 466 // `U` and `T` have the same sign, hence this conversion cannot fail. 470 467 let value = unsafe { U::try_from(self.get()).unwrap_unchecked() }; 471 468 472 - // INVARIANT: Although the backing type has changed, the value is still represented within 469 + // SAFETY: Although the backing type has changed, the value is still represented within 473 470 // `N` bits, and with the same signedness. 474 - Bounded::__new(value) 471 + unsafe { Bounded::__new(value) } 475 472 } 476 473 } 477 474 ··· 504 501 /// let v: Option<Bounded<u16, 8>> = 128u32.try_into_bounded(); 505 502 /// assert_eq!(v.as_deref().copied(), Some(128)); 506 503 /// 507 - /// // Fails because `128` doesn't fits into 6 bits. 504 + /// // Fails because `128` doesn't fit into 6 bits. 
508 505 /// let v: Option<Bounded<u16, 6>> = 128u32.try_into_bounded(); 509 506 /// assert_eq!(v, None); 510 507 /// ``` ··· 947 944 Self: AtLeastXBits<{ <$type as Integer>::BITS as usize }>, 948 945 { 949 946 fn from(value: $type) -> Self { 950 - // INVARIANT: The trait bound on `Self` guarantees that `N` bits is 947 + // SAFETY: The trait bound on `Self` guarantees that `N` bits is 951 948 // enough to hold any value of the source type. 952 - Self::__new(T::from(value)) 949 + unsafe { Self::__new(T::from(value)) } 953 950 } 954 951 } 955 952 )* ··· 1054 1051 T: Integer + From<bool>, 1055 1052 { 1056 1053 fn from(value: bool) -> Self { 1057 - // INVARIANT: A boolean can be represented using a single bit, and thus fits within any 1054 + // SAFETY: A boolean can be represented using a single bit, and thus fits within any 1058 1055 // integer type for any `N` > 0. 1059 - Self::__new(T::from(value)) 1056 + unsafe { Self::__new(T::from(value)) } 1060 1057 } 1061 1058 }
+2 -2
rust/kernel/rbtree.rs
··· 985 985 self.peek(Direction::Prev) 986 986 } 987 987 988 - /// Access the previous node without moving the cursor. 988 + /// Access the next node without moving the cursor. 989 989 pub fn peek_next(&self) -> Option<(&K, &V)> { 990 990 self.peek(Direction::Next) 991 991 } ··· 1130 1130 } 1131 1131 1132 1132 // SAFETY: The [`IterMut`] has exclusive access to both `K` and `V`, so it is sufficient to require them to be `Send`. 1133 - // The iterator only gives out immutable references to the keys, but since the iterator has excusive access to those same 1133 + // The iterator only gives out immutable references to the keys, but since the iterator has exclusive access to those same 1134 1134 // keys, `Send` is sufficient. `Sync` would be okay, but it is more restrictive to the user. 1135 1135 unsafe impl<'a, K: Send, V: Send> Send for IterMut<'a, K, V> {} 1136 1136
+11
rust/kernel/sync/atomic/predefine.rs
··· 35 35 // as `isize` and `usize`, and `isize` and `usize` are always bi-directional transmutable to 36 36 // `isize_atomic_repr`, which also always implements `AtomicImpl`. 37 37 #[allow(non_camel_case_types)] 38 + #[cfg(not(testlib))] 38 39 #[cfg(not(CONFIG_64BIT))] 39 40 type isize_atomic_repr = i32; 40 41 #[allow(non_camel_case_types)] 42 + #[cfg(not(testlib))] 41 43 #[cfg(CONFIG_64BIT)] 44 + type isize_atomic_repr = i64; 45 + 46 + #[allow(non_camel_case_types)] 47 + #[cfg(testlib)] 48 + #[cfg(target_pointer_width = "32")] 49 + type isize_atomic_repr = i32; 50 + #[allow(non_camel_case_types)] 51 + #[cfg(testlib)] 52 + #[cfg(target_pointer_width = "64")] 42 53 type isize_atomic_repr = i64; 43 54 44 55 // Ensure size and alignment requirements are checked.
+2 -1
rust/kernel/sync/refcount.rs
··· 23 23 /// Construct a new [`Refcount`] from an initial value. 24 24 /// 25 25 /// The initial value should be non-saturated. 26 - #[inline] 26 + // Always inline to optimize out error path of `build_assert`. 27 + #[inline(always)] 27 28 pub fn new(value: i32) -> Self { 28 29 build_assert!(value >= 0, "initial value saturated"); 29 30 // SAFETY: There are no safety requirements for this FFI call.
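The refcount.rs hunk switches `new` to `#[inline(always)]` so the compiler can see the constant argument at each call site and prove the `build_assert!` condition, eliminating its error path entirely. The saturation semantics the assert protects can be sketched in userspace (names hypothetical, and the compile-time `build_assert!` is replaced by a runtime `assert!` here):

```rust
// Userspace sketch of refcount_t-style saturation: on overflow the
// counter sticks at a saturated value instead of wrapping, trading a
// potential use-after-free for a leak.
const SATURATED: i32 = i32::MIN / 2;

struct Refcount(i32);

impl Refcount {
    // In the kernel this check is a `build_assert!` that compiles
    // away; here it is a plain runtime assert.
    fn new(value: i32) -> Self {
        assert!(value >= 0, "initial value saturated");
        Refcount(value)
    }

    fn inc(&mut self) {
        self.0 = self.0.checked_add(1).unwrap_or(SATURATED);
    }
}

fn main() {
    let mut r = Refcount::new(1);
    r.inc();
    assert_eq!(r.0, 2);

    // Drive the counter to the edge: the next increment saturates
    // rather than wrapping to a negative count.
    r.0 = i32::MAX;
    r.inc();
    assert_eq!(r.0, SATURATED);
}
```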
+1 -1
rust/macros/fmt.rs
··· 67 67 } 68 68 (None, acc) 69 69 })(); 70 - args.extend(quote_spanned!(first_span => #lhs #adapter(&#rhs))); 70 + args.extend(quote_spanned!(first_span => #lhs #adapter(&(#rhs)))); 71 71 } 72 72 }; 73 73
+1 -1
rust/macros/lib.rs
··· 59 59 /// 60 60 /// # Examples 61 61 /// 62 - /// ``` 62 + /// ```ignore 63 63 /// use kernel::prelude::*; 64 64 /// 65 65 /// module!{
+4
rust/proc-macro2/lib.rs
··· 1 1 // SPDX-License-Identifier: Apache-2.0 OR MIT 2 2 3 + // When fixdep scans this, it will find this string `CONFIG_RUSTC_VERSION_TEXT` 4 + // and thus add a dependency on `include/config/RUSTC_VERSION_TEXT`, which is 5 + // touched by Kconfig when the version string from the compiler changes. 6 + 3 7 //! [![github]](https://github.com/dtolnay/proc-macro2)&ensp;[![crates-io]](https://crates.io/crates/proc-macro2)&ensp;[![docs-rs]](crate) 4 8 //! 5 9 //! [github]: https://img.shields.io/badge/github-8da0cb?style=for-the-badge&labelColor=555555&logo=github
+3 -1
scripts/Makefile.build
··· 166 166 cmd_force_checksrc = $(CHECK) $(CHECKFLAGS) $(c_flags) $< 167 167 endif 168 168 169 + ifeq ($(KBUILD_EXTMOD),) 169 170 ifneq ($(KBUILD_EXTRA_WARN),) 170 171 cmd_checkdoc = PYTHONDONTWRITEBYTECODE=1 $(PYTHON3) $(KERNELDOC) -none $(KDOCFLAGS) \ 171 172 $(if $(findstring 2, $(KBUILD_EXTRA_WARN)), -Wall) \ 172 173 $< 174 + endif 173 175 endif 174 176 175 177 # Compile C sources (.c) ··· 358 356 quiet_cmd_rustc_rsi_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@ 359 357 cmd_rustc_rsi_rs = \ 360 358 $(rust_common_cmd) -Zunpretty=expanded $< >$@; \ 361 - command -v $(RUSTFMT) >/dev/null && $(RUSTFMT) $@ 359 + command -v $(RUSTFMT) >/dev/null && $(RUSTFMT) --config-path $(srctree)/.rustfmt.toml $@ 362 360 363 361 $(obj)/%.rsi: $(obj)/%.rs FORCE 364 362 +$(call if_changed_dep,rustc_rsi_rs)
+2 -1
scripts/Makefile.vmlinux
··· 113 113 # what kmod expects to parse. 114 114 quiet_cmd_modules_builtin_modinfo = GEN $@ 115 115 cmd_modules_builtin_modinfo = $(cmd_objcopy); \ 116 - sed -i 's/\x00\+$$/\x00/g' $@ 116 + sed -i 's/\x00\+$$/\x00/g' $@; \ 117 + chmod -x $@ 117 118 118 119 OBJCOPYFLAGS_modules.builtin.modinfo := -j .modinfo -O binary 119 120
+33 -12
scripts/generate_rust_analyzer.py
··· 61 61 display_name, 62 62 deps, 63 63 cfg=[], 64 - edition="2021", 65 64 ): 66 65 append_crate( 67 66 display_name, ··· 68 69 deps, 69 70 cfg, 70 71 is_workspace_member=False, 71 - edition=edition, 72 + # Miguel Ojeda writes: 73 + # 74 + # > ... in principle even the sysroot crates may have different 75 + # > editions. 76 + # > 77 + # > For instance, in the move to 2024, it seems all happened at once 78 + # > in 1.87.0 in these upstream commits: 79 + # > 80 + # > 0e071c2c6a58 ("Migrate core to Rust 2024") 81 + # > f505d4e8e380 ("Migrate alloc to Rust 2024") 82 + # > 0b2489c226c3 ("Migrate proc_macro to Rust 2024") 83 + # > 993359e70112 ("Migrate std to Rust 2024") 84 + # > 85 + # > But in the previous move to 2021, `std` moved in 1.59.0, while 86 + # > the others in 1.60.0: 87 + # > 88 + # > b656384d8398 ("Update stdlib to the 2021 edition") 89 + # > 06a1c14d52a8 ("Switch all libraries to the 2021 edition") 90 + # 91 + # Link: https://lore.kernel.org/all/CANiq72kd9bHdKaAm=8xCUhSHMy2csyVed69bOc4dXyFAW4sfuw@mail.gmail.com/ 92 + # 93 + # At the time of writing all rust versions we support build the 94 + # sysroot crates with the same edition. We may need to relax this 95 + # assumption if future edition moves span multiple rust versions. 96 + edition=core_edition, 72 97 ) 73 98 74 99 # NB: sysroot crates reexport items from one another so setting up our transitive dependencies 75 100 # here is important for ensuring that rust-analyzer can resolve symbols. The sources of truth 76 101 # for this dependency graph are `(sysroot_src / crate / "Cargo.toml" for crate in crates)`. 
77 - append_sysroot_crate("core", [], cfg=crates_cfgs.get("core", []), edition=core_edition) 102 + append_sysroot_crate("core", [], cfg=crates_cfgs.get("core", [])) 78 103 append_sysroot_crate("alloc", ["core"]) 79 104 append_sysroot_crate("std", ["alloc", "core"]) 80 105 append_sysroot_crate("proc_macro", ["core", "std"]) ··· 106 83 append_crate( 107 84 "compiler_builtins", 108 85 srctree / "rust" / "compiler_builtins.rs", 109 - [], 86 + ["core"], 110 87 ) 111 88 112 89 append_crate( ··· 119 96 append_crate( 120 97 "quote", 121 98 srctree / "rust" / "quote" / "lib.rs", 122 - ["alloc", "proc_macro", "proc_macro2"], 99 + ["core", "alloc", "std", "proc_macro", "proc_macro2"], 123 100 cfg=crates_cfgs["quote"], 101 + edition="2018", 124 102 ) 125 103 126 104 append_crate( 127 105 "syn", 128 106 srctree / "rust" / "syn" / "lib.rs", 129 - ["proc_macro", "proc_macro2", "quote"], 107 + ["std", "proc_macro", "proc_macro2", "quote"], 130 108 cfg=crates_cfgs["syn"], 131 109 ) 132 110 ··· 147 123 append_crate( 148 124 "pin_init_internal", 149 125 srctree / "rust" / "pin-init" / "internal" / "src" / "lib.rs", 150 - [], 126 + ["std", "proc_macro"], 151 127 cfg=["kernel"], 152 128 is_proc_macro=True, 153 129 ) ··· 155 131 append_crate( 156 132 "pin_init", 157 133 srctree / "rust" / "pin-init" / "src" / "lib.rs", 158 - ["core", "pin_init_internal", "macros"], 134 + ["core", "compiler_builtins", "pin_init_internal", "macros"], 159 135 cfg=["kernel"], 160 136 ) 161 137 ··· 214 190 append_crate( 215 191 name, 216 192 path, 217 - ["core", "kernel"], 193 + ["core", "kernel", "pin_init"], 218 194 cfg=cfg, 219 195 ) 220 196 ··· 236 212 format="[%(asctime)s] [%(levelname)s] %(message)s", 237 213 level=logging.INFO if args.verbose else logging.WARNING 238 214 ) 239 - 240 - # Making sure that the `sysroot` and `sysroot_src` belong to the same toolchain. 
241 - assert args.sysroot in args.sysroot_src.parents 242 215 243 216 rust_project = { 244 217 "crates": generate_crates(args.srctree, args.objtree, args.sysroot_src, args.exttree, args.cfgs, args.core_edition),
+4 -4
scripts/livepatch/klp-build
··· 555 555 local file_dir="$(dirname "$file")" 556 556 local orig_file="$ORIG_DIR/$rel_file" 557 557 local orig_dir="$(dirname "$orig_file")" 558 - local cmd_file="$file_dir/.$(basename "$file").cmd" 559 558 560 559 [[ ! -f "$file" ]] && die "missing $(basename "$file") for $_file" 561 560 562 561 mkdir -p "$orig_dir" 563 562 cp -f "$file" "$orig_dir" 564 - [[ -e "$cmd_file" ]] && cp -f "$cmd_file" "$orig_dir" 565 563 done 566 564 xtrace_restore 567 565 ··· 738 740 local orig_dir="$(dirname "$orig_file")" 739 741 local kmod_file="$KMOD_DIR/$rel_file" 740 742 local kmod_dir="$(dirname "$kmod_file")" 741 - local cmd_file="$orig_dir/.$(basename "$file").cmd" 743 + local cmd_file="$kmod_dir/.$(basename "$file").cmd" 742 744 743 745 mkdir -p "$kmod_dir" 744 746 cp -f "$file" "$kmod_dir" 745 - [[ -e "$cmd_file" ]] && cp -f "$cmd_file" "$kmod_dir" 746 747 747 748 # Tell kbuild this is a prebuilt object 748 749 cp -f "$file" "${kmod_file}_shipped" 750 + 751 + # Make modpost happy 752 + touch "$cmd_file" 749 753 750 754 echo -n " $rel_file" >> "$makefile" 751 755 done
+30 -35
scripts/package/kernel.spec
··· 2 2 %{!?_arch: %define _arch dummy} 3 3 %{!?make: %define make make} 4 4 %define makeflags %{?_smp_mflags} ARCH=%{ARCH} 5 + %define __spec_install_post /usr/lib/rpm/brp-compress || : 6 + %define debug_package %{nil} 5 7 6 8 Name: kernel 7 9 Summary: The Linux Kernel ··· 48 46 %endif 49 47 50 48 %if %{with_debuginfo} 51 - # list of debuginfo-related options taken from distribution kernel.spec 52 - # files 53 - %undefine _include_minidebuginfo 54 - %undefine _find_debuginfo_dwz_opts 55 - %undefine _unique_build_ids 56 - %undefine _unique_debug_names 57 - %undefine _unique_debug_srcs 58 - %undefine _debugsource_packages 59 - %undefine _debuginfo_subpackages 60 - %global _find_debuginfo_opts -r 61 - %global _missing_build_ids_terminate_build 1 62 - %global _no_recompute_build_ids 1 63 - %{debug_package} 49 + %package debuginfo 50 + Summary: Debug information package for the Linux kernel 51 + %description debuginfo 52 + This package provides debug information for the kernel image and modules from the 53 + %{version} package. 64 54 %endif 65 - # some (but not all) versions of rpmbuild emit %%debug_package with 66 - # %%install. since we've already emitted it manually, that would cause 67 - # a package redefinition error. ensure that doesn't happen 68 - %define debug_package %{nil} 69 - 70 - # later, we make all modules executable so that find-debuginfo.sh strips 71 - # them up. 
but they don't actually need to be executable, so remove the 72 - # executable bit, taking care to do it _after_ find-debuginfo.sh has run 73 - %define __spec_install_post \ 74 - %{?__debug_package:%{__debug_install_post}} \ 75 - %{__arch_install_post} \ 76 - %{__os_install_post} \ 77 - find %{buildroot}/lib/modules/%{KERNELRELEASE} -name "*.ko" -type f \\\ 78 - | xargs --no-run-if-empty chmod u-x 79 55 80 56 %prep 81 57 %setup -q -n linux ··· 67 87 mkdir -p %{buildroot}/lib/modules/%{KERNELRELEASE} 68 88 cp $(%{make} %{makeflags} -s image_name) %{buildroot}/lib/modules/%{KERNELRELEASE}/vmlinuz 69 89 # DEPMOD=true makes depmod no-op. We do not package depmod-generated files. 70 - %{make} %{makeflags} INSTALL_MOD_PATH=%{buildroot} DEPMOD=true modules_install 90 + %{make} %{makeflags} INSTALL_MOD_PATH=%{buildroot} INSTALL_MOD_STRIP=1 DEPMOD=true modules_install 71 91 %{make} %{makeflags} INSTALL_HDR_PATH=%{buildroot}/usr headers_install 72 92 cp System.map %{buildroot}/lib/modules/%{KERNELRELEASE} 73 93 cp .config %{buildroot}/lib/modules/%{KERNELRELEASE}/config ··· 98 118 echo "%exclude /lib/modules/%{KERNELRELEASE}/build" 99 119 } > %{buildroot}/kernel.list 100 120 101 - # make modules executable so that find-debuginfo.sh strips them. 
this 102 - # will be undone later in %%__spec_install_post 103 - find %{buildroot}/lib/modules/%{KERNELRELEASE} -name "*.ko" -type f \ 104 - | xargs --no-run-if-empty chmod u+x 105 - 106 121 %if %{with_debuginfo} 107 122 # copying vmlinux directly to the debug directory means it will not get 108 123 # stripped (but its source paths will still be collected + fixed up) 109 124 mkdir -p %{buildroot}/usr/lib/debug/lib/modules/%{KERNELRELEASE} 110 125 cp vmlinux %{buildroot}/usr/lib/debug/lib/modules/%{KERNELRELEASE} 126 + 127 + echo /usr/lib/debug/lib/modules/%{KERNELRELEASE}/vmlinux > %{buildroot}/debuginfo.list 128 + 129 + while read -r mod; do 130 + mod="${mod%.o}.ko" 131 + dbg="%{buildroot}/usr/lib/debug/lib/modules/%{KERNELRELEASE}/kernel/${mod}" 132 + buildid=$("${READELF}" -n "${mod}" | sed -n 's@^.*Build ID: \(..\)\(.*\)@\1/\2@p') 133 + link="%{buildroot}/usr/lib/debug/.build-id/${buildid}.debug" 134 + 135 + mkdir -p "${dbg%/*}" "${link%/*}" 136 + "${OBJCOPY}" --only-keep-debug "${mod}" "${dbg}" 137 + ln -sf --relative "${dbg}" "${link}" 138 + 139 + echo "${dbg#%{buildroot}}" >> %{buildroot}/debuginfo.list 140 + echo "${link#%{buildroot}}" >> %{buildroot}/debuginfo.list 141 + done < modules.order 111 142 %endif 112 143 113 144 %clean 114 145 rm -rf %{buildroot} 115 - rm -f debugfiles.list debuglinks.list debugsourcefiles.list debugsources.list \ 116 - elfbins.list 117 146 118 147 %post 119 148 if [ -x /usr/bin/kernel-install ]; then ··· 160 171 %defattr (-, root, root) 161 172 /usr/src/kernels/%{KERNELRELEASE} 162 173 /lib/modules/%{KERNELRELEASE}/build 174 + %endif 175 + 176 + %if %{with_debuginfo} 177 + %files -f %{buildroot}/debuginfo.list debuginfo 178 + %defattr (-, root, root) 179 + %exclude /debuginfo.list 163 180 %endif
+1 -1
scripts/rustdoc_test_gen.rs
··· 206 206 207 207 /// The anchor where the test code body starts. 208 208 #[allow(unused)] 209 - static __DOCTEST_ANCHOR: i32 = ::core::line!() as i32 + {body_offset} + 1; 209 + static __DOCTEST_ANCHOR: i32 = ::core::line!() as i32 + {body_offset} + 2; 210 210 {{ 211 211 #![allow(unreachable_pub, clippy::disallowed_names)] 212 212 {body}
-9
security/lsm.h
··· 37 37 38 38 /* LSM framework initializers */ 39 39 40 - #ifdef CONFIG_MMU 41 - int min_addr_init(void); 42 - #else 43 - static inline int min_addr_init(void) 44 - { 45 - return 0; 46 - } 47 - #endif /* CONFIG_MMU */ 48 - 49 40 #ifdef CONFIG_SECURITYFS 50 41 int securityfs_init(void); 51 42 #else
+1 -6
security/lsm_init.c
··· 489 489 */ 490 490 static int __init security_initcall_pure(void) 491 491 { 492 - int rc_adr, rc_lsm; 493 - 494 - rc_adr = min_addr_init(); 495 - rc_lsm = lsm_initcall(pure); 496 - 497 - return (rc_adr ? rc_adr : rc_lsm); 492 + return lsm_initcall(pure); 498 493 } 499 494 pure_initcall(security_initcall_pure); 500 495
+2 -3
security/min_addr.c
··· 5 5 #include <linux/sysctl.h> 6 6 #include <linux/minmax.h> 7 7 8 - #include "lsm.h" 9 - 10 8 /* amount of vm to protect from userspace access by both DAC and the LSM*/ 11 9 unsigned long mmap_min_addr; 12 10 /* amount of vm to protect from userspace using CAP_SYS_RAWIO (DAC) */ ··· 52 54 }, 53 55 }; 54 56 55 - int __init min_addr_init(void) 57 + static int __init mmap_min_addr_init(void) 56 58 { 57 59 register_sysctl_init("vm", min_addr_sysctl_table); 58 60 update_mmap_min_addr(); 59 61 60 62 return 0; 61 63 } 64 + pure_initcall(mmap_min_addr_init);
+14 -4
sound/hda/codecs/realtek/alc269.c
··· 3383 3383 struct snd_pcm_substream *substream, 3384 3384 int action) 3385 3385 { 3386 + static const struct coef_fw dis_coefs[] = { 3387 + WRITE_COEF(0x24, 0x0013), WRITE_COEF(0x25, 0x0000), WRITE_COEF(0x26, 0xC203), 3388 + WRITE_COEF(0x28, 0x0004), WRITE_COEF(0x29, 0xb023), 3389 + }; /* Disable AMP silence detection */ 3390 + static const struct coef_fw en_coefs[] = { 3391 + WRITE_COEF(0x24, 0x0013), WRITE_COEF(0x25, 0x0000), WRITE_COEF(0x26, 0xC203), 3392 + WRITE_COEF(0x28, 0x0084), WRITE_COEF(0x29, 0xb023), 3393 + }; /* Enable AMP silence detection */ 3394 + 3386 3395 switch (action) { 3387 3396 case HDA_GEN_PCM_ACT_OPEN: 3397 + alc_process_coef_fw(codec, dis_coefs); 3388 3398 alc_write_coefex_idx(codec, 0x5a, 0x00, 0x954f); /* write gpio3 to high */ 3389 3399 break; 3390 3400 case HDA_GEN_PCM_ACT_CLOSE: 3401 + alc_process_coef_fw(codec, en_coefs); 3391 3402 alc_write_coefex_idx(codec, 0x5a, 0x00, 0x554f); /* write gpio3 as default value */ 3392 3403 break; 3393 3404 } ··· 6750 6739 SND_PCI_QUIRK(0x103c, 0x8c8c, "HP EliteBook 660", ALC236_FIXUP_HP_GPIO_LED), 6751 6740 SND_PCI_QUIRK(0x103c, 0x8c8d, "HP ProBook 440 G11", ALC236_FIXUP_HP_GPIO_LED), 6752 6741 SND_PCI_QUIRK(0x103c, 0x8c8e, "HP ProBook 460 G11", ALC236_FIXUP_HP_GPIO_LED), 6742 + SND_PCI_QUIRK(0x103c, 0x8c8f, "HP EliteBook 630 G11", ALC236_FIXUP_HP_GPIO_LED), 6753 6743 SND_PCI_QUIRK(0x103c, 0x8c90, "HP EliteBook 640", ALC236_FIXUP_HP_GPIO_LED), 6754 6744 SND_PCI_QUIRK(0x103c, 0x8c91, "HP EliteBook 660", ALC236_FIXUP_HP_GPIO_LED), 6755 6745 SND_PCI_QUIRK(0x103c, 0x8c96, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), ··· 7348 7336 SND_PCI_QUIRK(0x1d05, 0x1409, "TongFang GMxIXxx", ALC2XX_FIXUP_HEADSET_MIC), 7349 7337 SND_PCI_QUIRK(0x1d05, 0x300f, "TongFang X6AR5xxY", ALC2XX_FIXUP_HEADSET_MIC), 7350 7338 SND_PCI_QUIRK(0x1d05, 0x3019, "TongFang X6FR5xxY", ALC2XX_FIXUP_HEADSET_MIC), 7339 + SND_PCI_QUIRK(0x1d05, 0x3031, "TongFang X6AR55xU", ALC2XX_FIXUP_HEADSET_MIC), 7351 7340 SND_PCI_QUIRK(0x1d17, 0x3288, 
"Haier Boyue G42", ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS), 7352 7341 SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC), 7353 7342 SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE), ··· 7359 7346 SND_PCI_QUIRK(0x1ee7, 0x2078, "HONOR BRB-X M1010", ALC2XX_FIXUP_HEADSET_MIC), 7360 7347 SND_PCI_QUIRK(0x1f66, 0x0105, "Ayaneo Portable Game Player", ALC287_FIXUP_CS35L41_I2C_2), 7361 7348 SND_PCI_QUIRK(0x2014, 0x800a, "Positivo ARN50", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 7349 + SND_PCI_QUIRK(0x2039, 0x0001, "Inspur S14-G1", ALC295_FIXUP_CHROME_BOOK), 7362 7350 SND_PCI_QUIRK(0x2782, 0x0214, "VAIO VJFE-CL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 7363 7351 SND_PCI_QUIRK(0x2782, 0x0228, "Infinix ZERO BOOK 13", ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13), 7364 7352 SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO), ··· 7819 7805 {0x12, 0x90a60140}, 7820 7806 {0x19, 0x04a11030}, 7821 7807 {0x21, 0x04211020}), 7822 - SND_HDA_PIN_QUIRK(0x10ec0274, 0x1d05, "TongFang", ALC274_FIXUP_HP_HEADSET_MIC, 7823 - {0x17, 0x90170110}, 7824 - {0x19, 0x03a11030}, 7825 - {0x21, 0x03211020}), 7826 7808 SND_HDA_PIN_QUIRK(0x10ec0282, 0x1025, "Acer", ALC282_FIXUP_ACER_DISABLE_LINEOUT, 7827 7809 ALC282_STANDARD_PINS, 7828 7810 {0x12, 0x90a609c0},
+15
sound/soc/amd/yc/acp6x-mach.c
··· 545 545 { 546 546 .driver_data = &acp6x_card, 547 547 .matches = { 548 + DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC."), 549 + DMI_MATCH(DMI_PRODUCT_NAME, "ASUS EXPERTBOOK PM1503CDA"), 550 + } 551 + }, 552 + { 553 + .driver_data = &acp6x_card, 554 + .matches = { 548 555 DMI_MATCH(DMI_BOARD_VENDOR, "HP"), 549 556 DMI_MATCH(DMI_PRODUCT_NAME, "OMEN by HP Gaming Laptop 16z-n000"), 550 557 } ··· 682 675 DMI_MATCH(DMI_PRODUCT_NAME, "GOH-X"), 683 676 } 684 677 }, 678 + { 679 + .driver_data = &acp6x_card, 680 + .matches = { 681 + DMI_MATCH(DMI_BOARD_VENDOR, "RB"), 682 + DMI_MATCH(DMI_BOARD_NAME, "XyloD5_RBU"), 683 + } 684 + }, 685 + 685 686 {} 686 687 }; 687 688
+1 -1
sound/soc/codecs/cs35l45.c
··· 453 453 SND_SOC_DAPM_AIF_OUT("ASP_TX2", NULL, 1, CS35L45_ASP_ENABLES1, CS35L45_ASP_TX2_EN_SHIFT, 0), 454 454 SND_SOC_DAPM_AIF_OUT("ASP_TX3", NULL, 2, CS35L45_ASP_ENABLES1, CS35L45_ASP_TX3_EN_SHIFT, 0), 455 455 SND_SOC_DAPM_AIF_OUT("ASP_TX4", NULL, 3, CS35L45_ASP_ENABLES1, CS35L45_ASP_TX4_EN_SHIFT, 0), 456 - SND_SOC_DAPM_AIF_OUT("ASP_TX5", NULL, 3, CS35L45_ASP_ENABLES1, CS35L45_ASP_TX5_EN_SHIFT, 0), 456 + SND_SOC_DAPM_AIF_OUT("ASP_TX5", NULL, 4, CS35L45_ASP_ENABLES1, CS35L45_ASP_TX5_EN_SHIFT, 0), 457 457 458 458 SND_SOC_DAPM_MUX("ASP_TX1 Source", SND_SOC_NOPM, 0, 0, &cs35l45_asp_muxes[0]), 459 459 SND_SOC_DAPM_MUX("ASP_TX2 Source", SND_SOC_NOPM, 0, 0, &cs35l45_asp_muxes[1]),
-1
sound/soc/fsl/imx-card.c
··· 346 346 SND_SOC_DAIFMT_PDM; 347 347 } else { 348 348 slots = 2; 349 - slot_width = params_physical_width(params); 350 349 fmt = (rtd->dai_link->dai_fmt & ~SND_SOC_DAIFMT_FORMAT_MASK) | 351 350 SND_SOC_DAIFMT_I2S; 352 351 }
+1 -1
sound/soc/intel/boards/sof_es8336.c
··· 120 120 gpiod_set_value_cansleep(priv->gpio_speakers, priv->speaker_en); 121 121 122 122 if (quirk & SOF_ES8336_HEADPHONE_GPIO) 123 - gpiod_set_value_cansleep(priv->gpio_headphone, priv->speaker_en); 123 + gpiod_set_value_cansleep(priv->gpio_headphone, !priv->speaker_en); 124 124 125 125 } 126 126
+1
sound/soc/intel/boards/sof_sdw.c
··· 838 838 SND_PCI_QUIRK(0x17aa, 0x2347, "Lenovo P16", SOC_SDW_CODEC_MIC), 839 839 SND_PCI_QUIRK(0x17aa, 0x2348, "Lenovo P16", SOC_SDW_CODEC_MIC), 840 840 SND_PCI_QUIRK(0x17aa, 0x2349, "Lenovo P1", SOC_SDW_CODEC_MIC), 841 + SND_PCI_QUIRK(0x17aa, 0x3821, "Lenovo 0x3821", SOC_SDW_SIDECAR_AMPS), 841 842 {} 842 843 }; 843 844
+1 -1
sound/soc/intel/common/soc-acpi-intel-ptl-match.c
··· 442 442 .adr = 0x000230025D132001ull, 443 443 .num_endpoints = 1, 444 444 .endpoints = &spk_r_endpoint, 445 - .name_prefix = "rt1320-1" 445 + .name_prefix = "rt1320-2" 446 446 } 447 447 }; 448 448
+2 -1
tools/objtool/check.c
··· 197 197 * as well as changes to the source code itself between versions (since 198 198 * these come from the Rust standard library). 199 199 */ 200 - return str_ends_with(func->name, "_4core5sliceSp15copy_from_slice17len_mismatch_fail") || 200 + return str_ends_with(func->name, "_4core3num22from_ascii_radix_panic") || 201 + str_ends_with(func->name, "_4core5sliceSp15copy_from_slice17len_mismatch_fail") || 201 202 str_ends_with(func->name, "_4core6option13expect_failed") || 202 203 str_ends_with(func->name, "_4core6option13unwrap_failed") || 203 204 str_ends_with(func->name, "_4core6result13unwrap_failed") ||
+8 -6
tools/objtool/disas.c
··· 108 108 109 109 #define DINFO_FPRINTF(dinfo, ...) \ 110 110 ((*(dinfo)->fprintf_func)((dinfo)->stream, __VA_ARGS__)) 111 + #define bfd_vma_fmt \ 112 + __builtin_choose_expr(sizeof(bfd_vma) == sizeof(unsigned long), "%#lx <%s>", "%#llx <%s>") 111 113 112 114 static int disas_result_fprintf(struct disas_context *dctx, 113 115 const char *fmt, va_list ap) ··· 172 170 173 171 if (sym) { 174 172 sprint_name(symstr, sym->name, addr - sym->offset); 175 - DINFO_FPRINTF(dinfo, "0x%lx <%s>", addr, symstr); 173 + DINFO_FPRINTF(dinfo, bfd_vma_fmt, addr, symstr); 176 174 } else { 177 175 str = offstr(sec, addr); 178 - DINFO_FPRINTF(dinfo, "0x%lx <%s>", addr, str); 176 + DINFO_FPRINTF(dinfo, bfd_vma_fmt, addr, str); 179 177 free(str); 180 178 } 181 179 } ··· 254 252 * example: "lea 0x0(%rip),%rdi". The kernel can reference 255 253 * the next IP with _THIS_IP_ macro. 256 254 */ 257 - DINFO_FPRINTF(dinfo, "0x%lx <_THIS_IP_>", addr); 255 + DINFO_FPRINTF(dinfo, bfd_vma_fmt, addr, "_THIS_IP_"); 258 256 return; 259 257 } 260 258 ··· 266 264 */ 267 265 if (reloc->sym->type == STT_SECTION) { 268 266 str = offstr(reloc->sym->sec, reloc->sym->offset + offset); 269 - DINFO_FPRINTF(dinfo, "0x%lx <%s>", addr, str); 267 + DINFO_FPRINTF(dinfo, bfd_vma_fmt, addr, str); 270 268 free(str); 271 269 } else { 272 270 sprint_name(symstr, reloc->sym->name, offset); 273 - DINFO_FPRINTF(dinfo, "0x%lx <%s>", addr, symstr); 271 + DINFO_FPRINTF(dinfo, bfd_vma_fmt, addr, symstr); 274 272 } 275 273 } 276 274 ··· 313 311 */ 314 312 sym = insn_call_dest(insn); 315 313 if (sym && (sym->offset == addr || (sym->offset == 0 && is_reloc))) { 316 - DINFO_FPRINTF(dinfo, "0x%lx <%s>", addr, sym->name); 314 + DINFO_FPRINTF(dinfo, bfd_vma_fmt, addr, sym->name); 317 315 return; 318 316 } 319 317
+6 -7
tools/objtool/elf.c
··· 18 18 #include <errno.h> 19 19 #include <libgen.h> 20 20 #include <ctype.h> 21 + #include <linux/align.h> 22 + #include <linux/kernel.h> 21 23 #include <linux/interval_tree_generic.h> 24 + #include <linux/log2.h> 22 25 #include <objtool/builtin.h> 23 26 #include <objtool/elf.h> 24 27 #include <objtool/warn.h> 25 - 26 - #define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1)) 27 - #define ALIGN_UP_POW2(x) (1U << ((8 * sizeof(x)) - __builtin_clz((x) - 1U))) 28 - #define MAX(a, b) ((a) > (b) ? (a) : (b)) 29 28 30 29 static inline u32 str_hash(const char *str) 31 30 { ··· 1335 1336 return -1; 1336 1337 } 1337 1338 1338 - offset = ALIGN_UP(strtab->sh.sh_size, strtab->sh.sh_addralign); 1339 + offset = ALIGN(strtab->sh.sh_size, strtab->sh.sh_addralign); 1339 1340 1340 1341 if (!elf_add_data(elf, strtab, str, strlen(str) + 1)) 1341 1342 return -1; ··· 1377 1378 sec->data->d_size = size; 1378 1379 sec->data->d_align = 1; 1379 1380 1380 - offset = ALIGN_UP(sec->sh.sh_size, sec->sh.sh_addralign); 1381 + offset = ALIGN(sec->sh.sh_size, sec->sh.sh_addralign); 1381 1382 sec->sh.sh_size = offset + size; 1382 1383 1383 1384 mark_sec_changed(elf, sec, true); ··· 1501 1502 rsec->data->d_size = nr_relocs_new * elf_rela_size(elf); 1502 1503 rsec->sh.sh_size = rsec->data->d_size; 1503 1504 1504 - nr_alloc = MAX(64, ALIGN_UP_POW2(nr_relocs_new)); 1505 + nr_alloc = max(64UL, roundup_pow_of_two(nr_relocs_new)); 1505 1506 if (nr_alloc <= rsec->nr_alloc_relocs) 1506 1507 return 0; 1507 1508
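The elf.c hunk replaces objtool's private `ALIGN_UP`, `ALIGN_UP_POW2`, and `MAX` macros with the shared kernel helpers `ALIGN()`, `roundup_pow_of_two()`, and `max()`. The arithmetic being swapped in is the standard power-of-two align-up; a quick sketch of the same computations:

```rust
// ALIGN(x, a) for power-of-two `a`: round x up to the next multiple.
fn align_up(x: u64, align: u64) -> u64 {
    debug_assert!(align.is_power_of_two());
    (x + align - 1) & !(align - 1)
}

fn main() {
    assert_eq!(align_up(13, 8), 16);
    assert_eq!(align_up(16, 8), 16); // already aligned: unchanged

    // roundup_pow_of_two() has a direct std equivalent in Rust:
    assert_eq!(100u64.next_power_of_two(), 128);
    assert_eq!(64u64.next_power_of_two(), 64);

    // The diff's allocation sizing: max(64, roundup_pow_of_two(n)).
    let nr_relocs_new = 10u64;
    assert_eq!(64u64.max(nr_relocs_new.next_power_of_two()), 64);
}
```

The `64UL` floor in the diff keeps small relocation sections from repeatedly reallocating as entries are appended.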
+11 -3
tools/objtool/klp-diff.c
··· 1425 1425 { 1426 1426 struct section *patched_sec; 1427 1427 1428 - if (create_fake_symbols(e->patched)) 1429 - return -1; 1430 - 1431 1428 for_each_sec(e->patched, patched_sec) { 1432 1429 if (is_special_section(patched_sec)) { 1433 1430 if (clone_special_section(e, patched_sec)) ··· 1699 1702 1700 1703 e.out = elf_create_file(&e.orig->ehdr, argv[2]); 1701 1704 if (!e.out) 1705 + return -1; 1706 + 1707 + /* 1708 + * Special section fake symbols are needed so that individual special 1709 + * section entries can be extracted by clone_special_sections(). 1710 + * 1711 + * Note the fake symbols are also needed by clone_included_functions() 1712 + * because __WARN_printf() call sites add references to bug table 1713 + * entries in the calling functions. 1714 + */ 1715 + if (create_fake_symbols(e.patched)) 1702 1716 return -1; 1703 1717 1704 1718 if (clone_included_functions(&e))
+1
tools/testing/selftests/kvm/Makefile.kvm
··· 251 251 LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/$(ARCH)/include 252 252 CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \ 253 253 -Wno-gnu-variable-sized-type-not-at-end -MD -MP -DCONFIG_64BIT \ 254 + -U_FORTIFY_SOURCE \ 254 255 -fno-builtin-memcmp -fno-builtin-memcpy \ 255 256 -fno-builtin-memset -fno-builtin-strnlen \ 256 257 -fno-stack-protector -fno-PIE -fno-strict-aliasing \
+64
tools/testing/selftests/net/udpgro_fwd.sh
··· 162 162 echo " ok" 163 163 } 164 164 165 + run_test_csum() { 166 + local -r msg="$1" 167 + local -r dst="$2" 168 + local csum_error_filter=UdpInCsumErrors 169 + local csum_errors 170 + 171 + printf "%-40s" "$msg" 172 + 173 + is_ipv6 "$dst" && csum_error_filter=Udp6InCsumErrors 174 + 175 + ip netns exec "$NS_DST" iperf3 -s -1 >/dev/null & 176 + wait_local_port_listen "$NS_DST" 5201 tcp 177 + local spid="$!" 178 + ip netns exec "$NS_SRC" iperf3 -c "$dst" -t 2 >/dev/null 179 + local retc="$?" 180 + wait "$spid" 181 + local rets="$?" 182 + if [ "$rets" -ne 0 ] || [ "$retc" -ne 0 ]; then 183 + echo " fail client exit code $retc, server $rets" 184 + ret=1 185 + return 186 + fi 187 + 188 + csum_errors=$(ip netns exec "$NS_DST" nstat -as "$csum_error_filter" | 189 + grep "$csum_error_filter" | awk '{print $2}') 190 + if [ -n "$csum_errors" ] && [ "$csum_errors" -gt 0 ]; then 191 + echo " fail - csum error on receive $csum_errors, expected 0" 192 + ret=1 193 + return 194 + fi 195 + echo " ok" 196 + } 197 + 165 198 run_bench() { 166 199 local -r msg=$1 167 200 local -r dst=$2 ··· 292 259 # stray traffic on top of the UDP tunnel 293 260 ip netns exec $NS_SRC $PING -q -c 1 $OL_NET$DST_NAT >/dev/null 294 261 run_test "GRO fwd over UDP tunnel" $OL_NET$DST_NAT 10 10 $OL_NET$DST 262 + cleanup 263 + 264 + # force segmentation and re-aggregation 265 + create_vxlan_pair 266 + ip netns exec "$NS_DST" ethtool -K veth"$DST" generic-receive-offload on 267 + ip netns exec "$NS_SRC" ethtool -K veth"$SRC" tso off 268 + ip -n "$NS_SRC" link set dev veth"$SRC" mtu 1430 269 + 270 + # forward to a 2nd veth pair 271 + ip -n "$NS_DST" link add br0 type bridge 272 + ip -n "$NS_DST" link set dev veth"$DST" master br0 273 + 274 + # segment the aggregated TSO packet, without csum offload 275 + ip -n "$NS_DST" link add veth_segment type veth peer veth_rx 276 + for FEATURE in tso tx-udp-segmentation tx-checksumming; do 277 + ip netns exec "$NS_DST" ethtool -K veth_segment "$FEATURE" off 278 + done 
279 + ip -n "$NS_DST" link set dev veth_segment master br0 up 280 + ip -n "$NS_DST" link set dev br0 up 281 + ip -n "$NS_DST" link set dev veth_rx up 282 + 283 + # move the lower layer IP in the last added veth 284 + for ADDR in "$BM_NET_V4$DST/24" "$BM_NET_V6$DST/64"; do 285 + # the dad argument will let iproute emit a unharmful warning 286 + # with ipv4 addresses 287 + ip -n "$NS_DST" addr del dev veth"$DST" "$ADDR" 288 + ip -n "$NS_DST" addr add dev veth_rx "$ADDR" \ 289 + nodad 2>/dev/null 290 + done 291 + 292 + run_test_csum "GSO after GRO" "$OL_NET$DST" 295 293 cleanup 296 294 done 297 295
+24 -20
virt/kvm/eventfd.c
··· 157 157 } 158 158 159 159 160 - /* assumes kvm->irqfds.lock is held */ 161 - static bool 162 - irqfd_is_active(struct kvm_kernel_irqfd *irqfd) 160 + static bool irqfd_is_active(struct kvm_kernel_irqfd *irqfd) 163 161 { 162 + /* 163 + * Assert that either irqfds.lock or SRCU is held, as irqfds.lock must 164 + * be held to prevent false positives (on the irqfd being active), and 165 + * while false negatives are impossible as irqfds are never added back 166 + * to the list once they're deactivated, the caller must at least hold 167 + * SRCU to guard against routing changes if the irqfd is deactivated. 168 + */ 169 + lockdep_assert_once(lockdep_is_held(&irqfd->kvm->irqfds.lock) || 170 + srcu_read_lock_held(&irqfd->kvm->irq_srcu)); 171 + 164 172 return list_empty(&irqfd->list) ? false : true; 165 173 } 166 174 167 175 /* 168 176 * Mark the irqfd as inactive and schedule it for removal 169 - * 170 - * assumes kvm->irqfds.lock is held 171 177 */ 172 - static void 173 - irqfd_deactivate(struct kvm_kernel_irqfd *irqfd) 178 + static void irqfd_deactivate(struct kvm_kernel_irqfd *irqfd) 174 179 { 180 + lockdep_assert_held(&irqfd->kvm->irqfds.lock); 181 + 175 182 BUG_ON(!irqfd_is_active(irqfd)); 176 183 177 184 list_del_init(&irqfd->list); ··· 224 217 seq = read_seqcount_begin(&irqfd->irq_entry_sc); 225 218 irq = irqfd->irq_entry; 226 219 } while (read_seqcount_retry(&irqfd->irq_entry_sc, seq)); 227 - /* An event has been signaled, inject an interrupt */ 228 - if (kvm_arch_set_irq_inatomic(&irq, kvm, 220 + 221 + /* 222 + * An event has been signaled, inject an interrupt unless the 223 + * irqfd is being deassigned (isn't active), in which case the 224 + * routing information may be stale (once the irqfd is removed 225 + * from the list, it will stop receiving routing updates). 
226 + */ 227 + if (unlikely(!irqfd_is_active(irqfd)) || 228 + kvm_arch_set_irq_inatomic(&irq, kvm, 229 229 KVM_USERSPACE_IRQ_SOURCE_ID, 1, 230 230 false) == -EWOULDBLOCK) 231 231 schedule_work(&irqfd->inject); ··· 599 585 spin_lock_irq(&kvm->irqfds.lock); 600 586 601 587 list_for_each_entry_safe(irqfd, tmp, &kvm->irqfds.items, list) { 602 - if (irqfd->eventfd == eventfd && irqfd->gsi == args->gsi) { 603 - /* 604 - * This clearing of irq_entry.type is needed for when 605 - * another thread calls kvm_irq_routing_update before 606 - * we flush workqueue below (we synchronize with 607 - * kvm_irq_routing_update using irqfds.lock). 608 - */ 609 - write_seqcount_begin(&irqfd->irq_entry_sc); 610 - irqfd->irq_entry.type = 0; 611 - write_seqcount_end(&irqfd->irq_entry_sc); 588 + if (irqfd->eventfd == eventfd && irqfd->gsi == args->gsi) 612 589 irqfd_deactivate(irqfd); 613 - } 614 590 } 615 591 616 592 spin_unlock_irq(&kvm->irqfds.lock);
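The eventfd.c hunk above leans on the seqcount read in `irqfd_wakeup()`: readers retry until they observe a stable snapshot of `irq_entry` rather than taking a lock in the wakeup path. A minimal userspace sketch of that read/retry pattern (single value, simplified orderings; not the kernel's seqcount_t):

```rust
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};

// Seqlock-style snapshot: an odd sequence means a writer is mid-update;
// a reader retries until the sequence is even and unchanged across the
// data read, mirroring read_seqcount_begin()/read_seqcount_retry().
struct SeqData {
    seq: AtomicUsize,
    value: AtomicU64,
}

impl SeqData {
    fn read(&self) -> u64 {
        loop {
            let start = self.seq.load(Ordering::Acquire);
            if start % 2 == 1 {
                continue; // writer in progress, retry
            }
            let v = self.value.load(Ordering::Acquire);
            if self.seq.load(Ordering::Acquire) == start {
                return v; // stable snapshot
            }
        }
    }

    fn write(&self, v: u64) {
        self.seq.fetch_add(1, Ordering::Release); // odd: writer active
        self.value.store(v, Ordering::Release);
        self.seq.fetch_add(1, Ordering::Release); // even: done
    }
}

fn main() {
    let d = SeqData {
        seq: AtomicUsize::new(0),
        value: AtomicU64::new(7),
    };
    assert_eq!(d.read(), 7);
    d.write(42);
    assert_eq!(d.read(), 42);
}
```

This is why the diff can drop the explicit `irq_entry.type = 0` clearing on deassign: the wakeup path now checks `irqfd_is_active()` itself instead of relying on readers observing a zeroed routing entry.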