···
 E: vfalico@gmail.com
 D: Co-maintainer and co-author of the network bonding driver.
 
+N: Thomas Falcon
+E: tlfalcon@linux.ibm.com
+D: Initial author of the IBM ibmvnic network driver
+
 N: János Farkas
 E: chexum@shadow.banki.hu
 D: romfs, various (mostly networking) fixes
···
 S: Am Muehlenweg 38
 S: D53424 Remagen
 S: Germany
+
+N: Jonathan Lemon
+E: jonathan.lemon@gmail.com
+D: OpenCompute PTP clock driver (ptp_ocp)
 
 N: Colin Leroy
 E: colin@colino.net
···
-What:		/sys/bus/platform/devices/INOU0000:XX/fn_lock_toggle_enable
+What:		/sys/bus/platform/devices/INOU0000:XX/fn_lock
 Date:		November 2025
 KernelVersion:	6.19
 Contact:	Armin Wolf <W_Armin@gmx.de>
···
 
		Reading this file returns the current enable status of the FN lock functionality.
 
-What:		/sys/bus/platform/devices/INOU0000:XX/super_key_toggle_enable
+What:		/sys/bus/platform/devices/INOU0000:XX/super_key_enable
 Date:		November 2025
 KernelVersion:	6.19
 Contact:	Armin Wolf <W_Armin@gmx.de>
 Description:
-		Allows userspace applications to enable/disable the super key functionality
-		of the integrated keyboard by writing "1"/"0" into this file.
+		Allows userspace applications to enable/disable the super key of the integrated
+		keyboard by writing "1"/"0" into this file.
 
-		Reading this file returns the current enable status of the super key functionality.
+		Reading this file returns the current enable status of the super key.
 
 What:		/sys/bus/platform/devices/INOU0000:XX/touchpad_toggle_enable
 Date:		November 2025
Documentation/admin-guide/kernel-parameters.txt (+13)
···
 	TPM	TPM drivers are enabled.
 	UMS	USB Mass Storage support is enabled.
 	USB	USB support is enabled.
+	NVME	NVMe support is enabled.
 	USBHID	USB Human Interface Device support is enabled.
 	V4L	Video For Linux support is enabled.
 	VGA	The VGA console has been enabled.
···
 			'node', 'default' can be specified
 			This can be set from sysctl after boot.
 			See Documentation/admin-guide/sysctl/vm.rst for details.
+
+	nvme.quirks=	[NVME] A list of quirk entries to augment the built-in
+			nvme quirk list. List entries are separated by a
+			'-' character.
+			Each entry has the form VendorID:ProductID:quirk_names.
+			The IDs are 4-digit hex numbers and quirk_names is a
+			list of quirk names separated by commas. A quirk name
+			can be prefixed by '^', meaning that the specified
+			quirk must be disabled.
+
+			Example:
+			nvme.quirks=7710:2267:bogus_nid,^identify_cns-9900:7711:broken_msi
 
 	ohci1394_dma=early	[HW,EARLY] enable debugging via the ohci1394 driver.
 			See Documentation/core-api/debugging-via-ohci1394.rst for more
···
 
 The ``uniwill-laptop`` driver allows the user to enable/disable:
 
-- the FN and super key lock functionality of the integrated keyboard
+- the FN lock and super key of the integrated keyboard
 - the touchpad toggle functionality of the integrated touchpad
 
 See Documentation/ABI/testing/sysfs-driver-uniwill-laptop for details.
···
-.. SPDX-License-Identifier: GPL-2.0-only
-
-Kernel driver sa67mcu
-=====================
-
-Supported chips:
-
- * Kontron sa67mcu
-
-   Prefix: 'sa67mcu'
-
-   Datasheet: not available
-
-Authors: Michael Walle <mwalle@kernel.org>
-
-Description
------------
-
-The sa67mcu is a board management controller which also exposes a hardware
-monitoring controller.
-
-The controller has two voltage and one temperature sensor. The values are
-hold in two 8 bit registers to form one 16 bit value. Reading the lower byte
-will also capture the high byte to make the access atomic. The unit of the
-volatge sensors are 1mV and the unit of the temperature sensor is 0.1degC.
-
-Sysfs entries
--------------
-
-The following attributes are supported.
-
-======================= ========================================================
-in0_label		"VDDIN"
-in0_input		Measured VDDIN voltage.
-
-in1_label		"VDD_RTC"
-in1_input		Measured VDD_RTC voltage.
-
-temp1_input		MCU temperature. Roughly the board temperature.
-======================= ========================================================
Documentation/netlink/specs/nfsd.yaml (+2 -2)
···
       - compound-ops
   -
     name: threads-set
-    doc: set the number of running threads
+    doc: set the maximum number of running threads
     attribute-set: server
     flags: [admin-perm]
     do:
···
       - min-threads
   -
     name: threads-get
-    doc: get the number of running threads
+    doc: get the maximum number of running threads
     attribute-set: server
     do:
       reply:
Documentation/sound/alsa-configuration.rst (+4)
···
   audible volume
 * bit 25: ``mixer_capture_min_mute``
   Similar to bit 24 but for capture streams
+* bit 26: ``skip_iface_setup``
+  Skip the probe-time interface setup (usb_set_interface,
+  init_pitch, init_sample_rate); redundant with
+  snd_usb_endpoint_prepare() at stream-open time
 
 This module supports multiple devices, autoprobe and hotplugging.
MAINTAINERS (+10 -27)
···
 F:	drivers/thermal/thermal_mmio.c
 
 AMAZON ETHERNET DRIVERS
-M:	Shay Agroskin <shayagr@amazon.com>
 M:	Arthur Kiyanovski <akiyano@amazon.com>
-R:	David Arinzon <darinzon@amazon.com>
-R:	Saeed Bishara <saeedb@amazon.com>
+M:	David Arinzon <darinzon@amazon.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	Documentation/networking/device_drivers/ethernet/amazon/ena.rst
···
 
 BLUETOOTH SUBSYSTEM
 M:	Marcel Holtmann <marcel@holtmann.org>
-M:	Johan Hedberg <johan.hedberg@gmail.com>
 M:	Luiz Augusto von Dentz <luiz.dentz@gmail.com>
 L:	linux-bluetooth@vger.kernel.org
 S:	Supported
···
 
 FREESCALE IMX / MXC FEC DRIVER
 M:	Wei Fang <wei.fang@nxp.com>
+R:	Frank Li <frank.li@nxp.com>
 R:	Shenwei Wang <shenwei.wang@nxp.com>
-R:	Clark Wang <xiaoning.wang@nxp.com>
 L:	imx@lists.linux.dev
 L:	netdev@vger.kernel.org
 S:	Maintained
···
 F:	Documentation/trace/ftrace*
 F:	arch/*/*/*/*ftrace*
 F:	arch/*/*/*ftrace*
-F:	include/*/ftrace.h
+F:	include/*/*ftrace*
 F:	kernel/trace/fgraph.c
 F:	kernel/trace/ftrace*
 F:	samples/ftrace
···
 M:	Haren Myneni <haren@linux.ibm.com>
 M:	Rick Lindsley <ricklind@linux.ibm.com>
 R:	Nick Child <nnac123@linux.ibm.com>
-R:	Thomas Falcon <tlfalcon@linux.ibm.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/net/ethernet/ibm/ibmvnic.*
···
 
 KERNEL UNIT TESTING FRAMEWORK (KUnit)
 M:	Brendan Higgins <brendan.higgins@linux.dev>
-M:	David Gow <davidgow@google.com>
+M:	David Gow <david@davidgow.net>
 R:	Rae Moar <raemoar63@gmail.com>
 L:	linux-kselftest@vger.kernel.org
 L:	kunit-dev@googlegroups.com
···
 F:	drivers/platform/x86/hp/hp_accel.c
 
 LIST KUNIT TEST
-M:	David Gow <davidgow@google.com>
+M:	David Gow <david@davidgow.net>
 L:	linux-kselftest@vger.kernel.org
 L:	kunit-dev@googlegroups.com
 S:	Maintained
···
 F:	include/linux/soc/marvell/octeontx2/
 
 MARVELL GIGABIT ETHERNET DRIVERS (skge/sky2)
-M:	Mirko Lindner <mlindner@marvell.com>
-M:	Stephen Hemminger <stephen@networkplumber.org>
 L:	netdev@vger.kernel.org
-S:	Odd fixes
+S:	Orphan
 F:	drivers/net/ethernet/marvell/sk*
 
 MARVELL LIBERTAS WIRELESS DRIVER
···
 M:	Sunil Goutham <sgoutham@marvell.com>
 M:	Linu Cherian <lcherian@marvell.com>
 M:	Geetha sowjanya <gakula@marvell.com>
-M:	Jerin Jacob <jerinj@marvell.com>
 M:	hariprasad <hkelam@marvell.com>
 M:	Subbaraya Sundeep <sbhatta@marvell.com>
 L:	netdev@vger.kernel.org
···
 F:	drivers/perf/marvell_pem_pmu.c
 
 MARVELL PRESTERA ETHERNET SWITCH DRIVER
-M:	Taras Chornyi <taras.chornyi@plvision.eu>
+M:	Elad Nachman <enachman@marvell.com>
 S:	Supported
 W:	https://github.com/Marvell-switching/switchdev-prestera
 F:	drivers/net/ethernet/marvell/prestera/
···
 
 MEDIATEK ETHERNET DRIVER
 M:	Felix Fietkau <nbd@nbd.name>
-M:	Sean Wang <sean.wang@mediatek.com>
 M:	Lorenzo Bianconi <lorenzo@kernel.org>
 L:	netdev@vger.kernel.org
 S:	Maintained
···
 MEDIATEK SWITCH DRIVER
 M:	Chester A. Unal <chester.a.unal@arinc9.com>
 M:	Daniel Golle <daniel@makrotopia.org>
-M:	DENG Qingfang <dqfext@gmail.com>
-M:	Sean Wang <sean.wang@mediatek.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/net/dsa/mt7530-mdio.c
···
 
 OCELOT ETHERNET SWITCH DRIVER
 M:	Vladimir Oltean <vladimir.oltean@nxp.com>
-M:	Claudiu Manoil <claudiu.manoil@nxp.com>
-M:	Alexandre Belloni <alexandre.belloni@bootlin.com>
 M:	UNGLinuxDriver@microchip.com
 L:	netdev@vger.kernel.org
 S:	Supported
···
 F:	include/dt-bindings/
 
 OPENCOMPUTE PTP CLOCK DRIVER
-M:	Jonathan Lemon <jonathan.lemon@gmail.com>
 M:	Vadim Fedorenko <vadim.fedorenko@linux.dev>
 L:	netdev@vger.kernel.org
 S:	Maintained
···
 F:	drivers/pci/controller/pci-aardvark.c
 
 PCI DRIVER FOR ALTERA PCIE IP
-M:	Joyce Ooi <joyce.ooi@intel.com>
 L:	linux-pci@vger.kernel.org
-S:	Supported
+S:	Orphan
 F:	Documentation/devicetree/bindings/pci/altr,pcie-root-port.yaml
 F:	drivers/pci/controller/pcie-altera.c
···
 F:	Documentation/PCI/pci-error-recovery.rst
 
 PCI MSI DRIVER FOR ALTERA MSI IP
-M:	Joyce Ooi <joyce.ooi@intel.com>
 L:	linux-pci@vger.kernel.org
-S:	Supported
+S:	Orphan
 F:	Documentation/devicetree/bindings/interrupt-controller/altr,msi-controller.yaml
 F:	drivers/pci/controller/pcie-altera-msi.c
···
 F:	drivers/scsi/qedi/
 
 QLOGIC QL4xxx ETHERNET DRIVER
-M:	Manish Chopra <manishc@marvell.com>
 L:	netdev@vger.kernel.org
-S:	Maintained
+S:	Orphan
 F:	drivers/net/ethernet/qlogic/qed/
 F:	drivers/net/ethernet/qlogic/qede/
 F:	include/linux/qed/
···
 F:	Documentation/devicetree/bindings/pwm/kontron,sl28cpld-pwm.yaml
 F:	Documentation/devicetree/bindings/watchdog/kontron,sl28cpld-wdt.yaml
 F:	drivers/gpio/gpio-sl28cpld.c
-F:	drivers/hwmon/sa67mcu-hwmon.c
 F:	drivers/hwmon/sl28cpld-hwmon.c
 F:	drivers/irqchip/irq-sl28cpld.c
 F:	drivers/pwm/pwm-sl28cpld.c
···
 #ifndef _ASM_RUNTIME_CONST_H
 #define _ASM_RUNTIME_CONST_H
 
+#ifdef MODULE
+  #error "Cannot use runtime-const infrastructure from modules"
+#endif
+
 #include <asm/cacheflush.h>
 
 /* Sigh. You can still run arm64 in BE mode */
···
 }
 EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes);
 
+static bool contpte_all_subptes_match_access_flags(pte_t *ptep, pte_t entry)
+{
+	pte_t *cont_ptep = contpte_align_down(ptep);
+	/*
+	 * PFNs differ per sub-PTE. Match only bits consumed by
+	 * __ptep_set_access_flags(): AF, DIRTY and write permission.
+	 */
+	const pteval_t cmp_mask = PTE_RDONLY | PTE_AF | PTE_WRITE | PTE_DIRTY;
+	pteval_t entry_cmp = pte_val(entry) & cmp_mask;
+	int i;
+
+	for (i = 0; i < CONT_PTES; i++) {
+		pteval_t pte_cmp = pte_val(__ptep_get(cont_ptep + i)) & cmp_mask;
+
+		if (pte_cmp != entry_cmp)
+			return false;
+	}
+
+	return true;
+}
+
 int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
 				  unsigned long addr, pte_t *ptep,
 				  pte_t entry, int dirty)
···
 	int i;
 
 	/*
-	 * Gather the access/dirty bits for the contiguous range. If nothing has
-	 * changed, its a noop.
+	 * Check whether all sub-PTEs in the CONT block already match the
+	 * requested access flags/write permission, using raw per-PTE values
+	 * rather than the gathered ptep_get() view.
+	 *
+	 * __ptep_set_access_flags() can update AF, dirty and write
+	 * permission, but only to make the mapping more permissive.
+	 *
+	 * ptep_get() gathers AF/dirty state across the whole CONT block,
+	 * which is correct for a CPU with FEAT_HAFDBS. But page-table
+	 * walkers that evaluate each descriptor individually (e.g. a CPU
+	 * without DBM support, or an SMMU without HTTU, or with HA/HD
+	 * disabled in CD.TCR) can keep faulting on the target sub-PTE.
+	 * Gathering can therefore cause false no-ops when only a sibling
+	 * has been updated:
+	 *  - write faults: target still has PTE_RDONLY (needs PTE_RDONLY cleared)
+	 *  - read faults: target still lacks PTE_AF
+	 *
+	 * Per Arm ARM (DDI 0487) D8.7.1, any sub-PTE in a CONT range may
+	 * become the effective cached translation, so all entries must have
+	 * consistent attributes. Check the full CONT block before returning
+	 * no-op, and when any sub-PTE mismatches, proceed to update the whole
+	 * range.
 	 */
-	orig_pte = pte_mknoncont(ptep_get(ptep));
-	if (pte_val(orig_pte) == pte_val(entry))
+	if (contpte_all_subptes_match_access_flags(ptep, entry))
 		return 0;
+
+	/*
+	 * Use raw target pte (not gathered) for write-bit unfold decision.
+	 */
+	orig_pte = pte_mknoncont(__ptep_get(ptep));
 
 	/*
 	 * We can fix up access/dirty bits without having to unfold the contig
···
 	/* Throw in the debugging sections */
 	STABS_DEBUG
 	DWARF_DEBUG
+	MODINFO
 	ELF_DETAILS
 
 	/* Sections to be discarded -- must be last */
arch/parisc/boot/compressed/vmlinux.lds.S (+1)
···
 	/* Sections to be discarded */
 	DISCARDS
 	/DISCARD/ : {
+		*(.modinfo)
 #ifdef CONFIG_64BIT
 		/* temporary hack until binutils is fixed to not emit these
 		 * for static binaries
arch/parisc/include/asm/pgtable.h (+1 -1)
···
 	printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, (unsigned long)pgd_val(e))
 
 /* This is the size of the initially mapped kernel memory */
-#if defined(CONFIG_64BIT)
+#if defined(CONFIG_64BIT) || defined(CONFIG_KALLSYMS)
 #define KERNEL_INITIAL_ORDER	26	/* 1<<26 = 64MB */
 #else
 #define KERNEL_INITIAL_ORDER	25	/* 1<<25 = 32MB */
arch/parisc/kernel/head.S (+6 -1)
···
 
 	.import __bss_start,data
 	.import __bss_stop,data
+	.import __end,data
 
	load32		PA(__bss_start),%r3
	load32		PA(__bss_stop),%r4
···
	 * everything ... it will get remapped correctly later */
	ldo		0+_PAGE_KERNEL_RWX(%r0),%r3 /* Hardwired 0 phys addr start */
	load32		(1<<(KERNEL_INITIAL_ORDER-PAGE_SHIFT)),%r11 /* PFN count */
-	load32		PA(pg0),%r1
+	load32		PA(_end),%r1
+	SHRREG		%r1,PAGE_SHIFT,%r1	/* %r1 is PFN count for _end symbol */
+	cmpb,<<,n	%r11,%r1,1f
+	copy		%r1,%r11	/* %r1 PFN count smaller than %r11 */
+1:	load32		PA(pg0),%r1
 
 $pgt_fill_loop:
	STREGM		%r3,ASM_PTE_ENTRY_SIZE(%r1)
arch/parisc/kernel/setup.c (+12 -8)
···
 #endif
 	printk(KERN_CONT ".\n");
 
-	/*
-	 * Check if initial kernel page mappings are sufficient.
-	 * panic early if not, else we may access kernel functions
-	 * and variables which can't be reached.
-	 */
-	if (__pa((unsigned long) &_end) >= KERNEL_INITIAL_SIZE)
-		panic("KERNEL_INITIAL_ORDER too small!");
-
 #ifdef CONFIG_64BIT
 	if(parisc_narrow_firmware) {
 		printk(KERN_INFO "Kernel is using PDC in 32-bit mode.\n");
···
 {
 	int ret, cpunum;
 	struct pdc_coproc_cfg coproc_cfg;
+
+	/*
+	 * Check if initial kernel page mapping is sufficient.
+	 * Print warning if not, because we may access kernel functions and
+	 * variables which can't be reached yet through the initial mappings.
+	 * Note that the panic() and printk() functions are not functional
+	 * yet, so we need to use direct iodc() firmware calls instead.
+	 */
+	const char warn1[] = "CRITICAL: Kernel may crash because "
+			"KERNEL_INITIAL_ORDER is too small.\n";
+	if (__pa((unsigned long) &_end) >= KERNEL_INITIAL_SIZE)
+		pdc_iodc_print(warn1, sizeof(warn1) - 1);
 
 	/* check QEMU/SeaBIOS marker in PAGE0 */
 	running_on_qemu = (memcmp(&PAGE0->pad0, "SeaBIOS", 8) == 0);
···
 	dev->error_state = pci_channel_io_normal;
 	dev->dma_mask = 0xffffffff;
 
+	/*
+	 * Assume 64-bit addresses for MSI initially. Will be changed to 32-bit
+	 * if MSI (rather than MSI-X) capability does not have
+	 * PCI_MSI_FLAGS_64BIT. Can also be overridden by driver.
+	 */
+	dev->msi_addr_mask = DMA_BIT_MASK(64);
+
 	/* Early fixups, before probing the BARs */
 	pci_fixup_device(pci_fixup_early, dev);
···
 	dev->error_state = pci_channel_io_normal;
 	dev->dma_mask = 0xffffffff;
 
+	/*
+	 * Assume 64-bit addresses for MSI initially. Will be changed to 32-bit
+	 * if MSI (rather than MSI-X) capability does not have
+	 * PCI_MSI_FLAGS_64BIT. Can also be overridden by driver.
+	 */
+	dev->msi_addr_mask = DMA_BIT_MASK(64);
+
 	if (of_node_name_eq(node, "pci")) {
 		/* a PCI-PCI bridge */
 		dev->hdr_type = PCI_HEADER_TYPE_BRIDGE;
···
 #endif
 .endm
 
+/*
+ * WARNING:
+ *
+ * A bug in the libgcc unwinder as of at least gcc 15.2 (2026) means that
+ * the unwinder fails to recognize the signal frame flag.
+ *
+ * There is a hacky legacy fallback path in libgcc which ends up
+ * getting invoked instead. It happens to work as long as BOTH of the
+ * following conditions are true:
+ *
+ * 1. There is at least one byte before each of the sigreturn
+ *    functions which falls outside any function. This is enforced by
+ *    an explicit nop instruction before the ALIGN.
+ * 2. The code sequences between the entry point up to and including
+ *    the int $0x80 below need to match EXACTLY. Do not change them
+ *    in any way. The exact byte sequences are:
+ *
+ * __kernel_sigreturn:
+ *    0:	58			pop    %eax
+ *    1:	b8 77 00 00 00		mov    $0x77,%eax
+ *    6:	cd 80			int    $0x80
+ *
+ * __kernel_rt_sigreturn:
+ *    0:	b8 ad 00 00 00		mov    $0xad,%eax
+ *    5:	cd 80			int    $0x80
+ *
+ * For details, see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=124050
+ */
 	.text
 	.globl __kernel_sigreturn
 	.type __kernel_sigreturn,@function
+	nop	/* libgcc hack: see comment above */
 	ALIGN
 __kernel_sigreturn:
 	STARTPROC_SIGNAL_FRAME IA32_SIGFRAME_sigcontext
···
 
 	.globl __kernel_rt_sigreturn
 	.type __kernel_rt_sigreturn,@function
+	nop	/* libgcc hack: see comment above */
 	ALIGN
 __kernel_rt_sigreturn:
 	STARTPROC_SIGNAL_FRAME IA32_RT_SIGFRAME_sigcontext
arch/x86/include/asm/efi.h (+1 -1)
···
 extern int __init efi_reuse_config(u64 tables, int nr_tables);
 extern void efi_delete_dummy_variable(void);
 extern void efi_crash_gracefully_on_page_fault(unsigned long phys_addr);
-extern void efi_free_boot_services(void);
+extern void efi_unmap_boot_services(void);
 
 void arch_efi_call_virt_setup(void);
 void arch_efi_call_virt_teardown(void);
···
 extern unsigned int __max_threads_per_core;
 extern unsigned int __num_threads_per_package;
 extern unsigned int __num_cores_per_package;
+extern unsigned int __num_nodes_per_package;
 
 const char *get_topology_cpu_type_name(struct cpuinfo_x86 *c);
 enum x86_topology_cpu_type get_topology_cpu_type(struct cpuinfo_x86 *c);
···
 static inline unsigned int topology_num_threads_per_package(void)
 {
 	return __num_threads_per_package;
+}
+
+static inline unsigned int topology_num_nodes_per_package(void)
+{
+	return __num_nodes_per_package;
 }
 
 #ifdef CONFIG_X86_LOCAL_APIC
arch/x86/kernel/cpu/common.c (+3)
···
 unsigned int __max_logical_packages __ro_after_init = 1;
 EXPORT_SYMBOL(__max_logical_packages);
 
+unsigned int __num_nodes_per_package __ro_after_init = 1;
+EXPORT_SYMBOL(__num_nodes_per_package);
+
 unsigned int __num_cores_per_package __ro_after_init = 1;
 EXPORT_SYMBOL(__num_cores_per_package);
arch/x86/kernel/cpu/resctrl/monitor.c (+5 -31)
···
 	msr_clear_bit(MSR_RMID_SNC_CONFIG, 0);
 }
 
-/* CPU models that support MSR_RMID_SNC_CONFIG */
+/* CPU models that support SNC and MSR_RMID_SNC_CONFIG */
 static const struct x86_cpu_id snc_cpu_ids[] __initconst = {
 	X86_MATCH_VFM(INTEL_ICELAKE_X, 0),
 	X86_MATCH_VFM(INTEL_SAPPHIRERAPIDS_X, 0),
···
 	{}
 };
 
-/*
- * There isn't a simple hardware bit that indicates whether a CPU is running
- * in Sub-NUMA Cluster (SNC) mode. Infer the state by comparing the
- * number of CPUs sharing the L3 cache with CPU0 to the number of CPUs in
- * the same NUMA node as CPU0.
- * It is not possible to accurately determine SNC state if the system is
- * booted with a maxcpus=N parameter. That distorts the ratio of SNC nodes
- * to L3 caches. It will be OK if system is booted with hyperthreading
- * disabled (since this doesn't affect the ratio).
- */
 static __init int snc_get_config(void)
 {
-	struct cacheinfo *ci = get_cpu_cacheinfo_level(0, RESCTRL_L3_CACHE);
-	const cpumask_t *node0_cpumask;
-	int cpus_per_node, cpus_per_l3;
-	int ret;
+	int ret = topology_num_nodes_per_package();
 
-	if (!x86_match_cpu(snc_cpu_ids) || !ci)
+	if (ret > 1 && !x86_match_cpu(snc_cpu_ids)) {
+		pr_warn("CoD enabled system? Resctrl not supported\n");
 		return 1;
-
-	cpus_read_lock();
-	if (num_online_cpus() != num_present_cpus())
-		pr_warn("Some CPUs offline, SNC detection may be incorrect\n");
-	cpus_read_unlock();
-
-	node0_cpumask = cpumask_of_node(cpu_to_node(0));
-
-	cpus_per_node = cpumask_weight(node0_cpumask);
-	cpus_per_l3 = cpumask_weight(&ci->shared_cpu_map);
-
-	if (!cpus_per_node || !cpus_per_l3)
-		return 1;
-
-	ret = cpus_per_l3 / cpus_per_node;
+	}
 
 	/* sanity check: Only valid results are 1, 2, 3, 4, 6 */
 	switch (ret) {
···
 
 	.data
 
-#if defined(CONFIG_XEN_PV) || defined(CONFIG_PVH)
-SYM_DATA_START_PTI_ALIGNED(init_top_pgt)
-	.quad	level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
-	.org	init_top_pgt + L4_PAGE_OFFSET*8, 0
-	.quad	level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
-	.org	init_top_pgt + L4_START_KERNEL*8, 0
-	/* (2^48-(2*1024*1024*1024))/(2^39) = 511 */
-	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
-	.fill	PTI_USER_PGD_FILL,8,0
-SYM_DATA_END(init_top_pgt)
-
-SYM_DATA_START_PAGE_ALIGNED(level3_ident_pgt)
-	.quad	level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
-	.fill	511, 8, 0
-SYM_DATA_END(level3_ident_pgt)
-SYM_DATA_START_PAGE_ALIGNED(level2_ident_pgt)
-	/*
-	 * Since I easily can, map the first 1G.
-	 * Don't set NX because code runs from these pages.
-	 *
-	 * Note: This sets _PAGE_GLOBAL despite whether
-	 * the CPU supports it or it is enabled. But,
-	 * the CPU should ignore the bit.
-	 */
-	PMDS(0, __PAGE_KERNEL_IDENT_LARGE_EXEC, PTRS_PER_PMD)
-SYM_DATA_END(level2_ident_pgt)
-#else
 SYM_DATA_START_PTI_ALIGNED(init_top_pgt)
 	.fill	512,8,0
 	.fill	PTI_USER_PGD_FILL,8,0
 SYM_DATA_END(init_top_pgt)
-#endif
 
 SYM_DATA_START_PAGE_ALIGNED(level4_kernel_pgt)
 	.fill	511,8,0
arch/x86/kernel/smpboot.c (+144 -55)
···
 }
 #endif
 
-/*
- * Set if a package/die has multiple NUMA nodes inside.
- * AMD Magny-Cours, Intel Cluster-on-Die, and Intel
- * Sub-NUMA Clustering have this.
- */
-static bool x86_has_numa_in_package;
-
 static struct sched_domain_topology_level x86_topology[] = {
 	SDTL_INIT(tl_smt_mask, cpu_smt_flags, SMT),
 #ifdef CONFIG_SCHED_CLUSTER
···
 	 * PKG domain since the NUMA domains will auto-magically create the
 	 * right spanning domains based on the SLIT.
 	 */
-	if (x86_has_numa_in_package) {
+	if (topology_num_nodes_per_package() > 1) {
 		unsigned int pkgdom = ARRAY_SIZE(x86_topology) - 2;
 
 		memset(&x86_topology[pkgdom], 0, sizeof(x86_topology[pkgdom]));
···
 }
 
 #ifdef CONFIG_NUMA
-static int sched_avg_remote_distance;
-static int avg_remote_numa_distance(void)
+/*
+ * Test if the on-trace cluster at (N,N) is symmetric.
+ * Uses upper triangle iteration to avoid obvious duplicates.
+ */
+static bool slit_cluster_symmetric(int N)
 {
-	int i, j;
-	int distance, nr_remote, total_distance;
+	int u = topology_num_nodes_per_package();
 
-	if (sched_avg_remote_distance > 0)
-		return sched_avg_remote_distance;
-
-	nr_remote = 0;
-	total_distance = 0;
-	for_each_node_state(i, N_CPU) {
-		for_each_node_state(j, N_CPU) {
-			distance = node_distance(i, j);
-
-			if (distance >= REMOTE_DISTANCE) {
-				nr_remote++;
-				total_distance += distance;
-			}
+	for (int k = 0; k < u; k++) {
+		for (int l = k; l < u; l++) {
+			if (node_distance(N + k, N + l) !=
+			    node_distance(N + l, N + k))
+				return false;
 		}
 	}
-	if (nr_remote)
-		sched_avg_remote_distance = total_distance / nr_remote;
-	else
-		sched_avg_remote_distance = REMOTE_DISTANCE;
 
-	return sched_avg_remote_distance;
+	return true;
+}
+
+/*
+ * Return the package-id of the cluster, or ~0 if indeterminate.
+ * Each node in the on-trace cluster should have the same package-id.
+ */
+static u32 slit_cluster_package(int N)
+{
+	int u = topology_num_nodes_per_package();
+	u32 pkg_id = ~0;
+
+	for (int n = 0; n < u; n++) {
+		const struct cpumask *cpus = cpumask_of_node(N + n);
+		int cpu;
+
+		for_each_cpu(cpu, cpus) {
+			u32 id = topology_logical_package_id(cpu);
+
+			if (pkg_id == ~0)
+				pkg_id = id;
+			if (pkg_id != id)
+				return ~0;
+		}
+	}
+
+	return pkg_id;
+}
+
+/*
+ * Validate the SLIT table is of the form expected for SNC, specifically:
+ *
+ *  - each on-trace cluster should be symmetric,
+ *  - each on-trace cluster should have a unique package-id.
+ *
+ * If you NUMA_EMU on top of SNC, you get to keep the pieces.
+ */
+static bool slit_validate(void)
+{
+	int u = topology_num_nodes_per_package();
+	u32 pkg_id, prev_pkg_id = ~0;
+
+	for (int pkg = 0; pkg < topology_max_packages(); pkg++) {
+		int n = pkg * u;
+
+		/*
+		 * Ensure the on-trace cluster is symmetric and each cluster
+		 * has a different package id.
+		 */
+		if (!slit_cluster_symmetric(n))
+			return false;
+		pkg_id = slit_cluster_package(n);
+		if (pkg_id == ~0)
+			return false;
+		if (pkg && pkg_id == prev_pkg_id)
+			return false;
+
+		prev_pkg_id = pkg_id;
+	}
+
+	return true;
+}
+
+/*
+ * Compute a sanitized SLIT table for SNC; notably SNC-3 can end up with
+ * asymmetric off-trace clusters, reflecting physical asymmetries. However
+ * this leads to 'unfortunate' sched_domain configurations.
+ *
+ * For example dual socket GNR with SNC-3:
+ *
+ * node distances:
+ * node   0   1   2   3   4   5
+ *   0:  10  15  17  21  28  26
+ *   1:  15  10  15  23  26  23
+ *   2:  17  15  10  26  23  21
+ *   3:  21  28  26  10  15  17
+ *   4:  23  26  23  15  10  15
+ *   5:  26  23  21  17  15  10
+ *
+ * Fix things up by averaging out the off-trace clusters; resulting in:
+ *
+ * node   0   1   2   3   4   5
+ *   0:  10  15  17  24  24  24
+ *   1:  15  10  15  24  24  24
+ *   2:  17  15  10  24  24  24
+ *   3:  24  24  24  10  15  17
+ *   4:  24  24  24  15  10  15
+ *   5:  24  24  24  17  15  10
+ */
+static int slit_cluster_distance(int i, int j)
+{
+	static int slit_valid = -1;
+	int u = topology_num_nodes_per_package();
+	long d = 0;
+	int x, y;
+
+	if (slit_valid < 0) {
+		slit_valid = slit_validate();
+		if (!slit_valid)
+			pr_err(FW_BUG "SLIT table doesn't have the expected form for SNC -- fixup disabled!\n");
+		else
+			pr_info("Fixing up SNC SLIT table.\n");
+	}
+
+	/*
+	 * Is this a unit cluster on the trace?
+	 */
+	if ((i / u) == (j / u) || !slit_valid)
+		return node_distance(i, j);
+
+	/*
+	 * Off-trace cluster.
+	 *
+	 * Notably average out the symmetric pair of off-trace clusters to
+	 * ensure the resulting SLIT table is symmetric.
+	 */
+	x = i - (i % u);
+	y = j - (j % u);
+
+	for (i = x; i < x + u; i++) {
+		for (j = y; j < y + u; j++) {
+			d += node_distance(i, j);
+			d += node_distance(j, i);
+		}
+	}
+
+	return d / (2*u*u);
 }
 
 int arch_sched_node_distance(int from, int to)
···
 	switch (boot_cpu_data.x86_vfm) {
 	case INTEL_GRANITERAPIDS_X:
 	case INTEL_ATOM_DARKMONT_X:
-
-		if (!x86_has_numa_in_package || topology_max_packages() == 1 ||
-		    d < REMOTE_DISTANCE)
+		if (topology_max_packages() == 1 ||
+		    topology_num_nodes_per_package() < 3)
 			return d;
 
 		/*
-		 * With SNC enabled, there could be too many levels of remote
-		 * NUMA node distances, creating NUMA domain levels
-		 * including local nodes and partial remote nodes.
-		 *
-		 * Trim finer distance tuning for NUMA nodes in remote package
-		 * for the purpose of building sched domains. Group NUMA nodes
-		 * in the remote package in the same sched group.
-		 * Simplify NUMA domains and avoid extra NUMA levels including
-		 * different remote NUMA nodes and local nodes.
-		 *
-		 * GNR and CWF don't expect systems with more than 2 packages
-		 * and more than 2 hops between packages. Single average remote
-		 * distance won't be appropriate if there are more than 2
-		 * packages as average distance to different remote packages
-		 * could be different.
+		 * Handle SNC-3 asymmetries.
 		 */
-		WARN_ONCE(topology_max_packages() > 2,
-			  "sched: Expect only up to 2 packages for GNR or CWF, "
-			  "but saw %d packages when building sched domains.",
-			  topology_max_packages());
-
-		d = avg_remote_numa_distance();
+		return slit_cluster_distance(from, to);
 	}
 	return d;
 }
···
 		o = &cpu_data(i);
 
 		if (match_pkg(c, o) && !topology_same_node(c, o))
-			x86_has_numa_in_package = true;
+			WARN_ON_ONCE(topology_num_nodes_per_package() == 1);
 
 		if ((i == cpu) || (has_smt && match_smt(c, o)))
 			link_mask(topology_sibling_cpumask, cpu, i);
···
 	}
 
 	efi_check_for_embedded_firmwares();
-	efi_free_boot_services();
+	efi_unmap_boot_services();
 
 	if (!efi_is_mixed())
 		efi_native_runtime_setup();
+52-3
arch/x86/platform/efi/quirks.c
···341341342342 /*343343 * Because the following memblock_reserve() is paired344344- * with memblock_free_late() for this region in344344+ * with free_reserved_area() for this region in345345 * efi_free_boot_services(), we must be extremely346346 * careful not to reserve, and subsequently free,347347 * critical regions of memory (like the kernel image) or···404404 pr_err("Failed to unmap VA mapping for 0x%llx\n", va);405405}406406407407-void __init efi_free_boot_services(void)407407+struct efi_freeable_range {408408+ u64 start;409409+ u64 end;410410+};411411+412412+static struct efi_freeable_range *ranges_to_free;413413+414414+void __init efi_unmap_boot_services(void)408415{409416 struct efi_memory_map_data data = { 0 };410417 efi_memory_desc_t *md;411418 int num_entries = 0;419419+ int idx = 0;420420+ size_t sz;412421 void *new, *new_md;413422414423 /* Keep all regions for /sys/kernel/debug/efi */415424 if (efi_enabled(EFI_DBG))416425 return;426426+427427+ sz = sizeof(*ranges_to_free) * (efi.memmap.nr_map + 1);428428+ ranges_to_free = kzalloc(sz, GFP_KERNEL);429429+ if (!ranges_to_free) {430430+ pr_err("Failed to allocate storage for freeable EFI regions\n");431431+ return;432432+ }417433418434 for_each_efi_memory_desc(md) {419435 unsigned long long start = md->phys_addr;···487471 start = SZ_1M;488472 }489473490490- memblock_free_late(start, size);474474+ /*475475+ * With CONFIG_DEFERRED_STRUCT_PAGE_INIT, parts of the memory476476+ * map are still not initialized and we can't reliably free477477+ * memory here.478478+ * Queue the ranges to free at a later point.479479+ */480480+ ranges_to_free[idx].start = start;481481+ ranges_to_free[idx].end = start + size;482482+ idx++;491483 }492484493485 if (!num_entries)···535511 return;536512 }537513}514514+515515+static int __init efi_free_boot_services(void)516516+{517517+ struct efi_freeable_range *range = ranges_to_free;518518+ unsigned long freed = 0;519519+520520+ if (!ranges_to_free)521521+ return 0;522522+523523+ while 
(range->start) {524524+ void *start = phys_to_virt(range->start);525525+ void *end = phys_to_virt(range->end);526526+527527+ free_reserved_area(start, end, -1, NULL);528528+ freed += (end - start);529529+ range++;530530+ }531531+ kfree(ranges_to_free);532532+533533+ if (freed)534534+ pr_info("Freeing EFI boot services memory: %luK\n", freed / SZ_1K);535535+536536+ return 0;537537+}538538+arch_initcall(efi_free_boot_services);538539539540/*540541 * A number of config table entries get remapped to virtual addresses
+1-6
arch/x86/platform/pvh/enlighten.c
···25252626const unsigned int __initconst pvh_start_info_sz = sizeof(pvh_start_info);27272828-static u64 __init pvh_get_root_pointer(void)2929-{3030- return pvh_start_info.rsdp_paddr;3131-}3232-3328/*3429 * Xen guests are able to obtain the memory map from the hypervisor via the3530 * HYPERVISOR_memory_op hypercall.···9095 pvh_bootparams.hdr.version = (2 << 8) | 12;9196 pvh_bootparams.hdr.type_of_loader = ((xen_guest ? 0x9 : 0xb) << 4) | 0;92979393- x86_init.acpi.get_root_pointer = pvh_get_root_pointer;9898+ pvh_bootparams.acpi_rsdp_addr = pvh_start_info.rsdp_paddr;9499}9510096101/*
+1-1
arch/x86/xen/enlighten_pv.c
···392392393393 /*394394 * Xen PV would need some work to support PCID: CR3 handling as well395395- * as xen_flush_tlb_others() would need updating.395395+ * as xen_flush_tlb_multi() would need updating.396396 */397397 setup_clear_cpu_cap(X86_FEATURE_PCID);398398
+9
arch/x86/xen/mmu_pv.c
···105105static pud_t level3_user_vsyscall[PTRS_PER_PUD] __page_aligned_bss;106106#endif107107108108+static pud_t level3_ident_pgt[PTRS_PER_PUD] __page_aligned_bss;109109+static pmd_t level2_ident_pgt[PTRS_PER_PMD] __page_aligned_bss;110110+108111/*109112 * Protects atomic reservation decrease/increase against concurrent increases.110113 * Also protects non-atomic updates of current_pages and balloon lists.···1779177617801777 /* Zap identity mapping */17811778 init_top_pgt[0] = __pgd(0);17791779+17801780+ init_top_pgt[pgd_index(__PAGE_OFFSET_BASE_L4)].pgd =17811781+ __pa_symbol(level3_ident_pgt) + _KERNPG_TABLE_NOENC;17821782+ init_top_pgt[pgd_index(__START_KERNEL_map)].pgd =17831783+ __pa_symbol(level3_kernel_pgt) + _PAGE_TABLE_NOENC;17841784+ level3_ident_pgt[0].pud = __pa_symbol(level2_ident_pgt) + _KERNPG_TABLE_NOENC;1782178517831786 /* Pre-constructed entries are in pfn, so convert to mfn */17841787 /* L4[273] -> level3_ident_pgt */
+1-2
block/blk-map.c
···398398 if (op_is_write(op))399399 memcpy(page_address(page), p, bytes);400400401401- if (bio_add_page(bio, page, bytes, 0) < bytes)402402- break;401401+ __bio_add_page(bio, page, bytes, 0);403402404403 len -= bytes;405404 p += bytes;
+30-15
block/blk-mq.c
···47934793 }47944794}4795479547964796-static int blk_mq_realloc_tag_set_tags(struct blk_mq_tag_set *set,47974797- int new_nr_hw_queues)47964796+static struct blk_mq_tags **blk_mq_prealloc_tag_set_tags(47974797+ struct blk_mq_tag_set *set,47984798+ int new_nr_hw_queues)47984799{47994800 struct blk_mq_tags **new_tags;48004801 int i;4801480248024803 if (set->nr_hw_queues >= new_nr_hw_queues)48034803- goto done;48044804+ return NULL;4804480548054806 new_tags = kcalloc_node(new_nr_hw_queues, sizeof(struct blk_mq_tags *),48064807 GFP_KERNEL, set->numa_node);48074808 if (!new_tags)48084808- return -ENOMEM;48094809+ return ERR_PTR(-ENOMEM);4809481048104811 if (set->tags)48114812 memcpy(new_tags, set->tags, set->nr_hw_queues *48124813 sizeof(*set->tags));48134813- kfree(set->tags);48144814- set->tags = new_tags;4815481448164815 for (i = set->nr_hw_queues; i < new_nr_hw_queues; i++) {48174817- if (!__blk_mq_alloc_map_and_rqs(set, i)) {48184818- while (--i >= set->nr_hw_queues)48194819- __blk_mq_free_map_and_rqs(set, i);48204820- return -ENOMEM;48164816+ if (blk_mq_is_shared_tags(set->flags)) {48174817+ new_tags[i] = set->shared_tags;48184818+ } else {48194819+ new_tags[i] = blk_mq_alloc_map_and_rqs(set, i,48204820+ set->queue_depth);48214821+ if (!new_tags[i])48224822+ goto out_unwind;48214823 }48224824 cond_resched();48234825 }4824482648254825-done:48264826- set->nr_hw_queues = new_nr_hw_queues;48274827- return 0;48274827+ return new_tags;48284828+out_unwind:48294829+ while (--i >= set->nr_hw_queues) {48304830+ if (!blk_mq_is_shared_tags(set->flags))48314831+ blk_mq_free_map_and_rqs(set, new_tags[i], i);48324832+ }48334833+ kfree(new_tags);48344834+ return ERR_PTR(-ENOMEM);48284835}4829483648304837/*···51205113 unsigned int memflags;51215114 int i;51225115 struct xarray elv_tbl;51165116+ struct blk_mq_tags **new_tags;51235117 bool queues_frozen = false;5124511851255119 lockdep_assert_held(&set->tag_list_lock);···51555147 if (blk_mq_elv_switch_none(q, &elv_tbl))51565148 
goto switch_back;5157514951505150+ new_tags = blk_mq_prealloc_tag_set_tags(set, nr_hw_queues);51515151+ if (IS_ERR(new_tags))51525152+ goto switch_back;51535153+51585154 list_for_each_entry(q, &set->tag_list, tag_set_list)51595155 blk_mq_freeze_queue_nomemsave(q);51605156 queues_frozen = true;51615161- if (blk_mq_realloc_tag_set_tags(set, nr_hw_queues) < 0)51625162- goto switch_back;51575157+ if (new_tags) {51585158+ kfree(set->tags);51595159+ set->tags = new_tags;51605160+ }51615161+ set->nr_hw_queues = nr_hw_queues;5163516251645163fallback:51655164 blk_mq_update_queue_map(set);
+7-1
block/blk-sysfs.c
···7878 /*7979 * Serialize updating nr_requests with concurrent queue_requests_store()8080 * and switching elevator.8181+ *8282+ * Use trylock to avoid circular lock dependency with kernfs active8383+ * reference during concurrent disk deletion:8484+ * update_nr_hwq_lock -> kn->active (via del_gendisk -> kobject_del)8585+ * kn->active -> update_nr_hwq_lock (via this sysfs write path)8186 */8282- down_write(&set->update_nr_hwq_lock);8787+ if (!down_write_trylock(&set->update_nr_hwq_lock))8888+ return -EBUSY;83898490 if (nr == q->nr_requests)8591 goto unlock;
+11-1
block/elevator.c
···807807 elv_iosched_load_module(ctx.name);808808 ctx.type = elevator_find_get(ctx.name);809809810810- down_read(&set->update_nr_hwq_lock);810810+ /*811811+ * Use trylock to avoid circular lock dependency with kernfs active812812+ * reference during concurrent disk deletion:813813+ * update_nr_hwq_lock -> kn->active (via del_gendisk -> kobject_del)814814+ * kn->active -> update_nr_hwq_lock (via this sysfs write path)815815+ */816816+ if (!down_read_trylock(&set->update_nr_hwq_lock)) {817817+ ret = -EBUSY;818818+ goto out;819819+ }811820 if (!blk_queue_no_elv_switch(q)) {812821 ret = elevator_change(q, &ctx);813822 if (!ret)···826817 }827818 up_read(&set->update_nr_hwq_lock);828819820820+out:829821 if (ctx.type)830822 elevator_put(ctx.type);831823 return ret;
-9
crypto/Kconfig
···876876 - blake2b-384877877 - blake2b-512878878879879- Used by the btrfs filesystem.880880-881879 See https://blake2.net for further information.882880883881config CRYPTO_CMAC···963965 10118-3), including HMAC support.964966965967 This is required for IPsec AH (XFRM_AH) and IPsec ESP (XFRM_ESP).966966- Used by the btrfs filesystem, Ceph, NFS, and SMB.967968968969config CRYPTO_SHA512969970 tristate "SHA-384 and SHA-512"···1036103910371040 Extremely fast, working at speeds close to RAM limits.1038104110391039- Used by the btrfs filesystem.10401040-10411042endmenu1042104310431044menu "CRCs (cyclic redundancy checks)"···10531058 on Communications, Vol. 41, No. 6, June 1993, selected for use with10541059 iSCSI.1055106010561056- Used by btrfs, ext4, jbd2, NVMeoF/TCP, and iSCSI.10571057-10581061config CRYPTO_CRC3210591062 tristate "CRC32"10601063 select CRYPTO_HASH10611064 select CRC3210621065 help10631066 CRC32 CRC algorithm (IEEE 802.3)10641064-10651065- Used by RoCEv2 and f2fs.1066106710671068endmenu10681069
···135135 return INVALID_CU_IDX;136136}137137138138+int amdxdna_cmd_set_error(struct amdxdna_gem_obj *abo,139139+ struct amdxdna_sched_job *job, u32 cmd_idx,140140+ enum ert_cmd_state error_state)141141+{142142+ struct amdxdna_client *client = job->hwctx->client;143143+ struct amdxdna_cmd *cmd = abo->mem.kva;144144+ struct amdxdna_cmd_chain *cc = NULL;145145+146146+ cmd->header &= ~AMDXDNA_CMD_STATE;147147+ cmd->header |= FIELD_PREP(AMDXDNA_CMD_STATE, error_state);148148+149149+ if (amdxdna_cmd_get_op(abo) == ERT_CMD_CHAIN) {150150+ cc = amdxdna_cmd_get_payload(abo, NULL);151151+ cc->error_index = (cmd_idx < cc->command_count) ? cmd_idx : 0;152152+ abo = amdxdna_gem_get_obj(client, cc->data[0], AMDXDNA_BO_CMD);153153+ if (!abo)154154+ return -EINVAL;155155+ cmd = abo->mem.kva;156156+ }157157+158158+ memset(cmd->data, 0xff, abo->mem.size - sizeof(*cmd));159159+ if (cc)160160+ amdxdna_gem_put_obj(abo);161161+162162+ return 0;163163+}164164+138165/*139166 * This should be called in close() and remove(). DO NOT call in other syscalls.140167 * This guarantee that when hwctx and resources will be released, if user
···14571457 return 0;1458145814591459 /*14601460- * Skip devices whose ACPI companions don't support power management and14611461- * don't have a wakeup GPE.14621462- */14631463- if (!acpi_device_power_manageable(adev) && !acpi_device_can_wakeup(adev)) {14641464- dev_dbg(dev, "No ACPI power management or wakeup GPE\n");14651465- return 0;14661466- }14671467-14681468- /*14691460 * Only attach the power domain to the first device if the14701461 * companion is shared by multiple. This is to prevent doing power14711462 * management twice.
+2
drivers/ata/libata-core.c
···41894189 ATA_QUIRK_FIRMWARE_WARN },4190419041914191 /* Seagate disks with LPM issues */41924192+ { "ST1000DM010-2EP102", NULL, ATA_QUIRK_NOLPM },41924193 { "ST2000DM008-2FR102", NULL, ATA_QUIRK_NOLPM },4193419441944195 /* drives which fail FPDMA_AA activation (some may freeze afterwards)···42324231 /* Devices that do not need bridging limits applied */42334232 { "MTRON MSP-SATA*", NULL, ATA_QUIRK_BRIDGE_OK },42344233 { "BUFFALO HD-QSU2/R5", NULL, ATA_QUIRK_BRIDGE_OK },42344234+ { "QEMU HARDDISK", "2.5+", ATA_QUIRK_BRIDGE_OK },4235423542364236 /* Devices which aren't very happy with higher link speeds */42374237 { "WD My Book", NULL, ATA_QUIRK_1_5_GBPS },
+2-1
drivers/ata/libata-eh.c
···647647 break;648648 }649649650650- if (qc == ap->deferred_qc) {650650+ if (i < ATA_MAX_QUEUE && qc == ap->deferred_qc) {651651 /*652652 * This is a deferred command that timed out while653653 * waiting for the command queue to drain. Since the qc···659659 */660660 WARN_ON_ONCE(qc->flags & ATA_QCFLAG_ACTIVE);661661 ap->deferred_qc = NULL;662662+ cancel_work(&ap->deferred_qc_work);662663 set_host_byte(scmd, DID_TIME_OUT);663664 scsi_eh_finish_cmd(scmd, &ap->eh_done_q);664665 } else if (i < ATA_MAX_QUEUE) {
···928928 bool async_allowed;929929 int ret;930930931931- ret = driver_match_device_locked(drv, dev);931931+ ret = driver_match_device(drv, dev);932932 if (ret == 0) {933933 /* no match */934934 return 0;
···11051105{11061106 struct psp_device *psp_master = psp_get_master_device();11071107 struct snp_hv_fixed_pages_entry *entry;11081108- struct sev_device *sev;11091108 unsigned int order;11101109 struct page *page;1111111011121112- if (!psp_master || !psp_master->sev_data)11111111+ if (!psp_master)11131112 return NULL;11141114-11151115- sev = psp_master->sev_data;1116111311171114 order = get_order(PMD_SIZE * num_2mb_pages);11181115···11231126 * This API uses SNP_INIT_EX to transition allocated pages to HV_Fixed11241127 * page state, fail if SNP is already initialized.11251128 */11261126- if (sev->snp_initialized)11291129+ if (psp_master->sev_data &&11301130+ ((struct sev_device *)psp_master->sev_data)->snp_initialized)11271131 return NULL;1128113211291133 /* Re-use freed pages that match the request */···11601162 struct psp_device *psp_master = psp_get_master_device();11611163 struct snp_hv_fixed_pages_entry *entry, *nentry;1162116411631163- if (!psp_master || !psp_master->sev_data)11651165+ if (!psp_master)11641166 return;1165116711661168 /*
+1-1
drivers/firmware/efi/mokvar-table.c
···8585 * as an alternative to ordinary EFI variables, due to platform-dependent8686 * limitations. The memory occupied by this table is marked as reserved.8787 *8888- * This routine must be called before efi_free_boot_services() in order8888+ * This routine must be called before efi_unmap_boot_services() in order8989 * to guarantee that it can mark the table as reserved.9090 *9191 * Implicit inputs:
+8-3
drivers/gpu/drm/i915/display/intel_psr.c
···13071307 u16 sink_y_granularity = crtc_state->has_panel_replay ?13081308 connector->dp.panel_replay_caps.su_y_granularity :13091309 connector->dp.psr_caps.su_y_granularity;13101310- u16 sink_w_granularity = crtc_state->has_panel_replay ?13111311- connector->dp.panel_replay_caps.su_w_granularity :13121312- connector->dp.psr_caps.su_w_granularity;13101310+ u16 sink_w_granularity;13111311+13121312+ if (crtc_state->has_panel_replay)13131313+ sink_w_granularity = connector->dp.panel_replay_caps.su_w_granularity ==13141314+ DP_PANEL_REPLAY_FULL_LINE_GRANULARITY ?13151315+ crtc_hdisplay : connector->dp.panel_replay_caps.su_w_granularity;13161316+ else13171317+ sink_w_granularity = connector->dp.psr_caps.su_w_granularity;1313131813141319 /* PSR2 HW only send full lines so we only need to validate the width */13151320 if (crtc_hdisplay % sink_w_granularity)
+3
drivers/gpu/drm/nouveau/nouveau_connector.c
···12301230 u8 size = msg->size;12311231 int ret;1232123212331233+ if (pm_runtime_suspended(nv_connector->base.dev->dev))12341234+ return -EBUSY;12351235+12331236 nv_encoder = find_encoder(&nv_connector->base, DCB_OUTPUT_DP);12341237 if (!nv_encoder)12351238 return -ENODEV;
+5-4
drivers/gpu/drm/panthor/panthor_sched.c
···893893894894out_sync:895895 /* Make sure the CPU caches are invalidated before the seqno is read.896896- * drm_gem_shmem_sync() is a NOP if map_wc=true, so no need to check896896+ * panthor_gem_sync() is a NOP if map_wc=true, so no need to check897897 * it here.898898 */899899- panthor_gem_sync(&bo->base.base, queue->syncwait.offset,899899+ panthor_gem_sync(&bo->base.base,900900+ DRM_PANTHOR_BO_SYNC_CPU_CACHE_FLUSH_AND_INVALIDATE,901901+ queue->syncwait.offset,900902 queue->syncwait.sync64 ?901903 sizeof(struct panthor_syncobj_64b) :902902- sizeof(struct panthor_syncobj_32b),903903- DRM_PANTHOR_BO_SYNC_CPU_CACHE_FLUSH_AND_INVALIDATE);904904+ sizeof(struct panthor_syncobj_32b));904905905906 return queue->syncwait.kmap + queue->syncwait.offset;906907
+15-1
drivers/gpu/drm/renesas/rz-du/rzg2l_mipi_dsi.c
···11221122 struct mipi_dsi_device *device)11231123{11241124 struct rzg2l_mipi_dsi *dsi = host_to_rzg2l_mipi_dsi(host);11251125+ int bpp;11251126 int ret;1126112711271128 if (device->lanes > dsi->num_data_lanes) {···11321131 return -EINVAL;11331132 }1134113311351135- switch (mipi_dsi_pixel_format_to_bpp(device->format)) {11341134+ bpp = mipi_dsi_pixel_format_to_bpp(device->format);11351135+ switch (bpp) {11361136 case 24:11371137 break;11381138 case 18:···11631161 }1164116211651163 drm_bridge_add(&dsi->bridge);11641164+11651165+ /*11661166+ * Report the required division ratio setting for the MIPI clock dividers.11671167+ *11681168+ * vclk * bpp = hsclk * 8 * num_lanes11691169+ *11701170+ * vclk * DSI_AB_divider = hsclk * 1611711171+ *11721172+ * which simplifies to...11731173+ * DSI_AB_divider = bpp * 2 / num_lanes11741174+ */11751175+ rzg2l_cpg_dsi_div_set_divider(bpp * 2 / dsi->lanes, PLL5_TARGET_DSI);1166117611671177 return 0;11681178}
+1
drivers/gpu/drm/scheduler/sched_main.c
···361361/**362362 * drm_sched_job_done - complete a job363363 * @s_job: pointer to the job which is done364364+ * @result: 0 on success, -ERRNO on error364365 *365366 * Finish the job's fence and resubmit the work items.366367 */
+2-4
drivers/gpu/drm/solomon/ssd130x.c
···737737 unsigned int height = drm_rect_height(rect);738738 unsigned int line_length = DIV_ROUND_UP(width, 8);739739 unsigned int page_height = SSD130X_PAGE_HEIGHT;740740+ u8 page_start = ssd130x->page_offset + y / page_height;740741 unsigned int pages = DIV_ROUND_UP(height, page_height);741742 struct drm_device *drm = &ssd130x->drm;742743 u32 array_idx = 0;···775774 */776775777776 if (!ssd130x->page_address_mode) {778778- u8 page_start;779779-780777 /* Set address range for horizontal addressing mode */781778 ret = ssd130x_set_col_range(ssd130x, ssd130x->col_offset + x, width);782779 if (ret < 0)783780 return ret;784781785785- page_start = ssd130x->page_offset + y / page_height;786782 ret = ssd130x_set_page_range(ssd130x, page_start, pages);787783 if (ret < 0)788784 return ret;···811813 */812814 if (ssd130x->page_address_mode) {813815 ret = ssd130x_set_page_pos(ssd130x,814814- ssd130x->page_offset + i,816816+ page_start + i,815817 ssd130x->col_offset + x);816818 if (ret < 0)817819 return ret;
+2-2
drivers/gpu/drm/ttm/tests/ttm_bo_test.c
···222222 KUNIT_FAIL(test, "Couldn't create ttm bo reserve task\n");223223224224 /* Take a lock so the threaded reserve has to wait */225225- mutex_lock(&bo->base.resv->lock.base);225225+ dma_resv_lock(bo->base.resv, NULL);226226227227 wake_up_process(task);228228 msleep(20);229229 err = kthread_stop(task);230230231231- mutex_unlock(&bo->base.resv->lock.base);231231+ dma_resv_unlock(bo->base.resv);232232233233 KUNIT_ASSERT_EQ(test, err, -ERESTARTSYS);234234}
+5-6
drivers/gpu/drm/ttm/ttm_bo.c
···11071107static s6411081108ttm_bo_swapout_cb(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo)11091109{11101110- struct ttm_resource *res = bo->resource;11111111- struct ttm_place place = { .mem_type = res->mem_type };11101110+ struct ttm_place place = { .mem_type = bo->resource->mem_type };11121111 struct ttm_bo_swapout_walk *swapout_walk =11131112 container_of(walk, typeof(*swapout_walk), walk);11141113 struct ttm_operation_ctx *ctx = walk->arg.ctx;···11471148 /*11481149 * Move to system cached11491150 */11501150- if (res->mem_type != TTM_PL_SYSTEM) {11511151+ if (bo->resource->mem_type != TTM_PL_SYSTEM) {11511152 struct ttm_resource *evict_mem;11521153 struct ttm_place hop;11531154···1179118011801181 if (ttm_tt_is_populated(tt)) {11811182 spin_lock(&bdev->lru_lock);11821182- ttm_resource_del_bulk_move(res, bo);11831183+ ttm_resource_del_bulk_move(bo->resource, bo);11831184 spin_unlock(&bdev->lru_lock);1184118511851186 ret = ttm_tt_swapout(bdev, tt, swapout_walk->gfp_flags);1186118711871188 spin_lock(&bdev->lru_lock);11881189 if (ret)11891189- ttm_resource_add_bulk_move(res, bo);11901190- ttm_resource_move_to_lru_tail(res);11901190+ ttm_resource_add_bulk_move(bo->resource, bo);11911191+ ttm_resource_move_to_lru_tail(bo->resource);11911192 spin_unlock(&bdev->lru_lock);11921193 }11931194
···14521452 hid_warn(pidff->hid, "unknown ramp effect layout\n");1453145314541454 if (PIDFF_FIND_FIELDS(set_condition, PID_SET_CONDITION, 1)) {14551455- if (test_and_clear_bit(FF_SPRING, dev->ffbit) ||14561456- test_and_clear_bit(FF_DAMPER, dev->ffbit) ||14571457- test_and_clear_bit(FF_FRICTION, dev->ffbit) ||14581458- test_and_clear_bit(FF_INERTIA, dev->ffbit))14551455+ bool test = false;14561456+14571457+ test |= test_and_clear_bit(FF_SPRING, dev->ffbit);14581458+ test |= test_and_clear_bit(FF_DAMPER, dev->ffbit);14591459+ test |= test_and_clear_bit(FF_FRICTION, dev->ffbit);14601460+ test |= test_and_clear_bit(FF_INERTIA, dev->ffbit);14611461+ if (test)14591462 hid_warn(pidff->hid, "unknown condition effect layout\n");14601463 }14611464
-10
drivers/hwmon/Kconfig
···19271927 This driver can also be built as a module. If so, the module19281928 will be called raspberrypi-hwmon.1929192919301930-config SENSORS_SA67MCU19311931- tristate "Kontron sa67mcu hardware monitoring driver"19321932- depends on MFD_SL28CPLD || COMPILE_TEST19331933- help19341934- If you say yes here you get support for the voltage and temperature19351935- monitor of the sa67 board management controller.19361936-19371937- This driver can also be built as a module. If so, the module19381938- will be called sa67mcu-hwmon.19391939-19401930config SENSORS_SL28CPLD19411931 tristate "Kontron sl28cpld hardware monitoring driver"19421932 depends on MFD_SL28CPLD || COMPILE_TEST
···324324 }325325}326326327327-bool bond_xdp_check(struct bonding *bond, int mode)327327+bool __bond_xdp_check(int mode, int xmit_policy)328328{329329 switch (mode) {330330 case BOND_MODE_ROUNDROBIN:···335335 /* vlan+srcmac is not supported with XDP as in most cases the 802.1q336336 * payload is not in the packet due to hardware offload.337337 */338338- if (bond->params.xmit_policy != BOND_XMIT_POLICY_VLAN_SRCMAC)338338+ if (xmit_policy != BOND_XMIT_POLICY_VLAN_SRCMAC)339339 return true;340340 fallthrough;341341 default:342342 return false;343343 }344344+}345345+346346+bool bond_xdp_check(struct bonding *bond, int mode)347347+{348348+ return __bond_xdp_check(mode, bond->params.xmit_policy);344349}345350346351/*---------------------------------- VLAN -----------------------------------*/
+2
drivers/net/bonding/bond_options.c
···15751575static int bond_option_xmit_hash_policy_set(struct bonding *bond,15761576 const struct bond_opt_value *newval)15771577{15781578+ if (bond->xdp_prog && !__bond_xdp_check(BOND_MODE(bond), newval->value))15791579+ return -EOPNOTSUPP;15781580 netdev_dbg(bond->dev, "Setting xmit hash policy to %s (%llu)\n",15791581 newval->string, newval->value);15801582 bond->params.xmit_policy = newval->value;
···12141214{12151215 struct mcp251x_priv *priv = netdev_priv(net);12161216 struct spi_device *spi = priv->spi;12171217+ bool release_irq = false;12171218 unsigned long flags = 0;12181219 int ret;12191220···12581257 return 0;1259125812601259out_free_irq:12611261- free_irq(spi->irq, priv);12601260+ /* The IRQ handler might be running, and if so it will be waiting12611261+ * for the lock. But free_irq() must wait for the handler to finish,12621262+ * so calling it here would deadlock.12631263+ *12641264+ * Setting priv->force_quit will let the handler exit right away12651265+ * without any access to the hardware. This makes it safe to call12661266+ * free_irq() after the lock is released.12671267+ */12681268+ priv->force_quit = 1;12691269+ release_irq = true;12701270+12621271 mcp251x_hw_sleep(spi);12631272out_close:12641273 mcp251x_power_enable(priv->transceiver, 0);12651274 close_candev(net);12661275 mutex_unlock(&priv->mcp_lock);12761276+ if (release_irq)12771277+ free_irq(spi->irq, priv);12671278 return ret;12681279}12691280
···272272273273 struct usb_anchor rx_submitted;274274275275+ unsigned int rx_pipe;276276+ unsigned int tx_pipe;277277+275278 int net_count;276279 u32 version;277280 int rxinitdone;···540537 }541538542539resubmit_urb:543543- usb_fill_bulk_urb(urb, dev->udev, usb_rcvbulkpipe(dev->udev, 1),540540+ usb_fill_bulk_urb(urb, dev->udev, dev->rx_pipe,544541 urb->transfer_buffer, ESD_USB_RX_BUFFER_SIZE,545542 esd_usb_read_bulk_callback, dev);546543···629626{630627 int actual_length;631628632632- return usb_bulk_msg(dev->udev,633633- usb_sndbulkpipe(dev->udev, 2),634634- msg,629629+ return usb_bulk_msg(dev->udev, dev->tx_pipe, msg,635630 msg->hdr.len * sizeof(u32), /* convert to # of bytes */636631 &actual_length,637632 1000);···640639{641640 int actual_length;642641643643- return usb_bulk_msg(dev->udev,644644- usb_rcvbulkpipe(dev->udev, 1),645645- msg,646646- sizeof(*msg),647647- &actual_length,648648- 1000);642642+ return usb_bulk_msg(dev->udev, dev->rx_pipe, msg,643643+ sizeof(*msg), &actual_length, 1000);649644}650645651646static int esd_usb_setup_rx_urbs(struct esd_usb *dev)···674677675678 urb->transfer_dma = buf_dma;676679677677- usb_fill_bulk_urb(urb, dev->udev,678678- usb_rcvbulkpipe(dev->udev, 1),680680+ usb_fill_bulk_urb(urb, dev->udev, dev->rx_pipe,679681 buf, ESD_USB_RX_BUFFER_SIZE,680682 esd_usb_read_bulk_callback, dev);681683 urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;···899903 /* hnd must not be 0 - MSB is stripped in txdone handling */900904 msg->tx.hnd = BIT(31) | i; /* returned in TX done message */901905902902- usb_fill_bulk_urb(urb, dev->udev, usb_sndbulkpipe(dev->udev, 2), buf,906906+ usb_fill_bulk_urb(urb, dev->udev, dev->tx_pipe, buf,903907 msg->hdr.len * sizeof(u32), /* convert to # of bytes */904908 esd_usb_write_bulk_callback, context);905909···12941298static int esd_usb_probe(struct usb_interface *intf,12951299 const struct usb_device_id *id)12961300{13011301+ struct usb_endpoint_descriptor *ep_in, *ep_out;12971302 struct esd_usb *dev;12981303 
union esd_usb_msg *msg;12991304 int i, err;13051305+13061306+ err = usb_find_common_endpoints(intf->cur_altsetting, &ep_in, &ep_out,13071307+ NULL, NULL);13081308+ if (err)13091309+ return err;1300131013011311 dev = kzalloc_obj(*dev);13021312 if (!dev) {···13111309 }1312131013131311 dev->udev = interface_to_usbdev(intf);13121312+ dev->rx_pipe = usb_rcvbulkpipe(dev->udev, ep_in->bEndpointAddress);13131313+ dev->tx_pipe = usb_sndbulkpipe(dev->udev, ep_out->bEndpointAddress);1314131413151315 init_usb_anchor(&dev->rx_submitted);13161316
+7-1
drivers/net/can/usb/etas_es58x/es58x_core.c
···14611461 }1462146214631463 resubmit_urb:14641464+ usb_anchor_urb(urb, &es58x_dev->rx_urbs);14641465 ret = usb_submit_urb(urb, GFP_ATOMIC);14661466+ if (!ret)14671467+ return;14681468+14691469+ usb_unanchor_urb(urb);14701470+14651471 if (ret == -ENODEV) {14661472 for (i = 0; i < es58x_dev->num_can_ch; i++)14671473 if (es58x_dev->netdev[i])14681474 netif_device_detach(es58x_dev->netdev[i]);14691469- } else if (ret)14751475+ } else14701476 dev_err_ratelimited(dev,14711477 "Failed resubmitting read bulk urb: %pe\n",14721478 ERR_PTR(ret));
+40-5
drivers/net/can/usb/f81604.c
···413413{414414 struct f81604_can_frame *frame = urb->transfer_buffer;415415 struct net_device *netdev = urb->context;416416+ struct f81604_port_priv *priv = netdev_priv(netdev);416417 int ret;417418418419 if (!netif_device_present(netdev))···446445 f81604_process_rx_packet(netdev, frame);447446448447resubmit_urb:448448+ usb_anchor_urb(urb, &priv->urbs_anchor);449449 ret = usb_submit_urb(urb, GFP_ATOMIC);450450+ if (!ret)451451+ return;452452+ usb_unanchor_urb(urb);453453+450454 if (ret == -ENODEV)451455 netif_device_detach(netdev);452452- else if (ret)456456+ else453457 netdev_err(netdev,454458 "%s: failed to resubmit read bulk urb: %pe\n",455459 __func__, ERR_PTR(ret));···626620 netdev_info(netdev, "%s: Int URB aborted: %pe\n", __func__,627621 ERR_PTR(urb->status));628622623623+ if (urb->actual_length < sizeof(*data)) {624624+ netdev_warn(netdev, "%s: short int URB: %u < %zu\n",625625+ __func__, urb->actual_length, sizeof(*data));626626+ goto resubmit_urb;627627+ }628628+629629 switch (urb->status) {630630 case 0: /* success */631631 break;···658646 f81604_handle_tx(priv, data);659647660648resubmit_urb:649649+ usb_anchor_urb(urb, &priv->urbs_anchor);661650 ret = usb_submit_urb(urb, GFP_ATOMIC);651651+ if (!ret)652652+ return;653653+ usb_unanchor_urb(urb);654654+662655 if (ret == -ENODEV)663656 netif_device_detach(netdev);664664- else if (ret)657657+ else665658 netdev_err(netdev, "%s: failed to resubmit int urb: %pe\n",666659 __func__, ERR_PTR(ret));667660}···891874 if (!netif_device_present(netdev))892875 return;893876894894- if (urb->status)895895- netdev_info(netdev, "%s: Tx URB error: %pe\n", __func__,896896- ERR_PTR(urb->status));877877+ if (!urb->status)878878+ return;879879+880880+ switch (urb->status) {881881+ case -ENOENT:882882+ case -ECONNRESET:883883+ case -ESHUTDOWN:884884+ return;885885+ default:886886+ break;887887+ }888888+889889+ if (net_ratelimit())890890+ netdev_err(netdev, "%s: Tx URB error: %pe\n", __func__,891891+ 
ERR_PTR(urb->status));892892+893893+ can_free_echo_skb(netdev, 0, NULL);894894+ netdev->stats.tx_dropped++;895895+ netdev->stats.tx_errors++;896896+897897+ netif_wake_queue(netdev);897898}898899899900static void f81604_clear_reg_work(struct work_struct *work)
+16-6
drivers/net/can/usb/gs_usb.c
···772772 }773773}774774775775-static int gs_usb_set_bittiming(struct net_device *netdev)775775+static int gs_usb_set_bittiming(struct gs_can *dev)776776{777777- struct gs_can *dev = netdev_priv(netdev);778777 struct can_bittiming *bt = &dev->can.bittiming;779778 struct gs_device_bittiming dbt = {780779 .prop_seg = cpu_to_le32(bt->prop_seg),···790791 GFP_KERNEL);791792}792793793793-static int gs_usb_set_data_bittiming(struct net_device *netdev)794794+static int gs_usb_set_data_bittiming(struct gs_can *dev)794795{795795- struct gs_can *dev = netdev_priv(netdev);796796 struct can_bittiming *bt = &dev->can.fd.data_bittiming;797797 struct gs_device_bittiming dbt = {798798 .prop_seg = cpu_to_le32(bt->prop_seg),···10541056 /* if hardware supports timestamps, enable it */10551057 if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP)10561058 flags |= GS_CAN_MODE_HW_TIMESTAMP;10591059+10601060+ rc = gs_usb_set_bittiming(dev);10611061+ if (rc) {10621062+ netdev_err(netdev, "failed to set bittiming: %pe\n", ERR_PTR(rc));10631063+ goto out_usb_kill_anchored_urbs;10641064+ }10651065+10661066+ if (ctrlmode & CAN_CTRLMODE_FD) {10671067+ rc = gs_usb_set_data_bittiming(dev);10681068+ if (rc) {10691069+ netdev_err(netdev, "failed to set data bittiming: %pe\n", ERR_PTR(rc));10701070+ goto out_usb_kill_anchored_urbs;10711071+ }10721072+ }1057107310581074 /* finally start device */10591075 dev->can.state = CAN_STATE_ERROR_ACTIVE;···13821370 dev->can.state = CAN_STATE_STOPPED;13831371 dev->can.clock.freq = le32_to_cpu(bt_const.fclk_can);13841372 dev->can.bittiming_const = &dev->bt_const;13851385- dev->can.do_set_bittiming = gs_usb_set_bittiming;1386137313871374 dev->can.ctrlmode_supported = CAN_CTRLMODE_CC_LEN8_DLC;13881375···14051394 * GS_CAN_FEATURE_BT_CONST_EXT is set.14061395 */14071396 dev->can.fd.data_bittiming_const = &dev->bt_const;14081408- dev->can.fd.do_set_data_bittiming = gs_usb_set_data_bittiming;14091397 }1410139814111399 if (feature & GS_CAN_FEATURE_TERMINATION) {
+1-1
drivers/net/can/usb/ucan.c
···748748 len = le16_to_cpu(m->len);749749750750 /* check sanity (length of content) */751751- if (urb->actual_length - pos < len) {751751+ if ((len == 0) || (urb->actual_length - pos < len)) {752752 netdev_warn(up->netdev,753753 "invalid message (short; no data; l:%d)\n",754754 urb->actual_length);
+1-1
drivers/net/dsa/realtek/rtl8365mb.c
···769769out:770770 rtl83xx_unlock(priv);771771772772- return 0;772772+ return ret;773773}774774775775static int rtl8365mb_phy_read(struct realtek_priv *priv, int phy, int regnum)
···18161816 case ice_aqc_opc_lldp_stop:18171817 case ice_aqc_opc_lldp_start:18181818 case ice_aqc_opc_lldp_filter_ctrl:18191819+ case ice_aqc_opc_sff_eeprom:18191820 return true;18201821 }18211822···18421841{18431842 struct libie_aq_desc desc_cpy;18441843 bool is_cmd_for_retry;18441844+ u8 *buf_cpy = NULL;18451845 u8 idx = 0;18461846 u16 opcode;18471847 int status;···18521850 memset(&desc_cpy, 0, sizeof(desc_cpy));1853185118541852 if (is_cmd_for_retry) {18551855- /* All retryable cmds are direct, without buf. */18561856- WARN_ON(buf);18531853+ if (buf) {18541854+ buf_cpy = kmemdup(buf, buf_size, GFP_KERNEL);18551855+ if (!buf_cpy)18561856+ return -ENOMEM;18571857+ }1857185818581859 memcpy(&desc_cpy, desc, sizeof(desc_cpy));18591860 }···18681863 hw->adminq.sq_last_status != LIBIE_AQ_RC_EBUSY)18691864 break;1870186518661866+ if (buf_cpy)18671867+ memcpy(buf, buf_cpy, buf_size);18711868 memcpy(desc, &desc_cpy, sizeof(desc_cpy));18721872-18731869 msleep(ICE_SQ_SEND_DELAY_TIME_MS);1874187018751871 } while (++idx < ICE_SQ_SEND_MAX_EXECUTE);1876187218731873+ kfree(buf_cpy);18771874 return status;18781875}18791876···63986391 struct ice_aqc_lldp_filter_ctrl *cmd;63996392 struct libie_aq_desc desc;6400639364016401- if (vsi->type != ICE_VSI_PF || !ice_fw_supports_lldp_fltr_ctrl(hw))63946394+ if (!ice_fw_supports_lldp_fltr_ctrl(hw))64026395 return -EOPNOTSUPP;6403639664046397 cmd = libie_aq_raw(&desc);
+28-23
drivers/net/ethernet/intel/ice/ice_ethtool.c
···12891289 test_vsi->netdev = netdev;12901290 tx_ring = test_vsi->tx_rings[0];12911291 rx_ring = test_vsi->rx_rings[0];12921292+ /* Dummy q_vector and napi. Fill the minimum required for12931293+ * ice_rxq_pp_create().12941294+ */12951295+ rx_ring->q_vector->napi.dev = netdev;1292129612931297 if (ice_lbtest_prepare_rings(test_vsi)) {12941298 ret = 2;···33323328 rx_rings = kzalloc_objs(*rx_rings, vsi->num_rxq);33333329 if (!rx_rings) {33343330 err = -ENOMEM;33353335- goto done;33313331+ goto free_xdp;33363332 }3337333333383334 ice_for_each_rxq(vsi, i) {···33423338 rx_rings[i].cached_phctime = pf->ptp.cached_phc_time;33433339 rx_rings[i].desc = NULL;33443340 rx_rings[i].xdp_buf = NULL;33413341+ rx_rings[i].xdp_rxq = (struct xdp_rxq_info){ };3345334233463343 /* this is to allow wr32 to have something to write to33473344 * during early allocation of Rx buffers···33603355 }33613356 kfree(rx_rings);33623357 err = -ENOMEM;33633363- goto free_tx;33583358+ goto free_xdp;33643359 }33653360 }33663361···34113406 ice_up(vsi);34123407 }34133408 goto done;34093409+34103410+free_xdp:34113411+ if (xdp_rings) {34123412+ ice_for_each_xdp_txq(vsi, i)34133413+ ice_free_tx_ring(&xdp_rings[i]);34143414+ kfree(xdp_rings);34153415+ }3414341634153417free_tx:34163418 /* error cleanup if the Rx allocations failed after getting Tx */···45174505 u8 addr = ICE_I2C_EEPROM_DEV_ADDR;45184506 struct ice_hw *hw = &pf->hw;45194507 bool is_sfp = false;45204520- unsigned int i, j;45084508+ unsigned int i;45214509 u16 offset = 0;45224510 u8 page = 0;45234511 int status;···45594547 if (page == 0 || !(data[0x2] & 0x4)) {45604548 u32 copy_len;4561454945624562- /* If i2c bus is busy due to slow page change or45634563- * link management access, call can fail. 
This is normal.45644564- * So we retry this a few times.45654565- */45664566- for (j = 0; j < 4; j++) {45674567- status = ice_aq_sff_eeprom(hw, 0, addr, offset, page,45684568- !is_sfp, value,45694569- SFF_READ_BLOCK_SIZE,45704570- 0, NULL);45714571- netdev_dbg(netdev, "SFF %02X %02X %02X %X = %02X%02X%02X%02X.%02X%02X%02X%02X (%X)\n",45724572- addr, offset, page, is_sfp,45734573- value[0], value[1], value[2], value[3],45744574- value[4], value[5], value[6], value[7],45754575- status);45764576- if (status) {45774577- usleep_range(1500, 2500);45784578- memset(value, 0, SFF_READ_BLOCK_SIZE);45794579- continue;45804580- }45814581- break;45504550+ status = ice_aq_sff_eeprom(hw, 0, addr, offset, page,45514551+ !is_sfp, value,45524552+ SFF_READ_BLOCK_SIZE,45534553+ 0, NULL);45544554+ netdev_dbg(netdev, "SFF %02X %02X %02X %X = %02X%02X%02X%02X.%02X%02X%02X%02X (%pe)\n",45554555+ addr, offset, page, is_sfp,45564556+ value[0], value[1], value[2], value[3],45574557+ value[4], value[5], value[6], value[7],45584558+ ERR_PTR(status));45594559+ if (status) {45604560+ netdev_err(netdev, "%s: error reading module EEPROM: status %pe\n",45614561+ __func__, ERR_PTR(status));45624562+ return status;45824563 }4583456445844565 /* Make sure we have enough room for the new block */
+34-10
drivers/net/ethernet/intel/ice/ice_idc.c
···361361}362362363363/**364364+ * ice_rdma_finalize_setup - Complete RDMA setup after VSI is ready365365+ * @pf: ptr to ice_pf366366+ *367367+ * Sets VSI-dependent information and plugs aux device.368368+ * Must be called after ice_init_rdma(), ice_vsi_rebuild(), and369369+ * ice_dcb_rebuild() complete.370370+ */371371+void ice_rdma_finalize_setup(struct ice_pf *pf)372372+{373373+ struct device *dev = ice_pf_to_dev(pf);374374+ struct iidc_rdma_priv_dev_info *privd;375375+ int ret;376376+377377+ if (!ice_is_rdma_ena(pf) || !pf->cdev_info)378378+ return;379379+380380+ privd = pf->cdev_info->iidc_priv;381381+ if (!privd || !pf->vsi || !pf->vsi[0] || !pf->vsi[0]->netdev)382382+ return;383383+384384+ /* Assign VSI info now that VSI is valid */385385+ privd->netdev = pf->vsi[0]->netdev;386386+ privd->vport_id = pf->vsi[0]->vsi_num;387387+388388+ /* Update QoS info after DCB has been rebuilt */389389+ ice_setup_dcb_qos_info(pf, &privd->qos_info);390390+391391+ ret = ice_plug_aux_dev(pf);392392+ if (ret)393393+ dev_warn(dev, "Failed to plug RDMA aux device: %d\n", ret);394394+}395395+396396+/**364397 * ice_init_rdma - initializes PF for RDMA use365398 * @pf: ptr to ice_pf366399 */···431398 }432399433400 cdev->iidc_priv = privd;434434- privd->netdev = pf->vsi[0]->netdev;435401436402 privd->hw_addr = (u8 __iomem *)pf->hw.hw_addr;437403 cdev->pdev = pf->pdev;438438- privd->vport_id = pf->vsi[0]->vsi_num;439404440405 pf->cdev_info->rdma_protocol |= IIDC_RDMA_PROTOCOL_ROCEV2;441441- ice_setup_dcb_qos_info(pf, &privd->qos_info);442442- ret = ice_plug_aux_dev(pf);443443- if (ret)444444- goto err_plug_aux_dev;406406+445407 return 0;446408447447-err_plug_aux_dev:448448- pf->cdev_info->adev = NULL;449449- xa_erase(&ice_aux_id, pf->aux_idx);450409err_alloc_xa:451410 kfree(privd);452411err_privd_alloc:···457432 if (!ice_is_rdma_ena(pf))458433 return;459434460460- ice_unplug_aux_dev(pf);461435 xa_erase(&ice_aux_id, pf->aux_idx);462436 kfree(pf->cdev_info->iidc_priv);463437 
kfree(pf->cdev_info);
+10-5
drivers/net/ethernet/intel/ice/ice_lib.c
···107107 if (!vsi->rxq_map)108108 goto err_rxq_map;109109110110- /* There is no need to allocate q_vectors for a loopback VSI. */111111- if (vsi->type == ICE_VSI_LB)112112- return 0;113113-114110 /* allocate memory for q_vector pointers */115111 vsi->q_vectors = devm_kcalloc(dev, vsi->num_q_vectors,116112 sizeof(*vsi->q_vectors), GFP_KERNEL);···237241 case ICE_VSI_LB:238242 vsi->alloc_txq = 1;239243 vsi->alloc_rxq = 1;244244+ /* A dummy q_vector, no actual IRQ. */245245+ vsi->num_q_vectors = 1;240246 break;241247 default:242248 dev_warn(ice_pf_to_dev(pf), "Unknown VSI type %d\n", vsi_type);···24242426 }24252427 break;24262428 case ICE_VSI_LB:24272427- ret = ice_vsi_alloc_rings(vsi);24292429+ ret = ice_vsi_alloc_q_vectors(vsi);24282430 if (ret)24292431 goto unroll_vsi_init;24322432+24332433+ ret = ice_vsi_alloc_rings(vsi);24342434+ if (ret)24352435+ goto unroll_alloc_q_vector;2430243624312437 ret = ice_vsi_alloc_ring_stats(vsi);24322438 if (ret)24332439 goto unroll_vector_base;24402440+24412441+ /* Simply map the dummy q_vector to the only rx_ring */24422442+ vsi->rx_rings[0]->q_vector = vsi->q_vectors[0];2434244324352444 break;24362445 default:
+6-1
drivers/net/ethernet/intel/ice/ice_main.c
···51385138 if (err)51395139 goto err_init_rdma;5140514051415141+ /* Finalize RDMA: VSI already created, assign info and plug device */51425142+ ice_rdma_finalize_setup(pf);51435143+51415144 ice_service_task_restart(pf);5142514551435146 clear_bit(ICE_DOWN, pf->state);···5172516951735170 devl_assert_locked(priv_to_devlink(pf));5174517151725172+ ice_unplug_aux_dev(pf);51755173 ice_deinit_rdma(pf);51765174 ice_deinit_features(pf);51775175 ice_tc_indir_block_unregister(vsi);···55995595 */56005596 disabled = ice_service_task_stop(pf);5601559755985598+ ice_unplug_aux_dev(pf);56025599 ice_deinit_rdma(pf);5603560056045601 /* Already suspended?, then there is nothing to do */···7864785978657860 ice_health_clear(pf);7866786178677867- ice_plug_aux_dev(pf);78627862+ ice_rdma_finalize_setup(pf);78687863 if (ice_is_feature_supported(pf, ICE_F_SRIOV_LAG))78697864 ice_lag_rebuild(pf);78707865
+3-1
drivers/net/ethernet/intel/ice/ice_txrx.c
···560560 i = 0;561561 }562562563563- if (rx_ring->vsi->type == ICE_VSI_PF &&563563+ if ((rx_ring->vsi->type == ICE_VSI_PF ||564564+ rx_ring->vsi->type == ICE_VSI_SF ||565565+ rx_ring->vsi->type == ICE_VSI_LB) &&564566 xdp_rxq_info_is_reg(&rx_ring->xdp_rxq)) {565567 xdp_rxq_info_detach_mem_model(&rx_ring->xdp_rxq);566568 xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
···524524 return nb_pkts < budget;525525}526526527527+static u32 igb_sw_irq_prep(struct igb_q_vector *q_vector)528528+{529529+ u32 eics = 0;530530+531531+ if (!napi_if_scheduled_mark_missed(&q_vector->napi))532532+ eics = q_vector->eims_value;533533+534534+ return eics;535535+}536536+527537int igb_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)528538{529539 struct igb_adapter *adapter = netdev_priv(dev);···552542553543 ring = adapter->tx_ring[qid];554544555555- if (test_bit(IGB_RING_FLAG_TX_DISABLED, &ring->flags))556556- return -ENETDOWN;557557-558545 if (!READ_ONCE(ring->xsk_pool))559546 return -EINVAL;560547561561- if (!napi_if_scheduled_mark_missed(&ring->q_vector->napi)) {548548+ if (flags & XDP_WAKEUP_TX) {549549+ if (test_bit(IGB_RING_FLAG_TX_DISABLED, &ring->flags))550550+ return -ENETDOWN;551551+552552+ eics |= igb_sw_irq_prep(ring->q_vector);553553+ }554554+555555+ if (flags & XDP_WAKEUP_RX) {556556+ /* If IGB_FLAG_QUEUE_PAIRS is active, the q_vector557557+ * and NAPI are shared between RX and TX.558558+ * If NAPI is already running it would be marked as missed559559+ * from the TX path, making this RX call a NOP.560560+ */561561+ ring = adapter->rx_ring[qid];562562+ eics |= igb_sw_irq_prep(ring->q_vector);563563+ }564564+565565+ if (eics) {562566 /* Cause software interrupt */563563- if (adapter->flags & IGB_FLAG_HAS_MSIX) {564564- eics |= ring->q_vector->eims_value;567567+ if (adapter->flags & IGB_FLAG_HAS_MSIX)565568 wr32(E1000_EICS, eics);566566- } else {569569+ else567570 wr32(E1000_ICS, E1000_ICS_RXDMT0);568568- }569571 }570572571573 return 0;
+24-10
drivers/net/ethernet/intel/igc/igc_main.c
···69066906 return nxmit;69076907}6908690869096909-static void igc_trigger_rxtxq_interrupt(struct igc_adapter *adapter,69106910- struct igc_q_vector *q_vector)69096909+static u32 igc_sw_irq_prep(struct igc_q_vector *q_vector)69116910{69126912- struct igc_hw *hw = &adapter->hw;69136911 u32 eics = 0;6914691269156915- eics |= q_vector->eims_value;69166916- wr32(IGC_EICS, eics);69136913+ if (!napi_if_scheduled_mark_missed(&q_vector->napi))69146914+ eics = q_vector->eims_value;69156915+69166916+ return eics;69176917}6918691869196919int igc_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags)69206920{69216921 struct igc_adapter *adapter = netdev_priv(dev);69226922- struct igc_q_vector *q_vector;69226922+ struct igc_hw *hw = &adapter->hw;69236923 struct igc_ring *ring;69246924+ u32 eics = 0;6924692569256926 if (test_bit(__IGC_DOWN, &adapter->state))69266927 return -ENETDOWN;6927692869286929 if (!igc_xdp_is_enabled(adapter))69296930 return -ENXIO;69306930-69316931+ /* Check if queue_id is valid. Tx and Rx queue numbers are always the same */69316932 if (queue_id >= adapter->num_rx_queues)69326933 return -EINVAL;69336934···69376936 if (!ring->xsk_pool)69386937 return -ENXIO;6939693869406940- q_vector = adapter->q_vector[queue_id];69416941- if (!napi_if_scheduled_mark_missed(&q_vector->napi))69426942- igc_trigger_rxtxq_interrupt(adapter, q_vector);69396939+ if (flags & XDP_WAKEUP_RX)69406940+ eics |= igc_sw_irq_prep(ring->q_vector);69416941+69426942+ if (flags & XDP_WAKEUP_TX) {69436943+ /* If IGC_FLAG_QUEUE_PAIRS is active, the q_vector69446944+ * and NAPI are shared between RX and TX.69456945+ * If NAPI is already running it would be marked as missed69466946+ * from the RX path, making this TX call a NOP.69476947+ */69486948+ ring = adapter->tx_ring[queue_id];69496949+ eics |= igc_sw_irq_prep(ring->q_vector);69506950+ }69516951+69526952+ if (eics)69536953+ /* Cause software interrupt */69546954+ wr32(IGC_EICS, eics);6943695569446956 return 0;69456957}
···10491049{10501050 int status;1051105110521052+ /* if FW logging isn't supported it means no configuration was done */10531053+ if (!libie_fwlog_supported(fwlog))10541054+ return;10551055+10521056 /* make sure FW logging is disabled to not put the FW in a weird state10531057 * for the next driver load10541058 */
···37483748 mtk_stop(dev);3749374937503750 old_prog = rcu_replace_pointer(eth->prog, prog, lockdep_rtnl_is_held());37513751+37523752+ if (netif_running(dev) && need_update) {37533753+ int err;37543754+37553755+ err = mtk_open(dev);37563756+ if (err) {37573757+ rcu_assign_pointer(eth->prog, old_prog);37583758+37593759+ return err;37603760+ }37613761+ }37623762+37513763 if (old_prog)37523764 bpf_prog_put(old_prog);37533753-37543754- if (netif_running(dev) && need_update)37553755- return mtk_open(dev);3756376537573766 return 0;37583767}
+18-5
drivers/net/ethernet/microsoft/mana/mana_en.c
···17701770 ndev = txq->ndev;17711771 apc = netdev_priv(ndev);1772177217731773+ /* Limit CQEs polled to 4 wraparounds of the CQ to ensure the17741774+ * doorbell can be rung in time for the hardware's requirement17751775+ * of at least one doorbell ring every 8 wraparounds.17761776+ */17731777 comp_read = mana_gd_poll_cq(cq->gdma_cq, completions,17741774- CQE_POLLING_BUFFER);17781778+ min((cq->gdma_cq->queue_size /17791779+ COMP_ENTRY_SIZE) * 4,17801780+ CQE_POLLING_BUFFER));1775178117761782 if (comp_read < 1)17771783 return;···21622156 struct mana_rxq *rxq = cq->rxq;21632157 int comp_read, i;2164215821652165- comp_read = mana_gd_poll_cq(cq->gdma_cq, comp, CQE_POLLING_BUFFER);21592159+ /* Limit CQEs polled to 4 wraparounds of the CQ to ensure the21602160+ * doorbell can be rung in time for the hardware's requirement21612161+ * of at least one doorbell ring every 8 wraparounds.21622162+ */21632163+ comp_read = mana_gd_poll_cq(cq->gdma_cq, comp,21642164+ min((cq->gdma_cq->queue_size /21652165+ COMP_ENTRY_SIZE) * 4,21662166+ CQE_POLLING_BUFFER));21662167 WARN_ON_ONCE(comp_read > CQE_POLLING_BUFFER);2167216821682169 rxq->xdp_flush = false;···22142201 mana_gd_ring_cq(gdma_queue, SET_ARM_BIT);22152202 cq->work_done_since_doorbell = 0;22162203 napi_complete_done(&cq->napi, w);22172217- } else if (cq->work_done_since_doorbell >22182218- cq->gdma_cq->queue_size / COMP_ENTRY_SIZE * 4) {22042204+ } else if (cq->work_done_since_doorbell >=22052205+ (cq->gdma_cq->queue_size / COMP_ENTRY_SIZE) * 4) {22192206 /* MANA hardware requires at least one doorbell ring every 822202207 * wraparounds of CQ even if there is no need to arm the CQ.22212221- * This driver rings the doorbell as soon as we have exceeded22082208+ * This driver rings the doorbell as soon as it has processed22222209 * 4 wraparounds.22232210 */22242211 mana_gd_ring_cq(gdma_queue, 0);
+1
drivers/net/ethernet/stmicro/stmmac/stmmac.h
···323323 void __iomem *ptpaddr;324324 void __iomem *estaddr;325325 unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];326326+ unsigned int num_double_vlans;326327 int sfty_irq;327328 int sfty_ce_irq;328329 int sfty_ue_irq;
+47-6
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···156156static void stmmac_flush_tx_descriptors(struct stmmac_priv *priv, int queue);157157static void stmmac_set_dma_operation_mode(struct stmmac_priv *priv, u32 txmode,158158 u32 rxmode, u32 chan);159159+static int stmmac_vlan_restore(struct stmmac_priv *priv);159160160161#ifdef CONFIG_DEBUG_FS161162static const struct net_device_ops stmmac_netdev_ops;···4108410741094108 phylink_start(priv->phylink);4110410941104110+ stmmac_vlan_restore(priv);41114111+41114112 ret = stmmac_request_irq(dev);41124113 if (ret)41134114 goto irq_error;···67696766 hash = 0;67706767 }6771676867696769+ if (!netif_running(priv->dev))67706770+ return 0;67716771+67726772 return stmmac_update_vlan_hash(priv, priv->hw, hash, pmatch, is_double);67736773}67746774···67816775static int stmmac_vlan_rx_add_vid(struct net_device *ndev, __be16 proto, u16 vid)67826776{67836777 struct stmmac_priv *priv = netdev_priv(ndev);67786778+ unsigned int num_double_vlans;67846779 bool is_double = false;67856780 int ret;67866781···67936786 is_double = true;6794678767956788 set_bit(vid, priv->active_vlans);67966796- ret = stmmac_vlan_update(priv, is_double);67896789+ num_double_vlans = priv->num_double_vlans + is_double;67906790+ ret = stmmac_vlan_update(priv, num_double_vlans);67976791 if (ret) {67986792 clear_bit(vid, priv->active_vlans);67996793 goto err_pm_put;···6802679468036795 if (priv->hw->num_vlan) {68046796 ret = stmmac_add_hw_vlan_rx_fltr(priv, ndev, priv->hw, proto, vid);68056805- if (ret)67976797+ if (ret) {67986798+ clear_bit(vid, priv->active_vlans);67996799+ stmmac_vlan_update(priv, priv->num_double_vlans);68066800 goto err_pm_put;68016801+ }68076802 }68036803+68046804+ priv->num_double_vlans = num_double_vlans;68056805+68086806err_pm_put:68096807 pm_runtime_put(priv->device);68106808···68236809static int stmmac_vlan_rx_kill_vid(struct net_device *ndev, __be16 proto, u16 vid)68246810{68256811 struct stmmac_priv *priv = netdev_priv(ndev);68126812+ unsigned int num_double_vlans;68266813 bool 
is_double = false;68276814 int ret;68286815···68356820 is_double = true;6836682168376822 clear_bit(vid, priv->active_vlans);68236823+ num_double_vlans = priv->num_double_vlans - is_double;68246824+ ret = stmmac_vlan_update(priv, num_double_vlans);68256825+ if (ret) {68266826+ set_bit(vid, priv->active_vlans);68276827+ goto del_vlan_error;68286828+ }6838682968396830 if (priv->hw->num_vlan) {68406831 ret = stmmac_del_hw_vlan_rx_fltr(priv, ndev, priv->hw, proto, vid);68416841- if (ret)68326832+ if (ret) {68336833+ set_bit(vid, priv->active_vlans);68346834+ stmmac_vlan_update(priv, priv->num_double_vlans);68426835 goto del_vlan_error;68366836+ }68436837 }6844683868456845- ret = stmmac_vlan_update(priv, is_double);68396839+ priv->num_double_vlans = num_double_vlans;6846684068476841del_vlan_error:68486842 pm_runtime_put(priv->device);68436843+68446844+ return ret;68456845+}68466846+68476847+static int stmmac_vlan_restore(struct stmmac_priv *priv)68486848+{68496849+ int ret;68506850+68516851+ if (!(priv->dev->features & NETIF_F_VLAN_FEATURES))68526852+ return 0;68536853+68546854+ if (priv->hw->num_vlan)68556855+ stmmac_restore_hw_vlan_rx_fltr(priv, priv->dev, priv->hw);68566856+68576857+ ret = stmmac_vlan_update(priv, priv->num_double_vlans);68586858+ if (ret)68596859+ netdev_err(priv->dev, "Failed to restore VLANs\n");6849686068506861 return ret;68516862}···83008259 stmmac_init_coalesce(priv);83018260 phylink_rx_clk_stop_block(priv->phylink);83028261 stmmac_set_rx_mode(ndev);83038303-83048304- stmmac_restore_hw_vlan_rx_fltr(priv, ndev, priv->hw);83058262 phylink_rx_clk_stop_unblock(priv->phylink);82638263+82648264+ stmmac_vlan_restore(priv);8306826583078266 stmmac_enable_all_queues(priv);83088267 stmmac_enable_all_dma_irq(priv);
+31-29
drivers/net/ethernet/stmicro/stmmac/stmmac_vlan.c
···7676 }77777878 hw->vlan_filter[0] = vid;7979- vlan_write_single(dev, vid);7979+8080+ if (netif_running(dev))8181+ vlan_write_single(dev, vid);80828183 return 0;8284 }···9997 return -EPERM;10098 }10199102102- ret = vlan_write_filter(dev, hw, index, val);100100+ if (netif_running(dev)) {101101+ ret = vlan_write_filter(dev, hw, index, val);102102+ if (ret)103103+ return ret;104104+ }103105104104- if (!ret)105105- hw->vlan_filter[index] = val;106106+ hw->vlan_filter[index] = val;106107107107- return ret;108108+ return 0;108109}109110110111static int vlan_del_hw_rx_fltr(struct net_device *dev,···120115 if (hw->num_vlan == 1) {121116 if ((hw->vlan_filter[0] & VLAN_TAG_VID) == vid) {122117 hw->vlan_filter[0] = 0;123123- vlan_write_single(dev, 0);118118+119119+ if (netif_running(dev))120120+ vlan_write_single(dev, 0);124121 }125122 return 0;126123 }···131124 for (i = 0; i < hw->num_vlan; i++) {132125 if ((hw->vlan_filter[i] & VLAN_TAG_DATA_VEN) &&133126 ((hw->vlan_filter[i] & VLAN_TAG_DATA_VID) == vid)) {134134- ret = vlan_write_filter(dev, hw, i, 0);135127136136- if (!ret)137137- hw->vlan_filter[i] = 0;138138- else139139- return ret;128128+ if (netif_running(dev)) {129129+ ret = vlan_write_filter(dev, hw, i, 0);130130+ if (ret)131131+ return ret;132132+ }133133+134134+ hw->vlan_filter[i] = 0;140135 }141136 }142137143143- return ret;138138+ return 0;144139}145140146141static void vlan_restore_hw_rx_fltr(struct net_device *dev,147142 struct mac_device_info *hw)148143{149149- void __iomem *ioaddr = hw->pcsr;150150- u32 value;151151- u32 hash;152152- u32 val;153144 int i;154145155146 /* Single Rx VLAN Filter */···157152 }158153159154 /* Extended Rx VLAN Filter Enable */160160- for (i = 0; i < hw->num_vlan; i++) {161161- if (hw->vlan_filter[i] & VLAN_TAG_DATA_VEN) {162162- val = hw->vlan_filter[i];163163- vlan_write_filter(dev, hw, i, val);164164- }165165- }166166-167167- hash = readl(ioaddr + VLAN_HASH_TABLE);168168- if (hash & VLAN_VLHT) {169169- value = readl(ioaddr + 
VLAN_TAG);170170- value |= VLAN_VTHM;171171- writel(value, ioaddr + VLAN_TAG);172172- }155155+ for (i = 0; i < hw->num_vlan; i++)156156+ vlan_write_filter(dev, hw, i, hw->vlan_filter[i]);173157}174158175159static void vlan_update_hash(struct mac_device_info *hw, u32 hash,···177183 value |= VLAN_EDVLP;178184 value |= VLAN_ESVL;179185 value |= VLAN_DOVLTC;186186+ } else {187187+ value &= ~VLAN_EDVLP;188188+ value &= ~VLAN_ESVL;189189+ value &= ~VLAN_DOVLTC;180190 }181191182192 writel(value, ioaddr + VLAN_TAG);···191193 value |= VLAN_EDVLP;192194 value |= VLAN_ESVL;193195 value |= VLAN_DOVLTC;196196+ } else {197197+ value &= ~VLAN_EDVLP;198198+ value &= ~VLAN_ESVL;199199+ value &= ~VLAN_DOVLTC;194200 }195201196202 writel(value | perfect_match, ioaddr + VLAN_TAG);
+1-1
drivers/net/ethernet/ti/am65-cpsw-nuss.c
···391391 cpsw_ale_set_allmulti(common->ale,392392 ndev->flags & IFF_ALLMULTI, port->port_id);393393394394- port_mask = ALE_PORT_HOST;394394+ port_mask = BIT(port->port_id) | ALE_PORT_HOST;395395 /* Clear all mcast from ALE */396396 cpsw_ale_flush_multicast(common->ale, port_mask, -1);397397
+4-5
drivers/net/ethernet/ti/cpsw_ale.c
···450450 ale->port_mask_bits);451451 if ((mask & port_mask) == 0)452452 return; /* ports dont intersect, not interested */453453- mask &= ~port_mask;453453+ mask &= (~port_mask | ALE_PORT_HOST);454454455455- /* free if only remaining port is host port */456456- if (mask)455455+ if (mask == 0x0 || mask == ALE_PORT_HOST)456456+ cpsw_ale_set_entry_type(ale_entry, ALE_TYPE_FREE);457457+ else457458 cpsw_ale_set_port_mask(ale_entry, mask,458459 ale->port_mask_bits);459459- else460460- cpsw_ale_set_entry_type(ale_entry, ALE_TYPE_FREE);461460}462461463462int cpsw_ale_flush_multicast(struct cpsw_ale *ale, int port_mask, int vid)
+8
drivers/net/ethernet/ti/icssg/icssg_prueth.c
···273273 if (ret)274274 goto disable_class;275275276276+ /* Reset link state to force reconfiguration in277277+ * emac_adjust_link(). Without this, if the link was already up278278+ * before restart, emac_adjust_link() won't detect any state279279+ * change and will skip critical configuration like writing280280+ * speed to firmware.281281+ */282282+ emac->link = 0;283283+276284 mutex_lock(&emac->ndev->phydev->lock);277285 emac_adjust_link(emac->ndev);278286 mutex_unlock(&emac->ndev->phydev->lock);
···21302130 {21312131 struct ipv6hdr *pip6;2132213221332133+ /* check if nd_tbl is not initialized due to21342134+ * ipv6.disable=1 set during boot21352135+ */21362136+ if (!ipv6_stub->nd_tbl)21372137+ return false;21332138 if (!pskb_may_pull(skb, sizeof(struct ipv6hdr)))21342139 return false;21352140 pip6 = ipv6_hdr(skb);
···668668 struct rsi_hw *adapter = hw->priv;669669 struct rsi_common *common = adapter->priv;670670 struct ieee80211_conf *conf = &hw->conf;671671- int status = -EOPNOTSUPP;671671+ int status = 0;672672673673 mutex_lock(&common->mutex);674674
+2
drivers/net/wireless/st/cw1200/pm.c
···264264 wiphy_err(priv->hw->wiphy,265265 "PM request failed: %d. WoW is disabled.\n", ret);266266 cw1200_wow_resume(hw);267267+ mutex_unlock(&priv->conf_mutex);267268 return -EBUSY;268269 }269270270271 /* Force resume if event is coming from the device. */271272 if (atomic_read(&priv->bh_rx)) {272273 cw1200_wow_resume(hw);274274+ mutex_unlock(&priv->conf_mutex);273275 return -EAGAIN;274276 }275277
+2-2
drivers/net/wireless/ti/wlcore/main.c
···18751875 wl->wow_enabled);18761876 WARN_ON(!wl->wow_enabled);1877187718781878+ mutex_lock(&wl->mutex);18791879+18781880 ret = pm_runtime_force_resume(wl->dev);18791881 if (ret < 0) {18801882 wl1271_error("ELP wakeup failure!");···18921890 if (test_and_clear_bit(WL1271_FLAG_PENDING_WORK, &wl->flags))18931891 run_irq_work = true;18941892 spin_unlock_irqrestore(&wl->wl_lock, flags);18951895-18961896- mutex_lock(&wl->mutex);1897189318981894 /* test the recovery flag before calling any SDIO functions */18991895 pending_recovery = test_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS,
+17-17
drivers/net/xen-netfront.c
···1646164616471647 /* avoid the race with XDP headroom adjustment */16481648 wait_event(module_wq,16491649- xenbus_read_driver_state(np->xbdev->otherend) ==16491649+ xenbus_read_driver_state(np->xbdev, np->xbdev->otherend) ==16501650 XenbusStateReconfigured);16511651 np->netfront_xdp_enabled = true;16521652···17641764 do {17651765 xenbus_switch_state(dev, XenbusStateInitialising);17661766 err = wait_event_timeout(module_wq,17671767- xenbus_read_driver_state(dev->otherend) !=17671767+ xenbus_read_driver_state(dev, dev->otherend) !=17681768 XenbusStateClosed &&17691769- xenbus_read_driver_state(dev->otherend) !=17691769+ xenbus_read_driver_state(dev, dev->otherend) !=17701770 XenbusStateUnknown, XENNET_TIMEOUT);17711771 } while (!err);17721772···26262626{26272627 int ret;2628262826292629- if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)26292629+ if (xenbus_read_driver_state(dev, dev->otherend) == XenbusStateClosed)26302630 return;26312631 do {26322632 xenbus_switch_state(dev, XenbusStateClosing);26332633 ret = wait_event_timeout(module_wq,26342634- xenbus_read_driver_state(dev->otherend) ==26352635- XenbusStateClosing ||26362636- xenbus_read_driver_state(dev->otherend) ==26372637- XenbusStateClosed ||26382638- xenbus_read_driver_state(dev->otherend) ==26392639- XenbusStateUnknown,26402640- XENNET_TIMEOUT);26342634+ xenbus_read_driver_state(dev, dev->otherend) ==26352635+ XenbusStateClosing ||26362636+ xenbus_read_driver_state(dev, dev->otherend) ==26372637+ XenbusStateClosed ||26382638+ xenbus_read_driver_state(dev, dev->otherend) ==26392639+ XenbusStateUnknown,26402640+ XENNET_TIMEOUT);26412641 } while (!ret);2642264226432643- if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)26432643+ if (xenbus_read_driver_state(dev, dev->otherend) == XenbusStateClosed)26442644 return;2645264526462646 do {26472647 xenbus_switch_state(dev, XenbusStateClosed);26482648 ret = wait_event_timeout(module_wq,26492649- 
xenbus_read_driver_state(dev->otherend) ==26502650- XenbusStateClosed ||26512651- xenbus_read_driver_state(dev->otherend) ==26522652- XenbusStateUnknown,26532653- XENNET_TIMEOUT);26492649+ xenbus_read_driver_state(dev, dev->otherend) ==26502650+ XenbusStateClosed ||26512651+ xenbus_read_driver_state(dev, dev->otherend) ==26522652+ XenbusStateUnknown,26532653+ XENNET_TIMEOUT);26542654 } while (!ret);26552655}26562656
+12-16
drivers/nvme/host/core.c
···20462046 if (id->nabspf)20472047 boundary = (le16_to_cpu(id->nabspf) + 1) * bs;20482048 } else {20492049- /*20502050- * Use the controller wide atomic write unit. This sucks20512051- * because the limit is defined in terms of logical blocks while20522052- * namespaces can have different formats, and because there is20532053- * no clear language in the specification prohibiting different20542054- * values for different controllers in the subsystem.20552055- */20562056- atomic_bs = (1 + ns->ctrl->subsys->awupf) * bs;20492049+ if (ns->ctrl->awupf)20502050+ dev_info_once(ns->ctrl->device,20512051+ "AWUPF ignored, only NAWUPF accepted\n");20522052+ atomic_bs = bs;20572053 }2058205420592055 lim->atomic_write_hw_max = atomic_bs;···32183222 memcpy(subsys->model, id->mn, sizeof(subsys->model));32193223 subsys->vendor_id = le16_to_cpu(id->vid);32203224 subsys->cmic = id->cmic;32213221- subsys->awupf = le16_to_cpu(id->awupf);3222322532233226 /* Versions prior to 1.4 don't necessarily report a valid type */32243227 if (id->cntrltype == NVME_CTRL_DISC ||···36503655 dev_pm_qos_expose_latency_tolerance(ctrl->device);36513656 else if (!ctrl->apst_enabled && prev_apst_enabled)36523657 dev_pm_qos_hide_latency_tolerance(ctrl->device);36583658+ ctrl->awupf = le16_to_cpu(id->awupf);36533659out_free:36543660 kfree(id);36553661 return ret;···4181418541824186 nvme_mpath_add_disk(ns, info->anagrpid);41834187 nvme_fault_inject_init(&ns->fault_inject, ns->disk->disk_name);41844184-41854185- /*41864186- * Set ns->disk->device->driver_data to ns so we can access41874187- * ns->head->passthru_err_log_enabled in41884188- * nvme_io_passthru_err_log_enabled_[store | show]().41894189- */41904190- dev_set_drvdata(disk_to_dev(ns->disk), ns);4191418841924189 return;41934190···48534864 ret = blk_mq_alloc_tag_set(set);48544865 if (ret)48554866 return ret;48674867+48684868+ /*48694869+ * If a previous admin queue exists (e.g., from before a reset),48704870+ * put it now before allocating a new one to 
avoid orphaning it.48714871+ */48724872+ if (ctrl->admin_q)48734873+ blk_put_queue(ctrl->admin_q);4856487448574875 ctrl->admin_q = blk_mq_alloc_queue(set, &lim, NULL);48584876 if (IS_ERR(ctrl->admin_q)) {
···13001300 mutex_lock(&head->subsys->lock);13011301 /*13021302 * We are called when all paths have been removed, and at that point13031303- * head->list is expected to be empty. However, nvme_remove_ns() and13031303+ * head->list is expected to be empty. However, nvme_ns_remove() and13041304 * nvme_init_ns_head() can run concurrently and so if head->delayed_13051305 * removal_secs is configured, it is possible that by the time we reach13061306 * this point, head->list may no longer be empty. Therefore, we recheck···13101310 if (!list_empty(&head->list))13111311 goto out;1312131213131313- if (head->delayed_removal_secs) {13141314- /*13151315- * Ensure that no one could remove this module while the head13161316- * remove work is pending.13171317- */13181318- if (!try_module_get(THIS_MODULE))13191319- goto out;13131313+ /*13141314+ * Ensure that no one could remove this module while the head13151315+ * remove work is pending.13161316+ */13171317+ if (head->delayed_removal_secs && try_module_get(THIS_MODULE)) {13201318 mod_delayed_work(nvme_wq, &head->remove_work,13211319 head->delayed_removal_secs * HZ);13221320 } else {
+56-1
drivers/nvme/host/nvme.h
···
 	NVME_QUIRK_DMAPOOL_ALIGN_512	= (1 << 22),
 };
 
+static inline char *nvme_quirk_name(enum nvme_quirks q)
+{
+	switch (q) {
+	case NVME_QUIRK_STRIPE_SIZE:
+		return "stripe_size";
+	case NVME_QUIRK_IDENTIFY_CNS:
+		return "identify_cns";
+	case NVME_QUIRK_DEALLOCATE_ZEROES:
+		return "deallocate_zeroes";
+	case NVME_QUIRK_DELAY_BEFORE_CHK_RDY:
+		return "delay_before_chk_rdy";
+	case NVME_QUIRK_NO_APST:
+		return "no_apst";
+	case NVME_QUIRK_NO_DEEPEST_PS:
+		return "no_deepest_ps";
+	case NVME_QUIRK_QDEPTH_ONE:
+		return "qdepth_one";
+	case NVME_QUIRK_MEDIUM_PRIO_SQ:
+		return "medium_prio_sq";
+	case NVME_QUIRK_IGNORE_DEV_SUBNQN:
+		return "ignore_dev_subnqn";
+	case NVME_QUIRK_DISABLE_WRITE_ZEROES:
+		return "disable_write_zeroes";
+	case NVME_QUIRK_SIMPLE_SUSPEND:
+		return "simple_suspend";
+	case NVME_QUIRK_SINGLE_VECTOR:
+		return "single_vector";
+	case NVME_QUIRK_128_BYTES_SQES:
+		return "128_bytes_sqes";
+	case NVME_QUIRK_SHARED_TAGS:
+		return "shared_tags";
+	case NVME_QUIRK_NO_TEMP_THRESH_CHANGE:
+		return "no_temp_thresh_change";
+	case NVME_QUIRK_NO_NS_DESC_LIST:
+		return "no_ns_desc_list";
+	case NVME_QUIRK_DMA_ADDRESS_BITS_48:
+		return "dma_address_bits_48";
+	case NVME_QUIRK_SKIP_CID_GEN:
+		return "skip_cid_gen";
+	case NVME_QUIRK_BOGUS_NID:
+		return "bogus_nid";
+	case NVME_QUIRK_NO_SECONDARY_TEMP_THRESH:
+		return "no_secondary_temp_thresh";
+	case NVME_QUIRK_FORCE_NO_SIMPLE_SUSPEND:
+		return "force_no_simple_suspend";
+	case NVME_QUIRK_BROKEN_MSI:
+		return "broken_msi";
+	case NVME_QUIRK_DMAPOOL_ALIGN_512:
+		return "dmapool_align_512";
+	}
+
+	return "unknown";
+}
+
 /*
  * Common request structure for NVMe passthrough. All drivers must have
  * this structure as the first member of their request-private data.
···
 
 	enum nvme_ctrl_type cntrltype;
 	enum nvme_dctype dctype;
+
+	u16 awupf; /* 0's based value. */
 };
 
 static inline enum nvme_ctrl_state nvme_ctrl_state(struct nvme_ctrl *ctrl)
···
 	u8 cmic;
 	enum nvme_subsys_type subtype;
 	u16 vendor_id;
-	u16 awupf; /* 0's based value. */
 	struct ida ns_ida;
 #ifdef CONFIG_NVME_MULTIPATH
 	enum nvme_iopolicy iopolicy;
+184-2
drivers/nvme/host/pci.c
···
 static_assert(MAX_PRP_RANGE / NVME_CTRL_PAGE_SIZE <=
 	(1 /* prp1 */ + NVME_MAX_NR_DESCRIPTORS * PRPS_PER_PAGE));
 
+struct quirk_entry {
+	u16 vendor_id;
+	u16 dev_id;
+	u32 enabled_quirks;
+	u32 disabled_quirks;
+};
+
 static int use_threaded_interrupts;
 module_param(use_threaded_interrupts, int, 0444);
 
···
 static unsigned int io_queue_depth = 1024;
 module_param_cb(io_queue_depth, &io_queue_depth_ops, &io_queue_depth, 0644);
 MODULE_PARM_DESC(io_queue_depth, "set io queue depth, should >= 2 and < 4096");
+
+static struct quirk_entry *nvme_pci_quirk_list;
+static unsigned int nvme_pci_quirk_count;
+
+/* Helper to parse individual quirk names */
+static int nvme_parse_quirk_names(char *quirk_str, struct quirk_entry *entry)
+{
+	int i;
+	size_t field_len;
+	bool disabled, found;
+	char *p = quirk_str, *field;
+
+	while ((field = strsep(&p, ",")) && *field) {
+		disabled = false;
+		found = false;
+
+		if (*field == '^') {
+			/* Skip the '^' character */
+			disabled = true;
+			field++;
+		}
+
+		field_len = strlen(field);
+		for (i = 0; i < 32; i++) {
+			unsigned int bit = 1U << i;
+			char *q_name = nvme_quirk_name(bit);
+			size_t q_len = strlen(q_name);
+
+			if (!strcmp(q_name, "unknown"))
+				break;
+
+			if (!strcmp(q_name, field) &&
+			    q_len == field_len) {
+				if (disabled)
+					entry->disabled_quirks |= bit;
+				else
+					entry->enabled_quirks |= bit;
+				found = true;
+				break;
+			}
+		}
+
+		if (!found) {
+			pr_err("nvme: unrecognized quirk %s\n", field);
+			return -EINVAL;
+		}
+	}
+	return 0;
+}
+
+/* Helper to parse a single VID:DID:quirk_names entry */
+static int nvme_parse_quirk_entry(char *s, struct quirk_entry *entry)
+{
+	char *field;
+
+	field = strsep(&s, ":");
+	if (!field || kstrtou16(field, 16, &entry->vendor_id))
+		return -EINVAL;
+
+	field = strsep(&s, ":");
+	if (!field || kstrtou16(field, 16, &entry->dev_id))
+		return -EINVAL;
+
+	field = strsep(&s, ":");
+	if (!field)
+		return -EINVAL;
+
+	return nvme_parse_quirk_names(field, entry);
+}
+
+static int quirks_param_set(const char *value, const struct kernel_param *kp)
+{
+	int count, err, i;
+	struct quirk_entry *qlist;
+	char *field, *val, *sep_ptr;
+
+	err = param_set_copystring(value, kp);
+	if (err)
+		return err;
+
+	val = kstrdup(value, GFP_KERNEL);
+	if (!val)
+		return -ENOMEM;
+
+	if (!*val)
+		goto out_free_val;
+
+	count = 1;
+	for (i = 0; val[i]; i++) {
+		if (val[i] == '-')
+			count++;
+	}
+
+	qlist = kcalloc(count, sizeof(*qlist), GFP_KERNEL);
+	if (!qlist) {
+		err = -ENOMEM;
+		goto out_free_val;
+	}
+
+	i = 0;
+	sep_ptr = val;
+	while ((field = strsep(&sep_ptr, "-"))) {
+		if (nvme_parse_quirk_entry(field, &qlist[i])) {
+			pr_err("nvme: failed to parse quirk string %s\n",
+			       value);
+			goto out_free_qlist;
+		}
+
+		i++;
+	}
+
+	kfree(nvme_pci_quirk_list);
+	nvme_pci_quirk_count = count;
+	nvme_pci_quirk_list = qlist;
+	goto out_free_val;
+
+out_free_qlist:
+	kfree(qlist);
+out_free_val:
+	kfree(val);
+	return err;
+}
+
+static char quirks_param[128];
+static const struct kernel_param_ops quirks_param_ops = {
+	.set = quirks_param_set,
+	.get = param_get_string,
+};
+
+static struct kparam_string quirks_param_string = {
+	.maxlen = sizeof(quirks_param),
+	.string = quirks_param,
+};
+
+module_param_cb(quirks, &quirks_param_ops, &quirks_param_string, 0444);
+MODULE_PARM_DESC(quirks, "Enable/disable NVMe quirks by specifying "
+		 "quirks=VID:DID:quirk_names");
 
 static int io_queue_count_set(const char *val, const struct kernel_param *kp)
 {
···
 	struct nvme_queue *nvmeq = hctx->driver_data;
 	bool found;
 
-	if (!nvme_cqe_pending(nvmeq))
+	if (!test_bit(NVMEQ_POLLED, &nvmeq->flags) ||
+	    !nvme_cqe_pending(nvmeq))
 		return 0;
 
 	spin_lock(&nvmeq->cq_poll_lock);
···
 	dev->nr_write_queues = write_queues;
 	dev->nr_poll_queues = poll_queues;
 
-	nr_io_queues = dev->nr_allocated_queues - 1;
+	if (dev->ctrl.tagset) {
+		/*
+		 * The set's maps are allocated only once at initialization
+		 * time. We can't add special queues later if their mq_map
+		 * wasn't preallocated.
+		 */
+		if (dev->ctrl.tagset->nr_maps < 3)
+			dev->nr_poll_queues = 0;
+		if (dev->ctrl.tagset->nr_maps < 2)
+			dev->nr_write_queues = 0;
+	}
+
+	/*
+	 * The initial number of allocated queue slots may be too large if the
+	 * user reduced the special queue parameters. Cap the value to the
+	 * number we need for this round.
+	 */
+	nr_io_queues = min(nvme_max_io_queues(dev),
+			   dev->nr_allocated_queues - 1);
 	result = nvme_set_queue_count(&dev->ctrl, &nr_io_queues);
 	if (result < 0)
 		return result;
···
 	return 0;
 }
 
+static struct quirk_entry *detect_dynamic_quirks(struct pci_dev *pdev)
+{
+	int i;
+
+	for (i = 0; i < nvme_pci_quirk_count; i++)
+		if (pdev->vendor == nvme_pci_quirk_list[i].vendor_id &&
+		    pdev->device == nvme_pci_quirk_list[i].dev_id)
+			return &nvme_pci_quirk_list[i];
+
+	return NULL;
+}
+
 static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev,
 		const struct pci_device_id *id)
 {
 	unsigned long quirks = id->driver_data;
 	int node = dev_to_node(&pdev->dev);
 	struct nvme_dev *dev;
+	struct quirk_entry *qentry;
 	int ret = -ENOMEM;
 
 	dev = kzalloc_node(struct_size(dev, descriptor_pools, nr_node_ids),
···
 		dev_info(&pdev->dev,
 			 "platform quirk: setting simple suspend\n");
 		quirks |= NVME_QUIRK_SIMPLE_SUSPEND;
+	}
+	qentry = detect_dynamic_quirks(pdev);
+	if (qentry) {
+		quirks |= qentry->enabled_quirks;
+		quirks &= ~qentry->disabled_quirks;
 	}
 	ret = nvme_init_ctrl(&dev->ctrl, &pdev->dev, &nvme_pci_ctrl_ops,
 			     quirks);
···
 
 static void __exit nvme_exit(void)
 {
+	kfree(nvme_pci_quirk_list);
 	pci_unregister_driver(&nvme_driver);
 	flush_workqueue(nvme_wq);
 }
···
 
 struct nvme_tcp_queue;
 
-/* Define the socket priority to use for connections were it is desirable
+/*
+ * Define the socket priority to use for connections where it is desirable
  * that the NIC consider performing optimized packet processing or filtering.
  * A non-zero value being sufficient to indicate general consideration of any
  * possible optimization. Making it a module param allows for alternative
···
 	req->curr_bio = req->curr_bio->bi_next;
 
 	/*
-	 * If we don`t have any bios it means that controller
+	 * If we don't have any bios it means the controller
 	 * sent more data than we requested, hence error
 	 */
 	if (!req->curr_bio) {
+11-4
drivers/nvme/target/fcloop.c
···
 	struct fcloop_rport *rport = remoteport->private;
 	struct nvmet_fc_target_port *targetport = rport->targetport;
 	struct fcloop_tport *tport;
+	int ret = 0;
 
 	if (!targetport) {
 		/*
···
 		 * We end up here from delete association exchange:
 		 * nvmet_fc_xmt_disconnect_assoc sends an async request.
 		 *
-		 * Return success because this is what LLDDs do; silently
-		 * drop the response.
+		 * Return success when remoteport is still online because this
+		 * is what LLDDs do and silently drop the response. Otherwise,
+		 * return with error to signal upper layer to perform the lsrsp
+		 * resource cleanup.
 		 */
-		lsrsp->done(lsrsp);
+		if (remoteport->port_state == FC_OBJSTATE_ONLINE)
+			lsrsp->done(lsrsp);
+		else
+			ret = -ENODEV;
+
 		kmem_cache_free(lsreq_cache, tls_req);
-		return 0;
+		return ret;
 	}
 
 	memcpy(lsreq->rspaddr, lsrsp->rspbuf,
+4-4
drivers/pci/xen-pcifront.c
···
 	int err;
 
 	/* Only connect once */
-	if (xenbus_read_driver_state(pdev->xdev->nodename) !=
+	if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename) !=
 	    XenbusStateInitialised)
 		return;
 
···
 	enum xenbus_state prev_state;
 
 
-	prev_state = xenbus_read_driver_state(pdev->xdev->nodename);
+	prev_state = xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename);
 
 	if (prev_state >= XenbusStateClosing)
 		goto out;
···
 }
 
 static void pcifront_attach_devices(struct pcifront_device *pdev)
 {
-	if (xenbus_read_driver_state(pdev->xdev->nodename) ==
+	if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename) ==
 	    XenbusStateReconfiguring)
 		pcifront_connect(pdev);
 }
···
 	struct pci_dev *pci_dev;
 	char str[64];
 
-	state = xenbus_read_driver_state(pdev->xdev->nodename);
+	state = xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename);
 	if (state == XenbusStateInitialised) {
 		dev_dbg(&pdev->xdev->dev, "Handle skipped connect.\n");
 		/* We missed Connected and need to initialize. */
+2-3
drivers/pinctrl/cirrus/pinctrl-cs42l43.c
···
 	if (child) {
 		ret = devm_add_action_or_reset(&pdev->dev,
 					       cs42l43_fwnode_put, child);
-		if (ret) {
-			fwnode_handle_put(child);
+		if (ret)
 			return ret;
-		}
+
 		if (!child->dev)
 			child->dev = priv->dev;
 		fwnode = child;
···
 			  unsigned int *num_maps)
 {
 	struct device *dev = pctldev->dev;
-	struct device_node *pnode;
 	unsigned long *configs = NULL;
 	unsigned int num_configs = 0;
 	struct property *prop;
···
 		return -ENOENT;
 	}
 
-	pnode = of_get_parent(np);
+	struct device_node *pnode __free(device_node) = of_get_parent(np);
 	if (!pnode) {
 		dev_info(dev, "Missing function node\n");
 		return -EINVAL;
+2-2
drivers/pinctrl/pinconf-generic.c
···
 
 	ret = parse_dt_cfg(np, dt_params, ARRAY_SIZE(dt_params), cfg, &ncfg);
 	if (ret)
-		return ret;
+		goto out;
 	if (pctldev && pctldev->desc->num_custom_params &&
 	    pctldev->desc->custom_params) {
 		ret = parse_dt_cfg(np, pctldev->desc->custom_params,
 				   pctldev->desc->num_custom_params, cfg, &ncfg);
 		if (ret)
-			return ret;
+			goto out;
 	}
 
 	/* no configs found at all */
···
 	if (ret < 0)
 		goto out;
 
-	print_hex_dump_bytes("set new password data: ", DUMP_PREFIX_NONE, buffer, buffer_size);
 	ret = call_password_interface(wmi_priv.password_attr_wdev, buffer, buffer_size);
 	/* on success copy the new password to current password */
 	if (!ret)
···
 		*con_id = "avdd";
 		*gpio_flags = GPIO_ACTIVE_HIGH;
 		break;
+	case INT3472_GPIO_TYPE_DOVDD:
+		*con_id = "dovdd";
+		*gpio_flags = GPIO_ACTIVE_HIGH;
+		break;
 	case INT3472_GPIO_TYPE_HANDSHAKE:
 		*con_id = "dvdd";
 		*gpio_flags = GPIO_ACTIVE_HIGH;
···
  * 0x0b Power enable
  * 0x0c Clock enable
  * 0x0d Privacy LED
+ * 0x10 DOVDD (digital I/O voltage)
  * 0x13 Hotplug detect
  *
  * There are some known platform specific quirks where that does not quite
···
 	case INT3472_GPIO_TYPE_CLK_ENABLE:
 	case INT3472_GPIO_TYPE_PRIVACY_LED:
 	case INT3472_GPIO_TYPE_POWER_ENABLE:
+	case INT3472_GPIO_TYPE_DOVDD:
 	case INT3472_GPIO_TYPE_HANDSHAKE:
 		gpio = skl_int3472_gpiod_get_from_temp_lookup(int3472, agpio, con_id, gpio_flags);
 		if (IS_ERR(gpio)) {
···
 	case INT3472_GPIO_TYPE_POWER_ENABLE:
 		second_sensor = int3472->quirks.avdd_second_sensor;
 		fallthrough;
+	case INT3472_GPIO_TYPE_DOVDD:
 	case INT3472_GPIO_TYPE_HANDSHAKE:
 		ret = skl_int3472_register_regulator(int3472, gpio, enable_time_us,
 						     con_id, second_sensor);
+4-2
drivers/platform/x86/lenovo/thinkpad_acpi.c
···
 {
 	switch (what) {
 	case THRESHOLD_START:
-		if ACPI_FAILURE(tpacpi_battery_acpi_eval(GET_START, ret, battery))
+		if (!battery_info.batteries[battery].start_support ||
+		    ACPI_FAILURE(tpacpi_battery_acpi_eval(GET_START, ret, battery)))
 			return -ENODEV;
 
 		/* The value is in the low 8 bits of the response */
 		*ret = *ret & 0xFF;
 		return 0;
 	case THRESHOLD_STOP:
-		if ACPI_FAILURE(tpacpi_battery_acpi_eval(GET_STOP, ret, battery))
+		if (!battery_info.batteries[battery].stop_support ||
+		    ACPI_FAILURE(tpacpi_battery_acpi_eval(GET_STOP, ret, battery)))
 			return -ENODEV;
 		/* Value is in lower 8 bits */
 		*ret = *ret & 0xFF;
···
 	 * since we use this queue depth most of times.
 	 */
 	if (scsi_realloc_sdev_budget_map(sdev, depth)) {
+		kref_put(&sdev->host->tagset_refcnt, scsi_mq_free_tags);
 		put_device(&starget->dev);
 		kfree(sdev);
 		goto out;
+1-1
drivers/scsi/xen-scsifront.c
···
 		return;
 	}
 
-	if (xenbus_read_driver_state(dev->nodename) ==
+	if (xenbus_read_driver_state(dev, dev->nodename) ==
 	    XenbusStateInitialised)
 		scsifront_do_lun_hotplug(info, VSCSIFRONT_OP_ADD_LUN);
 
···
 				   const char *page, size_t count)
 {
 	ssize_t read_bytes;
-	struct file *fp;
 	ssize_t r = -EINVAL;
+	struct path path = {};
 
 	mutex_lock(&target_devices_lock);
 	if (target_devices) {
···
 	db_root_stage[read_bytes - 1] = '\0';
 
 	/* validate new db root before accepting it */
-	fp = filp_open(db_root_stage, O_RDONLY, 0);
-	if (IS_ERR(fp)) {
+	r = kern_path(db_root_stage, LOOKUP_FOLLOW | LOOKUP_DIRECTORY, &path);
+	if (r) {
 		pr_err("db_root: cannot open: %s\n", db_root_stage);
+		if (r == -ENOTDIR)
+			pr_err("db_root: not a directory: %s\n", db_root_stage);
 		goto unlock;
 	}
-	if (!S_ISDIR(file_inode(fp)->i_mode)) {
-		filp_close(fp, NULL);
-		pr_err("db_root: not a directory: %s\n", db_root_stage);
-		goto unlock;
-	}
-	filp_close(fp, NULL);
+	path_put(&path);
 
 	strscpy(db_root, db_root_stage);
 	pr_debug("Target_Core_ConfigFS: db_root set to %s\n", db_root);
+6-2
drivers/video/fbdev/au1100fb.c
···
 #define panel_is_color(panel)	(panel->control_base & LCD_CONTROL_PC)
 #define panel_swap_rgb(panel)	(panel->control_base & LCD_CONTROL_CCO)
 
-#if defined(CONFIG_COMPILE_TEST) && !defined(CONFIG_MIPS)
-/* This is only defined to be able to compile this driver on non-mips platforms */
+#if defined(CONFIG_COMPILE_TEST) && (!defined(CONFIG_MIPS) || defined(CONFIG_64BIT))
+/*
+ * KSEG1ADDR() is defined in arch/mips/include/asm/addrspace.h
+ * for 32 bit configurations. Provide a stub for compile testing
+ * on other platforms.
+ */
 #define KSEG1ADDR(x) (x)
 #endif
 
+2-5
drivers/xen/xen-acpi-processor.c
···
 			  acpi_psd[acpi_id].domain);
 	}
 
-	status = acpi_evaluate_object(handle, "_CST", NULL, &buffer);
-	if (ACPI_FAILURE(status)) {
-		if (!pblk)
-			return AE_OK;
-	}
+	if (!pblk && !acpi_has_method(handle, "_CST"))
+		return AE_OK;
 	/* .. and it has a C-state */
 	__set_bit(acpi_id, acpi_id_cst_present);
 
+5-5
drivers/xen/xen-pciback/xenbus.c
···
 
 	mutex_lock(&pdev->dev_lock);
 	/* Make sure we only do this setup once */
-	if (xenbus_read_driver_state(pdev->xdev->nodename) !=
+	if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename) !=
 	    XenbusStateInitialised)
 		goto out;
 
 	/* Wait for frontend to state that it has published the configuration */
-	if (xenbus_read_driver_state(pdev->xdev->otherend) !=
+	if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->otherend) !=
 	    XenbusStateInitialised)
 		goto out;
 
···
 	dev_dbg(&pdev->xdev->dev, "Reconfiguring device ...\n");
 
 	mutex_lock(&pdev->dev_lock);
-	if (xenbus_read_driver_state(pdev->xdev->nodename) != state)
+	if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename) != state)
 		goto out;
 
 	err = xenbus_scanf(XBT_NIL, pdev->xdev->nodename, "num_devs", "%d",
···
 	/* It's possible we could get the call to setup twice, so make sure
 	 * we're not already connected.
 	 */
-	if (xenbus_read_driver_state(pdev->xdev->nodename) !=
+	if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename) !=
 	    XenbusStateInitWait)
 		goto out;
 
···
 	struct xen_pcibk_device *pdev =
 		container_of(watch, struct xen_pcibk_device, be_watch);
 
-	switch (xenbus_read_driver_state(pdev->xdev->nodename)) {
+	switch (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename)) {
 	case XenbusStateInitWait:
 		xen_pcibk_setup_backend(pdev);
 		break;
+14-3
drivers/xen/xenbus/xenbus_client.c
···
 	struct xenbus_transaction xbt;
 	int current_state;
 	int err, abort;
+	bool vanished = false;
 
-	if (state == dev->state)
+	if (state == dev->state || dev->vanished)
 		return 0;
 
again:
···
 	err = xenbus_scanf(xbt, dev->nodename, "state", "%d", &current_state);
 	if (err != 1)
 		goto abort;
+	if (current_state != dev->state && current_state == XenbusStateInitialising) {
+		vanished = true;
+		goto abort;
+	}
 
 	err = xenbus_printf(xbt, dev->nodename, "state", "%d", state);
 	if (err) {
···
 		if (err == -EAGAIN && !abort)
 			goto again;
 		xenbus_switch_fatal(dev, depth, err, "ending transaction");
-	} else
+	} else if (!vanished)
 		dev->state = state;
 
 	return 0;
···
 
 /**
  * xenbus_read_driver_state - read state from a store path
+ * @dev: xenbus device pointer
  * @path: path for driver
  *
  * Returns: the state of the driver rooted at the given store path, or
 * XenbusStateUnknown if no state can be read.
 */
-enum xenbus_state xenbus_read_driver_state(const char *path)
+enum xenbus_state xenbus_read_driver_state(const struct xenbus_device *dev,
+					   const char *path)
 {
 	enum xenbus_state result;
+
+	if (dev && dev->vanished)
+		return XenbusStateUnknown;
+
 	int err = xenbus_gather(XBT_NIL, path, "state", "%d", &result, NULL);
 	if (err)
 		result = XenbusStateUnknown;
+39-3
drivers/xen/xenbus/xenbus_probe.c
···
 		return;
 	}
 
-	state = xenbus_read_driver_state(dev->otherend);
+	state = xenbus_read_driver_state(dev, dev->otherend);
 
 	dev_dbg(&dev->dev, "state is %d, (%s), %s, %s\n",
 		state, xenbus_strstate(state), dev->otherend_watch.node, path);
···
 	 * closed.
 	 */
 	if (!drv->allow_rebind ||
-	    xenbus_read_driver_state(dev->nodename) == XenbusStateClosing)
+	    xenbus_read_driver_state(dev, dev->nodename) == XenbusStateClosing)
 		xenbus_switch_state(dev, XenbusStateClosed);
 }
 EXPORT_SYMBOL_GPL(xenbus_dev_remove);
···
 	info.dev = NULL;
 	bus_for_each_dev(bus, NULL, &info, cleanup_dev);
 	if (info.dev) {
+		dev_warn(&info.dev->dev,
+			 "device forcefully removed from xenstore\n");
+		info.dev->vanished = true;
 		device_unregister(&info.dev->dev);
 		put_device(&info.dev->dev);
 	}
···
 	size_t stringlen;
 	char *tmpstring;
 
-	enum xenbus_state state = xenbus_read_driver_state(nodename);
+	enum xenbus_state state = xenbus_read_driver_state(NULL, nodename);
 
 	if (state != XenbusStateInitialising) {
 		/* Device is not new, so ignore it. This can happen if a
···
 		return;
 
 	dev = xenbus_device_find(root, &bus->bus);
+	/*
+	 * Backend domain crash results in not coordinated frontend removal,
+	 * without going through XenbusStateClosing. If this is a new instance
+	 * of the same device Xen tools will have reset the state to
+	 * XenbusStateInitializing.
+	 * It might be that the backend crashed early during the init phase of
+	 * device setup, in which case the known state would have been
+	 * XenbusStateInitializing. So test the backend domid to match the
+	 * saved one. In case the new backend happens to have the same domid as
+	 * the old one, we can just carry on, as there is no inconsistency
+	 * resulting in this case.
+	 */
+	if (dev && !strcmp(bus->root, "device")) {
+		enum xenbus_state state = xenbus_read_driver_state(dev, dev->nodename);
+		unsigned int backend = xenbus_read_unsigned(root, "backend-id",
+							    dev->otherend_id);
+
+		if (state == XenbusStateInitialising &&
+		    (state != dev->state || backend != dev->otherend_id)) {
+			/*
+			 * State has been reset, assume the old one vanished
+			 * and new one needs to be probed.
+			 */
+			dev_warn(&dev->dev,
+				 "state reset occurred, reconnecting\n");
+			dev->vanished = true;
+		}
+		if (dev->vanished) {
+			device_unregister(&dev->dev);
+			put_device(&dev->dev);
+			dev = NULL;
+		}
+	}
 	if (!dev)
 		xenbus_probe_node(bus, type, root);
 	else
+1-1
drivers/xen/xenbus/xenbus_probe_frontend.c
···
 	} else if (xendev->state < XenbusStateConnected) {
 		enum xenbus_state rstate = XenbusStateUnknown;
 		if (xendev->otherend)
-			rstate = xenbus_read_driver_state(xendev->otherend);
+			rstate = xenbus_read_driver_state(xendev, xendev->otherend);
 		pr_warn("Timeout connecting to device: %s (local state %d, remote state %d)\n",
 			xendev->nodename, xendev->state, rstate);
 	}
-1
fs/btrfs/block-group.c
···
 		btrfs_abort_transaction(trans, ret);
 		goto out_put;
 	}
-	WARN_ON(ret);
 
 	/* We've already setup this transaction, go ahead and exit */
 	if (block_group->cache_generation == trans->transid &&
+1-1
fs/btrfs/delayed-inode.c
···
 	if (unlikely(ret)) {
 		btrfs_err(trans->fs_info,
 "failed to add delayed dir index item, root: %llu, inode: %llu, index: %llu, error: %d",
-			  index, btrfs_root_id(node->root), node->inode_id, ret);
+			  btrfs_root_id(node->root), node->inode_id, index, ret);
 		btrfs_delayed_item_release_metadata(dir->root, item);
 		btrfs_release_delayed_item(item);
 	}
+21-15
fs/btrfs/disk-io.c
···
 	int level = btrfs_super_log_root_level(disk_super);
 
 	if (unlikely(fs_devices->rw_devices == 0)) {
-		btrfs_warn(fs_info, "log replay required on RO media");
+		btrfs_err(fs_info, "log replay required on RO media");
 		return -EIO;
 	}
 
···
 	check.owner_root = BTRFS_TREE_LOG_OBJECTID;
 	log_tree_root->node = read_tree_block(fs_info, bytenr, &check);
 	if (IS_ERR(log_tree_root->node)) {
-		btrfs_warn(fs_info, "failed to read log tree");
 		ret = PTR_ERR(log_tree_root->node);
 		log_tree_root->node = NULL;
+		btrfs_err(fs_info, "failed to read log tree with error: %d", ret);
 		btrfs_put_root(log_tree_root);
 		return ret;
 	}
···
 	/* returns with log_tree_root freed on success */
 	ret = btrfs_recover_log_trees(log_tree_root);
 	btrfs_put_root(log_tree_root);
-	if (ret) {
-		btrfs_handle_fs_error(fs_info, ret,
-				      "Failed to recover log tree");
+	if (unlikely(ret)) {
+		ASSERT(BTRFS_FS_ERROR(fs_info) != 0);
+		btrfs_err(fs_info, "failed to recover log trees with error: %d", ret);
 		return ret;
 	}
 
···
 	task = kthread_run(btrfs_uuid_rescan_kthread, fs_info, "btrfs-uuid");
 	if (IS_ERR(task)) {
 		/* fs_info->update_uuid_tree_gen remains 0 in all error case */
-		btrfs_warn(fs_info, "failed to start uuid_rescan task");
 		up(&fs_info->uuid_tree_rescan_sem);
 		return PTR_ERR(task);
 	}
···
 	if (incompat & ~BTRFS_FEATURE_INCOMPAT_SUPP) {
 		btrfs_err(fs_info,
 		    "cannot mount because of unknown incompat features (0x%llx)",
-		    incompat);
+		    incompat & ~BTRFS_FEATURE_INCOMPAT_SUPP);
 		return -EINVAL;
 	}
 
···
 	if (compat_ro_unsupp && is_rw_mount) {
 		btrfs_err(fs_info,
 	"cannot mount read-write because of unknown compat_ro features (0x%llx)",
-		       compat_ro);
+		       compat_ro_unsupp);
 		return -EINVAL;
 	}
 
···
 	    !btrfs_test_opt(fs_info, NOLOGREPLAY)) {
 		btrfs_err(fs_info,
 "cannot replay dirty log with unsupported compat_ro features (0x%llx), try rescue=nologreplay",
-			  compat_ro);
+			  compat_ro_unsupp);
 		return -EINVAL;
 	}
 
···
 	fs_info->fs_root = btrfs_get_fs_root(fs_info, BTRFS_FS_TREE_OBJECTID, true);
 	if (IS_ERR(fs_info->fs_root)) {
 		ret = PTR_ERR(fs_info->fs_root);
-		btrfs_warn(fs_info, "failed to read fs tree: %d", ret);
+		btrfs_err(fs_info, "failed to read fs tree: %d", ret);
 		fs_info->fs_root = NULL;
 		goto fail_qgroup;
 	}
···
 		btrfs_info(fs_info, "checking UUID tree");
 		ret = btrfs_check_uuid_tree(fs_info);
 		if (ret) {
-			btrfs_warn(fs_info,
-				   "failed to check the UUID tree: %d", ret);
+			btrfs_err(fs_info, "failed to check the UUID tree: %d", ret);
 			close_ctree(fs_info);
 			return ret;
 		}
···
 		 */
 		btrfs_flush_workqueue(fs_info->delayed_workers);
 
-		ret = btrfs_commit_super(fs_info);
-		if (ret)
-			btrfs_err(fs_info, "commit super ret %d", ret);
+		/*
+		 * If the filesystem is shutdown, then an attempt to commit the
+		 * super block (or any write) will just fail. Since we freeze
+		 * the filesystem before shutting it down, the filesystem is in
+		 * a consistent state and we don't need to commit super blocks.
+		 */
+		if (!btrfs_is_shutdown(fs_info)) {
+			ret = btrfs_commit_super(fs_info);
+			if (ret)
+				btrfs_err(fs_info, "commit super block returned %d", ret);
+		}
 	}
 
 	kthread_stop(fs_info->transaction_kthread);
+7-1
fs/btrfs/extent-tree.c
···
 	while (!TRANS_ABORTED(trans) && cached_state) {
 		struct extent_state *next_state;
 
-		if (btrfs_test_opt(fs_info, DISCARD_SYNC))
+		if (btrfs_test_opt(fs_info, DISCARD_SYNC)) {
 			ret = btrfs_discard_extent(fs_info, start,
 						   end + 1 - start, NULL, true);
+			if (ret) {
+				btrfs_warn(fs_info,
+			"discard failed for extent [%llu, %llu]: errno=%d %s",
+					   start, end, ret, btrfs_decode_error(ret));
+			}
+		}
 
 		next_state = btrfs_next_extent_state(unpin, cached_state);
 		btrfs_clear_extent_dirty(unpin, start, end, &cached_state);
+17-2
fs/btrfs/inode.c
···
 	return ret;
 
free_reserved:
+	/*
+	 * If we have reserved an extent for the current range and failed to
+	 * create the respective extent map or ordered extent, it means that
+	 * when we reserved the extent we decremented the extent's size from
+	 * the data space_info's bytes_may_use counter and
+	 * incremented the space_info's bytes_reserved counter by the same
+	 * amount.
+	 *
+	 * We must make sure extent_clear_unlock_delalloc() does not try
+	 * to decrement again the data space_info's bytes_may_use counter, which
+	 * will be handled by btrfs_free_reserved_extent().
+	 *
+	 * Therefore we do not pass it the flag EXTENT_CLEAR_DATA_RESV, but only
+	 * EXTENT_CLEAR_META_RESV.
+	 */
 	extent_clear_unlock_delalloc(inode, file_offset, cur_end, locked_folio, cached,
 				     EXTENT_LOCKED | EXTENT_DELALLOC |
 				     EXTENT_DELALLOC_NEW |
-				     EXTENT_DEFRAG | EXTENT_DO_ACCOUNTING,
+				     EXTENT_DEFRAG | EXTENT_CLEAR_META_RESV,
 				     PAGE_UNLOCK | PAGE_START_WRITEBACK |
 				     PAGE_END_WRITEBACK);
 	btrfs_qgroup_free_data(inode, NULL, file_offset, cur_len, NULL);
···
 		spin_unlock(&dest->root_item_lock);
 		btrfs_warn(fs_info,
 			   "attempt to delete subvolume %llu with active swapfile",
-			   btrfs_root_id(root));
+			   btrfs_root_id(dest));
 		ret = -EPERM;
 		goto out_up_write;
 	}
+6-1
fs/btrfs/ioctl.c
···
 {
 	struct btrfs_inode *inode = BTRFS_I(file_inode(iocb->ki_filp));
 	struct extent_io_tree *io_tree = &inode->io_tree;
-	struct page **pages;
+	struct page **pages = NULL;
 	struct btrfs_uring_priv *priv = NULL;
 	unsigned long nr_pages;
 	int ret;
···
 	btrfs_unlock_extent(io_tree, start, lockend, &cached_state);
 	btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED);
 	kfree(priv);
+	for (int i = 0; i < nr_pages; i++) {
+		if (pages[i])
+			__free_page(pages[i]);
+	}
+	kfree(pages);
 	return ret;
 }
 
···
 	ret = btrfs_remove_dev_extents(trans, chunk_map);
 	if (unlikely(ret)) {
 		btrfs_abort_transaction(trans, ret);
+		btrfs_end_transaction(trans);
 		return ret;
 	}
 
···
 		if (unlikely(ret)) {
 			mutex_unlock(&trans->fs_info->chunk_mutex);
 			btrfs_abort_transaction(trans, ret);
+			btrfs_end_transaction(trans);
 			return ret;
 		}
 	}
···
 	ret = remove_chunk_stripes(trans, chunk_map, path);
 	if (unlikely(ret)) {
 		btrfs_abort_transaction(trans, ret);
+		btrfs_end_transaction(trans);
 		return ret;
 	}
 
···
 		struct btrfs_block_group *dest_bg;
 
 		dest_bg = btrfs_lookup_block_group(fs_info, new_addr);
+		if (unlikely(!dest_bg))
+			return -EUCLEAN;
+
 		adjust_block_group_remap_bytes(trans, dest_bg, -overlap_length);
 		btrfs_put_block_group(dest_bg);
 		ret = btrfs_add_to_free_space_tree(trans,
+1-1
fs/btrfs/scrub.c
···
 		btrfs_warn_rl(fs_info,
 		"scrub: tree block %llu mirror %u has bad fsid, has %pU want %pU",
 			      logical, stripe->mirror_num,
-			      header->fsid, fs_info->fs_devices->fsid);
+			      header->fsid, fs_info->fs_devices->metadata_uuid);
 		return;
 	}
 	if (memcmp(header->chunk_tree_uuid, fs_info->chunk_tree_uuid,
+2-2
fs/btrfs/tree-checker.c
···
 		     objectid > BTRFS_LAST_FREE_OBJECTID)) {
 		extent_err(leaf, slot,
 			   "invalid extent data backref objectid value %llu",
-			   root);
+			   objectid);
 		return -EUCLEAN;
 	}
 	if (unlikely(!IS_ALIGNED(offset, leaf->fs_info->sectorsize))) {
···
 		if (unlikely(prev_key->offset + prev_len > key->offset)) {
 			generic_err(leaf, slot,
 	"dev extent overlap, prev offset %llu len %llu current offset %llu",
-				    prev_key->objectid, prev_len, key->offset);
+				    prev_key->offset, prev_len, key->offset);
 			return -EUCLEAN;
 		}
 	}
+5-3
fs/btrfs/volumes.c
···

 	ret = btrfs_translate_remap(fs_info, &new_logical, length);
 	if (ret)
-		return ret;
+		goto out;

 	if (new_logical != logical) {
 		btrfs_free_chunk_map(map);
···
 	}

 	num_copies = btrfs_chunk_map_num_copies(map);
-	if (io_geom.mirror_num > num_copies)
-		return -EINVAL;
+	if (io_geom.mirror_num > num_copies) {
+		ret = -EINVAL;
+		goto out;
+	}

 	map_offset = logical - map->start;
 	io_geom.raid56_full_stripe_start = (u64)-1;
+12-3
fs/iomap/buffered-io.c
···
 {
 	struct iomap_folio_state *ifs = folio->private;
 	unsigned long flags;
-	bool uptodate = true;
+	bool mark_uptodate = true;

 	if (folio_test_uptodate(folio))
 		return;

 	if (ifs) {
 		spin_lock_irqsave(&ifs->state_lock, flags);
-		uptodate = ifs_set_range_uptodate(folio, ifs, off, len);
+		/*
+		 * If a read with bytes pending is in progress, we must not call
+		 * folio_mark_uptodate(). The read completion path
+		 * (iomap_read_end()) will call folio_end_read(), which uses XOR
+		 * semantics to set the uptodate bit. If we set it here, the XOR
+		 * in folio_end_read() will clear it, leaving the folio not
+		 * uptodate.
+		 */
+		mark_uptodate = ifs_set_range_uptodate(folio, ifs, off, len) &&
+				!ifs->read_bytes_pending;
 		spin_unlock_irqrestore(&ifs->state_lock, flags);
 	}

-	if (uptodate)
+	if (mark_uptodate)
 		folio_mark_uptodate(folio);
 }
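The comment added by this hunk leans on the XOR semantics of `folio_end_read()`. As a standalone illustration (not the kernel's real flag layout — `UPTODATE_BIT`, `LOCKED_BIT`, and `model_folio_end_read()` are hypothetical names for this sketch), the following models why setting the uptodate bit before the read completes ends up clearing it:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the folio flag word; not the kernel's layout. */
#define UPTODATE_BIT	(1u << 0)
#define LOCKED_BIT	(1u << 1)

/*
 * folio_end_read() sets uptodate and clears the lock bit in a single
 * atomic XOR. That only works if the uptodate bit is still clear when
 * the XOR is applied: XOR toggles bits, it does not set them.
 */
static unsigned int model_folio_end_read(unsigned int flags, bool success)
{
	unsigned int mask = LOCKED_BIT;

	if (success)
		mask |= UPTODATE_BIT;
	return flags ^ mask;	/* toggles both bits at once */
}
```

With the bit initially clear, the XOR sets it; with the bit already set (the bug the hunk avoids), the XOR clears it again, leaving the folio not uptodate.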
+14-1
fs/iomap/direct-io.c
···
 	return FSERR_DIRECTIO_READ;
 }

+static inline bool should_report_dio_fserror(const struct iomap_dio *dio)
+{
+	switch (dio->error) {
+	case 0:
+	case -EAGAIN:
+	case -ENOTBLK:
+		/* don't send fsnotify for success or magic retry codes */
+		return false;
+	default:
+		return true;
+	}
+}
+
 ssize_t iomap_dio_complete(struct iomap_dio *dio)
 {
 	const struct iomap_dio_ops *dops = dio->dops;
···
 	if (dops && dops->end_io)
 		ret = dops->end_io(iocb, dio->size, ret, dio->flags);
-	if (dio->error)
+	if (should_report_dio_fserror(dio))
 		fserror_report_io(file_inode(iocb->ki_filp),
 				  iomap_dio_err_type(dio), offset, dio->size,
 				  dio->error, GFP_NOFS);
+7-6
fs/iomap/ioend.c
···
 	WARN_ON_ONCE(!folio->private && map_len < dirty_len);

 	switch (wpc->iomap.type) {
-	case IOMAP_INLINE:
-		WARN_ON_ONCE(1);
-		return -EIO;
+	case IOMAP_UNWRITTEN:
+		ioend_flags |= IOMAP_IOEND_UNWRITTEN;
+		break;
+	case IOMAP_MAPPED:
+		break;
 	case IOMAP_HOLE:
 		return map_len;
 	default:
-		break;
+		WARN_ON_ONCE(1);
+		return -EIO;
 	}

-	if (wpc->iomap.type == IOMAP_UNWRITTEN)
-		ioend_flags |= IOMAP_IOEND_UNWRITTEN;
 	if (wpc->iomap.flags & IOMAP_F_SHARED)
 		ioend_flags |= IOMAP_IOEND_SHARED;
 	if (folio_test_dropbehind(folio))
+212-16
fs/netfs/direct_write.c
···
 #include "internal.h"

 /*
+ * Perform the cleanup rituals after an unbuffered write is complete.
+ */
+static void netfs_unbuffered_write_done(struct netfs_io_request *wreq)
+{
+	struct netfs_inode *ictx = netfs_inode(wreq->inode);
+
+	_enter("R=%x", wreq->debug_id);
+
+	/* Okay, declare that all I/O is complete. */
+	trace_netfs_rreq(wreq, netfs_rreq_trace_write_done);
+
+	if (!wreq->error)
+		netfs_update_i_size(ictx, &ictx->inode, wreq->start, wreq->transferred);
+
+	if (wreq->origin == NETFS_DIO_WRITE &&
+	    wreq->mapping->nrpages) {
+		/* mmap may have got underfoot and we may now have folios
+		 * locally covering the region we just wrote.  Attempt to
+		 * discard the folios, but leave in place any modified locally.
+		 * ->write_iter() is prevented from interfering by the DIO
+		 * counter.
+		 */
+		pgoff_t first = wreq->start >> PAGE_SHIFT;
+		pgoff_t last = (wreq->start + wreq->transferred - 1) >> PAGE_SHIFT;
+
+		invalidate_inode_pages2_range(wreq->mapping, first, last);
+	}
+
+	if (wreq->origin == NETFS_DIO_WRITE)
+		inode_dio_end(wreq->inode);
+
+	_debug("finished");
+	netfs_wake_rreq_flag(wreq, NETFS_RREQ_IN_PROGRESS, netfs_rreq_trace_wake_ip);
+	/* As we cleared NETFS_RREQ_IN_PROGRESS, we acquired its ref. */
+
+	if (wreq->iocb) {
+		size_t written = umin(wreq->transferred, wreq->len);
+
+		wreq->iocb->ki_pos += written;
+		if (wreq->iocb->ki_complete) {
+			trace_netfs_rreq(wreq, netfs_rreq_trace_ki_complete);
+			wreq->iocb->ki_complete(wreq->iocb, wreq->error ?: written);
+		}
+		wreq->iocb = VFS_PTR_POISON;
+	}
+
+	netfs_clear_subrequests(wreq);
+}
+
+/*
+ * Collect the subrequest results of unbuffered write subrequests.
+ */
+static void netfs_unbuffered_write_collect(struct netfs_io_request *wreq,
+					   struct netfs_io_stream *stream,
+					   struct netfs_io_subrequest *subreq)
+{
+	trace_netfs_collect_sreq(wreq, subreq);
+
+	spin_lock(&wreq->lock);
+	list_del_init(&subreq->rreq_link);
+	spin_unlock(&wreq->lock);
+
+	wreq->transferred += subreq->transferred;
+	iov_iter_advance(&wreq->buffer.iter, subreq->transferred);
+
+	stream->collected_to = subreq->start + subreq->transferred;
+	wreq->collected_to = stream->collected_to;
+	netfs_put_subrequest(subreq, netfs_sreq_trace_put_done);
+
+	trace_netfs_collect_stream(wreq, stream);
+	trace_netfs_collect_state(wreq, wreq->collected_to, 0);
+}
+
+/*
+ * Write data to the server without going through the pagecache and without
+ * writing it to the local cache.  We dispatch the subrequests serially and
+ * wait for each to complete before dispatching the next, lest we leave a gap
+ * in the data written due to a failure such as ENOSPC.  We could, however
+ * attempt to do preparation such as content encryption for the next subreq
+ * whilst the current is in progress.
+ */
+static int netfs_unbuffered_write(struct netfs_io_request *wreq)
+{
+	struct netfs_io_subrequest *subreq = NULL;
+	struct netfs_io_stream *stream = &wreq->io_streams[0];
+	int ret;
+
+	_enter("%llx", wreq->len);
+
+	if (wreq->origin == NETFS_DIO_WRITE)
+		inode_dio_begin(wreq->inode);
+
+	stream->collected_to = wreq->start;
+
+	for (;;) {
+		bool retry = false;
+
+		if (!subreq) {
+			netfs_prepare_write(wreq, stream, wreq->start + wreq->transferred);
+			subreq = stream->construct;
+			stream->construct = NULL;
+			stream->front = NULL;
+		}
+
+		/* Check if (re-)preparation failed. */
+		if (unlikely(test_bit(NETFS_SREQ_FAILED, &subreq->flags))) {
+			netfs_write_subrequest_terminated(subreq, subreq->error);
+			wreq->error = subreq->error;
+			break;
+		}
+
+		iov_iter_truncate(&subreq->io_iter, wreq->len - wreq->transferred);
+		if (!iov_iter_count(&subreq->io_iter))
+			break;
+
+		subreq->len = netfs_limit_iter(&subreq->io_iter, 0,
+					       stream->sreq_max_len,
+					       stream->sreq_max_segs);
+		iov_iter_truncate(&subreq->io_iter, subreq->len);
+		stream->submit_extendable_to = subreq->len;
+
+		trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+		stream->issue_write(subreq);
+
+		/* Async, need to wait. */
+		netfs_wait_for_in_progress_stream(wreq, stream);
+
+		if (test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) {
+			retry = true;
+		} else if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) {
+			ret = subreq->error;
+			wreq->error = ret;
+			netfs_see_subrequest(subreq, netfs_sreq_trace_see_failed);
+			subreq = NULL;
+			break;
+		}
+		ret = 0;
+
+		if (!retry) {
+			netfs_unbuffered_write_collect(wreq, stream, subreq);
+			subreq = NULL;
+			if (wreq->transferred >= wreq->len)
+				break;
+			if (!wreq->iocb && signal_pending(current)) {
+				ret = wreq->transferred ? -EINTR : -ERESTARTSYS;
+				trace_netfs_rreq(wreq, netfs_rreq_trace_intr);
+				break;
+			}
+			continue;
+		}
+
+		/* We need to retry the last subrequest, so first reset the
+		 * iterator, taking into account what, if anything, we managed
+		 * to transfer.
+		 */
+		subreq->error = -EAGAIN;
+		trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
+		if (subreq->transferred > 0)
+			iov_iter_advance(&wreq->buffer.iter, subreq->transferred);
+
+		if (stream->source == NETFS_UPLOAD_TO_SERVER &&
+		    wreq->netfs_ops->retry_request)
+			wreq->netfs_ops->retry_request(wreq, stream);
+
+		__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
+		__clear_bit(NETFS_SREQ_BOUNDARY, &subreq->flags);
+		__clear_bit(NETFS_SREQ_FAILED, &subreq->flags);
+		subreq->io_iter = wreq->buffer.iter;
+		subreq->start = wreq->start + wreq->transferred;
+		subreq->len = wreq->len - wreq->transferred;
+		subreq->transferred = 0;
+		subreq->retry_count += 1;
+		stream->sreq_max_len = UINT_MAX;
+		stream->sreq_max_segs = INT_MAX;
+
+		netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
+		stream->prepare_write(subreq);
+
+		__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
+		netfs_stat(&netfs_n_wh_retry_write_subreq);
+	}
+
+	netfs_unbuffered_write_done(wreq);
+	_leave(" = %d", ret);
+	return ret;
+}
+
+static void netfs_unbuffered_write_async(struct work_struct *work)
+{
+	struct netfs_io_request *wreq = container_of(work, struct netfs_io_request, work);
+
+	netfs_unbuffered_write(wreq);
+	netfs_put_request(wreq, netfs_rreq_trace_put_complete);
+}
+
+/*
  * Perform an unbuffered write where we may have to do an RMW operation on an
  * encrypted file. This can also be used for direct I/O writes.
  */
···
 		 */
 		wreq->buffer.iter = *iter;
 	}
+
+	wreq->len = iov_iter_count(&wreq->buffer.iter);
 	}

 	__set_bit(NETFS_RREQ_USE_IO_ITER, &wreq->flags);
-	if (async)
-		__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);

 	/* Copy the data into the bounce buffer and encrypt it. */
 	// TODO

 	/* Dispatch the write. */
 	__set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags);
-	if (async)
+
+	if (async) {
+		INIT_WORK(&wreq->work, netfs_unbuffered_write_async);
 		wreq->iocb = iocb;
-	wreq->len = iov_iter_count(&wreq->buffer.iter);
-	ret = netfs_unbuffered_write(wreq, is_sync_kiocb(iocb), wreq->len);
-	if (ret < 0) {
-		_debug("begin = %zd", ret);
-		goto out;
-	}
-
-	if (!async) {
-		ret = netfs_wait_for_write(wreq);
-		if (ret > 0)
-			iocb->ki_pos += ret;
-	} else {
+		queue_work(system_dfl_wq, &wreq->work);
 		ret = -EIOCBQUEUED;
+	} else {
+		ret = netfs_unbuffered_write(wreq);
+		if (ret < 0) {
+			_debug("begin = %zd", ret);
+		} else {
+			iocb->ki_pos += wreq->transferred;
+			ret = wreq->transferred ?: wreq->error;
+		}
+
+		netfs_put_request(wreq, netfs_rreq_trace_put_complete);
 	}

-out:
 	netfs_put_request(wreq, netfs_rreq_trace_put_return);
 	return ret;
···
 		ictx->ops->invalidate_cache(wreq);
 	}

-	if ((wreq->origin == NETFS_UNBUFFERED_WRITE ||
-	     wreq->origin == NETFS_DIO_WRITE) &&
-	    !wreq->error)
-		netfs_update_i_size(ictx, &ictx->inode, wreq->start, wreq->transferred);
-
-	if (wreq->origin == NETFS_DIO_WRITE &&
-	    wreq->mapping->nrpages) {
-		/* mmap may have got underfoot and we may now have folios
-		 * locally covering the region we just wrote.  Attempt to
-		 * discard the folios, but leave in place any modified locally.
-		 * ->write_iter() is prevented from interfering by the DIO
-		 * counter.
-		 */
-		pgoff_t first = wreq->start >> PAGE_SHIFT;
-		pgoff_t last = (wreq->start + wreq->transferred - 1) >> PAGE_SHIFT;
-		invalidate_inode_pages2_range(wreq->mapping, first, last);
-	}
-
-	if (wreq->origin == NETFS_DIO_WRITE)
-		inode_dio_end(wreq->inode);
-
 	_debug("finished");
 	netfs_wake_rreq_flag(wreq, NETFS_RREQ_IN_PROGRESS, netfs_rreq_trace_wake_ip);
 	/* As we cleared NETFS_RREQ_IN_PROGRESS, we acquired its ref. */
+3-38
fs/netfs/write_issue.c
···
  * Prepare a write subrequest. We need to allocate a new subrequest
  * if we don't have one.
  */
-static void netfs_prepare_write(struct netfs_io_request *wreq,
-				struct netfs_io_stream *stream,
-				loff_t start)
+void netfs_prepare_write(struct netfs_io_request *wreq,
+			 struct netfs_io_stream *stream,
+			 loff_t start)
 {
 	struct netfs_io_subrequest *subreq;
 	struct iov_iter *wreq_iter = &wreq->buffer.iter;
···
 	ret = netfs_wait_for_write(wreq);
 	netfs_put_request(wreq, netfs_rreq_trace_put_return);
 	return ret;
-}
-
-/*
- * Write data to the server without going through the pagecache and without
- * writing it to the local cache.
- */
-int netfs_unbuffered_write(struct netfs_io_request *wreq, bool may_wait, size_t len)
-{
-	struct netfs_io_stream *upload = &wreq->io_streams[0];
-	ssize_t part;
-	loff_t start = wreq->start;
-	int error = 0;
-
-	_enter("%zx", len);
-
-	if (wreq->origin == NETFS_DIO_WRITE)
-		inode_dio_begin(wreq->inode);
-
-	while (len) {
-		// TODO: Prepare content encryption
-
-		_debug("unbuffered %zx", len);
-		part = netfs_advance_write(wreq, upload, start, len, false);
-		start += part;
-		len -= part;
-		rolling_buffer_advance(&wreq->buffer, part);
-		if (test_bit(NETFS_RREQ_PAUSE, &wreq->flags))
-			netfs_wait_for_paused_write(wreq);
-		if (test_bit(NETFS_RREQ_FAILED, &wreq->flags))
-			break;
-	}
-
-	netfs_end_issue_write(wreq);
-	_leave(" = %d", error);
-	return error;
 }

 /*
+11-11
fs/nfsd/nfsctl.c
···
 }

 /*
- * write_threads - Start NFSD, or report the current number of running threads
+ * write_threads - Start NFSD, or report the configured number of threads
  *
  * Input:
  *			buf:		ignored
  *			size:		zero
  * Output:
  *	On success:	passed-in buffer filled with '\n'-terminated C
- *			string numeric value representing the number of
- *			running NFSD threads;
+ *			string numeric value representing the configured
+ *			number of NFSD threads;
  *			return code is the size in bytes of the string
  *	On error:	return code is zero
  *
···
  * Output:
  *	On success:	NFS service is started;
  *			passed-in buffer filled with '\n'-terminated C
- *			string numeric value representing the number of
- *			running NFSD threads;
+ *			string numeric value representing the configured
+ *			number of NFSD threads;
  *			return code is the size in bytes of the string
  *	On error:	return code is zero or a negative errno value
  */
···
 }

 /*
- * write_pool_threads - Set or report the current number of threads per pool
+ * write_pool_threads - Set or report the configured number of threads per pool
  *
  * Input:
  *			buf:		ignored
···
  * Output:
  *	On success:	passed-in buffer filled with '\n'-terminated C
  *			string containing integer values representing the
- *			number of NFSD threads in each pool;
+ *			configured number of NFSD threads in each pool;
  *			return code is the size in bytes of the string
  *	On error:	return code is zero or a negative errno value
  */
···
 	if (attr)
 		nn->min_threads = nla_get_u32(attr);

-	ret = nfsd_svc(nrpools, nthreads, net, get_current_cred(), scope);
+	ret = nfsd_svc(nrpools, nthreads, net, current_cred(), scope);
 	if (ret > 0)
 		ret = 0;
 out_unlock:
···
 }

 /**
- * nfsd_nl_threads_get_doit - get the number of running threads
+ * nfsd_nl_threads_get_doit - get the maximum number of running threads
  * @skb: reply buffer
  * @info: netlink metadata and command arguments
  *
···
 		struct svc_pool *sp = &nn->nfsd_serv->sv_pools[i];

 		err = nla_put_u32(skb, NFSD_A_SERVER_THREADS,
-				  sp->sp_nrthreads);
+				  sp->sp_nrthrmax);
 		if (err)
 			goto err_unlock;
 	}
···
 	}

 	ret = svc_xprt_create_from_sa(serv, xcl_name, net, sa, 0,
-				      get_current_cred());
+				      current_cred());
 	/* always save the latest error */
 	if (ret < 0)
 		err = ret;
+4-3
fs/nfsd/nfssvc.c
···

 int nfsd_nrthreads(struct net *net)
 {
-	int rv = 0;
+	int i, rv = 0;
 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);

 	mutex_lock(&nfsd_mutex);
 	if (nn->nfsd_serv)
-		rv = nn->nfsd_serv->sv_nrthreads;
+		for (i = 0; i < nn->nfsd_serv->sv_nrpools; ++i)
+			rv += nn->nfsd_serv->sv_pools[i].sp_nrthrmax;
 	mutex_unlock(&nfsd_mutex);
 	return rv;
 }
···

 	if (serv)
 		for (i = 0; i < serv->sv_nrpools && i < n; i++)
-			nthreads[i] = serv->sv_pools[i].sp_nrthreads;
+			nthreads[i] = serv->sv_pools[i].sp_nrthrmax;
 	return 0;
 }
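The nfssvc.c hunk replaces a single server-wide counter with a sum of per-pool maxima. As a standalone sketch of that accumulation (the `svc_pool`/`svc_serv` structs here are cut-down stand-ins, not the kernel definitions), the summation looks like:

```c
#include <assert.h>

/* Minimal stand-ins for svc_pool/svc_serv; field names follow the patch. */
struct svc_pool { int sp_nrthrmax; };
struct svc_serv {
	int sv_nrpools;
	struct svc_pool sv_pools[4];
};

/* Total configured threads is now the sum of the per-pool maxima. */
static int total_threads(const struct svc_serv *serv)
{
	int i, rv = 0;

	for (i = 0; i < serv->sv_nrpools; i++)
		rv += serv->sv_pools[i].sp_nrthrmax;
	return rv;
}
```

With three pools configured for 4, 2, and 2 threads, the reported total is 8, regardless of how many threads happen to be running at the moment.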
+14-1
fs/nsfs.c
···
 	return false;
 }

+static bool may_use_nsfs_ioctl(unsigned int cmd)
+{
+	switch (_IOC_NR(cmd)) {
+	case _IOC_NR(NS_MNT_GET_NEXT):
+		fallthrough;
+	case _IOC_NR(NS_MNT_GET_PREV):
+		return may_see_all_namespaces();
+	}
+	return true;
+}
+
 static long ns_ioctl(struct file *filp, unsigned int ioctl,
 		     unsigned long arg)
 {
···

 	if (!nsfs_ioctl_valid(ioctl))
 		return -ENOIOCTLCMD;
+	if (!may_use_nsfs_ioctl(ioctl))
+		return -EPERM;

 	ns = get_proc_ns(file_inode(filp));
 	switch (ioctl) {
···
 		return ERR_PTR(-EOPNOTSUPP);
 	}

-	if (owning_ns && !ns_capable(owning_ns, CAP_SYS_ADMIN)) {
+	if (owning_ns && !may_see_all_namespaces()) {
 		ns->ops->put(ns);
 		return ERR_PTR(-EPERM);
 	}
···

 	/*
 	 * We need to release all dentries for the cached directories
-	 * before we kill the sb.
+	 * and close all deferred file handles before we kill the sb.
 	 */
 	if (cifs_sb->root) {
 		close_all_cached_dirs(cifs_sb);
+		cifs_close_all_deferred_files_sb(cifs_sb);
+
+		/* Wait for all pending oplock breaks to complete */
+		flush_workqueue(cifsoplockd_wq);

 		/* finally release root dentry */
 		dput(cifs_sb->root);
···
 	spin_unlock(&tcon->tc_lock);
 	spin_unlock(&cifs_tcp_ses_lock);

-	cifs_close_all_deferred_files(tcon);
 	/* cancel_brl_requests(tcon); */ /* BB mark all brl mids as exiting */
 	/* cancel_notify_requests(tcon); */
 	if (tcon->ses && tcon->ses->server) {
···
 	mutex_init(&cfile->fh_mutex);
 	spin_lock_init(&cfile->file_info_lock);

-	cifs_sb_active(inode->i_sb);
-
 	/*
 	 * If the server returned a read oplock and we have mandatory brlocks,
 	 * set oplock level to None.
···
 	struct inode *inode = d_inode(cifs_file->dentry);
 	struct cifsInodeInfo *cifsi = CIFS_I(inode);
 	struct cifsLockInfo *li, *tmp;
-	struct super_block *sb = inode->i_sb;

 	/*
 	 * Delete any outstanding lock records. We'll lose them when the file
···

 	cifs_put_tlink(cifs_file->tlink);
 	dput(cifs_file->dentry);
-	cifs_sb_deactive(sb);
 	kfree(cifs_file->symlink_target);
 	kfree(cifs_file);
 }
···
 	__u64 persistent_fid, volatile_fid;
 	__u16 net_fid;

-	/*
-	 * Hold a reference to the superblock to prevent it and its inodes from
-	 * being freed while we are accessing cinode. Otherwise, _cifsFileInfo_put()
-	 * may release the last reference to the sb and trigger inode eviction.
-	 */
-	cifs_sb_active(sb);
 	wait_on_bit(&cinode->flags, CIFS_INODE_PENDING_WRITERS,
 		    TASK_UNINTERRUPTIBLE);
···
 	cifs_put_tlink(tlink);
 out:
 	cifs_done_oplock_break(cinode);
-	cifs_sb_deactive(sb);
 }

 static int cifs_swap_activate(struct swap_info_struct *sis,
+42
fs/smb/client/misc.c
···
 #include "fs_context.h"
 #include "cached_dir.h"

+struct tcon_list {
+	struct list_head entry;
+	struct cifs_tcon *tcon;
+};
+
 /* The xid serves as a useful identifier for each incoming vfs request,
    in a similar way to the mid which is useful to track each sent smb,
    and CurrentXid can also provide a running counter (although it
···
 	list_for_each_entry_safe(tmp_list, tmp_next_list, &file_head, list) {
 		_cifsFileInfo_put(tmp_list->cfile, true, false);
 		list_del(&tmp_list->list);
+		kfree(tmp_list);
+	}
+}
+
+void cifs_close_all_deferred_files_sb(struct cifs_sb_info *cifs_sb)
+{
+	struct rb_root *root = &cifs_sb->tlink_tree;
+	struct rb_node *node;
+	struct cifs_tcon *tcon;
+	struct tcon_link *tlink;
+	struct tcon_list *tmp_list, *q;
+	LIST_HEAD(tcon_head);
+
+	spin_lock(&cifs_sb->tlink_tree_lock);
+	for (node = rb_first(root); node; node = rb_next(node)) {
+		tlink = rb_entry(node, struct tcon_link, tl_rbnode);
+		tcon = tlink_tcon(tlink);
+		if (IS_ERR(tcon))
+			continue;
+		tmp_list = kmalloc_obj(struct tcon_list, GFP_ATOMIC);
+		if (tmp_list == NULL)
+			break;
+		tmp_list->tcon = tcon;
+		/* Take a reference on tcon to prevent it from being freed */
+		spin_lock(&tcon->tc_lock);
+		++tcon->tc_count;
+		trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count,
+				    netfs_trace_tcon_ref_get_close_defer_files);
+		spin_unlock(&tcon->tc_lock);
+		list_add_tail(&tmp_list->entry, &tcon_head);
+	}
+	spin_unlock(&cifs_sb->tlink_tree_lock);
+
+	list_for_each_entry_safe(tmp_list, q, &tcon_head, entry) {
+		cifs_close_all_deferred_files(tmp_list->tcon);
+		list_del(&tmp_list->entry);
+		cifs_put_tcon(tmp_list->tcon, netfs_trace_tcon_ref_put_close_defer_files);
 		kfree(tmp_list);
 	}
 }
+2-1
fs/smb/client/smb1encrypt.c
···

 #include <linux/fips.h>
 #include <crypto/md5.h>
+#include <crypto/utils.h>
 #include "cifsproto.h"
 #include "smb1proto.h"
 #include "cifs_debug.h"
···
 /*	cifs_dump_mem("what we think it should be: ",
 		      what_we_think_sig_should_be, 16); */

-	if (memcmp(server_response_sig, what_we_think_sig_should_be, 8))
+	if (crypto_memneq(server_response_sig, what_we_think_sig_should_be, 8))
 		return -EACCES;
 	else
 		return 0;
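The point of switching from `memcmp()` to `crypto_memneq()` is that `memcmp()` may return as soon as it finds the first differing byte, so its run time can leak how much of a signature matched. A toy constant-time comparison in the same spirit (the `ct_memneq()` name is ours, not the kernel's implementation) ORs all byte differences together instead of exiting early:

```c
#include <stddef.h>

/*
 * Toy constant-time comparison in the spirit of crypto_memneq(): OR the
 * byte-wise differences together instead of returning at the first
 * mismatch, so run time does not depend on how many leading bytes match.
 */
static unsigned long ct_memneq(const void *a, const void *b, size_t len)
{
	const unsigned char *pa = a, *pb = b;
	unsigned long neq = 0;

	while (len--)
		neq |= *pa++ ^ *pb++;
	return neq;	/* 0 when equal, non-zero otherwise */
}
```

Note the return convention matches `crypto_memneq()`: zero means equal, so the `if (crypto_memneq(...))` test above fires on mismatch just like the old `memcmp()` test did.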
···
 	} Data;
 } __packed;

-/* equivalent of the contents of SMB3.1.1 POSIX open context response */
+/*
+ * See POSIX-SMB2 2.2.14.2.16
+ * Link: https://gitlab.com/samba-team/smb3-posix-spec/-/blob/master/smb3_posix_extensions.md
+ */
 struct create_posix_rsp {
 	struct create_context_hdr ccontext;
 	__u8	Name[16];
+3
fs/verity/Kconfig
···

 config FS_VERITY
 	bool "FS Verity (read-only file-based authenticity protection)"
+	# Filesystems cache the Merkle tree at a 64K aligned offset in the
+	# pagecache.  That approach assumes the page size is at most 64K.
+	depends on PAGE_SHIFT <= 16
 	select CRYPTO_HASH_INFO
 	select CRYPTO_LIB_SHA256
 	select CRYPTO_LIB_SHA512
···
 #include <linux/hrtimer.h>
 #include <linux/workqueue.h>

-#define KUNIT_IRQ_TEST_HRTIMER_INTERVAL us_to_ktime(5)
-
 struct kunit_irq_test_state {
 	bool (*func)(void *test_specific_state);
 	void *test_specific_state;
 	bool task_func_reported_failure;
 	bool hardirq_func_reported_failure;
 	bool softirq_func_reported_failure;
+	atomic_t task_func_calls;
 	atomic_t hardirq_func_calls;
 	atomic_t softirq_func_calls;
+	ktime_t interval;
 	struct hrtimer timer;
 	struct work_struct bh_work;
 };
···
 {
 	struct kunit_irq_test_state *state =
 		container_of(timer, typeof(*state), timer);
+	int task_calls, hardirq_calls, softirq_calls;

 	WARN_ON_ONCE(!in_hardirq());
-	atomic_inc(&state->hardirq_func_calls);
+	task_calls = atomic_read(&state->task_func_calls);
+	hardirq_calls = atomic_inc_return(&state->hardirq_func_calls);
+	softirq_calls = atomic_read(&state->softirq_func_calls);
+
+	/*
+	 * If the timer is firing too often for the softirq or task to ever have
+	 * a chance to run, increase the timer interval.  This is needed on very
+	 * slow systems.
+	 */
+	if (hardirq_calls >= 20 && (softirq_calls == 0 || task_calls == 0))
+		state->interval = ktime_add_ns(state->interval, 250);

 	if (!state->func(state->test_specific_state))
 		state->hardirq_func_reported_failure = true;

-	hrtimer_forward_now(&state->timer, KUNIT_IRQ_TEST_HRTIMER_INTERVAL);
+	hrtimer_forward_now(&state->timer, state->interval);
 	queue_work(system_bh_wq, &state->bh_work);
 	return HRTIMER_RESTART;
 }
···
 	struct kunit_irq_test_state state = {
 		.func = func,
 		.test_specific_state = test_specific_state,
+		/*
+		 * Start with a 5us timer interval.  If the system can't keep
+		 * up, kunit_irq_test_timer_func() will increase it.
+		 */
+		.interval = us_to_ktime(5),
 	};
 	unsigned long end_jiffies;
-	int hardirq_calls, softirq_calls;
-	bool allctx = false;
+	int task_calls, hardirq_calls, softirq_calls;

 	/*
 	 * Set up a hrtimer (the way we access hardirq context) and a work
···
 	 * and hardirq), or 1 second, whichever comes first.
 	 */
 	end_jiffies = jiffies + HZ;
-	hrtimer_start(&state.timer, KUNIT_IRQ_TEST_HRTIMER_INTERVAL,
-		      HRTIMER_MODE_REL_HARD);
-	for (int task_calls = 0, calls = 0;
-	     ((calls < max_iterations) || !allctx) &&
-	     !time_after(jiffies, end_jiffies);
-	     task_calls++) {
+	hrtimer_start(&state.timer, state.interval, HRTIMER_MODE_REL_HARD);
+	do {
 		if (!func(test_specific_state))
 			state.task_func_reported_failure = true;

+		task_calls = atomic_inc_return(&state.task_func_calls);
 		hardirq_calls = atomic_read(&state.hardirq_func_calls);
 		softirq_calls = atomic_read(&state.softirq_func_calls);
-		calls = task_calls + hardirq_calls + softirq_calls;
-		allctx = (task_calls > 0) && (hardirq_calls > 0) &&
-			 (softirq_calls > 0);
-	}
+	} while ((task_calls + hardirq_calls + softirq_calls < max_iterations ||
+		  (task_calls == 0 || hardirq_calls == 0 ||
+		   softirq_calls == 0)) &&
+		 !time_after(jiffies, end_jiffies));

 	/* Cancel the timer and work. */
 	hrtimer_cancel(&state.timer);
+2
include/linux/device/bus.h
···
  *		otherwise. It may also return error code if determining that
  *		the driver supports the device is not possible. In case of
  *		-EPROBE_DEFER it will queue the device for deferred probing.
+ *		Note: This callback may be invoked with or without the device
+ *		lock held.
  * @uevent:	Called when a device is added, removed, or a few other things
  *		that generate uevents to add the environment variables.
  * @probe:	Called when a new device or driver add to this bus, and callback
···
  * raw_event and event should return negative on error, any other value will
  * pass the event on to .event() typically return 0 for success.
  *
+ * report_fixup must return a report descriptor pointer whose lifetime is at
+ * least that of the input rdesc. This is usually done by mutating the input
+ * rdesc and returning it or a sub-portion of it. In case a new buffer is
+ * allocated and returned, the implementation of report_fixup is responsible for
+ * freeing it later.
+ *
  * input_mapping shall return a negative value to completely ignore this usage
  * (e.g. doubled or invalid usage), zero to continue with parsing of this
  * usage by generic code (no special handling needed) or positive to skip
···
 struct mm_struct;

+/* opaque kthread data */
+struct kthread;
+
+/*
+ * When "(p->flags & PF_KTHREAD)" is set the task is a kthread and will
+ * always remain a kthread.  For kthreads p->worker_private always
+ * points to a struct kthread.  For tasks that are not kthreads
+ * p->worker_private is used to point to other things.
+ *
+ * Return NULL for any task that is not a kthread.
+ */
+static inline struct kthread *tsk_is_kthread(struct task_struct *p)
+{
+	if (p->flags & PF_KTHREAD)
+		return p->worker_private;
+	return NULL;
+}
+
 __printf(4, 5)
 struct task_struct *kthread_create_on_node(int (*threadfn)(void *data),
 					   void *data,
···
 int kthread_park(struct task_struct *k);
 void kthread_unpark(struct task_struct *k);
 void kthread_parkme(void);
-void kthread_exit(long result) __noreturn;
+#define kthread_exit(result) do_exit(result)
 void kthread_complete_and_exit(struct completion *, long) __noreturn;
 int kthreads_update_housekeeping(void);
+void kthread_do_exit(struct kthread *, long);

 int kthreadd(void *unused);
 extern struct task_struct *kthreadd_task;
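The `tsk_is_kthread()` helper above illustrates a common kernel pattern: a flag bit acts as the type tag for an otherwise opaque `void *` field. A self-contained userspace sketch of the same pattern (all names here — `struct task`, `PF_KTHREAD_FLAG`, `task_kthread()` — are hypothetical stand-ins, not kernel definitions):

```c
#include <stddef.h>

#define PF_KTHREAD_FLAG 0x00200000	/* stand-in for PF_KTHREAD */

struct kthread_data { int dummy; };	/* stand-in for struct kthread */

/* Simplified task: worker_private only holds kthread data when flagged. */
struct task {
	unsigned int flags;
	void *worker_private;
};

/* The flag, not the pointer, decides whether the cast is valid. */
static struct kthread_data *task_kthread(struct task *p)
{
	if (p->flags & PF_KTHREAD_FLAG)
		return p->worker_private;
	return NULL;	/* not a kthread: worker_private means something else */
}
```

The accessor returns NULL for non-kthreads even when `worker_private` is non-NULL, which is exactly why callers must go through the helper rather than reading the field directly.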
+22-5
include/linux/netdevice.h
···
 static inline void __netif_tx_lock(struct netdev_queue *txq, int cpu)
 {
 	spin_lock(&txq->_xmit_lock);
-	/* Pairs with READ_ONCE() in __dev_queue_xmit() */
+	/* Pairs with READ_ONCE() in netif_tx_owned() */
 	WRITE_ONCE(txq->xmit_lock_owner, cpu);
 }

···
 static inline void __netif_tx_lock_bh(struct netdev_queue *txq)
 {
 	spin_lock_bh(&txq->_xmit_lock);
-	/* Pairs with READ_ONCE() in __dev_queue_xmit() */
+	/* Pairs with READ_ONCE() in netif_tx_owned() */
 	WRITE_ONCE(txq->xmit_lock_owner, smp_processor_id());
 }

···
 	bool ok = spin_trylock(&txq->_xmit_lock);

 	if (likely(ok)) {
-		/* Pairs with READ_ONCE() in __dev_queue_xmit() */
+		/* Pairs with READ_ONCE() in netif_tx_owned() */
 		WRITE_ONCE(txq->xmit_lock_owner, smp_processor_id());
 	}
 	return ok;
···

 static inline void __netif_tx_unlock(struct netdev_queue *txq)
 {
-	/* Pairs with READ_ONCE() in __dev_queue_xmit() */
+	/* Pairs with READ_ONCE() in netif_tx_owned() */
 	WRITE_ONCE(txq->xmit_lock_owner, -1);
 	spin_unlock(&txq->_xmit_lock);
 }

 static inline void __netif_tx_unlock_bh(struct netdev_queue *txq)
 {
-	/* Pairs with READ_ONCE() in __dev_queue_xmit() */
+	/* Pairs with READ_ONCE() in netif_tx_owned() */
 	WRITE_ONCE(txq->xmit_lock_owner, -1);
 	spin_unlock_bh(&txq->_xmit_lock);
 }
···
 	spin_unlock(&dev->tx_global_lock);
 	local_bh_enable();
 }

+#ifndef CONFIG_PREEMPT_RT
+static inline bool netif_tx_owned(struct netdev_queue *txq, unsigned int cpu)
+{
+	/* Other cpus might concurrently change txq->xmit_lock_owner
+	 * to -1 or to their cpu id, but not to our id.
+	 */
+	return READ_ONCE(txq->xmit_lock_owner) == cpu;
+}
+
+#else
+static inline bool netif_tx_owned(struct netdev_queue *txq, unsigned int cpu)
+{
+	return rt_mutex_owner(&txq->_xmit_lock.lock) == current;
+}
+
+#endif

 static inline void netif_addr_lock(struct net_device *dev)
 {
···
 /**
  * enum mlxreg_wdt_type - type of HW watchdog
  *
- * TYPE1 HW watchdog implementation exist in old systems.
- * All new systems have TYPE2 HW watchdog.
- * TYPE3 HW watchdog can exist on all systems with new CPLD.
- * TYPE3 is selected by WD capability bit.
+ * @MLX_WDT_TYPE1: HW watchdog implementation in old systems.
+ * @MLX_WDT_TYPE2: All new systems have TYPE2 HW watchdog.
+ * @MLX_WDT_TYPE3: HW watchdog that can exist on all systems with new CPLD.
+ *		   TYPE3 is selected by WD capability bit.
  */
 enum mlxreg_wdt_type {
 	MLX_WDT_TYPE1,
···
  * @MLXREG_HOTPLUG_LC_SYNCED: entry for line card synchronization events, coming
  *			      after hardware-firmware synchronization handshake;
  * @MLXREG_HOTPLUG_LC_READY: entry for line card ready events, indicating line card
-			     PHYs ready / unready state;
+ *			     PHYs ready / unready state;
  * @MLXREG_HOTPLUG_LC_ACTIVE: entry for line card active events, indicating firmware
  *			      availability / unavailability for the ports on line card;
  * @MLXREG_HOTPLUG_LC_THERMAL: entry for line card thermal shutdown events, positive
···
  * @reg_pwr: attribute power register;
  * @reg_ena: attribute enable register;
  * @mode: access mode;
- * @np - pointer to node platform associated with attribute;
- * @hpdev - hotplug device data;
+ * @np: pointer to node platform associated with attribute;
+ * @hpdev: hotplug device data;
  * @notifier: pointer to event notifier block;
  * @health_cntr: dynamic device health indication counter;
  * @attached: true if device has been attached after good health indication;
+3-2
include/linux/platform_data/x86/int3472.h
···
 #define INT3472_GPIO_TYPE_POWER_ENABLE          0x0b
 #define INT3472_GPIO_TYPE_CLK_ENABLE            0x0c
 #define INT3472_GPIO_TYPE_PRIVACY_LED           0x0d
+#define INT3472_GPIO_TYPE_DOVDD                 0x10
 #define INT3472_GPIO_TYPE_HANDSHAKE             0x12
 #define INT3472_GPIO_TYPE_HOTPLUG_DETECT        0x13
···
 #define INT3472_MAX_SENSOR_GPIOS                3
 #define INT3472_MAX_REGULATORS                  3

-/* E.g. "avdd\0" */
-#define GPIO_SUPPLY_NAME_LENGTH                 5
+/* E.g. "dovdd\0" */
+#define GPIO_SUPPLY_NAME_LENGTH                 6
 /* 12 chars for acpi_dev_name() + "-", e.g. "ABCD1234:00-" */
 #define GPIO_REGULATOR_NAME_LENGTH              (12 + GPIO_SUPPLY_NAME_LENGTH)
 /* lower- and upper-case mapping */
+1
include/linux/ring_buffer.h
···

 int ring_buffer_map(struct trace_buffer *buffer, int cpu,
                     struct vm_area_struct *vma);
+void ring_buffer_map_dup(struct trace_buffer *buffer, int cpu);
 int ring_buffer_unmap(struct trace_buffer *buffer, int cpu);
 int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu);
 #endif /* _LINUX_RING_BUFFER_H */
+20-34
include/linux/uaccess.h
···
 /* Define RW variant so the below _mode macro expansion works */
 #define masked_user_rw_access_begin(u)  masked_user_access_begin(u)
 #define user_rw_access_begin(u, s)      user_access_begin(u, s)
-#define user_rw_access_end()            user_access_end()

 /* Scoped user access */
-#define USER_ACCESS_GUARD(_mode)                                \
-static __always_inline void __user *                            \
-class_user_##_mode##_begin(void __user *ptr)                    \
-{                                                               \
-        return ptr;                                             \
-}                                                               \
-                                                                \
-static __always_inline void                                     \
-class_user_##_mode##_end(void __user *ptr)                      \
-{                                                               \
-        user_##_mode##_access_end();                            \
-}                                                               \
-                                                                \
-DEFINE_CLASS(user_ ##_mode## _access, void __user *,            \
-             class_user_##_mode##_end(_T),                      \
-             class_user_##_mode##_begin(ptr), void __user *ptr) \
-                                                                \
-static __always_inline class_user_##_mode##_access_t            \
-class_user_##_mode##_access_ptr(void __user *scope)             \
-{                                                               \
-        return scope;                                           \
-}

-USER_ACCESS_GUARD(read)
-USER_ACCESS_GUARD(write)
-USER_ACCESS_GUARD(rw)
-#undef USER_ACCESS_GUARD
+/* Cleanup wrapper functions */
+static __always_inline void __scoped_user_read_access_end(const void *p)
+{
+        user_read_access_end();
+};
+static __always_inline void __scoped_user_write_access_end(const void *p)
+{
+        user_write_access_end();
+};
+static __always_inline void __scoped_user_rw_access_end(const void *p)
+{
+        user_access_end();
+};

···
 * __scoped_user_access_begin - Start a scoped user access
···
 *
 * Don't use directly. Use scoped_masked_user_$MODE_access() instead.
 */
-#define __scoped_user_access(mode, uptr, size, elbl)                            \
-for (bool done = false; !done; done = true)                                     \
-        for (void __user *_tmpptr = __scoped_user_access_begin(mode, uptr, size, elbl); \
-             !done; done = true)                                                \
-                for (CLASS(user_##mode##_access, scope)(_tmpptr); !done; done = true) \
-                        /* Force modified pointer usage within the scope */     \
-                        for (const typeof(uptr) uptr = _tmpptr; !done; done = true)
+#define __scoped_user_access(mode, uptr, size, elbl)                            \
+for (bool done = false; !done; done = true)                                     \
+        for (auto _tmpptr = __scoped_user_access_begin(mode, uptr, size, elbl); \
+             !done; done = true)                                                \
+                /* Force modified pointer usage within the scope */             \
+                for (const auto uptr __cleanup(__scoped_user_##mode##_access_end) = \
+                     _tmpptr; !done; done = true)

 /**
  * scoped_user_read_access_size - Start a scoped user read access with given size
···
  * @pending: current number of XSkFQEs to refill
  * @thresh: threshold below which the queue is refilled
  * @buf_len: HW-writeable length per each buffer
+ * @truesize: step between consecutive buffers, 0 if none exists
  * @nid: ID of the closest NUMA node with memory
  */
 struct libeth_xskfq {
···
         u32                     thresh;

         u32                     buf_len;
+        u32                     truesize;
+
         int                     nid;
 };
+7
include/net/netfilter/nf_tables.h
···
  * @NFT_ITER_UNSPEC: unspecified, to catch errors
  * @NFT_ITER_READ: read-only iteration over set elements
  * @NFT_ITER_UPDATE: iteration under mutex to update set element state
+ * @NFT_ITER_UPDATE_CLONE: clone set before iteration under mutex to update element
  */
 enum nft_iter_type {
         NFT_ITER_UNSPEC,
         NFT_ITER_READ,
         NFT_ITER_UPDATE,
+        NFT_ITER_UPDATE_CLONE,
 };

 struct nft_set;
···
         struct nft_elem_priv    *priv[NFT_TRANS_GC_BATCHCOUNT];
         struct rcu_head         rcu;
 };
+
+static inline int nft_trans_gc_space(const struct nft_trans_gc *trans)
+{
+        return NFT_TRANS_GC_BATCHCOUNT - trans->count;
+}

 static inline void nft_ctx_update(struct nft_ctx *ctx,
                                   const struct nft_trans *trans)
+10
include/net/sch_generic.h
···
 static inline void qdisc_reset_all_tx_gt(struct net_device *dev, unsigned int i)
 {
         struct Qdisc *qdisc;
+        bool nolock;

         for (; i < dev->num_tx_queues; i++) {
                 qdisc = rtnl_dereference(netdev_get_tx_queue(dev, i)->qdisc);
                 if (qdisc) {
+                        nolock = qdisc->flags & TCQ_F_NOLOCK;
+
+                        if (nolock)
+                                spin_lock_bh(&qdisc->seqlock);
                         spin_lock_bh(qdisc_lock(qdisc));
                         qdisc_reset(qdisc);
                         spin_unlock_bh(qdisc_lock(qdisc));
+                        if (nolock) {
+                                clear_bit(__QDISC_STATE_MISSED, &qdisc->state);
+                                clear_bit(__QDISC_STATE_DRAINING, &qdisc->state);
+                                spin_unlock_bh(&qdisc->seqlock);
+                        }
                 }
         }
 }
···
 /*
  * If COOP_TASKRUN is set, get notified if task work is available for
  * running and a kernel transition would be needed to run it. This sets
- * IORING_SQ_TASKRUN in the sq ring flags. Not valid with COOP_TASKRUN.
+ * IORING_SQ_TASKRUN in the sq ring flags. Not valid without COOP_TASKRUN
+ * or DEFER_TASKRUN.
  */
 #define IORING_SETUP_TASKRUN_FLAG       (1U << 9)
 #define IORING_SETUP_SQE128             (1U << 10) /* SQEs are 128 byte */
···
         default n
         depends on IO_URING
         help
-          Enable mock files for io_uring subststem testing. The ABI might
+          Enable mock files for io_uring subsystem testing. The ABI might
           still change, so it's still experimental and should only be enabled
           for specific test purposes.
···
         mutex_lock(&tr->mutex);

         shim_link = cgroup_shim_find(tr, bpf_func);
-        if (shim_link) {
+        if (shim_link && !IS_ERR(bpf_link_inc_not_zero(&shim_link->link.link))) {
                 /* Reusing existing shim attached by the other program. */
-                bpf_link_inc(&shim_link->link.link);
-
                 mutex_unlock(&tr->mutex);
                 bpf_trampoline_put(tr); /* bpf_trampoline_get above */
                 return 0;
+34-4
kernel/bpf/verifier.c
···
         if ((u32)reg->s32_min_value <= (u32)reg->s32_max_value) {
                 reg->u32_min_value = max_t(u32, reg->s32_min_value, reg->u32_min_value);
                 reg->u32_max_value = min_t(u32, reg->s32_max_value, reg->u32_max_value);
+        } else {
+                if (reg->u32_max_value < (u32)reg->s32_min_value) {
+                        /* See __reg64_deduce_bounds() for detailed explanation.
+                         * Refine ranges in the following situation:
+                         *
+                         * 0                          U32_MAX
+                         * |  [xxxxxxxxxxxxxx u32 range xxxxxxxxxxxxxx]         |
+                         * |--------------------------|-------------------------|
+                         * |xxxxx s32 range xxxxxxxxx]                  [xxxxxxx|
+                         * 0                    S32_MAX S32_MIN               -1
+                         */
+                        reg->s32_min_value = (s32)reg->u32_min_value;
+                        reg->u32_max_value = min_t(u32, reg->u32_max_value, reg->s32_max_value);
+                } else if ((u32)reg->s32_max_value < reg->u32_min_value) {
+                        /*
+                         * 0                          U32_MAX
+                         * |  [xxxxxxxxxxxxxx u32 range xxxxxxxxxxxxxx]         |
+                         * |--------------------------|-------------------------|
+                         * |xxxxxxxxx]                 [xxxxxxxxxxxx s32 range  |
+                         * 0    S32_MAX               S32_MIN                 -1
+                         */
+                        reg->s32_max_value = (s32)reg->u32_max_value;
+                        reg->u32_min_value = max_t(u32, reg->u32_min_value, reg->s32_min_value);
+                }
         }
 }
···
  * in verifier state, save R in linked_regs if R->id == id.
  * If there are too many Rs sharing same id, reset id for leftover Rs.
  */
-static void collect_linked_regs(struct bpf_verifier_state *vstate, u32 id,
+static void collect_linked_regs(struct bpf_verifier_env *env,
+                                struct bpf_verifier_state *vstate,
+                                u32 id,
                                 struct linked_regs *linked_regs)
 {
+        struct bpf_insn_aux_data *aux = env->insn_aux_data;
         struct bpf_func_state *func;
         struct bpf_reg_state *reg;
+        u16 live_regs;
         int i, j;

         id = id & ~BPF_ADD_CONST;
         for (i = vstate->curframe; i >= 0; i--) {
+                live_regs = aux[frame_insn_idx(vstate, i)].live_regs_before;
                 func = vstate->frame[i];
                 for (j = 0; j < BPF_REG_FP; j++) {
+                        if (!(live_regs & BIT(j)))
+                                continue;
                         reg = &func->regs[j];
                         __collect_linked_regs(linked_regs, reg, id, i, j, true);
                 }
···
          * if parent state is created.
          */
         if (BPF_SRC(insn->code) == BPF_X && src_reg->type == SCALAR_VALUE && src_reg->id)
-                collect_linked_regs(this_branch, src_reg->id, &linked_regs);
+                collect_linked_regs(env, this_branch, src_reg->id, &linked_regs);
         if (dst_reg->type == SCALAR_VALUE && dst_reg->id)
-                collect_linked_regs(this_branch, dst_reg->id, &linked_regs);
+                collect_linked_regs(env, this_branch, dst_reg->id, &linked_regs);
         if (linked_regs.cnt > 1) {
                 err = push_jmp_history(env, this_branch, 0, linked_regs_pack(&linked_regs));
                 if (err)
···
 BTF_ID(func, do_exit)
 BTF_ID(func, do_group_exit)
 BTF_ID(func, kthread_complete_and_exit)
-BTF_ID(func, kthread_exit)
 BTF_ID(func, make_task_dead)
 BTF_SET_END(noreturn_deny)
+1
kernel/cgroup/cgroup.c
···

         mgctx->tset.nr_tasks++;

+        css_set_skip_task_iters(cset, task);
         list_move_tail(&task->cg_list, &cset->mg_tasks);
         if (list_empty(&cset->mg_node))
                 list_add_tail(&cset->mg_node,
+149-73
kernel/cgroup/cpuset.c
···
 };

 /*
+ * CPUSET Locking Convention
+ * -------------------------
+ *
+ * Below are the four global/local locks guarding cpuset structures in lock
+ * acquisition order:
+ *  - cpuset_top_mutex
+ *  - cpu_hotplug_lock (cpus_read_lock/cpus_write_lock)
+ *  - cpuset_mutex
+ *  - callback_lock (raw spinlock)
+ *
+ * As cpuset will now indirectly flush a number of different workqueues in
+ * housekeeping_update() to update housekeeping cpumasks when the set of
+ * isolated CPUs is going to be changed, it may be vulnerable to deadlock
+ * if we hold cpus_read_lock while calling into housekeeping_update().
+ *
+ * The first cpuset_top_mutex will be held except when calling into
+ * cpuset_handle_hotplug() from the CPU hotplug code where cpus_write_lock
+ * and cpuset_mutex will be held instead. The main purpose of this mutex
+ * is to prevent regular cpuset control file write actions from interfering
+ * with the call to housekeeping_update(), though CPU hotplug operation can
+ * still happen in parallel. This mutex also provides protection for some
+ * internal variables.
+ *
+ * A task must hold all the remaining three locks to modify externally visible
+ * or used fields of cpusets, though some of the internally used cpuset fields
+ * and internal variables can be modified without holding callback_lock. If only
+ * reliable read access of the externally used fields is needed, a task can
+ * hold either cpuset_mutex or callback_lock which are exposed to other
+ * external subsystems.
+ *
+ * If a task holds cpu_hotplug_lock and cpuset_mutex, it blocks others,
+ * ensuring that it is the only task able to also acquire callback_lock and
+ * be able to modify cpusets. It can perform various checks on the cpuset
+ * structure first, knowing nothing will change. It can also allocate memory
+ * without holding callback_lock. While it is performing these checks, various
+ * callback routines can briefly acquire callback_lock to query cpusets. Once
+ * it is ready to make the changes, it takes callback_lock, blocking everyone
+ * else.
+ *
+ * Calls to the kernel memory allocator cannot be made while holding
+ * callback_lock which is a spinlock, as the memory allocator may sleep or
+ * call back into cpuset code and acquire callback_lock.
+ *
+ * Now, the task_struct fields mems_allowed and mempolicy may be changed
+ * by other task, we use alloc_lock in the task_struct fields to protect
+ * them.
+ *
+ * The cpuset_common_seq_show() handlers only hold callback_lock across
+ * small pieces of code, such as when reading out possibly multi-word
+ * cpumasks and nodemasks.
+ */
+
+static DEFINE_MUTEX(cpuset_top_mutex);
+static DEFINE_MUTEX(cpuset_mutex);
+
+/*
+ * File level internal variables below follow one of the following exclusion
+ * rules.
+ *
+ * RWCS: Read/write-able by holding either cpus_write_lock (and optionally
+ *       cpuset_mutex) or both cpus_read_lock and cpuset_mutex.
+ *
+ * CSCB: Readable by holding either cpuset_mutex or callback_lock. Writable
+ *       by holding both cpuset_mutex and callback_lock.
+ *
+ * T: Read/write-able by holding the cpuset_top_mutex.
+ */
+
+/*
  * For local partitions, update to subpartitions_cpus & isolated_cpus is done
  * in update_parent_effective_cpumask(). For remote partitions, it is done in
  * the remote_partition_*() and remote_cpus_update() helpers.
···
  * Exclusive CPUs distributed out to local or remote sub-partitions of
  * top_cpuset
  */
-static cpumask_var_t    subpartitions_cpus;
+static cpumask_var_t    subpartitions_cpus;     /* RWCS */

 /*
- * Exclusive CPUs in isolated partitions
+ * Exclusive CPUs in isolated partitions (shown in cpuset.cpus.isolated)
  */
-static cpumask_var_t    isolated_cpus;
+static cpumask_var_t    isolated_cpus;          /* CSCB */

 /*
- * isolated_cpus updating flag (protected by cpuset_mutex)
- * Set if isolated_cpus is going to be updated in the current
- * cpuset_mutex crtical section.
+ * Set if housekeeping cpumasks are to be updated.
  */
-static bool isolated_cpus_updating;
+static bool update_housekeeping;        /* RWCS */
+
+/*
+ * Copy of isolated_cpus to be passed to housekeeping_update()
+ */
+static cpumask_var_t    isolated_hk_cpus;       /* T */

 /*
  * A flag to force sched domain rebuild at the end of an operation.
···
  * Note that update_relax_domain_level() in cpuset-v1.c can still call
  * rebuild_sched_domains_locked() directly without using this flag.
  */
-static bool force_sd_rebuild;
+static bool force_sd_rebuild;   /* RWCS */

 /*
  * Partition root states:
···
         .partition_root_state = PRS_ROOT,
 };

-/*
- * There are two global locks guarding cpuset structures - cpuset_mutex and
- * callback_lock. The cpuset code uses only cpuset_mutex. Other kernel
- * subsystems can use cpuset_lock()/cpuset_unlock() to prevent change to cpuset
- * structures. Note that cpuset_mutex needs to be a mutex as it is used in
- * paths that rely on priority inheritance (e.g. scheduler - on RT) for
- * correctness.
- *
- * A task must hold both locks to modify cpusets. If a task holds
- * cpuset_mutex, it blocks others, ensuring that it is the only task able to
- * also acquire callback_lock and be able to modify cpusets. It can perform
- * various checks on the cpuset structure first, knowing nothing will change.
- * It can also allocate memory while just holding cpuset_mutex. While it is
- * performing these checks, various callback routines can briefly acquire
- * callback_lock to query cpusets. Once it is ready to make the changes, it
- * takes callback_lock, blocking everyone else.
- *
- * Calls to the kernel memory allocator can not be made while holding
- * callback_lock, as that would risk double tripping on callback_lock
- * from one of the callbacks into the cpuset code from within
- * __alloc_pages().
- *
- * If a task is only holding callback_lock, then it has read-only
- * access to cpusets.
- *
- * Now, the task_struct fields mems_allowed and mempolicy may be changed
- * by other task, we use alloc_lock in the task_struct fields to protect
- * them.
- *
- * The cpuset_common_seq_show() handlers only hold callback_lock across
- * small pieces of code, such as when reading out possibly multi-word
- * cpumasks and nodemasks.
- */
-
-static DEFINE_MUTEX(cpuset_mutex);
-
 /**
  * cpuset_lock - Acquire the global cpuset mutex
  *
···
  */
 void cpuset_full_lock(void)
 {
+        mutex_lock(&cpuset_top_mutex);
         cpus_read_lock();
         mutex_lock(&cpuset_mutex);
 }
···
 {
         mutex_unlock(&cpuset_mutex);
         cpus_read_unlock();
+        mutex_unlock(&cpuset_top_mutex);
 }

 #ifdef CONFIG_LOCKDEP
 bool lockdep_is_cpuset_held(void)
 {
-        return lockdep_is_held(&cpuset_mutex);
+        return lockdep_is_held(&cpuset_mutex) ||
+               lockdep_is_held(&cpuset_top_mutex);
 }
 #endif
···
          * offline CPUs, a warning is emitted and we return directly to
          * prevent the panic.
          */
-        for (i = 0; i < ndoms; ++i) {
+        for (i = 0; doms && i < ndoms; i++) {
                 if (WARN_ON_ONCE(!cpumask_subset(doms[i], cpu_active_mask)))
                         return;
         }
···
 static void isolated_cpus_update(int old_prs, int new_prs, struct cpumask *xcpus)
 {
         WARN_ON_ONCE(old_prs == new_prs);
-        if (new_prs == PRS_ISOLATED)
+        lockdep_assert_held(&callback_lock);
+        lockdep_assert_held(&cpuset_mutex);
+        if (new_prs == PRS_ISOLATED) {
+                if (cpumask_subset(xcpus, isolated_cpus))
+                        return;
                 cpumask_or(isolated_cpus, isolated_cpus, xcpus);
-        else
+        } else {
+                if (!cpumask_intersects(xcpus, isolated_cpus))
+                        return;
                 cpumask_andnot(isolated_cpus, isolated_cpus, xcpus);
-
-        isolated_cpus_updating = true;
+        }
+        update_housekeeping = true;
 }

 /*
···
         isolated_cpus_update(old_prs, parent->partition_root_state,
                              xcpus);

-        cpumask_and(xcpus, xcpus, cpu_active_mask);
         cpumask_or(parent->effective_cpus, parent->effective_cpus, xcpus);
+        cpumask_and(parent->effective_cpus, parent->effective_cpus, cpu_active_mask);
 }

 /*
···
 }

 /*
- * update_isolation_cpumasks - Update external isolation related CPU masks
+ * update_hk_sched_domains - Update HK cpumasks & rebuild sched domains
  *
- * The following external CPU masks will be updated if necessary:
- *  - workqueue unbound cpumask
+ * Update housekeeping cpumasks and rebuild sched domains if necessary.
+ * This should be called at the end of cpuset or hotplug actions.
  */
-static void update_isolation_cpumasks(void)
+static void update_hk_sched_domains(void)
 {
-        int ret;
+        if (update_housekeeping) {
+                /* Updating HK cpumasks implies rebuild sched domains */
+                update_housekeeping = false;
+                force_sd_rebuild = true;
+                cpumask_copy(isolated_hk_cpus, isolated_cpus);

-        if (!isolated_cpus_updating)
-                return;
+                /*
+                 * housekeeping_update() is now called without holding
+                 * cpus_read_lock and cpuset_mutex. Only cpuset_top_mutex
+                 * is still being held for mutual exclusion.
+                 */
+                mutex_unlock(&cpuset_mutex);
+                cpus_read_unlock();
+                WARN_ON_ONCE(housekeeping_update(isolated_hk_cpus));
+                cpus_read_lock();
+                mutex_lock(&cpuset_mutex);
+        }
+        /* force_sd_rebuild will be cleared in rebuild_sched_domains_locked() */
+        if (force_sd_rebuild)
+                rebuild_sched_domains_locked();
+}

-        ret = housekeeping_update(isolated_cpus);
-        WARN_ON_ONCE(ret < 0);
-
-        isolated_cpus_updating = false;
+/*
+ * Work function to invoke update_hk_sched_domains()
+ */
+static void hk_sd_workfn(struct work_struct *work)
+{
+        cpuset_full_lock();
+        update_hk_sched_domains();
+        cpuset_full_unlock();
 }

 /**
···
         cs->remote_partition = true;
         cpumask_copy(cs->effective_xcpus, tmp->new_cpus);
         spin_unlock_irq(&callback_lock);
-        update_isolation_cpumasks();
         cpuset_force_rebuild();
         cs->prs_err = 0;
···
         compute_excpus(cs, cs->effective_xcpus);
         reset_partition_data(cs);
         spin_unlock_irq(&callback_lock);
-        update_isolation_cpumasks();
         cpuset_force_rebuild();

         /*
···
         if (xcpus)
                 cpumask_copy(cs->exclusive_cpus, xcpus);
         spin_unlock_irq(&callback_lock);
-        update_isolation_cpumasks();
         if (adding || deleting)
                 cpuset_force_rebuild();
···
         partition_xcpus_add(new_prs, parent, tmp->delmask);

         spin_unlock_irq(&callback_lock);
-        update_isolation_cpumasks();

         if ((old_prs != new_prs) && (cmd == partcmd_update))
                 update_partition_exclusive_flag(cs, new_prs);
···
         WARN_ON(!is_in_v2_mode() &&
                 !cpumask_equal(cp->cpus_allowed, cp->effective_cpus));

-        cpuset_update_tasks_cpumask(cp, cp->effective_cpus);
+        cpuset_update_tasks_cpumask(cp, tmp->new_cpus);

         /*
          * On default hierarchy, inherit the CS_SCHED_LOAD_BALANCE
···
         else if (isolcpus_updated)
                 isolated_cpus_update(old_prs, new_prs, cs->effective_xcpus);
         spin_unlock_irq(&callback_lock);
-        update_isolation_cpumasks();

         /* Force update if switching back to member & update effective_xcpus */
         update_cpumasks_hier(cs, &tmpmask, !new_prs);
···
         }

         free_cpuset(trialcs);
-        if (force_sd_rebuild)
-                rebuild_sched_domains_locked();
 out_unlock:
+        update_hk_sched_domains();
         cpuset_full_unlock();
         if (of_cft(of)->private == FILE_MEMLIST)
                 schedule_flush_migrate_mm();
···
         cpuset_full_lock();
         if (is_cpuset_online(cs))
                 retval = update_prstate(cs, val);
+        update_hk_sched_domains();
         cpuset_full_unlock();
         return retval ?: nbytes;
 }
···
         /* Reset valid partition back to member */
         if (is_partition_valid(cs))
                 update_prstate(cs, PRS_MEMBER);
+        update_hk_sched_domains();
         cpuset_full_unlock();
 }
···
         BUG_ON(!alloc_cpumask_var(&top_cpuset.exclusive_cpus, GFP_KERNEL));
         BUG_ON(!zalloc_cpumask_var(&subpartitions_cpus, GFP_KERNEL));
         BUG_ON(!zalloc_cpumask_var(&isolated_cpus, GFP_KERNEL));
+        BUG_ON(!zalloc_cpumask_var(&isolated_hk_cpus, GFP_KERNEL));

         cpumask_setall(top_cpuset.cpus_allowed);
         nodes_setall(top_cpuset.mems_allowed);
···
  */
 static void cpuset_handle_hotplug(void)
 {
+        static DECLARE_WORK(hk_sd_work, hk_sd_workfn);
         static cpumask_t new_cpus;
         static nodemask_t new_mems;
         bool cpus_updated, mems_updated;
···
                 rcu_read_unlock();
         }

-        /* rebuild sched domains if necessary */
-        if (force_sd_rebuild)
-                rebuild_sched_domains_cpuslocked();
+
+        /*
+         * Queue a work to call housekeeping_update() & rebuild_sched_domains()
+         * There will be a slight delay before the HK_TYPE_DOMAIN housekeeping
+         * cpumask can correctly reflect what is in isolated_cpus.
+         *
+         * We rely on WORK_STRUCT_PENDING_BIT to not requeue a work item that
+         * is still pending. Before the pending bit is cleared, the work data
+         * is copied out and work item dequeued. So it is possible to queue
+         * the work again before the hk_sd_workfn() is invoked to process the
+         * previously queued work. Since hk_sd_workfn() doesn't use the work
+         * item at all, this is not a problem.
+         */
+        if (update_housekeeping || force_sd_rebuild)
+                queue_work(system_unbound_wq, &hk_sd_work);

         free_tmpmasks(ptmp);
 }
···
         return k->worker_private;
 }

-/*
- * Variant of to_kthread() that doesn't assume @p is a kthread.
- *
- * When "(p->flags & PF_KTHREAD)" is set the task is a kthread and will
- * always remain a kthread. For kthreads p->worker_private always
- * points to a struct kthread. For tasks that are not kthreads
- * p->worker_private is used to point to other things.
- *
- * Return NULL for any task that is not a kthread.
- */
-static inline struct kthread *__to_kthread(struct task_struct *p)
-{
-        void *kthread = p->worker_private;
-        if (kthread && !(p->flags & PF_KTHREAD))
-                kthread = NULL;
-        return kthread;
-}
-
 void get_kthread_comm(char *buf, size_t buf_size, struct task_struct *tsk)
 {
         struct kthread *kthread = to_kthread(tsk);
···

 bool kthread_should_stop_or_park(void)
 {
-        struct kthread *kthread = __to_kthread(current);
+        struct kthread *kthread = tsk_is_kthread(current);

         if (!kthread)
                 return false;
···
  */
 void *kthread_func(struct task_struct *task)
 {
-        struct kthread *kthread = __to_kthread(task);
+        struct kthread *kthread = tsk_is_kthread(task);
         if (kthread)
                 return kthread->threadfn;
         return NULL;
 }
···
  */
 void *kthread_probe_data(struct task_struct *task)
 {
-        struct kthread *kthread = __to_kthread(task);
+        struct kthread *kthread = tsk_is_kthread(task);
         void *data = NULL;

         if (kthread)
···
 }
 EXPORT_SYMBOL_GPL(kthread_parkme);

-/**
- * kthread_exit - Cause the current kthread return @result to kthread_stop().
- * @result: The integer value to return to kthread_stop().
- *
- * While kthread_exit can be called directly, it exists so that
- * functions which do some additional work in non-modular code such as
- * module_put_and_kthread_exit can be implemented.
- *
- * Does not return.
- */
-void __noreturn kthread_exit(long result)
+void kthread_do_exit(struct kthread *kthread, long result)
 {
-        struct kthread *kthread = to_kthread(current);
         kthread->result = result;
         if (!list_empty(&kthread->affinity_node)) {
                 mutex_lock(&kthread_affinity_lock);
···
                         kthread->preferred_affinity = NULL;
                 }
         }
-        do_exit(0);
 }
-EXPORT_SYMBOL(kthread_exit);

 /**
  * kthread_complete_and_exit - Exit the current kthread.
···

 bool kthread_is_per_cpu(struct task_struct *p)
 {
-        struct kthread *kthread = __to_kthread(p);
+        struct kthread *kthread = tsk_is_kthread(p);
         if (!kthread)
                 return false;
+13-10
kernel/module/Kconfig
···
           make them incompatible with the kernel you are running. If
           unsure, say N.

+if MODVERSIONS
+
 choice
         prompt "Module versioning implementation"
-        depends on MODVERSIONS
         help
           Select the tool used to calculate symbol versions for modules.
···

 config ASM_MODVERSIONS
         bool
-        default HAVE_ASM_MODVERSIONS && MODVERSIONS
+        default HAVE_ASM_MODVERSIONS
         help
           This enables module versioning for exported symbols also from
           assembly. This can be enabled only when the target architecture
···
 config EXTENDED_MODVERSIONS
         bool "Extended Module Versioning Support"
-        depends on MODVERSIONS
         help
           This enables extended MODVERSIONs support, allowing long symbol
           names to be versioned.
···

 config BASIC_MODVERSIONS
         bool "Basic Module Versioning Support"
-        depends on MODVERSIONS
         default y
         help
           This enables basic MODVERSIONS support, allowing older tools or
···
           This is enabled by default when MODVERSIONS are enabled.
           If unsure, say Y.
+
+endif # MODVERSIONS

 config MODULE_SRCVERSION_ALL
         bool "Source checksum for all modules"
···
           Reject unsigned modules or signed modules for which we don't have a
           key. Without this, such modules will simply taint the kernel.

+if MODULE_SIG || IMA_APPRAISE_MODSIG
+
 config MODULE_SIG_ALL
         bool "Automatically sign all modules"
         default y
-        depends on MODULE_SIG || IMA_APPRAISE_MODSIG
         help
           Sign all modules during make modules_install. Without this option,
           modules must be signed manually, using the scripts/sign-file tool.
···
 choice
         prompt "Hash algorithm to sign modules"
-        depends on MODULE_SIG || IMA_APPRAISE_MODSIG
         default MODULE_SIG_SHA512
         help
           This determines which sort of hashing algorithm will be used during
···

 config MODULE_SIG_HASH
         string
-        depends on MODULE_SIG || IMA_APPRAISE_MODSIG
         default "sha256" if MODULE_SIG_SHA256
         default "sha384" if MODULE_SIG_SHA384
         default "sha512" if MODULE_SIG_SHA512
         default "sha3-256" if MODULE_SIG_SHA3_256
         default "sha3-384" if MODULE_SIG_SHA3_384
         default "sha3-512" if MODULE_SIG_SHA3_512
+
+endif # MODULE_SIG || IMA_APPRAISE_MODSIG

 config MODULE_COMPRESS
         bool "Module compression"
···

           If unsure, say N.

+if MODULE_COMPRESS
+
 choice
         prompt "Module compression type"
-        depends on MODULE_COMPRESS
         help
           Choose the supported algorithm for module compression.
···
 config MODULE_COMPRESS_ALL
         bool "Automatically compress all modules"
         default y
-        depends on MODULE_COMPRESS
         help
           Compress all modules during 'make modules_install'.
···

 config MODULE_DECOMPRESS
         bool "Support in-kernel module decompression"
-        depends on MODULE_COMPRESS
         select ZLIB_INFLATE if MODULE_COMPRESS_GZIP
         select XZ_DEC if MODULE_COMPRESS_XZ
         select ZSTD_DECOMPRESS if MODULE_COMPRESS_ZSTD
···
           load pinning security policy is enabled.

           If unsure, say N.
+
+endif # MODULE_COMPRESS

 config MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS
         bool "Allow loading of modules with missing namespace imports"
+7-6
kernel/module/main.c
···
                 break;

         default:
+                if (sym[i].st_shndx >= info->hdr->e_shnum) {
+                        pr_err("%s: Symbol %s has an invalid section index %u (max %u)\n",
+                               mod->name, name, sym[i].st_shndx, info->hdr->e_shnum - 1);
+                        ret = -ENOEXEC;
+                        break;
+                }
+
                 /* Divert to percpu allocation if a percpu var. */
                 if (sym[i].st_shndx == info->index.pcpu)
                         secbase = (unsigned long)mod_percpu(mod);
···
         mutex_unlock(&module_mutex);
  free_module:
         mod_stat_bump_invalid(info, flags);
-        /* Free lock-classes; relies on the preceding sync_rcu() */
-        for_class_mod_mem_type(type, core_data) {
-                lockdep_free_key_range(mod->mem[type].base,
-                                       mod->mem[type].size);
-        }
-
         module_memory_restore_rox(mod);
         module_deallocate(mod, info);
  free_copy:
···
 	 * info communication. The following flag indicates whether ops.init()
 	 * finished successfully.
 	 */
-	SCX_EFLAG_INITIALIZED,
+	SCX_EFLAG_INITIALIZED = 1LLU << 0,
 };
 
 /*
+1-3
kernel/sched/isolation.c
···
 	struct cpumask *trial, *old = NULL;
 	int err;
 
-	lockdep_assert_cpus_held();
-
 	trial = kmalloc(cpumask_size(), GFP_KERNEL);
 	if (!trial)
 		return -ENOMEM;
···
 	}
 
 	if (!housekeeping.flags)
-		static_branch_enable_cpuslocked(&housekeeping_overridden);
+		static_branch_enable(&housekeeping_overridden);
 
 	if (housekeeping.flags & HK_FLAG_DOMAIN)
 		old = housekeeping_cpumask_dereference(HK_TYPE_DOMAIN);
+30
kernel/sched/syscalls.c
···
 		       uid_eq(cred->euid, pcred->uid));
 }
 
+#ifdef CONFIG_RT_MUTEXES
+static inline void __setscheduler_dl_pi(int newprio, int policy,
+					struct task_struct *p,
+					struct sched_change_ctx *scope)
+{
+	/*
+	 * In case a DEADLINE task (either proper or boosted) gets
+	 * setscheduled to a lower priority class, check if it needs to
+	 * inherit parameters from a potential pi_task. In that case make
+	 * sure replenishment happens with the next enqueue.
+	 */
+
+	if (dl_prio(newprio) && !dl_policy(policy)) {
+		struct task_struct *pi_task = rt_mutex_get_top_task(p);
+
+		if (pi_task) {
+			p->dl.pi_se = pi_task->dl.pi_se;
+			scope->flags |= ENQUEUE_REPLENISH;
+		}
+	}
+}
+#else /* !CONFIG_RT_MUTEXES */
+static inline void __setscheduler_dl_pi(int newprio, int policy,
+					struct task_struct *p,
+					struct sched_change_ctx *scope)
+{
+}
+#endif /* !CONFIG_RT_MUTEXES */
+
 #ifdef CONFIG_UCLAMP_TASK
 
 static int uclamp_validate(struct task_struct *p,
···
 		__setscheduler_params(p, attr);
 		p->sched_class = next_class;
 		p->prio = newprio;
+		__setscheduler_dl_pi(newprio, policy, p, scope);
 	}
 	__setscheduler_uclamp(p, attr);
···
 
 	if (aux_clock) {
 		/* Auxiliary clocks are similar to TAI and do not have leap seconds */
-		if (txc->status & (STA_INS | STA_DEL))
+		if (txc->modes & ADJ_STATUS &&
+		    txc->status & (STA_INS | STA_DEL))
 			return -EINVAL;
 
 		/* No TAI offset setting */
···
 			return -EINVAL;
 
 		/* No PPS support either */
-		if (txc->status & (STA_PPSFREQ | STA_PPSTIME))
+		if (txc->modes & ADJ_STATUS &&
+		    txc->status & (STA_PPSFREQ | STA_PPSTIME))
 			return -EINVAL;
 	}
 
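The hunk above only validates `txc->status` when the caller actually asked to set it via `ADJ_STATUS` in `txc->modes`; without that gate, status bits merely read back from a previous call would spuriously fail the auxiliary-clock checks. A small sketch of the gated validation (the constants and struct here are simplified stand-ins, not the real `struct __kernel_timex` ABI):

```c
#include <assert.h>

#define ADJ_STATUS_F	0x0010	/* stand-in for ADJ_STATUS */
#define STA_INS_F	0x0001	/* stand-in for STA_INS */
#define STA_DEL_F	0x0002	/* stand-in for STA_DEL */

struct timex_like {
	unsigned int modes;	/* which fields the caller wants applied */
	int status;		/* clock status bits */
};

/*
 * Reject leap-second status bits only when the caller asked to set
 * status; a stale status field with modes == 0 must be accepted.
 */
static int validate_aux_clock(const struct timex_like *txc)
{
	if ((txc->modes & ADJ_STATUS_F) &&
	    (txc->status & (STA_INS_F | STA_DEL_F)))
		return -1;	/* stand-in for -EINVAL */
	return 0;
}
```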
+1-3
kernel/time/timer_migration.c
···
 	cpumask_var_t cpumask __free(free_cpumask_var) = CPUMASK_VAR_NULL;
 	int cpu;
 
-	lockdep_assert_cpus_held();
-
 	if (!works)
 		return -ENOMEM;
 	if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
···
 	 * First set previously isolated CPUs as available (unisolate).
 	 * This cpumask contains only CPUs that switched to available now.
 	 */
+	guard(cpus_read_lock)();
 	cpumask_andnot(cpumask, cpu_online_mask, exclude_cpumask);
 	cpumask_andnot(cpumask, cpumask, tmigr_available_cpumask);
···
 	cpumask_andnot(cpumask, cpu_possible_mask, housekeeping_cpumask(HK_TYPE_DOMAIN));
 
 	/* Protect against RCU torture hotplug testing */
-	guard(cpus_read_lock)();
 	return tmigr_isolated_exclude_cpumask(cpumask);
 }
 late_initcall(tmigr_init_isolation);
+1-2
kernel/trace/blktrace.c
···
 	cpu = raw_smp_processor_id();
 
 	if (blk_tracer) {
-		tracing_record_cmdline(current);
-
 		buffer = blk_tr->array_buffer.buffer;
 		trace_ctx = tracing_gen_ctx_flags(0);
 		switch (bt->version) {
···
 	if (!event)
 		return;
 
+	tracing_record_cmdline(current);
 	switch (bt->version) {
 	case 1:
 		record_blktrace_event(ring_buffer_event_data(event),
+4
kernel/trace/ftrace.c
···
 			new_filter_hash = old_filter_hash;
 		}
 	} else {
+		guard(mutex)(&ftrace_lock);
 		err = ftrace_update_ops(ops, new_filter_hash, EMPTY_HASH);
 		/*
 		 * new_filter_hash is dup-ed, so we need to release it anyway,
···
 			ops->func_hash->filter_hash = NULL;
 		}
 	} else {
+		guard(mutex)(&ftrace_lock);
 		err = ftrace_update_ops(ops, new_filter_hash, EMPTY_HASH);
 		/*
 		 * new_filter_hash is dup-ed, so we need to release it anyway,
···
 	struct trace_pid_list *pid_list;
 	struct trace_array *tr = data;
 
+	guard(preempt)();
 	pid_list = rcu_dereference_sched(tr->function_pids);
 	trace_filter_add_remove_task(pid_list, self, task);
···
 	struct trace_pid_list *pid_list;
 	struct trace_array *tr = data;
 
+	guard(preempt)();
 	pid_list = rcu_dereference_sched(tr->function_pids);
 	trace_filter_add_remove_task(pid_list, NULL, task);
 
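The `guard(mutex)(&ftrace_lock)` and `guard(preempt)()` lines above take the lock (or disable preemption) for the remainder of the enclosing scope and release it automatically on every exit path, including early returns. A rough userspace equivalent of the idiom can be built from GCC/Clang's `cleanup` attribute; this is a sketch of the pattern, not the kernel's `cleanup.h` implementation, and `GUARD_LOCK`/`demo_lock` are invented names:

```c
#include <assert.h>

static int demo_lock;	/* 0 = unlocked, 1 = locked */
static int counter;

/* Cleanup callback: runs when the guard variable leaves scope. */
static void guard_unlock(int **lp)
{
	**lp = 0;
}

/* Acquire @l now; release it automatically at end of scope. */
#define GUARD_LOCK(l) \
	int *_guard __attribute__((cleanup(guard_unlock))) = (*(l) = 1, (l))

static int locked_increment(void)
{
	GUARD_LOCK(&demo_lock);
	assert(demo_lock == 1);	/* held for the rest of this scope */
	return ++counter;	/* unlock runs after the return value is computed */
}
```

The payoff is the same as in the ftrace hunks: no unlock call is needed before `return`, so adding an early-error return cannot leak the lock.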
+21
kernel/trace/ring_buffer.c
···
 	return err;
 }
 
+/*
+ * This is called when a VMA is duplicated (e.g., on fork()) to increment
+ * the user_mapped counter without remapping pages.
+ */
+void ring_buffer_map_dup(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+
+	if (WARN_ON(!cpumask_test_cpu(cpu, buffer->cpumask)))
+		return;
+
+	cpu_buffer = buffer->buffers[cpu];
+
+	guard(mutex)(&cpu_buffer->mapping_lock);
+
+	if (cpu_buffer->user_mapped)
+		__rb_inc_dec_mapped(cpu_buffer, true);
+	else
+		WARN(1, "Unexpected buffer stat, it should be mapped");
+}
+
 int ring_buffer_unmap(struct trace_buffer *buffer, int cpu)
 {
 	struct ring_buffer_per_cpu *cpu_buffer;
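`ring_buffer_map_dup()` only bumps the `user_mapped` count of an already-mapped buffer; it never establishes the initial mapping, and duplicating an unmapped buffer is flagged as a bug. A userspace sketch of that open/dup/close counting contract (all names here are hypothetical, not the ring-buffer API):

```c
#include <assert.h>

struct mapping_count {
	int user_mapped;	/* 0 means not mapped at all */
};

/* First mmap() establishes the mapping. */
static void map_open_first(struct mapping_count *m)
{
	m->user_mapped = 1;
}

/*
 * VMA duplication (fork) only increments an existing count;
 * a dup of an unmapped buffer indicates a bug, so report it.
 */
static int map_dup(struct mapping_count *m)
{
	if (!m->user_mapped)
		return -1;	/* unexpected: should already be mapped */
	m->user_mapped++;
	return 0;
}

/* Each VMA close drops one reference. */
static void map_close(struct mapping_count *m)
{
	if (m->user_mapped > 0)
		m->user_mapped--;
}
```

Without the `.open`/dup hook, the parent and child would share one refcount of 1, and the first `munmap()` would tear down a mapping the other process still uses.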
+16-3
kernel/trace/trace.c
···
 static inline void put_snapshot_map(struct trace_array *tr) { }
 #endif
 
+/*
+ * This is called when a VMA is duplicated (e.g., on fork()) to increment
+ * the user_mapped counter without remapping pages.
+ */
+static void tracing_buffers_mmap_open(struct vm_area_struct *vma)
+{
+	struct ftrace_buffer_info *info = vma->vm_file->private_data;
+	struct trace_iterator *iter = &info->iter;
+
+	ring_buffer_map_dup(iter->array_buffer->buffer, iter->cpu_file);
+}
+
 static void tracing_buffers_mmap_close(struct vm_area_struct *vma)
 {
 	struct ftrace_buffer_info *info = vma->vm_file->private_data;
···
 }
 
 static const struct vm_operations_struct tracing_buffers_vmops = {
+	.open		= tracing_buffers_mmap_open,
 	.close		= tracing_buffers_mmap_close,
 	.may_split	= tracing_buffers_may_split,
 };
···
 }
 
 static int
-allocate_trace_buffer(struct trace_array *tr, struct array_buffer *buf, int size)
+allocate_trace_buffer(struct trace_array *tr, struct array_buffer *buf, unsigned long size)
 {
 	enum ring_buffer_flags rb_flags;
 	struct trace_scratch *tscratch;
···
 	}
 }
 
-static int allocate_trace_buffers(struct trace_array *tr, int size)
+static int allocate_trace_buffers(struct trace_array *tr, unsigned long size)
 {
 	int ret;
 
···
 __init static int tracer_alloc_buffers(void)
 {
-	int ring_buf_size;
+	unsigned long ring_buf_size;
 	int ret = -ENOMEM;
 
+44-16
kernel/trace/trace_events.c
···
 	struct trace_pid_list *pid_list;
 	struct trace_array *tr = data;
 
+	guard(preempt)();
 	pid_list = rcu_dereference_raw(tr->filtered_pids);
 	trace_filter_add_remove_task(pid_list, NULL, task);
···
 	struct trace_pid_list *pid_list;
 	struct trace_array *tr = data;
 
+	guard(preempt)();
 	pid_list = rcu_dereference_sched(tr->filtered_pids);
 	trace_filter_add_remove_task(pid_list, self, task);
···
 
 static __init int setup_trace_event(char *str)
 {
-	strscpy(bootup_event_buf, str, COMMAND_LINE_SIZE);
+	if (bootup_event_buf[0] != '\0')
+		strlcat(bootup_event_buf, ",", COMMAND_LINE_SIZE);
+
+	strlcat(bootup_event_buf, str, COMMAND_LINE_SIZE);
+
 	trace_set_ring_buffer_expanded(NULL);
 	disable_tracing_selftest("running event tracing");
···
 	return 0;
 }
 
-__init void
-early_enable_events(struct trace_array *tr, char *buf, bool disable_first)
+/*
+ * Helper function to enable or disable a comma-separated list of events
+ * from the bootup buffer.
+ */
+static __init void __early_set_events(struct trace_array *tr, char *buf, bool enable)
 {
 	char *token;
-	int ret;
 
-	while (true) {
-		token = strsep(&buf, ",");
-
-		if (!token)
-			break;
-
+	while ((token = strsep(&buf, ","))) {
 		if (*token) {
-			/* Restarting syscalls requires that we stop them first */
-			if (disable_first)
+			if (enable) {
+				if (ftrace_set_clr_event(tr, token, 1))
+					pr_warn("Failed to enable trace event: %s\n", token);
+			} else {
 				ftrace_set_clr_event(tr, token, 0);
-
-			ret = ftrace_set_clr_event(tr, token, 1);
-			if (ret)
-				pr_warn("Failed to enable trace event: %s\n", token);
+			}
 		}
 
 		/* Put back the comma to allow this to be called again */
 		if (buf)
 			*(buf - 1) = ',';
 	}
+}
+
+/**
+ * early_enable_events - enable events from the bootup buffer
+ * @tr: The trace array to enable the events in
+ * @buf: The buffer containing the comma separated list of events
+ * @disable_first: If true, disable all events in @buf before enabling them
+ *
+ * This function enables events from the bootup buffer. If @disable_first
+ * is true, it will first disable all events in the buffer before enabling
+ * them.
+ *
+ * For syscall events, which rely on a global refcount to register the
+ * SYSCALL_WORK_SYSCALL_TRACEPOINT flag (especially for pid 1), we must
+ * ensure the refcount hits zero before re-enabling them. A simple
+ * "disable then enable" per-event is not enough if multiple syscalls are
+ * used, as the refcount will stay above zero. Thus, we need a two-phase
+ * approach: disable all, then enable all.
+ */
+__init void
+early_enable_events(struct trace_array *tr, char *buf, bool disable_first)
+{
+	if (disable_first)
+		__early_set_events(tr, buf, false);
+
+	__early_set_events(tr, buf, true);
 }
 
 static __init int event_trace_enable(void)
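With the change above, `setup_trace_event()` appends each `trace_event=` instance to `bootup_event_buf` with a comma separator instead of overwriting the previous one, so repeated boot parameters accumulate. A userspace sketch of that append step, using `strncat` as a stand-in for the kernel's `strlcat` (the buffer name mirrors the kernel's; the function name is hypothetical):

```c
#include <assert.h>
#include <string.h>

#define BUF_SIZE 128
static char bootup_event_buf[BUF_SIZE];

/* Append one trace_event= value, comma-separating repeated instances. */
static void setup_trace_event_sketch(const char *str)
{
	if (bootup_event_buf[0] != '\0')
		strncat(bootup_event_buf, ",",
			BUF_SIZE - strlen(bootup_event_buf) - 1);

	strncat(bootup_event_buf, str,
		BUF_SIZE - strlen(bootup_event_buf) - 1);
}
```

The comma-joined result is exactly the shape `early_enable_events()` later splits back apart with `strsep(&buf, ",")`.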
+3
kernel/trace/trace_events_trigger.c
···
 
 void trigger_data_free(struct event_trigger_data *data)
 {
+	if (!data)
+		return;
+
 	if (data->cmd_ops->set_filter)
 		data->cmd_ops->set_filter(NULL, data, NULL);
 
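Accepting NULL in `trigger_data_free()` mirrors the `kfree(NULL)`-is-a-no-op convention: callers on error paths can free unconditionally without first testing the pointer. A minimal sketch of the idiom with a hypothetical type:

```c
#include <assert.h>
#include <stdlib.h>

struct trigger_data_like {
	int refcount;
};

/*
 * Tolerate NULL so error paths can call this unconditionally,
 * the same way kfree(NULL) is a harmless no-op.
 */
static void trigger_data_free_sketch(struct trigger_data_like *data)
{
	if (!data)
		return;
	free(data);
}
```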
···
+CONFIG_KUNIT=y
+
+# These kconfig options select all the CONFIG_CRYPTO_LIB_* symbols that have a
+# corresponding KUnit test. Those symbols cannot be directly enabled here,
+# since they are hidden symbols.
+CONFIG_CRYPTO=y
+CONFIG_CRYPTO_ADIANTUM=y
+CONFIG_CRYPTO_BLAKE2B=y
+CONFIG_CRYPTO_CHACHA20POLY1305=y
+CONFIG_CRYPTO_HCTR2=y
+CONFIG_CRYPTO_MD5=y
+CONFIG_CRYPTO_MLDSA=y
+CONFIG_CRYPTO_SHA1=y
+CONFIG_CRYPTO_SHA256=y
+CONFIG_CRYPTO_SHA512=y
+CONFIG_CRYPTO_SHA3=y
+CONFIG_INET=y
+CONFIG_IPV6=y
+CONFIG_NET=y
+CONFIG_NETDEVICES=y
+CONFIG_WIREGUARD=y
+
+CONFIG_CRYPTO_LIB_BLAKE2B_KUNIT_TEST=y
+CONFIG_CRYPTO_LIB_BLAKE2S_KUNIT_TEST=y
+CONFIG_CRYPTO_LIB_CURVE25519_KUNIT_TEST=y
+CONFIG_CRYPTO_LIB_MD5_KUNIT_TEST=y
+CONFIG_CRYPTO_LIB_MLDSA_KUNIT_TEST=y
+CONFIG_CRYPTO_LIB_NH_KUNIT_TEST=y
+CONFIG_CRYPTO_LIB_POLY1305_KUNIT_TEST=y
+CONFIG_CRYPTO_LIB_POLYVAL_KUNIT_TEST=y
+CONFIG_CRYPTO_LIB_SHA1_KUNIT_TEST=y
+CONFIG_CRYPTO_LIB_SHA256_KUNIT_TEST=y
+CONFIG_CRYPTO_LIB_SHA512_KUNIT_TEST=y
+CONFIG_CRYPTO_LIB_SHA3_KUNIT_TEST=y
+12-23
lib/crypto/tests/Kconfig
···
 
 config CRYPTO_LIB_BLAKE2B_KUNIT_TEST
 	tristate "KUnit tests for BLAKE2b" if !KUNIT_ALL_TESTS
-	depends on KUNIT
+	depends on KUNIT && CRYPTO_LIB_BLAKE2B
 	default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS
 	select CRYPTO_LIB_BENCHMARK_VISIBLE
-	select CRYPTO_LIB_BLAKE2B
 	help
 	  KUnit tests for the BLAKE2b cryptographic hash function.
···
 	depends on KUNIT
 	default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS
 	select CRYPTO_LIB_BENCHMARK_VISIBLE
-	# No need to select CRYPTO_LIB_BLAKE2S here, as that option doesn't
+	# No need to depend on CRYPTO_LIB_BLAKE2S here, as that option doesn't
 	# exist; the BLAKE2s code is always built-in for the /dev/random driver.
 	help
 	  KUnit tests for the BLAKE2s cryptographic hash function.
 
 config CRYPTO_LIB_CURVE25519_KUNIT_TEST
 	tristate "KUnit tests for Curve25519" if !KUNIT_ALL_TESTS
-	depends on KUNIT
+	depends on KUNIT && CRYPTO_LIB_CURVE25519
 	default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS
 	select CRYPTO_LIB_BENCHMARK_VISIBLE
-	select CRYPTO_LIB_CURVE25519
 	help
 	  KUnit tests for the Curve25519 Diffie-Hellman function.
 
 config CRYPTO_LIB_MD5_KUNIT_TEST
 	tristate "KUnit tests for MD5" if !KUNIT_ALL_TESTS
-	depends on KUNIT
+	depends on KUNIT && CRYPTO_LIB_MD5
 	default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS
 	select CRYPTO_LIB_BENCHMARK_VISIBLE
-	select CRYPTO_LIB_MD5
 	help
 	  KUnit tests for the MD5 cryptographic hash function and its
 	  corresponding HMAC.
 
 config CRYPTO_LIB_MLDSA_KUNIT_TEST
 	tristate "KUnit tests for ML-DSA" if !KUNIT_ALL_TESTS
-	depends on KUNIT
+	depends on KUNIT && CRYPTO_LIB_MLDSA
 	default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS
 	select CRYPTO_LIB_BENCHMARK_VISIBLE
-	select CRYPTO_LIB_MLDSA
 	help
 	  KUnit tests for the ML-DSA digital signature algorithm.
 
 config CRYPTO_LIB_NH_KUNIT_TEST
 	tristate "KUnit tests for NH" if !KUNIT_ALL_TESTS
-	depends on KUNIT
+	depends on KUNIT && CRYPTO_LIB_NH
 	default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS
-	select CRYPTO_LIB_NH
 	help
 	  KUnit tests for the NH almost-universal hash function.
 
 config CRYPTO_LIB_POLY1305_KUNIT_TEST
 	tristate "KUnit tests for Poly1305" if !KUNIT_ALL_TESTS
-	depends on KUNIT
+	depends on KUNIT && CRYPTO_LIB_POLY1305
 	default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS
 	select CRYPTO_LIB_BENCHMARK_VISIBLE
-	select CRYPTO_LIB_POLY1305
 	help
 	  KUnit tests for the Poly1305 library functions.
 
 config CRYPTO_LIB_POLYVAL_KUNIT_TEST
 	tristate "KUnit tests for POLYVAL" if !KUNIT_ALL_TESTS
-	depends on KUNIT
+	depends on KUNIT && CRYPTO_LIB_POLYVAL
 	default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS
 	select CRYPTO_LIB_BENCHMARK_VISIBLE
-	select CRYPTO_LIB_POLYVAL
 	help
 	  KUnit tests for the POLYVAL library functions.
 
 config CRYPTO_LIB_SHA1_KUNIT_TEST
 	tristate "KUnit tests for SHA-1" if !KUNIT_ALL_TESTS
-	depends on KUNIT
+	depends on KUNIT && CRYPTO_LIB_SHA1
 	default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS
 	select CRYPTO_LIB_BENCHMARK_VISIBLE
-	select CRYPTO_LIB_SHA1
 	help
 	  KUnit tests for the SHA-1 cryptographic hash function and its
 	  corresponding HMAC.
···
 # included, for consistency with the naming used elsewhere (e.g. CRYPTO_SHA256).
 config CRYPTO_LIB_SHA256_KUNIT_TEST
 	tristate "KUnit tests for SHA-224 and SHA-256" if !KUNIT_ALL_TESTS
-	depends on KUNIT
+	depends on KUNIT && CRYPTO_LIB_SHA256
 	default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS
 	select CRYPTO_LIB_BENCHMARK_VISIBLE
-	select CRYPTO_LIB_SHA256
 	help
 	  KUnit tests for the SHA-224 and SHA-256 cryptographic hash functions
 	  and their corresponding HMACs.
···
 # included, for consistency with the naming used elsewhere (e.g. CRYPTO_SHA512).
 config CRYPTO_LIB_SHA512_KUNIT_TEST
 	tristate "KUnit tests for SHA-384 and SHA-512" if !KUNIT_ALL_TESTS
-	depends on KUNIT
+	depends on KUNIT && CRYPTO_LIB_SHA512
 	default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS
 	select CRYPTO_LIB_BENCHMARK_VISIBLE
-	select CRYPTO_LIB_SHA512
 	help
 	  KUnit tests for the SHA-384 and SHA-512 cryptographic hash functions
 	  and their corresponding HMACs.
 
 config CRYPTO_LIB_SHA3_KUNIT_TEST
 	tristate "KUnit tests for SHA-3" if !KUNIT_ALL_TESTS
-	depends on KUNIT
+	depends on KUNIT && CRYPTO_LIB_SHA3
 	default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS
 	select CRYPTO_LIB_BENCHMARK_VISIBLE
-	select CRYPTO_LIB_SHA3
 	help
 	  KUnit tests for the SHA3 cryptographic hash and XOF functions,
 	  including SHA3-224, SHA3-256, SHA3-384, SHA3-512, SHAKE128 and
+125-106
lib/kunit/test.c
···
 	unsigned long total;
 };
 
-static bool kunit_should_print_stats(struct kunit_result_stats stats)
+static bool kunit_should_print_stats(struct kunit_result_stats *stats)
 {
 	if (kunit_stats_enabled == 0)
 		return false;
···
 	if (kunit_stats_enabled == 2)
 		return true;
 
-	return (stats.total > 1);
+	return (stats->total > 1);
 }
 
 static void kunit_print_test_stats(struct kunit *test,
-				   struct kunit_result_stats stats)
+				   struct kunit_result_stats *stats)
 {
 	if (!kunit_should_print_stats(stats))
 		return;
···
 		  KUNIT_SUBTEST_INDENT
 		  "# %s: pass:%lu fail:%lu skip:%lu total:%lu",
 		  test->name,
-		  stats.passed,
-		  stats.failed,
-		  stats.skipped,
-		  stats.total);
+		  stats->passed,
+		  stats->failed,
+		  stats->skipped,
+		  stats->total);
 }
 
 /* Append formatted message to log. */
···
 }
 
 static void kunit_print_suite_stats(struct kunit_suite *suite,
-				    struct kunit_result_stats suite_stats,
-				    struct kunit_result_stats param_stats)
+				    struct kunit_result_stats *suite_stats,
+				    struct kunit_result_stats *param_stats)
 {
 	if (kunit_should_print_stats(suite_stats)) {
 		kunit_log(KERN_INFO, suite,
 			  "# %s: pass:%lu fail:%lu skip:%lu total:%lu",
 			  suite->name,
-			  suite_stats.passed,
-			  suite_stats.failed,
-			  suite_stats.skipped,
-			  suite_stats.total);
+			  suite_stats->passed,
+			  suite_stats->failed,
+			  suite_stats->skipped,
+			  suite_stats->total);
 	}
 
 	if (kunit_should_print_stats(param_stats)) {
 		kunit_log(KERN_INFO, suite,
 			  "# Totals: pass:%lu fail:%lu skip:%lu total:%lu",
-			  param_stats.passed,
-			  param_stats.failed,
-			  param_stats.skipped,
-			  param_stats.total);
+			  param_stats->passed,
+			  param_stats->failed,
+			  param_stats->skipped,
+			  param_stats->total);
 	}
 }
···
 	}
 }
 
-int kunit_run_tests(struct kunit_suite *suite)
+static noinline_for_stack void
+kunit_run_param_test(struct kunit_suite *suite, struct kunit_case *test_case,
+		     struct kunit *test,
+		     struct kunit_result_stats *suite_stats,
+		     struct kunit_result_stats *total_stats,
+		     struct kunit_result_stats *param_stats)
 {
 	char param_desc[KUNIT_PARAM_DESC_SIZE];
+	const void *curr_param;
+
+	kunit_init_parent_param_test(test_case, test);
+	if (test_case->status == KUNIT_FAILURE) {
+		kunit_update_stats(param_stats, test->status);
+		return;
+	}
+	/* Get initial param. */
+	param_desc[0] = '\0';
+	/* TODO: Make generate_params try-catch */
+	curr_param = test_case->generate_params(test, NULL, param_desc);
+	test_case->status = KUNIT_SKIPPED;
+	kunit_log(KERN_INFO, test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
+		  "KTAP version 1\n");
+	kunit_log(KERN_INFO, test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
+		  "# Subtest: %s", test_case->name);
+	if (test->params_array.params &&
+	    test_case->generate_params == kunit_array_gen_params) {
+		kunit_log(KERN_INFO, test, KUNIT_SUBTEST_INDENT
+			  KUNIT_SUBTEST_INDENT "1..%zd\n",
+			  test->params_array.num_params);
+	}
+
+	while (curr_param) {
+		struct kunit param_test = {
+			.param_value = curr_param,
+			.param_index = ++test->param_index,
+			.parent = test,
+		};
+		kunit_init_test(&param_test, test_case->name, NULL);
+		param_test.log = test_case->log;
+		kunit_run_case_catch_errors(suite, test_case, &param_test);
+
+		if (param_desc[0] == '\0') {
+			snprintf(param_desc, sizeof(param_desc),
+				 "param-%d", param_test.param_index);
+		}
+
+		kunit_print_ok_not_ok(&param_test, KUNIT_LEVEL_CASE_PARAM,
+				      param_test.status,
+				      param_test.param_index,
+				      param_desc,
+				      param_test.status_comment);
+
+		kunit_update_stats(param_stats, param_test.status);
+
+		/* Get next param. */
+		param_desc[0] = '\0';
+		curr_param = test_case->generate_params(test, curr_param,
+							param_desc);
+	}
+	/*
+	 * TODO: Put into a try catch. Since we don't need suite->exit
+	 * for it we can't reuse kunit_try_run_cleanup for this yet.
+	 */
+	if (test_case->param_exit)
+		test_case->param_exit(test);
+	/* TODO: Put this kunit_cleanup into a try-catch. */
+	kunit_cleanup(test);
+}
+
+static noinline_for_stack void
+kunit_run_one_test(struct kunit_suite *suite, struct kunit_case *test_case,
+		   struct kunit_result_stats *suite_stats,
+		   struct kunit_result_stats *total_stats)
+{
+	struct kunit test = { .param_value = NULL, .param_index = 0 };
+	struct kunit_result_stats param_stats = { 0 };
+
+	kunit_init_test(&test, test_case->name, test_case->log);
+	if (test_case->status == KUNIT_SKIPPED) {
+		/* Test marked as skip */
+		test.status = KUNIT_SKIPPED;
+		kunit_update_stats(&param_stats, test.status);
+	} else if (!test_case->generate_params) {
+		/* Non-parameterised test. */
+		test_case->status = KUNIT_SKIPPED;
+		kunit_run_case_catch_errors(suite, test_case, &test);
+		kunit_update_stats(&param_stats, test.status);
+	} else {
+		kunit_run_param_test(suite, test_case, &test, suite_stats,
+				     total_stats, &param_stats);
+	}
+	kunit_print_attr((void *)test_case, true, KUNIT_LEVEL_CASE);
+
+	kunit_print_test_stats(&test, &param_stats);
+
+	kunit_print_ok_not_ok(&test, KUNIT_LEVEL_CASE, test_case->status,
+			      kunit_test_case_num(suite, test_case),
+			      test_case->name,
+			      test.status_comment);
+
+	kunit_update_stats(suite_stats, test_case->status);
+	kunit_accumulate_stats(total_stats, param_stats);
+}
+
+
+int kunit_run_tests(struct kunit_suite *suite)
+{
 	struct kunit_case *test_case;
 	struct kunit_result_stats suite_stats = { 0 };
 	struct kunit_result_stats total_stats = { 0 };
-	const void *curr_param;
 
 	/* Taint the kernel so we know we've run tests. */
 	add_taint(TAINT_TEST, LOCKDEP_STILL_OK);
···
 
 	kunit_print_suite_start(suite);
 
-	kunit_suite_for_each_test_case(suite, test_case) {
-		struct kunit test = { .param_value = NULL, .param_index = 0 };
-		struct kunit_result_stats param_stats = { 0 };
-
-		kunit_init_test(&test, test_case->name, test_case->log);
-		if (test_case->status == KUNIT_SKIPPED) {
-			/* Test marked as skip */
-			test.status = KUNIT_SKIPPED;
-			kunit_update_stats(&param_stats, test.status);
-		} else if (!test_case->generate_params) {
-			/* Non-parameterised test. */
-			test_case->status = KUNIT_SKIPPED;
-			kunit_run_case_catch_errors(suite, test_case, &test);
-			kunit_update_stats(&param_stats, test.status);
-		} else {
-			kunit_init_parent_param_test(test_case, &test);
-			if (test_case->status == KUNIT_FAILURE) {
-				kunit_update_stats(&param_stats, test.status);
-				goto test_case_end;
-			}
-			/* Get initial param. */
-			param_desc[0] = '\0';
-			/* TODO: Make generate_params try-catch */
-			curr_param = test_case->generate_params(&test, NULL, param_desc);
-			test_case->status = KUNIT_SKIPPED;
-			kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
-				  "KTAP version 1\n");
-			kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
-				  "# Subtest: %s", test_case->name);
-			if (test.params_array.params &&
-			    test_case->generate_params == kunit_array_gen_params) {
-				kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT
-					  KUNIT_SUBTEST_INDENT "1..%zd\n",
-					  test.params_array.num_params);
-			}
-
-			while (curr_param) {
-				struct kunit param_test = {
-					.param_value = curr_param,
-					.param_index = ++test.param_index,
-					.parent = &test,
-				};
-				kunit_init_test(&param_test, test_case->name, NULL);
-				param_test.log = test_case->log;
-				kunit_run_case_catch_errors(suite, test_case, &param_test);
-
-				if (param_desc[0] == '\0') {
-					snprintf(param_desc, sizeof(param_desc),
-						 "param-%d", param_test.param_index);
-				}
-
-				kunit_print_ok_not_ok(&param_test, KUNIT_LEVEL_CASE_PARAM,
-						      param_test.status,
-						      param_test.param_index,
-						      param_desc,
-						      param_test.status_comment);
-
-				kunit_update_stats(&param_stats, param_test.status);
-
-				/* Get next param. */
-				param_desc[0] = '\0';
-				curr_param = test_case->generate_params(&test, curr_param,
-									param_desc);
-			}
-			/*
-			 * TODO: Put into a try catch. Since we don't need suite->exit
-			 * for it we can't reuse kunit_try_run_cleanup for this yet.
-			 */
-			if (test_case->param_exit)
-				test_case->param_exit(&test);
-			/* TODO: Put this kunit_cleanup into a try-catch. */
-			kunit_cleanup(&test);
-		}
-test_case_end:
-		kunit_print_attr((void *)test_case, true, KUNIT_LEVEL_CASE);
-
-		kunit_print_test_stats(&test, param_stats);
-
-		kunit_print_ok_not_ok(&test, KUNIT_LEVEL_CASE, test_case->status,
-				      kunit_test_case_num(suite, test_case),
-				      test_case->name,
-				      test.status_comment);
-
-		kunit_update_stats(&suite_stats, test_case->status);
-		kunit_accumulate_stats(&total_stats, param_stats);
-	}
+	kunit_suite_for_each_test_case(suite, test_case)
+		kunit_run_one_test(suite, test_case, &suite_stats, &total_stats);
 
 	if (suite->suite_exit)
 		suite->suite_exit(suite);
 
-	kunit_print_suite_stats(suite, suite_stats, total_stats);
+	kunit_print_suite_stats(suite, &suite_stats, &total_stats);
 suite_end:
 	kunit_print_suite_end(suite);
···
 	 * to save memory. In case ->stride field is not available,
 	 * such optimizations are disabled.
 	 */
-	unsigned short stride;
+	unsigned int stride;
 #endif
 	};
 };
···
 }
 
 #ifdef CONFIG_64BIT
-static inline void slab_set_stride(struct slab *slab, unsigned short stride)
+static inline void slab_set_stride(struct slab *slab, unsigned int stride)
 {
 	slab->stride = stride;
 }
-static inline unsigned short slab_get_stride(struct slab *slab)
+static inline unsigned int slab_get_stride(struct slab *slab)
 {
 	return slab->stride;
 }
 #else
-static inline void slab_set_stride(struct slab *slab, unsigned short stride)
+static inline void slab_set_stride(struct slab *slab, unsigned int stride)
 {
 	VM_WARN_ON_ONCE(stride != sizeof(struct slabobj_ext));
 }
-static inline unsigned short slab_get_stride(struct slab *slab)
+static inline unsigned int slab_get_stride(struct slab *slab)
 {
 	return sizeof(struct slabobj_ext);
 }
+47-22
mm/slub.c
···
  * object pointers are moved to a on-stack array under the lock. To bound the
  * stack usage, limit each batch to PCS_BATCH_MAX.
  *
- * returns true if at least partially flushed
+ * Must be called with s->cpu_sheaves->lock locked, returns with the lock
+ * unlocked.
+ *
+ * Returns how many objects are remaining to be flushed
  */
-static bool sheaf_flush_main(struct kmem_cache *s)
+static unsigned int __sheaf_flush_main_batch(struct kmem_cache *s)
 {
 	struct slub_percpu_sheaves *pcs;
 	unsigned int batch, remaining;
 	void *objects[PCS_BATCH_MAX];
 	struct slab_sheaf *sheaf;
-	bool ret = false;
 
-next_batch:
-	if (!local_trylock(&s->cpu_sheaves->lock))
-		return ret;
+	lockdep_assert_held(this_cpu_ptr(&s->cpu_sheaves->lock));
 
 	pcs = this_cpu_ptr(s->cpu_sheaves);
 	sheaf = pcs->main;
···
 
 	stat_add(s, SHEAF_FLUSH, batch);
 
-	ret = true;
+	return remaining;
+}
 
-	if (remaining)
-		goto next_batch;
+static void sheaf_flush_main(struct kmem_cache *s)
+{
+	unsigned int remaining;
+
+	do {
+		local_lock(&s->cpu_sheaves->lock);
+
+		remaining = __sheaf_flush_main_batch(s);
+
+	} while (remaining);
+}
+
+/*
+ * Returns true if the main sheaf was at least partially flushed.
+ */
+static bool sheaf_try_flush_main(struct kmem_cache *s)
+{
+	unsigned int remaining;
+	bool ret = false;
+
+	do {
+		if (!local_trylock(&s->cpu_sheaves->lock))
+			return ret;
+
+		ret = true;
+		remaining = __sheaf_flush_main_batch(s);
+
+	} while (remaining);
 
 	return ret;
 }
···
 	struct slab_sheaf *empty = NULL;
 	struct slab_sheaf *full;
 	struct node_barn *barn;
-	bool can_alloc;
+	bool allow_spin;
 
 	lockdep_assert_held(this_cpu_ptr(&s->cpu_sheaves->lock));
···
 		return NULL;
 	}
 
-	full = barn_replace_empty_sheaf(barn, pcs->main,
-					gfpflags_allow_spinning(gfp));
+	allow_spin = gfpflags_allow_spinning(gfp);
+
+	full = barn_replace_empty_sheaf(barn, pcs->main, allow_spin);
 
 	if (full) {
 		stat(s, BARN_GET);
···
 
 	stat(s, BARN_GET_FAIL);
 
-	can_alloc = gfpflags_allow_blocking(gfp);
-
-	if (can_alloc) {
+	if (allow_spin) {
 		if (pcs->spare) {
 			empty = pcs->spare;
 			pcs->spare = NULL;
···
 	}
 
 	local_unlock(&s->cpu_sheaves->lock);
+	pcs = NULL;
 
-	if (!can_alloc)
+	if (!allow_spin)
 		return NULL;
 
 	if (empty) {
···
 	if (!full)
 		return NULL;
 
-	/*
-	 * we can reach here only when gfpflags_allow_blocking
-	 * so this must not be an irq
-	 */
-	local_lock(&s->cpu_sheaves->lock);
+	if (!local_trylock(&s->cpu_sheaves->lock))
+		goto barn_put;
 	pcs = this_cpu_ptr(s->cpu_sheaves);
 
 	/*
···
 		return pcs;
 	}
 
+barn_put:
 	barn_put_full_sheaf(barn, full);
 	stat(s, BARN_PUT);
···
 	if (put_fail)
 		stat(s, BARN_PUT_FAIL);
 
-	if (!sheaf_flush_main(s))
+	if (!sheaf_try_flush_main(s))
 		return NULL;
 
 	if (!local_trylock(&s->cpu_sheaves->lock))
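The refactor above splits one batch step, called with the per-CPU lock held and returning the number of objects still queued, from two drivers: a blocking flush that loops until empty, and a trylock variant that reports whether anything was flushed at all. A userspace sketch of that structure (the types, the global lock flag, and all function names are simplified stand-ins for the slub internals):

```c
#include <assert.h>

#define BATCH_MAX 4

struct sheaf_like {
	unsigned int size;	/* objects currently in the main sheaf */
};

static int lock_available = 1;	/* stand-in for local_trylock() success */

static int trylock_sketch(void)
{
	return lock_available;
}

/* One batch step: caller holds the "lock"; returns what remains. */
static unsigned int flush_main_batch(struct sheaf_like *s)
{
	unsigned int batch = s->size < BATCH_MAX ? s->size : BATCH_MAX;

	s->size -= batch;
	return s->size;
}

/* Blocking variant: keep re-taking the lock and flushing until empty. */
static void flush_main(struct sheaf_like *s)
{
	while (flush_main_batch(s))
		;
}

/* Trylock variant: true if at least one batch was flushed. */
static int try_flush_main(struct sheaf_like *s)
{
	int ret = 0;

	do {
		if (!trylock_sketch())
			return ret;
		ret = 1;
	} while (flush_main_batch(s));

	return ret;
}
```

Keeping the lock-held step separate makes the locking contract explicit (the kernel version asserts it with `lockdep_assert_held()`) and lets the two callers differ only in how they acquire the lock between batches.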
···
 		/* unsupported WiFi driver version */
 		goto default_throughput;
 
-	real_netdev = batadv_get_real_netdev(hard_iface->net_dev);
+	/* only use rtnl_trylock because the elp worker will be cancelled while
+	 * the rtnl_lock is held. the cancel_delayed_work_sync() would otherwise
+	 * wait forever when the elp work_item was started and it is then also
+	 * trying to rtnl_lock
+	 */
+	if (!rtnl_trylock())
+		return false;
+	real_netdev = __batadv_get_real_netdev(hard_iface->net_dev);
+	rtnl_unlock();
 	if (!real_netdev)
 		goto default_throughput;
 
+4-4
net/batman-adv/hard-interface.c
···
 }
 
 /**
- * batadv_get_real_netdevice() - check if the given netdev struct is a virtual
+ * __batadv_get_real_netdev() - check if the given netdev struct is a virtual
  * interface on top of another 'real' interface
  * @netdev: the device to check
  *
···
  * Return: the 'real' net device or the original net device and NULL in case
  * of an error.
  */
-static struct net_device *batadv_get_real_netdevice(struct net_device *netdev)
+struct net_device *__batadv_get_real_netdev(struct net_device *netdev)
 {
 	struct batadv_hard_iface *hard_iface = NULL;
 	struct net_device *real_netdev = NULL;
···
 	struct net_device *real_netdev;
 
 	rtnl_lock();
-	real_netdev = batadv_get_real_netdevice(net_device);
+	real_netdev = __batadv_get_real_netdev(net_device);
 	rtnl_unlock();
 
 	return real_netdev;
···
 	if (batadv_is_cfg80211_netdev(net_device))
 		wifi_flags |= BATADV_HARDIF_WIFI_CFG80211_DIRECT;
 
-	real_netdev = batadv_get_real_netdevice(net_device);
+	real_netdev = __batadv_get_real_netdev(net_device);
 	if (!real_netdev)
 		return wifi_flags;
···
 	if (shinfo->nr_frags > 0) {
 		niov = netmem_to_net_iov(skb_frag_netmem(&shinfo->frags[0]));
 		if (net_is_devmem_iov(niov) &&
-		    net_devmem_iov_binding(niov)->dev != dev)
+		    READ_ONCE(net_devmem_iov_binding(niov)->dev) != dev)
 			goto out_free;
 	}
 
···
 	if (dev->flags & IFF_UP) {
 		int cpu = smp_processor_id(); /* ok because BHs are off */
 
-		/* Other cpus might concurrently change txq->xmit_lock_owner
-		 * to -1 or to their cpu id, but not to our id.
-		 */
-		if (READ_ONCE(txq->xmit_lock_owner) != cpu) {
+		if (!netif_tx_owned(txq, cpu)) {
 			bool is_list = false;
 
 			if (dev_xmit_recursion())
···
 	return -1;
 }
 
-static void napi_threaded_poll_loop(struct napi_struct *napi, bool busy_poll)
+static void napi_threaded_poll_loop(struct napi_struct *napi,
+				    unsigned long *busy_poll_last_qs)
 {
+	unsigned long last_qs = busy_poll_last_qs ? *busy_poll_last_qs : jiffies;
 	struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
 	struct softnet_data *sd;
-	unsigned long last_qs = jiffies;
 
 	for (;;) {
 		bool repoll = false;
···
 		/* When busy poll is enabled, the old packets are not flushed in
 		 * napi_complete_done. So flush them here.
 		 */
-		if (busy_poll)
+		if (busy_poll_last_qs)
 			gro_flush_normal(&napi->gro, HZ >= 1000);
 		local_bh_enable();
 
 		/* Call cond_resched here to avoid watchdog warnings. */
-		if (repoll || busy_poll) {
+		if (repoll || busy_poll_last_qs) {
 			rcu_softirq_qs_periodic(last_qs);
 			cond_resched();
 		}
···
 		if (!repoll)
 			break;
 	}
+
+	if (busy_poll_last_qs)
+		*busy_poll_last_qs = last_qs;
 }
 
 static int napi_threaded_poll(void *data)
 {
 	struct napi_struct *napi = data;
+	unsigned long last_qs = jiffies;
 	bool want_busy_poll;
 	bool in_busy_poll;
 	unsigned long val;
···
 		assign_bit(NAPI_STATE_IN_BUSY_POLL, &napi->state,
 			   want_busy_poll);
 
-		napi_threaded_poll_loop(napi, want_busy_poll);
+		napi_threaded_poll_loop(napi, want_busy_poll ? &last_qs : NULL);
 	}
 
 	return 0;
···
 {
 	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
 
-	napi_threaded_poll_loop(&sd->backlog, false);
+	napi_threaded_poll_loop(&sd->backlog, NULL);
 }
 
 static void backlog_napi_setup(unsigned int cpu)
···
 		if (r->sdiag_family != AF_UNSPEC &&
 		    sk->sk_family != r->sdiag_family)
 			goto next_normal;
-		if (r->id.idiag_sport != htons(sk->sk_num) &&
+		if (r->id.idiag_sport != htons(READ_ONCE(sk->sk_num)) &&
 		    r->id.idiag_sport)
 			goto next_normal;
 		if (r->id.idiag_dport != sk->sk_dport &&
+15-23
net/ipv4/tcp_input.c
···
 static bool tcp_prune_ofo_queue(struct sock *sk, const struct sk_buff *in_skb);
 static int tcp_prune_queue(struct sock *sk, const struct sk_buff *in_skb);
 
-/* Check if this incoming skb can be added to socket receive queues
- * while satisfying sk->sk_rcvbuf limit.
- *
- * In theory we should use skb->truesize, but this can cause problems
- * when applications use too small SO_RCVBUF values.
- * When LRO / hw gro is used, the socket might have a high tp->scaling_ratio,
- * allowing RWIN to be close to available space.
- * Whenever the receive queue gets full, we can receive a small packet
- * filling RWIN, but with a high skb->truesize, because most NIC use 4K page
- * plus sk_buff metadata even when receiving less than 1500 bytes of payload.
- *
- * Note that we use skb->len to decide to accept or drop this packet,
- * but sk->sk_rmem_alloc is the sum of all skb->truesize.
- */
 static bool tcp_can_ingest(const struct sock *sk, const struct sk_buff *skb)
 {
 	unsigned int rmem = atomic_read(&sk->sk_rmem_alloc);
 
-	return rmem + skb->len <= sk->sk_rcvbuf;
+	return rmem <= sk->sk_rcvbuf;
 }
 
 static int tcp_try_rmem_schedule(struct sock *sk, const struct sk_buff *skb,
···
 
 	if (unlikely(tcp_try_rmem_schedule(sk, skb, skb->truesize))) {
 		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPOFODROP);
-		sk->sk_data_ready(sk);
+		READ_ONCE(sk->sk_data_ready)(sk);
 		tcp_drop_reason(sk, skb, SKB_DROP_REASON_PROTO_MEM);
 		return;
 	}
···
 void tcp_data_ready(struct sock *sk)
 {
 	if (tcp_epollin_ready(sk, sk->sk_rcvlowat) || sock_flag(sk, SOCK_DONE))
-		sk->sk_data_ready(sk);
+		READ_ONCE(sk->sk_data_ready)(sk);
 }
 
 static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
···
 		inet_csk(sk)->icsk_ack.pending |=
 				(ICSK_ACK_NOMEM | ICSK_ACK_NOW);
 		inet_csk_schedule_ack(sk);
-		sk->sk_data_ready(sk);
+		READ_ONCE(sk->sk_data_ready)(sk);
 
 		if (skb_queue_len(&sk->sk_receive_queue) && skb->len) {
 			reason = SKB_DROP_REASON_PROTO_MEM;
···
 		tp->snd_cwnd_stamp = tcp_jiffies32;
 	}
 
-	INDIRECT_CALL_1(sk->sk_write_space, sk_stream_write_space, sk);
+	INDIRECT_CALL_1(READ_ONCE(sk->sk_write_space),
+			sk_stream_write_space,
+			sk);
 }
 
 /* Caller made space either from:
···
 			BUG();
 		WRITE_ONCE(tp->urg_data, TCP_URG_VALID | tmp);
 		if (!sock_flag(sk, SOCK_DEAD))
-			sk->sk_data_ready(sk);
+			READ_ONCE(sk->sk_data_ready)(sk);
 	}
 }
···
 	const struct tcp_sock *tp = tcp_sk(sk);
 	struct net *net = sock_net(sk);
 	struct sock *fastopen_sk = NULL;
+	union tcp_seq_and_ts_off st;
 	struct request_sock *req;
 	bool want_cookie = false;
 	struct dst_entry *dst;
···
 	if (!dst)
 		goto drop_and_free;
 
+	if (tmp_opt.tstamp_ok || (!want_cookie && !isn))
+		st = af_ops->init_seq_and_ts_off(net, skb);
+
 	if (tmp_opt.tstamp_ok) {
 		tcp_rsk(req)->req_usec_ts = dst_tcp_usec_ts(dst);
-		tcp_rsk(req)->ts_off = af_ops->init_ts_off(net, skb);
+		tcp_rsk(req)->ts_off = st.ts_off;
 	}
 	if (!want_cookie && !isn) {
 		int max_syn_backlog = READ_ONCE(net->ipv4.sysctl_max_syn_backlog);
···
 			goto drop_and_release;
 		}
 
-		isn = af_ops->init_seq(skb);
+		isn = st.seq;
 	}
 
 	tcp_ecn_create_request(req, skb, sk, dst);
···
 			sock_put(fastopen_sk);
 			goto drop_and_free;
 		}
-		sk->sk_data_ready(sk);
+		READ_ONCE(sk->sk_data_ready)(sk);
 		bh_unlock_sock(fastopen_sk);
 		sock_put(fastopen_sk);
 	} else {
···
 		reason = tcp_rcv_state_process(child, skb);
 		/* Wakeup parent, send SIGIO */
 		if (state == TCP_SYN_RECV && child->sk_state != state)
-			parent->sk_data_ready(parent);
+			READ_ONCE(parent->sk_data_ready)(parent);
 	} else {
 		/* Alas, it is possible again, because we do lookup
 		 * in main socket hash table and lock on listening
···
 	int family = sk->sk_family == AF_INET ? UDP_BPF_IPV4 : UDP_BPF_IPV6;
 
 	if (restore) {
-		sk->sk_write_space = psock->saved_write_space;
+		WRITE_ONCE(sk->sk_write_space, psock->saved_write_space);
 		sock_replace_proto(sk, psock->sk_proto);
 		return 0;
 	}
+2-1
net/ipv6/inet6_hashtables.c
···
 {
 	int score = -1;
 
-	if (net_eq(sock_net(sk), net) && inet_sk(sk)->inet_num == hnum &&
+	if (net_eq(sock_net(sk), net) &&
+	    READ_ONCE(inet_sk(sk)->inet_num) == hnum &&
 	    sk->sk_family == PF_INET6) {
 		if (!ipv6_addr_equal(&sk->sk_v6_rcv_saddr, daddr))
 			return -1;
+5-6
net/ipv6/route.c
···
 	 */
 	if (netif_is_l3_slave(dev) &&
 	    !rt6_need_strict(&res->f6i->fib6_dst.addr))
-		dev = l3mdev_master_dev_rcu(dev);
+		dev = l3mdev_master_dev_rcu(dev) ? :
+		      dev_net(dev)->loopback_dev;
 	else if (!netif_is_l3_master(dev))
 		dev = dev_net(dev)->loopback_dev;
 	/* last case is netif_is_l3_master(dev) is true in which
···
 	netdevice_tracker *dev_tracker = &fib6_nh->fib_nh_dev_tracker;
 	struct net_device *dev = NULL;
 	struct inet6_dev *idev = NULL;
-	int addr_type;
 	int err;
 
 	fib6_nh->fib_nh_family = AF_INET6;
···
 
 	fib6_nh->fib_nh_weight = 1;
 
-	/* We cannot add true routes via loopback here,
-	 * they would result in kernel looping; promote them to reject routes
+	/* Reset the nexthop device to the loopback device in case of reject
+	 * routes.
 	 */
-	addr_type = ipv6_addr_type(&cfg->fc_dst);
-	if (fib6_is_reject(cfg->fc_flags, dev, addr_type)) {
+	if (cfg->fc_flags & RTF_REJECT) {
 		/* hold loopback dev/idev if we haven't done so. */
 		if (dev != net->loopback_dev) {
 			if (dev) {
···
 	spin_lock_bh(&msk->pm.lock);
 }
 
-void mptcp_pm_addr_send_ack(struct mptcp_sock *msk)
+static bool subflow_in_rm_list(const struct mptcp_subflow_context *subflow,
+			       const struct mptcp_rm_list *rm_list)
 {
-	struct mptcp_subflow_context *subflow, *alt = NULL;
+	u8 i, id = subflow_get_local_id(subflow);
+
+	for (i = 0; i < rm_list->nr; i++) {
+		if (rm_list->ids[i] == id)
+			return true;
+	}
+
+	return false;
+}
+
+static void
+mptcp_pm_addr_send_ack_avoid_list(struct mptcp_sock *msk,
+				  const struct mptcp_rm_list *rm_list)
+{
+	struct mptcp_subflow_context *subflow, *stale = NULL, *same_id = NULL;
 
 	msk_owned_by_me(msk);
 	lockdep_assert_held(&msk->pm.lock);
···
 		return;
 
 	mptcp_for_each_subflow(msk, subflow) {
-		if (__mptcp_subflow_active(subflow)) {
-			if (!subflow->stale) {
-				mptcp_pm_send_ack(msk, subflow, false, false);
-				return;
-			}
+		if (!__mptcp_subflow_active(subflow))
+			continue;
 
-			if (!alt)
-				alt = subflow;
+		if (unlikely(subflow->stale)) {
+			if (!stale)
+				stale = subflow;
+		} else if (unlikely(rm_list &&
+				    subflow_in_rm_list(subflow, rm_list))) {
+			if (!same_id)
+				same_id = subflow;
+		} else {
+			goto send_ack;
 		}
 	}
 
-	if (alt)
-		mptcp_pm_send_ack(msk, alt, false, false);
+	if (same_id)
+		subflow = same_id;
+	else if (stale)
+		subflow = stale;
+	else
+		return;
+
+send_ack:
+	mptcp_pm_send_ack(msk, subflow, false, false);
+}
+
+void mptcp_pm_addr_send_ack(struct mptcp_sock *msk)
+{
+	mptcp_pm_addr_send_ack_avoid_list(msk, NULL);
 }
 
 int mptcp_pm_mp_prio_send_ack(struct mptcp_sock *msk,
···
 	msk->pm.rm_list_tx = *rm_list;
 	rm_addr |= BIT(MPTCP_RM_ADDR_SIGNAL);
 	WRITE_ONCE(msk->pm.addr_signal, rm_addr);
-	mptcp_pm_addr_send_ack(msk);
+	mptcp_pm_addr_send_ack_avoid_list(msk, rm_list);
 	return 0;
 }
+9
net/mptcp/pm_kernel.c
···
 	}
 
 exit:
+	/* If an endpoint has both the signal and subflow flags, but it was not
+	 * possible to create subflows -- the 'while' loop body above never
+	 * executed -- then still mark the endpoint as used. This avoids issues
+	 * later when removing the endpoint and calling
+	 * __mark_subflow_endp_available(), which expects the increment here.
+	 */
+	if (signal_and_subflow && local.addr.id != msk->mpc_endpoint_id)
+		msk->pm.local_addr_used++;
+
 	mptcp_pm_nl_check_work_pending(msk);
 }
+25-20
net/netfilter/nf_tables_api.c
···
 	}
 }
 
+/* Use NFT_ITER_UPDATE iterator even if this may be called from the preparation
+ * phase, the set clone might already exist from a previous command, or it might
+ * be a set that is going away and does not require a clone. The netns and
+ * netlink release paths also need to work on the live set.
+ */
 static void nft_map_deactivate(const struct nft_ctx *ctx, struct nft_set *set)
 {
 	struct nft_set_iter iter = {
···
 	struct nft_data_desc desc;
 	enum nft_registers dreg;
 	struct nft_trans *trans;
+	bool set_full = false;
 	u64 expiration;
 	u64 timeout;
 	int err, i;
···
 	if (err < 0)
 		goto err_elem_free;
 
+	if (!(flags & NFT_SET_ELEM_CATCHALL)) {
+		unsigned int max = nft_set_maxsize(set), nelems;
+
+		nelems = atomic_inc_return(&set->nelems);
+		if (nelems > max)
+			set_full = true;
+	}
+
 	trans = nft_trans_elem_alloc(ctx, NFT_MSG_NEWSETELEM, set);
 	if (trans == NULL) {
 		err = -ENOMEM;
-		goto err_elem_free;
+		goto err_set_size;
 	}
 
 	ext->genmask = nft_genmask_cur(ctx->net);
···
 			ue->priv = elem_priv;
 			nft_trans_commit_list_add_elem(ctx->net, trans);
-			goto err_elem_free;
+			goto err_set_size;
 		}
 	}
···
 		goto err_element_clash;
 	}
 
-	if (!(flags & NFT_SET_ELEM_CATCHALL)) {
-		unsigned int max = nft_set_maxsize(set);
-
-		if (!atomic_add_unless(&set->nelems, 1, max)) {
-			err = -ENFILE;
-			goto err_set_full;
-		}
-	}
-
 	nft_trans_container_elem(trans)->elems[0].priv = elem.priv;
 	nft_trans_commit_list_add_elem(ctx->net, trans);
-	return 0;
+	return set_full ? -ENFILE : 0;
 
-err_set_full:
-	nft_setelem_remove(ctx->net, set, elem.priv);
 err_element_clash:
 	kfree(trans);
+err_set_size:
+	if (!(flags & NFT_SET_ELEM_CATCHALL))
+		atomic_dec(&set->nelems);
 err_elem_free:
 	nf_tables_set_elem_destroy(ctx, set, elem.priv);
 err_parse_data:
···
 
 static int nft_set_flush(struct nft_ctx *ctx, struct nft_set *set, u8 genmask)
 {
+	/* The set backend might need to clone the set, do it now from the
+	 * preparation phase, use NFT_ITER_UPDATE_CLONE iterator type.
+	 */
 	struct nft_set_iter iter = {
 		.genmask	= genmask,
-		.type		= NFT_ITER_UPDATE,
+		.type		= NFT_ITER_UPDATE_CLONE,
 		.fn		= nft_setelem_flush,
 	};
 
···
 	spin_unlock(&nf_tables_gc_list_lock);
 
 	schedule_work(&trans_gc_work);
-}
-
-static int nft_trans_gc_space(struct nft_trans_gc *trans)
-{
-	return NFT_TRANS_GC_BATCHCOUNT - trans->count;
 }
 
 struct nft_trans_gc *nft_trans_gc_queue_async(struct nft_trans_gc *gc,
+1
net/netfilter/nft_set_hash.c
···
 {
 	switch (iter->type) {
 	case NFT_ITER_UPDATE:
+	case NFT_ITER_UPDATE_CLONE:
 		/* only relevant for netlink dumps which use READ type */
 		WARN_ON_ONCE(iter->skip != 0);
+52-10
net/netfilter/nft_set_pipapo.c
···
 }
 
 /**
- * pipapo_gc() - Drop expired entries from set, destroy start and end elements
+ * pipapo_gc_scan() - Drop expired entries from set and link them to gc list
  * @set: nftables API set representation
  * @m: Matching data
  */
-static void pipapo_gc(struct nft_set *set, struct nft_pipapo_match *m)
+static void pipapo_gc_scan(struct nft_set *set, struct nft_pipapo_match *m)
 {
 	struct nft_pipapo *priv = nft_set_priv(set);
 	struct net *net = read_pnet(&set->net);
···
 	gc = nft_trans_gc_alloc(set, 0, GFP_KERNEL);
 	if (!gc)
 		return;
+
+	list_add(&gc->list, &priv->gc_head);
 
 	while ((rules_f0 = pipapo_rules_same_key(m->f, first_rule))) {
 		union nft_pipapo_map_bucket rulemap[NFT_PIPAPO_MAX_FIELDS];
···
 		 * NFT_SET_ELEM_DEAD_BIT.
 		 */
 		if (__nft_set_elem_expired(&e->ext, tstamp)) {
-			gc = nft_trans_gc_queue_sync(gc, GFP_KERNEL);
-			if (!gc)
-				return;
+			if (!nft_trans_gc_space(gc)) {
+				gc = nft_trans_gc_alloc(set, 0, GFP_KERNEL);
+				if (!gc)
+					return;
+
+				list_add(&gc->list, &priv->gc_head);
+			}
 
 			nft_pipapo_gc_deactivate(net, set, e);
 			pipapo_drop(m, rulemap);
···
 		}
 	}
 
-	gc = nft_trans_gc_catchall_sync(gc);
+	priv->last_gc = jiffies;
+}
+
+/**
+ * pipapo_gc_queue() - Free expired elements
+ * @set: nftables API set representation
+ */
+static void pipapo_gc_queue(struct nft_set *set)
+{
+	struct nft_pipapo *priv = nft_set_priv(set);
+	struct nft_trans_gc *gc, *next;
+
+	/* always do a catchall cycle: */
+	gc = nft_trans_gc_alloc(set, 0, GFP_KERNEL);
 	if (gc) {
+		gc = nft_trans_gc_catchall_sync(gc);
+		if (gc)
+			nft_trans_gc_queue_sync_done(gc);
+	}
+
+	/* always purge queued gc elements. */
+	list_for_each_entry_safe(gc, next, &priv->gc_head, list) {
+		list_del(&gc->list);
 		nft_trans_gc_queue_sync_done(gc);
-		priv->last_gc = jiffies;
 	}
 }
···
  *
  * We also need to create a new working copy for subsequent insertions and
  * deletions.
+ *
+ * After the live copy has been replaced by the clone, we can safely queue
+ * expired elements that have been collected by pipapo_gc_scan() for
+ * memory reclaim.
  */
 static void nft_pipapo_commit(struct nft_set *set)
 {
···
 		return;
 
 	if (time_after_eq(jiffies, priv->last_gc + nft_set_gc_interval(set)))
-		pipapo_gc(set, priv->clone);
+		pipapo_gc_scan(set, priv->clone);
 
 	old = rcu_replace_pointer(priv->match, priv->clone,
 				  nft_pipapo_transaction_mutex_held(set));
···
 
 	if (old)
 		call_rcu(&old->rcu, pipapo_reclaim_match);
+
+	pipapo_gc_queue(set);
 }
 
 static void nft_pipapo_abort(const struct nft_set *set)
···
 	const struct nft_pipapo_match *m;
 
 	switch (iter->type) {
-	case NFT_ITER_UPDATE:
+	case NFT_ITER_UPDATE_CLONE:
 		m = pipapo_maybe_clone(set);
 		if (!m) {
 			iter->err = -ENOMEM;
 			return;
 		}
-
+		nft_pipapo_do_walk(ctx, set, m, iter);
+		break;
+	case NFT_ITER_UPDATE:
+		if (priv->clone)
+			m = priv->clone;
+		else
+			m = rcu_dereference_protected(priv->match,
+						      nft_pipapo_transaction_mutex_held(set));
 		nft_pipapo_do_walk(ctx, set, m, iter);
 		break;
 	case NFT_ITER_READ:
···
 		f->mt = NULL;
 	}
 
+	INIT_LIST_HEAD(&priv->gc_head);
 	rcu_assign_pointer(priv->match, m);
 
 	return 0;
···
 {
 	struct nft_pipapo *priv = nft_set_priv(set);
 	struct nft_pipapo_match *m;
+
+	WARN_ON_ONCE(!list_empty(&priv->gc_head));
 
 	m = rcu_dereference_protected(priv->match, true);
+2
net/netfilter/nft_set_pipapo.h
···
  * @clone:	Copy where pending insertions and deletions are kept
  * @width:	Total bytes to be matched for one packet, including padding
  * @last_gc:	Timestamp of last garbage collection run, jiffies
+ * @gc_head:	list of nft_trans_gc to queue up for mem reclaim
  */
 struct nft_pipapo {
 	struct nft_pipapo_match __rcu *match;
 	struct nft_pipapo_match *clone;
 	int width;
 	unsigned long last_gc;
+	struct list_head gc_head;
 };
 
 struct nft_pipapo_elem;
+5-3
net/netfilter/nft_set_rbtree.c
···
 	struct nft_rbtree *priv = nft_set_priv(set);
 
 	switch (iter->type) {
-	case NFT_ITER_UPDATE:
-		lockdep_assert_held(&nft_pernet(ctx->net)->commit_mutex);
-
+	case NFT_ITER_UPDATE_CLONE:
 		if (nft_array_may_resize(set) < 0) {
 			iter->err = -ENOMEM;
 			break;
 		}
+		fallthrough;
+	case NFT_ITER_UPDATE:
+		lockdep_assert_held(&nft_pernet(ctx->net)->commit_mutex);
+
 		nft_rbtree_do_walk(ctx, set, iter);
 		break;
 	case NFT_ITER_READ:
···
 	flush_workqueue(ndev->cmd_wq);
 	timer_delete_sync(&ndev->cmd_timer);
 	timer_delete_sync(&ndev->data_timer);
+	if (test_bit(NCI_DATA_EXCHANGE, &ndev->flags))
+		nci_data_exchange_complete(ndev, NULL,
+					   ndev->cur_conn_id,
+					   -ENODEV);
 	mutex_unlock(&ndev->req_lock);
 	return 0;
 }
···
 	flush_workqueue(ndev->cmd_wq);
 
 	timer_delete_sync(&ndev->cmd_timer);
+	timer_delete_sync(&ndev->data_timer);
+
+	if (test_bit(NCI_DATA_EXCHANGE, &ndev->flags))
+		nci_data_exchange_complete(ndev, NULL, ndev->cur_conn_id,
+					   -ENODEV);
 
 	/* Clear flags except NCI_UNREG */
 	ndev->flags &= BIT(NCI_UNREG);
···
 	struct nci_conn_info *conn_info;
 
 	conn_info = ndev->rf_conn_info;
-	if (!conn_info)
+	if (!conn_info) {
+		kfree_skb(skb);
 		return -EPROTO;
+	}
 
 	pr_debug("target_idx %d, len %d\n", target->idx, skb->len);
 
 	if (!ndev->target_active_prot) {
 		pr_err("unable to exchange data, no active target\n");
+		kfree_skb(skb);
 		return -EINVAL;
 	}
 
-	if (test_and_set_bit(NCI_DATA_EXCHANGE, &ndev->flags))
+	if (test_and_set_bit(NCI_DATA_EXCHANGE, &ndev->flags)) {
+		kfree_skb(skb);
 		return -EBUSY;
+	}
 
 	/* store cb and context to be used on receiving data */
 	conn_info->data_exchange_cb = cb;
···
 	unsigned int hdr_size = NCI_CTRL_HDR_SIZE;
 
 	if (skb->len < hdr_size ||
-	    !nci_plen(skb->data) ||
 	    skb->len < hdr_size + nci_plen(skb->data)) {
 		return false;
 	}
+
+	if (!nci_plen(skb->data)) {
+		/* Allow zero length in proprietary notifications (0x20 - 0x3F). */
+		if (nci_opcode_oid(nci_opcode(skb->data)) >= 0x20 &&
+		    nci_mt(skb->data) == NCI_MT_NTF_PKT)
+			return true;
+
+		/* Disallow zero length otherwise. */
+		return false;
+	}
+
 	return true;
 }
+8-4
net/nfc/nci/data.c
···
 	conn_info = nci_get_conn_info_by_conn_id(ndev, conn_id);
 	if (!conn_info) {
 		kfree_skb(skb);
-		goto exit;
+		clear_bit(NCI_DATA_EXCHANGE, &ndev->flags);
+		return;
 	}
 
 	cb = conn_info->data_exchange_cb;
···
 	timer_delete_sync(&ndev->data_timer);
 	clear_bit(NCI_DATA_EXCHANGE_TO, &ndev->flags);
 
+	/* Mark the exchange as done before calling the callback.
+	 * The callback (e.g. rawsock_data_exchange_complete) may
+	 * want to immediately queue another data exchange.
+	 */
+	clear_bit(NCI_DATA_EXCHANGE, &ndev->flags);
+
 	if (cb) {
 		/* forward skb to nfc core */
 		cb(cb_context, skb, err);
···
 		/* no waiting callback, free skb */
 		kfree_skb(skb);
 	}
-
-exit:
-	clear_bit(NCI_DATA_EXCHANGE, &ndev->flags);
 }
 
 /* ----------------- NCI TX Data ----------------- */
+11
net/nfc/rawsock.c
···
 	if (sock->type == SOCK_RAW)
 		nfc_sock_unlink(&raw_sk_list, sk);
 
+	if (sk->sk_state == TCP_ESTABLISHED) {
+		/* Prevent rawsock_tx_work from starting new transmits and
+		 * wait for any in-progress work to finish. This must happen
+		 * before the socket is orphaned to avoid a race where
+		 * rawsock_tx_work runs after the NCI device has been freed.
+		 */
+		sk->sk_shutdown |= SEND_SHUTDOWN;
+		cancel_work_sync(&nfc_rawsock(sk)->tx_work);
+		rawsock_write_queue_purge(sk);
+	}
+
 	sock_orphan(sk);
 	sock_put(sk);
+10-4
net/rds/tcp.c
···
 	struct rds_tcp_net *rtn;
 
 	tcp_sock_set_nodelay(sock->sk);
-	lock_sock(sk);
 	/* TCP timer functions might access net namespace even after
 	 * a process which created this net namespace terminated.
 	 */
 	if (!sk->sk_net_refcnt) {
-		if (!maybe_get_net(net)) {
-			release_sock(sk);
+		if (!maybe_get_net(net))
 			return false;
-		}
+		/*
+		 * sk_net_refcnt_upgrade() must be called before lock_sock()
+		 * because it does a GFP_KERNEL allocation, which can trigger
+		 * fs_reclaim and create a circular lock dependency with the
+		 * socket lock. The fields it modifies (sk_net_refcnt,
+		 * ns_tracker) are not accessed by any concurrent code path
+		 * at this point.
+		 */
 		sk_net_refcnt_upgrade(sk);
 		put_net(net);
 	}
+	lock_sock(sk);
 	rtn = net_generic(net, rds_tcp_netid);
 	if (rtn->sndbuf_size > 0) {
 		sk->sk_sndbuf = rtn->sndbuf_size;
+6
net/sched/act_ct.c
···
 		return -EINVAL;
 	}
 
+	if (bind && !(flags & TCA_ACT_FLAGS_AT_INGRESS_OR_CLSACT)) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Attaching ct to a non ingress/clsact qdisc is unsupported");
+		return -EOPNOTSUPP;
+	}
+
 	err = nla_parse_nested(tb, TCA_CT_MAX, nla, ct_policy, extack);
 	if (err < 0)
 		return err;
···
 /// Public but hidden since it should only be used from KUnit generated code.
 #[doc(hidden)]
 pub fn err(args: fmt::Arguments<'_>) {
+    // `args` is unused if `CONFIG_PRINTK` is not set - this avoids a build-time warning.
+    #[cfg(not(CONFIG_PRINTK))]
+    let _ = args;
+
     // SAFETY: The format string is null-terminated and the `%pA` specifier matches the argument we
     // are passing.
     #[cfg(CONFIG_PRINTK)]
···
 /// Public but hidden since it should only be used from KUnit generated code.
 #[doc(hidden)]
 pub fn info(args: fmt::Arguments<'_>) {
+    // `args` is unused if `CONFIG_PRINTK` is not set - this avoids a build-time warning.
+    #[cfg(not(CONFIG_PRINTK))]
+    let _ = args;
+
     // SAFETY: The format string is null-terminated and the `%pA` specifier matches the argument we
     // are passing.
     #[cfg(CONFIG_PRINTK)]
+2-2
scripts/genksyms/parse.y
···
 		{ $$ = $4; }
 	| direct_declarator BRACKET_PHRASE
 		{ $$ = $2; }
-	| '(' declarator ')'
-		{ $$ = $3; }
+	| '(' attribute_opt declarator ')'
+		{ $$ = $4; }
 	;
 
 /* Nested declarators differ from regular declarators in that they do
···
+# sched-ext mandatory options
+#
+CONFIG_BPF=y
+CONFIG_BPF_SYSCALL=y
+CONFIG_BPF_JIT=y
+CONFIG_DEBUG_INFO_BTF=y
+CONFIG_BPF_JIT_ALWAYS_ON=y
+CONFIG_BPF_JIT_DEFAULT_ON=y
+CONFIG_SCHED_CLASS_EXT=y
+
+# Required by some rust schedulers (e.g. scx_p2dq)
+#
+CONFIG_KALLSYMS_ALL=y
+
+# Required on arm64
+#
+# CONFIG_DEBUG_INFO_REDUCED is not set
+
+# LAVD tracks futex to give an additional time slice to futex holders
+# (i.e., avoiding lock-holder preemption) for better system-wide progress.
+# LAVD first tries to use ftrace to trace futex function calls.
+# If that is not available, it tries to use a tracepoint.
+CONFIG_FUNCTION_TRACER=y
+
+# Enable scheduling debugging
+#
+CONFIG_SCHED_DEBUG=y
+
+# Enable extra scheduling features (for better code coverage while testing
+# the schedulers)
+#
+CONFIG_SCHED_AUTOGROUP=y
+CONFIG_SCHED_CORE=y
+CONFIG_SCHED_MC=y
+
+# Enable fully preemptible kernel for better test coverage of the schedulers
+#
+# CONFIG_PREEMPT_NONE is not set
+# CONFIG_PREEMPT_VOLUNTARY is not set
+CONFIG_PREEMPT=y
+CONFIG_PREEMPT_DYNAMIC=y
+
+# Additional debugging information (useful to catch potential locking issues)
+CONFIG_DEBUG_LOCKDEP=y
+CONFIG_DEBUG_ATOMIC_SLEEP=y
+CONFIG_PROVE_LOCKING=y
+
+# Bpftrace headers (for additional debug info)
+CONFIG_BPF_EVENTS=y
+CONFIG_FTRACE_SYSCALLS=y
+CONFIG_DYNAMIC_FTRACE=y
+CONFIG_KPROBES=y
+CONFIG_KPROBE_EVENTS=y
+CONFIG_UPROBES=y
+CONFIG_UPROBE_EVENTS=y
+CONFIG_DEBUG_FS=y
+
+# Enable access to kernel configuration and headers at runtime
+CONFIG_IKHEADERS=y
+CONFIG_IKCONFIG_PROC=y
+CONFIG_IKCONFIG=y
···
 CONFIG_BPF_SYSCALL=y
 CONFIG_BPF_JIT=y
 CONFIG_DEBUG_INFO_BTF=y
-```
-
-It's also recommended that you also include the following Kconfig options:
-
-```
 CONFIG_BPF_JIT_ALWAYS_ON=y
 CONFIG_BPF_JIT_DEFAULT_ON=y
-CONFIG_PAHOLE_HAS_BTF_TAG=y
 ```
 
 There is a `Kconfig` file in this directory whose contents you can append to
+5-2
tools/sched_ext/include/scx/compat.h
···
 {
 	int fd;
 	char buf[32];
+	char *endptr;
 	ssize_t len;
 	long val;
 
···
 	buf[len] = 0;
 	close(fd);
 
-	val = strtoul(buf, NULL, 10);
-	SCX_BUG_ON(val < 0, "invalid num hotplug events: %lu", val);
+	errno = 0;
+	val = strtoul(buf, &endptr, 10);
+	SCX_BUG_ON(errno == ERANGE || endptr == buf ||
+		   (*endptr != '\n' && *endptr != '\0'), "invalid num hotplug events: %ld", val);
 
 	return val;
 }
···
 		return self.validate_config(build_dir)
 
 	def run_kernel(self, args: Optional[List[str]]=None, build_dir: str='', filter_glob: str='', filter: str='', filter_action: Optional[str]=None, timeout: Optional[int]=None) -> Iterator[str]:
-		if not args:
-			args = []
+		# Copy to avoid mutating the caller-supplied list. exec_tests() reuses
+		# the same args across repeated run_kernel() calls (e.g. --run_isolated),
+		# so appending to the original would accumulate stale flags on each call.
+		args = list(args) if args else []
 		if filter_glob:
 			args.append('kunit.filter_glob=' + filter_glob)
 		if filter:
+26
tools/testing/kunit/kunit_tool_test.py
···
 		with open(kunit_kernel.get_outfile_path(build_dir), 'rt') as outfile:
 			self.assertEqual(outfile.read(), 'hi\nbye\n', msg='Missing some output')
 
+	def test_run_kernel_args_not_mutated(self):
+		"""Verify run_kernel() copies args so callers can reuse them."""
+		start_calls = []
+
+		def fake_start(start_args, unused_build_dir):
+			start_calls.append(list(start_args))
+			return subprocess.Popen(['printf', 'KTAP version 1\n'],
+						text=True, stdout=subprocess.PIPE)
+
+		with tempfile.TemporaryDirectory('') as build_dir:
+			tree = kunit_kernel.LinuxSourceTree(build_dir,
+							    kunitconfig_paths=[os.devnull])
+			with mock.patch.object(tree._ops, 'start', side_effect=fake_start), \
+					mock.patch.object(kunit_kernel.subprocess, 'call'):
+				kernel_args = ['mem=1G']
+				for _ in tree.run_kernel(args=kernel_args, build_dir=build_dir,
+							 filter_glob='suite.test1'):
+					pass
+				for _ in tree.run_kernel(args=kernel_args, build_dir=build_dir,
+							 filter_glob='suite.test2'):
+					pass
+			self.assertEqual(kernel_args, ['mem=1G'],
+					 'run_kernel() should not modify caller args')
+			self.assertIn('kunit.filter_glob=suite.test1', start_calls[0])
+			self.assertIn('kunit.filter_glob=suite.test2', start_calls[1])
+
 	def test_build_reconfig_no_config(self):
 		with tempfile.TemporaryDirectory('') as build_dir:
 			with open(kunit_kernel.get_kunitconfig_path(build_dir), 'w') as f:
tools/testing/selftests/bpf/Makefile

···
 		CC="$(HOSTCC)" LD="$(HOSTLD)" AR="$(HOSTAR)" \
 		LIBBPF_INCLUDE=$(HOST_INCLUDE_DIR) \
 		EXTRA_LDFLAGS='$(SAN_LDFLAGS) $(EXTRA_LDFLAGS)' \
+		HOSTPKG_CONFIG=$(PKG_CONFIG) \
 		OUTPUT=$(HOST_BUILD_DIR)/resolve_btfids/ BPFOBJ=$(HOST_BPFOBJ)

 # Get Clang's default includes on this system, as opposed to those seen by
tools/testing/selftests/bpf/prog_tests/xdp_bonding.c

···
 	system("ip link del bond");
 }

+/*
+ * Test that changing xmit_hash_policy to vlan+srcmac is rejected when a
+ * native XDP program is loaded on a bond in 802.3ad or balance-xor mode.
+ * These modes support XDP only when xmit_hash_policy != vlan+srcmac; freely
+ * changing the policy creates an inconsistency that triggers a WARNING in
+ * dev_xdp_uninstall() during device teardown.
+ */
+static void test_xdp_bonding_xmit_policy_compat(struct skeletons *skeletons)
+{
+	struct nstoken *nstoken = NULL;
+	int bond_ifindex = -1;
+	int xdp_fd, err;
+
+	SYS(out, "ip netns add ns_xmit_policy");
+	nstoken = open_netns("ns_xmit_policy");
+	if (!ASSERT_OK_PTR(nstoken, "open ns_xmit_policy"))
+		goto out;
+
+	/* 802.3ad with layer2+3 policy: native XDP is supported */
+	SYS(out, "ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer2+3");
+	SYS(out, "ip link add veth0 type veth peer name veth0p");
+	SYS(out, "ip link set veth0 master bond0");
+	SYS(out, "ip link set bond0 up");
+
+	bond_ifindex = if_nametoindex("bond0");
+	if (!ASSERT_GT(bond_ifindex, 0, "bond0 ifindex"))
+		goto out;
+
+	xdp_fd = bpf_program__fd(skeletons->xdp_dummy->progs.xdp_dummy_prog);
+	if (!ASSERT_GE(xdp_fd, 0, "xdp_dummy fd"))
+		goto out;
+
+	err = bpf_xdp_attach(bond_ifindex, xdp_fd, XDP_FLAGS_DRV_MODE, NULL);
+	if (!ASSERT_OK(err, "attach XDP to bond0"))
+		goto out;
+
+	/* With XDP loaded, switching to vlan+srcmac must be rejected */
+	err = system("ip link set bond0 type bond xmit_hash_policy vlan+srcmac 2>/dev/null");
+	ASSERT_NEQ(err, 0, "vlan+srcmac change with XDP loaded should fail");
+
+	/* Detach XDP first, then the same change must succeed */
+	ASSERT_OK(bpf_xdp_detach(bond_ifindex, XDP_FLAGS_DRV_MODE, NULL),
+		  "detach XDP from bond0");
+
+	bond_ifindex = -1;
+	err = system("ip link set bond0 type bond xmit_hash_policy vlan+srcmac 2>/dev/null");
+	ASSERT_OK(err, "vlan+srcmac change without XDP should succeed");
+
+out:
+	if (bond_ifindex > 0)
+		bpf_xdp_detach(bond_ifindex, XDP_FLAGS_DRV_MODE, NULL);
+	close_netns(nstoken);
+	SYS_NOFAIL("ip netns del ns_xmit_policy");
+}
+
 static int libbpf_debug_print(enum libbpf_print_level level,
 			      const char *format, va_list args)
 {
···
 					  test_case->mode,
 					  test_case->xmit_policy);
 	}

+	if (test__start_subtest("xdp_bonding_xmit_policy_compat"))
+		test_xdp_bonding_xmit_policy_compat(&skeletons);

 	if (test__start_subtest("xdp_bonding_redirect_multi"))
 		test_xdp_bonding_redirect_multi(&skeletons);
···
 	__sink(path[0]);
 }

+void dummy_calls(void)
+{
+	bpf_iter_num_new(0, 0, 0);
+	bpf_iter_num_next(0);
+	bpf_iter_num_destroy(0);
+}
+
+SEC("socket")
+__success
+__flag(BPF_F_TEST_STATE_FREQ)
+int spurious_precision_marks(void *ctx)
+{
+	struct bpf_iter_num iter;
+
+	asm volatile(
+		"r1 = %[iter];"
+		"r2 = 0;"
+		"r3 = 10;"
+		"call %[bpf_iter_num_new];"
+	"1:"
+		"r1 = %[iter];"
+		"call %[bpf_iter_num_next];"
+		"if r0 == 0 goto 4f;"
+		"r7 = *(u32 *)(r0 + 0);"
+		"r8 = *(u32 *)(r0 + 0);"
+		/* This jump can't be predicted and does not change r7 or r8 state. */
+		"if r7 > r8 goto 2f;"
+		/* Branch explored first ties r2 and r7 as having the same id. */
+		"r2 = r7;"
+		"goto 3f;"
+	"2:"
+		/* Branch explored second does not tie r2 and r7 but has a function call. */
+		"call %[bpf_get_prandom_u32];"
+	"3:"
+		/*
+		 * A checkpoint.
+		 * When the first branch is explored, this injects the linked
+		 * registers r2 and r7 into the jump history.
+		 * When the second branch is explored, this is a cache hit point,
+		 * triggering propagate_precision().
+		 */
+		"if r7 <= 42 goto +0;"
+		/*
+		 * Mark r7 as precise using an if condition that is always true.
+		 * When reached via the second branch, this triggered a bug in
+		 * backtrack_insn() because r2 (tied to r7) was propagated as
+		 * precise to a call.
+		 */
+		"if r7 <= 0xffffFFFF goto +0;"
+		"goto 1b;"
+	"4:"
+		"r1 = %[iter];"
+		"call %[bpf_iter_num_destroy];"
+		:
+		: __imm_ptr(iter),
+		  __imm(bpf_iter_num_new),
+		  __imm(bpf_iter_num_next),
+		  __imm(bpf_iter_num_destroy),
+		  __imm(bpf_get_prandom_u32)
+		: __clobber_common, "r7", "r8"
+	);
+
+	return 0;
+}
+
 char _license[] SEC("license") = "GPL";
···
 		if unit_set:
 			assert required[usage].contains(field)

-    def test_prop_direct(self):
-        """
-        Todo: Verify that INPUT_PROP_DIRECT is set on display devices.
-        """
-        pass
-
-    def test_prop_pointer(self):
-        """
-        Todo: Verify that INPUT_PROP_POINTER is set on opaque devices.
-        """
-        pass
-

 class PenTabletTest(BaseTest.TestTablet):
     def assertName(self, uhdev):
···
         self.sync_and_assert_events(
             uhdev.event(130, 240, pressure=0), [], auto_syn=False, strict=True
         )
+
+    def test_prop_pointer(self):
+        """
+        Verify that INPUT_PROP_POINTER is set and INPUT_PROP_DIRECT
+        is not set on opaque devices.
+        """
+        evdev = self.uhdev.get_evdev()
+        assert libevdev.INPUT_PROP_POINTER in evdev.properties
+        assert libevdev.INPUT_PROP_DIRECT not in evdev.properties


 class TestOpaqueCTLTablet(TestOpaqueTablet):
···
         )


-class TestDTH2452Tablet(test_multitouch.BaseTest.TestMultitouch, TouchTabletTest):
+class DirectTabletTest:
+    def test_prop_direct(self):
+        """
+        Verify that INPUT_PROP_DIRECT is set and INPUT_PROP_POINTER
+        is not set on display devices.
+        """
+        evdev = self.uhdev.get_evdev()
+        assert libevdev.INPUT_PROP_DIRECT in evdev.properties
+        assert libevdev.INPUT_PROP_POINTER not in evdev.properties
+
+
+class TestDTH2452Tablet(test_multitouch.BaseTest.TestMultitouch, TouchTabletTest, DirectTabletTest):
     ContactIds = namedtuple("ContactIds", "contact_id, tracking_id, slot_num")

     def create_device(self):
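The `DirectTabletTest` class in the hunk above is a plain mixin: shared property checks live in a standalone class, and any concrete test class that lists it as a base picks up the test without duplicating it. A generic sketch of the same pattern (all names here are illustrative, not the hid-tools API):

```python
# Mixin holding a shared check; concrete test classes gain it by
# inheritance, mirroring how DirectTabletTest is mixed into
# TestDTH2452Tablet. Names are made up for illustration.
class DirectInputChecks:
    def test_prop_direct(self):
        props = self.device_properties()
        assert 'INPUT_PROP_DIRECT' in props
        assert 'INPUT_PROP_POINTER' not in props


class FakeDisplayTablet(DirectInputChecks):
    def device_properties(self):
        # A display (direct-input) device advertises DIRECT, not POINTER.
        return {'INPUT_PROP_DIRECT'}


FakeDisplayTablet().test_prop_direct()  # raises AssertionError on failure
```

Keeping the mixin free of its own base class is what lets it combine with multitouch and touch-tablet bases in a single multiple-inheritance list.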
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# Test bridge VLAN range grouping. VLANs are collapsed into a range entry in
+# the dump if they have the same per-VLAN options. These tests verify that
+# VLANs with different per-VLAN option values are not grouped together.
+
+# shellcheck disable=SC1091,SC2034,SC2154,SC2317
+source lib.sh
+
+ALL_TESTS="
+	vlan_range_neigh_suppress
+	vlan_range_mcast_max_groups
+	vlan_range_mcast_n_groups
+	vlan_range_mcast_enabled
+"
+
+setup_prepare()
+{
+	setup_ns NS
+	defer cleanup_all_ns
+
+	ip -n "$NS" link add name br0 type bridge vlan_filtering 1 \
+		vlan_default_pvid 0 mcast_snooping 1 mcast_vlan_snooping 1
+	ip -n "$NS" link set dev br0 up
+
+	ip -n "$NS" link add name dummy0 type dummy
+	ip -n "$NS" link set dev dummy0 master br0
+	ip -n "$NS" link set dev dummy0 up
+}
+
+vlan_range_neigh_suppress()
+{
+	RET=0
+
+	# Add two new consecutive VLANs for range grouping test
+	bridge -n "$NS" vlan add vid 10 dev dummy0
+	defer bridge -n "$NS" vlan del vid 10 dev dummy0
+
+	bridge -n "$NS" vlan add vid 11 dev dummy0
+	defer bridge -n "$NS" vlan del vid 11 dev dummy0
+
+	# Configure different neigh_suppress values and verify no range grouping
+	bridge -n "$NS" vlan set vid 10 dev dummy0 neigh_suppress on
+	check_err $? "Failed to set neigh_suppress for VLAN 10"
+
+	bridge -n "$NS" vlan set vid 11 dev dummy0 neigh_suppress off
+	check_err $? "Failed to set neigh_suppress for VLAN 11"
+
+	# Verify VLANs are not shown as a range, but individual entries exist
+	bridge -n "$NS" -d vlan show dev dummy0 | grep -q "10-11"
+	check_fail $? "VLANs with different neigh_suppress incorrectly grouped"
+
+	bridge -n "$NS" -d vlan show dev dummy0 | grep -Eq "^\S+\s+10$|^\s+10$"
+	check_err $? "VLAN 10 individual entry not found"
+
+	bridge -n "$NS" -d vlan show dev dummy0 | grep -Eq "^\S+\s+11$|^\s+11$"
+	check_err $? "VLAN 11 individual entry not found"
+
+	# Configure same neigh_suppress value and verify range grouping
+	bridge -n "$NS" vlan set vid 11 dev dummy0 neigh_suppress on
+	check_err $? "Failed to set neigh_suppress for VLAN 11"
+
+	bridge -n "$NS" -d vlan show dev dummy0 | grep -q "10-11"
+	check_err $? "VLANs with same neigh_suppress not grouped"
+
+	log_test "VLAN range grouping with neigh_suppress"
+}
+
+vlan_range_mcast_max_groups()
+{
+	RET=0
+
+	# Add two new consecutive VLANs for range grouping test
+	bridge -n "$NS" vlan add vid 10 dev dummy0
+	defer bridge -n "$NS" vlan del vid 10 dev dummy0
+
+	bridge -n "$NS" vlan add vid 11 dev dummy0
+	defer bridge -n "$NS" vlan del vid 11 dev dummy0
+
+	# Configure different mcast_max_groups values and verify no range grouping
+	bridge -n "$NS" vlan set vid 10 dev dummy0 mcast_max_groups 100
+	check_err $? "Failed to set mcast_max_groups for VLAN 10"
+
+	bridge -n "$NS" vlan set vid 11 dev dummy0 mcast_max_groups 200
+	check_err $? "Failed to set mcast_max_groups for VLAN 11"
+
+	# Verify VLANs are not shown as a range, but individual entries exist
+	bridge -n "$NS" -d vlan show dev dummy0 | grep -q "10-11"
+	check_fail $? "VLANs with different mcast_max_groups incorrectly grouped"
+
+	bridge -n "$NS" -d vlan show dev dummy0 | grep -Eq "^\S+\s+10$|^\s+10$"
+	check_err $? "VLAN 10 individual entry not found"
+
+	bridge -n "$NS" -d vlan show dev dummy0 | grep -Eq "^\S+\s+11$|^\s+11$"
+	check_err $? "VLAN 11 individual entry not found"
+
+	# Configure same mcast_max_groups value and verify range grouping
+	bridge -n "$NS" vlan set vid 11 dev dummy0 mcast_max_groups 100
+	check_err $? "Failed to set mcast_max_groups for VLAN 11"
+
+	bridge -n "$NS" -d vlan show dev dummy0 | grep -q "10-11"
+	check_err $? "VLANs with same mcast_max_groups not grouped"
+
+	log_test "VLAN range grouping with mcast_max_groups"
+}
+
+vlan_range_mcast_n_groups()
+{
+	RET=0
+
+	# Add two new consecutive VLANs for range grouping test
+	bridge -n "$NS" vlan add vid 10 dev dummy0
+	defer bridge -n "$NS" vlan del vid 10 dev dummy0
+
+	bridge -n "$NS" vlan add vid 11 dev dummy0
+	defer bridge -n "$NS" vlan del vid 11 dev dummy0
+
+	# Add different numbers of multicast groups to each VLAN
+	bridge -n "$NS" mdb add dev br0 port dummy0 grp 239.1.1.1 vid 10
+	check_err $? "Failed to add mdb entry to VLAN 10"
+	defer bridge -n "$NS" mdb del dev br0 port dummy0 grp 239.1.1.1 vid 10
+
+	bridge -n "$NS" mdb add dev br0 port dummy0 grp 239.1.1.2 vid 10
+	check_err $? "Failed to add second mdb entry to VLAN 10"
+	defer bridge -n "$NS" mdb del dev br0 port dummy0 grp 239.1.1.2 vid 10
+
+	bridge -n "$NS" mdb add dev br0 port dummy0 grp 239.1.1.1 vid 11
+	check_err $? "Failed to add mdb entry to VLAN 11"
+	defer bridge -n "$NS" mdb del dev br0 port dummy0 grp 239.1.1.1 vid 11
+
+	# Verify VLANs are not shown as a range due to different mcast_n_groups
+	bridge -n "$NS" -d vlan show dev dummy0 | grep -q "10-11"
+	check_fail $? "VLANs with different mcast_n_groups incorrectly grouped"
+
+	bridge -n "$NS" -d vlan show dev dummy0 | grep -Eq "^\S+\s+10$|^\s+10$"
+	check_err $? "VLAN 10 individual entry not found"
+
+	bridge -n "$NS" -d vlan show dev dummy0 | grep -Eq "^\S+\s+11$|^\s+11$"
+	check_err $? "VLAN 11 individual entry not found"
+
+	# Add another group to VLAN 11 to match VLAN 10's count
+	bridge -n "$NS" mdb add dev br0 port dummy0 grp 239.1.1.2 vid 11
+	check_err $? "Failed to add second mdb entry to VLAN 11"
+	defer bridge -n "$NS" mdb del dev br0 port dummy0 grp 239.1.1.2 vid 11
+
+	bridge -n "$NS" -d vlan show dev dummy0 | grep -q "10-11"
+	check_err $? "VLANs with same mcast_n_groups not grouped"
+
+	log_test "VLAN range grouping with mcast_n_groups"
+}
+
+vlan_range_mcast_enabled()
+{
+	RET=0
+
+	# Add two new consecutive VLANs for range grouping test
+	bridge -n "$NS" vlan add vid 10 dev br0 self
+	defer bridge -n "$NS" vlan del vid 10 dev br0 self
+
+	bridge -n "$NS" vlan add vid 11 dev br0 self
+	defer bridge -n "$NS" vlan del vid 11 dev br0 self
+
+	bridge -n "$NS" vlan add vid 10 dev dummy0
+	defer bridge -n "$NS" vlan del vid 10 dev dummy0
+
+	bridge -n "$NS" vlan add vid 11 dev dummy0
+	defer bridge -n "$NS" vlan del vid 11 dev dummy0
+
+	# Configure different mcast_snooping for bridge VLANs
+	# Port VLANs inherit BR_VLFLAG_MCAST_ENABLED from bridge VLANs
+	bridge -n "$NS" vlan global set dev br0 vid 10 mcast_snooping 1
+	bridge -n "$NS" vlan global set dev br0 vid 11 mcast_snooping 0
+
+	# Verify port VLANs are not grouped due to different mcast_enabled
+	bridge -n "$NS" -d vlan show dev dummy0 | grep -q "10-11"
+	check_fail $? "VLANs with different mcast_enabled incorrectly grouped"
+
+	bridge -n "$NS" -d vlan show dev dummy0 | grep -Eq "^\S+\s+10$|^\s+10$"
+	check_err $? "VLAN 10 individual entry not found"
+
+	bridge -n "$NS" -d vlan show dev dummy0 | grep -Eq "^\S+\s+11$|^\s+11$"
+	check_err $? "VLAN 11 individual entry not found"
+
+	# Configure same mcast_snooping and verify range grouping
+	bridge -n "$NS" vlan global set dev br0 vid 11 mcast_snooping 1
+
+	bridge -n "$NS" -d vlan show dev dummy0 | grep -q "10-11"
+	check_err $? "VLANs with same mcast_enabled not grouped"
+
+	log_test "VLAN range grouping with mcast_enabled"
+}
+
+# Verify the newest tested option is supported
+if ! bridge vlan help 2>&1 | grep -q "neigh_suppress"; then
+	echo "SKIP: iproute2 too old, missing per-VLAN neighbor suppression support"
+	exit "$ksft_skip"
+fi
+
+trap defer_scopes_cleanup EXIT
+setup_prepare
+tests_run
+
+exit "$EXIT_STATUS"
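The grouping rule the script above exercises — consecutive VIDs collapse into one range entry in the dump only while every per-VLAN option matches — can be sketched in a few lines of Python (a hypothetical data model, not the kernel's representation):

```python
# Collapse a {vid: per_vlan_options} map into (start, end) ranges,
# merging a VID into the current range only when it is consecutive AND
# its options compare equal -- the dump behaviour the tests check.
def collapse_ranges(vlans):
    ranges = []
    for vid in sorted(vlans):
        opts = vlans[vid]
        if ranges and vid == ranges[-1][1] + 1 and opts == ranges[-1][2]:
            ranges[-1] = (ranges[-1][0], vid, opts)
        else:
            ranges.append((vid, vid, opts))
    return [(start, end) for start, end, _ in ranges]

# Different neigh_suppress values: VIDs 10 and 11 stay separate entries.
print(collapse_ranges({10: {'neigh_suppress': True},
                       11: {'neigh_suppress': False}}))  # [(10, 10), (11, 11)]
# Same values: they collapse into a single 10-11 range entry.
print(collapse_ranges({10: {'neigh_suppress': True},
                       11: {'neigh_suppress': True}}))   # [(10, 11)]
```

Each test in the script flips exactly one option value between two adjacent VIDs and checks that the `10-11` range appears or disappears accordingly.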
+11
tools/testing/selftests/net/fib_nexthops.sh
···

 	run_cmd "$IP ro replace 172.16.101.1/32 via inet6 2001:db8:50::1 dev veth1"
 	log_test $? 2 "IPv4 route with invalid IPv6 gateway"
+
+	# Test IPv4 route with loopback IPv6 nexthop
+	# Regression test: loopback IPv6 nexthop was misclassified as reject
+	# route, skipping nhc_pcpu_rth_output allocation, causing panic when
+	# an IPv4 route references it and triggers __mkroute_output().
+	run_cmd "$IP -6 nexthop add id 20 dev lo"
+	run_cmd "$IP ro add 172.20.20.0/24 nhid 20"
+	run_cmd "ip netns exec $me ping -c1 -W1 172.20.20.1"
+	log_test $? 1 "IPv4 route with loopback IPv6 nexthop (no crash)"
+	run_cmd "$IP ro del 172.20.20.0/24"
+	run_cmd "$IP nexthop del id 20"
 }

 ipv4_fcnal_runtime()
+49
tools/testing/selftests/net/mptcp/mptcp_join.sh
···
 	 6 0 0 65535,
 	 6 0 0 0"

+# IPv4: TCP hdr of 48B, a first suboption of 12B (DACK8), then the RM_ADDR
+# suboption. Generated using "nfbpf_compile '(ip[32] & 0xf0) == 0xc0 &&
+# ip[53] == 0x0c && (ip[66] & 0xf0) == 0x40'"
+CBPF_MPTCP_SUBOPTION_RM_ADDR="13,
+	 48 0 0 0,
+	 84 0 0 240,
+	 21 0 9 64,
+	 48 0 0 32,
+	 84 0 0 240,
+	 21 0 6 192,
+	 48 0 0 53,
+	 21 0 4 12,
+	 48 0 0 66,
+	 84 0 0 240,
+	 21 0 1 64,
+	 6 0 0 65535,
+	 6 0 0 0"
+
 init_partial()
 {
 	capout=$(mktemp)
···
 		chk_rst_nr 0 0
 	fi

+	# signal+subflow with limits, remove
+	if reset "remove signal+subflow with limits"; then
+		pm_nl_set_limits $ns1 0 0
+		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal,subflow
+		pm_nl_set_limits $ns2 0 0
+		addr_nr_ns1=-1 speed=slow \
+			run_tests $ns1 $ns2 10.0.1.1
+		chk_join_nr 0 0 0
+		chk_add_nr 1 1
+		chk_rm_nr 1 0 invert
+		chk_rst_nr 0 0
+	fi
+
 	# addresses remove
 	if reset "remove addresses"; then
 		pm_nl_set_limits $ns1 3 3
···
 	chk_subflow_nr "after no reject" 3
 	chk_mptcp_info subflows 2 subflows 2

+	# Make sure the RM_ADDR is sent over a different subflow, while
+	# letting the rest quickly and cleanly close the subflow
+	local ipt=1
+	ip netns exec "${ns2}" ${iptables} -I OUTPUT -s "10.0.1.2" \
+		-p tcp -m tcp --tcp-option 30 \
+		-m bpf --bytecode \
+			"$CBPF_MPTCP_SUBOPTION_RM_ADDR" \
+		-j DROP || ipt=0
 	local i
 	for i in $(seq 3); do
 		pm_nl_del_endpoint $ns2 1 10.0.1.2
···
 		chk_subflow_nr "after re-add id 0 ($i)" 3
 		chk_mptcp_info subflows 3 subflows 3
 	done
+	[ ${ipt} = 1 ] && ip netns exec "${ns2}" ${iptables} -D OUTPUT 1

 	mptcp_lib_kill_group_wait $tests_pid

···
 	chk_mptcp_info subflows 2 subflows 2
 	chk_mptcp_info add_addr_signal 2 add_addr_accepted 2

+	# Make sure the RM_ADDR is sent over a different subflow, while
+	# letting the rest quickly and cleanly close the subflow
+	local ipt=1
+	ip netns exec "${ns1}" ${iptables} -I OUTPUT -s "10.0.1.1" \
+		-p tcp -m tcp --tcp-option 30 \
+		-m bpf --bytecode \
+			"$CBPF_MPTCP_SUBOPTION_RM_ADDR" \
+		-j DROP || ipt=0
 	pm_nl_del_endpoint $ns1 42 10.0.1.1
 	sleep 0.5
 	chk_subflow_nr "after delete ID 0" 2
 	chk_mptcp_info subflows 2 subflows 2
 	chk_mptcp_info add_addr_signal 2 add_addr_accepted 2
+	[ ${ipt} = 1 ] && ip netns exec "${ns1}" ${iptables} -D OUTPUT 1

 	pm_nl_add_endpoint $ns1 10.0.1.1 id 99 flags signal
 	wait_mpj 4
+7 -4
tools/testing/selftests/net/mptcp/simult_flows.sh
···
 	for dev in ns2eth1 ns2eth2; do
 		tc -n $ns2 qdisc del dev $dev root >/dev/null 2>&1
 	done
-	tc -n $ns1 qdisc add dev ns1eth1 root netem rate ${rate1}mbit $delay1
-	tc -n $ns1 qdisc add dev ns1eth2 root netem rate ${rate2}mbit $delay2
-	tc -n $ns2 qdisc add dev ns2eth1 root netem rate ${rate1}mbit $delay1
-	tc -n $ns2 qdisc add dev ns2eth2 root netem rate ${rate2}mbit $delay2
+
+	# Keep the number of queued packets low, or the RTT estimator will see
+	# increasing latency over time.
+	tc -n $ns1 qdisc add dev ns1eth1 root netem rate ${rate1}mbit $delay1 limit 50
+	tc -n $ns1 qdisc add dev ns1eth2 root netem rate ${rate2}mbit $delay2 limit 50
+	tc -n $ns2 qdisc add dev ns2eth1 root netem rate ${rate1}mbit $delay1 limit 50
+	tc -n $ns2 qdisc add dev ns2eth2 root netem rate ${rate2}mbit $delay2 limit 50

 	# time is measured in ms, account for transfer size, aggregated link speed
 	# and header overhead (10%)
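The `limit 50` added above caps how much queueing latency netem can inject: a full queue at netem's default limit of 1000 packets adds over a second of delay at these link rates, which is what skews the RTT estimator. A back-of-the-envelope check, assuming 1500-byte frames and a 10 Mbit/s example rate:

```python
# Worst-case extra queueing delay from a full netem queue:
# delay = queued_packets * packet_bits / link_rate.
def queue_delay_ms(limit_pkts, pkt_bytes=1500, rate_mbit=10):
    return limit_pkts * pkt_bytes * 8 * 1000 / (rate_mbit * 1e6)

print(queue_delay_ms(1000))  # netem's default limit: 1200.0 ms of queueing
print(queue_delay_ms(50))    # limit 50: 60.0 ms
```

With the limit at 50 the worst-case added delay stays in the tens of milliseconds, small enough not to drift the measured RTT upward over the run.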
···


 def list_categories(testlist):
-    """ Show all categories that are present in a test case file. """
-    categories = set(map(lambda x: x['category'], testlist))
+    """Show all unique categories present in the test cases."""
+    categories = set()
+    for t in testlist:
+        if 'category' in t:
+            categories.update(t['category'])
+
     print("Available categories:")
-    print(", ".join(str(s) for s in categories))
+    print(", ".join(sorted(categories)))
     print("")

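The rewrite above fixes two problems in `list_categories()`: it no longer raises `KeyError` on entries without a `'category'` key, and since each entry's `'category'` value is a list, it flattens with `set.update()` rather than collecting whole lists (which are unhashable and cannot go into a set). A minimal sketch with made-up test entries:

```python
# Made-up entries mirroring the test-case shape: 'category' holds a
# list of names, and some entries may lack the key entirely.
testlist = [
    {'category': ['qdisc', 'netem']},
    {'category': ['filter']},
    {'name': 'entry-without-category'},   # old code would KeyError here
]

# set(map(lambda x: x['category'], testlist)) would also fail because
# lists are unhashable; update() flattens each list into the set instead.
categories = set()
for t in testlist:
    if 'category' in t:
        categories.update(t['category'])

print(", ".join(sorted(categories)))  # filter, netem, qdisc
```

Sorting before joining also makes the output deterministic, since set iteration order is not.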