Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge v7.0-rc3 into drm-next

Requested by Maxime Ripard for drm-misc-next because renesas people need
fb797a70108f ("drm: renesas: rz-du: mipi_dsi: Set DSI divider").

Signed-off-by: Simona Vetter <simona.vetter@ffwll.ch>

+6263 -2611
+3
.mailmap
···
  Danilo Krummrich <dakr@kernel.org> <dakr@redhat.com>
  David Brownell <david-b@pacbell.net>
  David Collins <quic_collinsd@quicinc.com> <collinsd@codeaurora.org>
+ David Gow <david@davidgow.net> <davidgow@google.com>
  David Heidelberg <david@ixit.cz> <d.okias@gmail.com>
  David Hildenbrand <david@kernel.org> <david@redhat.com>
  David Rheinsberg <david@readahead.eu> <dh.herrmann@gmail.com>
···
  Jason Gunthorpe <jgg@ziepe.ca> <jgg@mellanox.com>
  Jason Gunthorpe <jgg@ziepe.ca> <jgg@nvidia.com>
  Jason Gunthorpe <jgg@ziepe.ca> <jgunthorpe@obsidianresearch.com>
+ Jason Xing <kerneljasonxing@gmail.com> <kernelxing@tencent.com>
  <javier@osg.samsung.com> <javier.martinez@collabora.co.uk>
  Javi Merino <javi.merino@kernel.org> <javi.merino@arm.com>
  Jayachandran C <c.jayachandran@gmail.com> <jayachandranc@netlogicmicro.com>
···
  Jisheng Zhang <jszhang@kernel.org> <jszhang@marvell.com>
  Jisheng Zhang <jszhang@kernel.org> <Jisheng.Zhang@synaptics.com>
  Jishnu Prakash <quic_jprakash@quicinc.com> <jprakash@codeaurora.org>
+ Joe Damato <joe@dama.to> <jdamato@fastly.com>
  Joel Granados <joel.granados@kernel.org> <j.granados@samsung.com>
  Johan Hovold <johan@kernel.org> <jhovold@gmail.com>
  Johan Hovold <johan@kernel.org> <johan@hovoldconsulting.com>
+8
CREDITS
···
  E: vfalico@gmail.com
  D: Co-maintainer and co-author of the network bonding driver.

+ N: Thomas Falcon
+ E: tlfalcon@linux.ibm.com
+ D: Initial author of the IBM ibmvnic network driver
+
  N: János Farkas
  E: chexum@shadow.banki.hu
  D: romfs, various (mostly networking) fixes
···
  S: Am Muehlenweg 38
  S: D53424 Remagen
  S: Germany
+
+ N: Jonathan Lemon
+ E: jonathan.lemon@gmail.com
+ D: OpenCompute PTP clock driver (ptp_ocp)

  N: Colin Leroy
  E: colin@colino.net
+5 -5
Documentation/ABI/testing/sysfs-driver-uniwill-laptop
- What:		/sys/bus/platform/devices/INOU0000:XX/fn_lock_toggle_enable
+ What:		/sys/bus/platform/devices/INOU0000:XX/fn_lock
  Date:		November 2025
  KernelVersion:	6.19
  Contact:	Armin Wolf <W_Armin@gmx.de>
···

  Reading this file returns the current enable status of the FN lock functionality.

- What:		/sys/bus/platform/devices/INOU0000:XX/super_key_toggle_enable
+ What:		/sys/bus/platform/devices/INOU0000:XX/super_key_enable
  Date:		November 2025
  KernelVersion:	6.19
  Contact:	Armin Wolf <W_Armin@gmx.de>
  Description:
- 	Allows userspace applications to enable/disable the super key functionality
- 	of the integrated keyboard by writing "1"/"0" into this file.
+ 	Allows userspace applications to enable/disable the super key of the integrated
+ 	keyboard by writing "1"/"0" into this file.

- 	Reading this file returns the current enable status of the super key functionality.
+ 	Reading this file returns the current enable status of the super key.

  What:		/sys/bus/platform/devices/INOU0000:XX/touchpad_toggle_enable
  Date:		November 2025
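Editor's usage sketch (not part of the patch): toggling these attributes is a plain sysfs write. The instance suffix ":00" below is hypothetical; the ABI entries above leave it as the ":XX" placeholder.

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	/* ":00" is a made-up instance; the ABI doc writes it as ":XX". */
	int fd = open("/sys/bus/platform/devices/INOU0000:00/fn_lock", O_WRONLY);
	ssize_t n;

	if (fd < 0)
		return 1;
	/* Per the Description above: "1" enables the FN lock, "0" disables it. */
	n = write(fd, "1", 1);
	close(fd);
	return n == 1 ? 0 : 1;
}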
+13
Documentation/admin-guide/kernel-parameters.txt
···
  	TPM	TPM drivers are enabled.
  	UMS	USB Mass Storage support is enabled.
  	USB	USB support is enabled.
+ 	NVME	NVMe support is enabled.
  	USBHID	USB Human Interface Device support is enabled.
  	V4L	Video For Linux support is enabled.
  	VGA	The VGA console has been enabled.
···
  			'node', 'default' can be specified
  			This can be set from sysctl after boot.
  			See Documentation/admin-guide/sysctl/vm.rst for details.
+
+ 	nvme.quirks=	[NVME] A list of quirk entries to augment the built-in
+ 			nvme quirk list. List entries are separated by a
+ 			'-' character.
+ 			Each entry has the form VendorID:ProductID:quirk_names.
+ 			The IDs are 4-digit hex numbers and quirk_names is a
+ 			list of quirk names separated by commas. A quirk name
+ 			can be prefixed by '^', meaning that the specified
+ 			quirk must be disabled.
+
+ 			Example:
+ 			nvme.quirks=7710:2267:bogus_nid,^identify_cns-9900:7711:broken_msi

  	ohci1394_dma=early	[HW,EARLY] enable debugging via the ohci1394 driver.
  			See Documentation/core-api/debugging-via-ohci1394.rst for more
+1 -1
Documentation/admin-guide/laptops/uniwill-laptop.rst
···

  The ``uniwill-laptop`` driver allows the user to enable/disable:

- - the FN and super key lock functionality of the integrated keyboard
+ - the FN lock and super key of the integrated keyboard
  - the touchpad toggle functionality of the integrated touchpad

  See Documentation/ABI/testing/sysfs-driver-uniwill-laptop for details.
-1
Documentation/devicetree/bindings/hwmon/kontron,sl28cpld-hwmon.yaml
···
  properties:
    compatible:
      enum:
-       - kontron,sa67mcu-hwmon
        - kontron,sl28cpld-fan

    reg:
+1
Documentation/devicetree/bindings/net/can/nxp,sja1000.yaml
···

  allOf:
    - $ref: can-controller.yaml#
+   - $ref: /schemas/memory-controllers/mc-peripheral-props.yaml
    - if:
        properties:
          compatible:
+1
Documentation/devicetree/bindings/sound/nvidia,tegra-audio-graph-card.yaml
···
      enum:
        - nvidia,tegra210-audio-graph-card
        - nvidia,tegra186-audio-graph-card
+       - nvidia,tegra238-audio-graph-card
        - nvidia,tegra264-audio-graph-card

  clocks:
+1
Documentation/devicetree/bindings/sound/renesas,rz-ssi.yaml
···
        - renesas,r9a07g044-ssi # RZ/G2{L,LC}
        - renesas,r9a07g054-ssi # RZ/V2L
        - renesas,r9a08g045-ssi # RZ/G3S
+       - renesas,r9a08g046-ssi # RZ/G3L
      - const: renesas,rz-ssi

  reg:
+1 -1
Documentation/hwmon/emc1403.rst
···
  - https://ww1.microchip.com/downloads/en/DeviceDoc/EMC1438%20DS%20Rev.%201.0%20(04-29-10).pdf

  Author:
-     Kalhan Trisal <kalhan.trisal@intel.com
+     Kalhan Trisal <kalhan.trisal@intel.com>


  Description
-1
Documentation/hwmon/index.rst
···
     q54sj108a2
     qnap-mcu-hwmon
     raspberrypi-hwmon
-    sa67
     sbrmi
     sbtsi_temp
     sch5627
-41
Documentation/hwmon/sa67.rst
- .. SPDX-License-Identifier: GPL-2.0-only
-
- Kernel driver sa67mcu
- =====================
-
- Supported chips:
-
-  * Kontron sa67mcu
-
-    Prefix: 'sa67mcu'
-
-    Datasheet: not available
-
- Authors: Michael Walle <mwalle@kernel.org>
-
- Description
- -----------
-
- The sa67mcu is a board management controller which also exposes a hardware
- monitoring controller.
-
- The controller has two voltage and one temperature sensor. The values are
- hold in two 8 bit registers to form one 16 bit value. Reading the lower byte
- will also capture the high byte to make the access atomic. The unit of the
- volatge sensors are 1mV and the unit of the temperature sensor is 0.1degC.
-
- Sysfs entries
- -------------
-
- The following attributes are supported.
-
- ======================= ========================================================
- in0_label               "VDDIN"
- in0_input               Measured VDDIN voltage.
-
- in1_label               "VDD_RTC"
- in1_input               Measured VDD_RTC voltage.
-
- temp1_input             MCU temperature. Roughly the board temperature.
- ======================= ========================================================
-
+2 -2
Documentation/netlink/specs/nfsd.yaml
···
          - compound-ops
      -
        name: threads-set
-       doc: set the number of running threads
+       doc: set the maximum number of running threads
        attribute-set: server
        flags: [admin-perm]
        do:
···
          - min-threads
      -
        name: threads-get
-       doc: get the number of running threads
+       doc: get the maximum number of running threads
        attribute-set: server
        do:
          reply:
+4
Documentation/sound/alsa-configuration.rst
···
        audible volume
    * bit 25: ``mixer_capture_min_mute``
        Similar to bit 24 but for capture streams
+   * bit 26: ``skip_iface_setup``
+       Skip the probe-time interface setup (usb_set_interface,
+       init_pitch, init_sample_rate); redundant with
+       snd_usb_endpoint_prepare() at stream-open time

  This module supports multiple devices, autoprobe and hotplugging.
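Editor's note, not from the patch: these bits form the ``quirk_flags`` bitmask documented in this section, so the new bit 26 corresponds to the value 0x04000000 (1 << 26). A trivial check:

#include <stdio.h>

int main(void)
{
	/* Bit 26 of the snd-usb-audio quirk_flags bitmask described above. */
	unsigned int skip_iface_setup = 1u << 26;

	printf("0x%08x\n", skip_iface_setup);	/* prints 0x04000000 */
	return 0;
}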
+10 -27
MAINTAINERS
···
  F: drivers/thermal/thermal_mmio.c

  AMAZON ETHERNET DRIVERS
- M: Shay Agroskin <shayagr@amazon.com>
  M: Arthur Kiyanovski <akiyano@amazon.com>
- R: David Arinzon <darinzon@amazon.com>
- R: Saeed Bishara <saeedb@amazon.com>
+ M: David Arinzon <darinzon@amazon.com>
  L: netdev@vger.kernel.org
  S: Maintained
  F: Documentation/networking/device_drivers/ethernet/amazon/ena.rst
···
  BLUETOOTH SUBSYSTEM
  M: Marcel Holtmann <marcel@holtmann.org>
- M: Johan Hedberg <johan.hedberg@gmail.com>
  M: Luiz Augusto von Dentz <luiz.dentz@gmail.com>
  L: linux-bluetooth@vger.kernel.org
  S: Supported
···
  FREESCALE IMX / MXC FEC DRIVER
  M: Wei Fang <wei.fang@nxp.com>
+ R: Frank Li <frank.li@nxp.com>
  R: Shenwei Wang <shenwei.wang@nxp.com>
- R: Clark Wang <xiaoning.wang@nxp.com>
  L: imx@lists.linux.dev
  L: netdev@vger.kernel.org
  S: Maintained
···
  F: Documentation/trace/ftrace*
  F: arch/*/*/*/*ftrace*
  F: arch/*/*/*ftrace*
- F: include/*/ftrace.h
+ F: include/*/*ftrace*
  F: kernel/trace/fgraph.c
  F: kernel/trace/ftrace*
  F: samples/ftrace
···
  M: Haren Myneni <haren@linux.ibm.com>
  M: Rick Lindsley <ricklind@linux.ibm.com>
  R: Nick Child <nnac123@linux.ibm.com>
- R: Thomas Falcon <tlfalcon@linux.ibm.com>
  L: netdev@vger.kernel.org
  S: Maintained
  F: drivers/net/ethernet/ibm/ibmvnic.*
···
  KERNEL UNIT TESTING FRAMEWORK (KUnit)
  M: Brendan Higgins <brendan.higgins@linux.dev>
- M: David Gow <davidgow@google.com>
+ M: David Gow <david@davidgow.net>
  R: Rae Moar <raemoar63@gmail.com>
  L: linux-kselftest@vger.kernel.org
  L: kunit-dev@googlegroups.com
···
  F: drivers/platform/x86/hp/hp_accel.c

  LIST KUNIT TEST
- M: David Gow <davidgow@google.com>
+ M: David Gow <david@davidgow.net>
  L: linux-kselftest@vger.kernel.org
  L: kunit-dev@googlegroups.com
  S: Maintained
···
  F: include/linux/soc/marvell/octeontx2/

  MARVELL GIGABIT ETHERNET DRIVERS (skge/sky2)
- M: Mirko Lindner <mlindner@marvell.com>
- M: Stephen Hemminger <stephen@networkplumber.org>
  L: netdev@vger.kernel.org
- S: Odd fixes
+ S: Orphan
  F: drivers/net/ethernet/marvell/sk*
···
  M: Sunil Goutham <sgoutham@marvell.com>
  M: Linu Cherian <lcherian@marvell.com>
  M: Geetha sowjanya <gakula@marvell.com>
- M: Jerin Jacob <jerinj@marvell.com>
  M: hariprasad <hkelam@marvell.com>
  M: Subbaraya Sundeep <sbhatta@marvell.com>
  L: netdev@vger.kernel.org
···
  F: drivers/perf/marvell_pem_pmu.c

  MARVELL PRESTERA ETHERNET SWITCH DRIVER
- M: Taras Chornyi <taras.chornyi@plvision.eu>
+ M: Elad Nachman <enachman@marvell.com>
  S: Supported
  W: https://github.com/Marvell-switching/switchdev-prestera
  F: drivers/net/ethernet/marvell/prestera/
···
  MEDIATEK ETHERNET DRIVER
  M: Felix Fietkau <nbd@nbd.name>
- M: Sean Wang <sean.wang@mediatek.com>
  M: Lorenzo Bianconi <lorenzo@kernel.org>
  L: netdev@vger.kernel.org
  S: Maintained
···
  MEDIATEK SWITCH DRIVER
  M: Chester A. Unal <chester.a.unal@arinc9.com>
  M: Daniel Golle <daniel@makrotopia.org>
- M: DENG Qingfang <dqfext@gmail.com>
- M: Sean Wang <sean.wang@mediatek.com>
  L: netdev@vger.kernel.org
  S: Maintained
  F: drivers/net/dsa/mt7530-mdio.c
···
  OCELOT ETHERNET SWITCH DRIVER
  M: Vladimir Oltean <vladimir.oltean@nxp.com>
- M: Claudiu Manoil <claudiu.manoil@nxp.com>
- M: Alexandre Belloni <alexandre.belloni@bootlin.com>
  M: UNGLinuxDriver@microchip.com
  L: netdev@vger.kernel.org
  S: Supported
···
  F: include/dt-bindings/

  OPENCOMPUTE PTP CLOCK DRIVER
- M: Jonathan Lemon <jonathan.lemon@gmail.com>
  M: Vadim Fedorenko <vadim.fedorenko@linux.dev>
  L: netdev@vger.kernel.org
  S: Maintained
···
  F: drivers/pci/controller/pci-aardvark.c

  PCI DRIVER FOR ALTERA PCIE IP
- M: Joyce Ooi <joyce.ooi@intel.com>
  L: linux-pci@vger.kernel.org
- S: Supported
+ S: Orphan
  F: Documentation/devicetree/bindings/pci/altr,pcie-root-port.yaml
  F: drivers/pci/controller/pcie-altera.c
···
  F: Documentation/PCI/pci-error-recovery.rst

  PCI MSI DRIVER FOR ALTERA MSI IP
- M: Joyce Ooi <joyce.ooi@intel.com>
  L: linux-pci@vger.kernel.org
- S: Supported
+ S: Orphan
  F: Documentation/devicetree/bindings/interrupt-controller/altr,msi-controller.yaml
  F: drivers/pci/controller/pcie-altera-msi.c
···
  F: drivers/scsi/qedi/

  QLOGIC QL4xxx ETHERNET DRIVER
- M: Manish Chopra <manishc@marvell.com>
  L: netdev@vger.kernel.org
- S: Maintained
+ S: Orphan
  F: drivers/net/ethernet/qlogic/qed/
  F: drivers/net/ethernet/qlogic/qede/
  F: include/linux/qed/
···
  F: Documentation/devicetree/bindings/pwm/kontron,sl28cpld-pwm.yaml
  F: Documentation/devicetree/bindings/watchdog/kontron,sl28cpld-wdt.yaml
  F: drivers/gpio/gpio-sl28cpld.c
- F: drivers/hwmon/sa67mcu-hwmon.c
  F: drivers/hwmon/sl28cpld-hwmon.c
  F: drivers/irqchip/irq-sl28cpld.c
  F: drivers/pwm/pwm-sl28cpld.c
+5 -5
Makefile
···
  VERSION = 7
  PATCHLEVEL = 0
  SUBLEVEL = 0
- EXTRAVERSION = -rc2
+ EXTRAVERSION = -rc3
  NAME = Baby Opossum Posse

  # *DOCUMENTATION*
···
  	$(Q)$(MAKE) -sC $(srctree)/tools/bpf/resolve_btfids O=$(resolve_btfids_O) clean
  endif

- PHONY += objtool_clean
+ PHONY += objtool_clean objtool_mrproper

  objtool_O = $(abspath $(objtree))/tools/objtool

- objtool_clean:
+ objtool_clean objtool_mrproper:
  ifneq ($(wildcard $(objtool_O)),)
- 	$(Q)$(MAKE) -sC $(abs_srctree)/tools/objtool O=$(objtool_O) srctree=$(abs_srctree) clean
+ 	$(Q)$(MAKE) -sC $(abs_srctree)/tools/objtool O=$(objtool_O) srctree=$(abs_srctree) $(patsubst objtool_%,%,$@)
  endif

  tools/: FORCE
···
  $(mrproper-dirs):
  	$(Q)$(MAKE) $(clean)=$(patsubst _mrproper_%,%,$@)

- mrproper: clean $(mrproper-dirs)
+ mrproper: clean objtool_mrproper $(mrproper-dirs)
  	$(call cmd,rmfiles)
  	@find . $(RCS_FIND_IGNORE) \
  		\( -name '*.rmeta' \) \
+1
arch/alpha/kernel/vmlinux.lds.S
···

  STABS_DEBUG
  DWARF_DEBUG
+ MODINFO
  ELF_DETAILS

  DISCARDS
+1
arch/arc/kernel/vmlinux.lds.S
···
  _end = . ;

  STABS_DEBUG
+ MODINFO
  ELF_DETAILS
  DISCARDS

+1
arch/arm/boot/compressed/vmlinux.lds.S
···
  COMMON_DISCARDS
  *(.ARM.exidx*)
  *(.ARM.extab*)
+ *(.modinfo)
  *(.note.*)
  *(.rel.*)
  *(.printk_index)
+1
arch/arm/kernel/vmlinux-xip.lds.S
···

  STABS_DEBUG
  DWARF_DEBUG
+ MODINFO
  ARM_DETAILS

  ARM_ASSERTS
+1
arch/arm/kernel/vmlinux.lds.S
···

  STABS_DEBUG
  DWARF_DEBUG
+ MODINFO
  ARM_DETAILS

  ARM_ASSERTS
+7 -5
arch/arm64/include/asm/cmpxchg.h
···
  #define __xchg_wrapper(sfx, ptr, x)					\
  ({									\
  	__typeof__(*(ptr)) __ret;					\
- 	__ret = (__typeof__(*(ptr)))					\
- 		__arch_xchg##sfx((unsigned long)(x), (ptr), sizeof(*(ptr))); \
+ 	__ret = (__force __typeof__(*(ptr)))				\
+ 		__arch_xchg##sfx((__force unsigned long)(x), (ptr),	\
+ 				 sizeof(*(ptr)));			\
  	__ret;								\
  })
···
  #define __cmpxchg_wrapper(sfx, ptr, o, n)				\
  ({									\
  	__typeof__(*(ptr)) __ret;					\
- 	__ret = (__typeof__(*(ptr)))					\
- 		__cmpxchg##sfx((ptr), (unsigned long)(o),		\
- 			       (unsigned long)(n), sizeof(*(ptr)));	\
+ 	__ret = (__force __typeof__(*(ptr)))				\
+ 		__cmpxchg##sfx((ptr), (__force unsigned long)(o),	\
+ 			       (__force unsigned long)(n),		\
+ 			       sizeof(*(ptr)));				\
  	__ret;								\
  })
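Editor's illustration of the effect of the added __force casts; that they exist to keep sparse quiet on bitwise-annotated types is an assumption, the patch itself does not say. A hypothetical kernel-side caller that such wrappers would accept without sparse warnings:

#include <linux/types.h>
#include <linux/atomic.h>	/* xchg() */
#include <asm/byteorder.h>	/* cpu_to_le64() */

/*
 * Hypothetical helper: atomically replace a little-endian field. Without
 * the __force casts in the wrapper, converting __le64 to unsigned long
 * would draw a sparse "cast from restricted type" warning.
 */
static inline __le64 le64_xchg(__le64 *p, u64 val)
{
	return xchg(p, cpu_to_le64(val));
}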
+5 -5
arch/arm64/include/asm/pgtable-prot.h
···

  #define _PAGE_DEFAULT		(_PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))

- #define _PAGE_KERNEL		(PROT_NORMAL)
- #define _PAGE_KERNEL_RO		((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY)
- #define _PAGE_KERNEL_ROX	((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY)
- #define _PAGE_KERNEL_EXEC	(PROT_NORMAL & ~PTE_PXN)
- #define _PAGE_KERNEL_EXEC_CONT	((PROT_NORMAL & ~PTE_PXN) | PTE_CONT)
+ #define _PAGE_KERNEL		(PROT_NORMAL | PTE_DIRTY)
+ #define _PAGE_KERNEL_RO		((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY | PTE_DIRTY)
+ #define _PAGE_KERNEL_ROX	((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY | PTE_DIRTY)
+ #define _PAGE_KERNEL_EXEC	((PROT_NORMAL & ~PTE_PXN) | PTE_DIRTY)
+ #define _PAGE_KERNEL_EXEC_CONT	((PROT_NORMAL & ~PTE_PXN) | PTE_CONT | PTE_DIRTY)

  #define _PAGE_SHARED		(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
  #define _PAGE_SHARED_EXEC	(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_WRITE)
+4
arch/arm64/include/asm/runtime-const.h
···
  #ifndef _ASM_RUNTIME_CONST_H
  #define _ASM_RUNTIME_CONST_H

+ #ifdef MODULE
+ #error "Cannot use runtime-const infrastructure from modules"
+ #endif
+
  #include <asm/cacheflush.h>

  /* Sigh. You can still run arm64 in BE mode */
+1
arch/arm64/kernel/vmlinux.lds.S
···

  STABS_DEBUG
  DWARF_DEBUG
+ MODINFO
  ELF_DETAILS

  HEAD_SYMBOLS
+49 -4
arch/arm64/mm/contpte.c
···
  }
  EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes);

+ static bool contpte_all_subptes_match_access_flags(pte_t *ptep, pte_t entry)
+ {
+ 	pte_t *cont_ptep = contpte_align_down(ptep);
+ 	/*
+ 	 * PFNs differ per sub-PTE. Match only bits consumed by
+ 	 * __ptep_set_access_flags(): AF, DIRTY and write permission.
+ 	 */
+ 	const pteval_t cmp_mask = PTE_RDONLY | PTE_AF | PTE_WRITE | PTE_DIRTY;
+ 	pteval_t entry_cmp = pte_val(entry) & cmp_mask;
+ 	int i;
+
+ 	for (i = 0; i < CONT_PTES; i++) {
+ 		pteval_t pte_cmp = pte_val(__ptep_get(cont_ptep + i)) & cmp_mask;
+
+ 		if (pte_cmp != entry_cmp)
+ 			return false;
+ 	}
+
+ 	return true;
+ }
+
  int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
  				  unsigned long addr, pte_t *ptep,
  				  pte_t entry, int dirty)
···
  	int i;

  	/*
- 	 * Gather the access/dirty bits for the contiguous range. If nothing has
- 	 * changed, its a noop.
+ 	 * Check whether all sub-PTEs in the CONT block already match the
+ 	 * requested access flags/write permission, using raw per-PTE values
+ 	 * rather than the gathered ptep_get() view.
+ 	 *
+ 	 * __ptep_set_access_flags() can update AF, dirty and write
+ 	 * permission, but only to make the mapping more permissive.
+ 	 *
+ 	 * ptep_get() gathers AF/dirty state across the whole CONT block,
+ 	 * which is correct for a CPU with FEAT_HAFDBS. But page-table
+ 	 * walkers that evaluate each descriptor individually (e.g. a CPU
+ 	 * without DBM support, or an SMMU without HTTU, or with HA/HD
+ 	 * disabled in CD.TCR) can keep faulting on the target sub-PTE if
+ 	 * only a sibling has been updated. Gathering can therefore cause
+ 	 * false no-ops when only a sibling has been updated:
+ 	 * - write faults: target still has PTE_RDONLY (needs PTE_RDONLY cleared)
+ 	 * - read faults: target still lacks PTE_AF
+ 	 *
+ 	 * Per Arm ARM (DDI 0487) D8.7.1, any sub-PTE in a CONT range may
+ 	 * become the effective cached translation, so all entries must have
+ 	 * consistent attributes. Check the full CONT block before returning
+ 	 * no-op, and when any sub-PTE mismatches, proceed to update the whole
+ 	 * range.
  	 */
- 	orig_pte = pte_mknoncont(ptep_get(ptep));
- 	if (pte_val(orig_pte) == pte_val(entry))
+ 	if (contpte_all_subptes_match_access_flags(ptep, entry))
  		return 0;
+
+ 	/*
+ 	 * Use raw target pte (not gathered) for write-bit unfold decision.
+ 	 */
+ 	orig_pte = pte_mknoncont(__ptep_get(ptep));

  	/*
  	 * We can fix up access/dirty bits without having to unfold the contig
+1
arch/csky/kernel/vmlinux.lds.S
···

  STABS_DEBUG
  DWARF_DEBUG
+ MODINFO
  ELF_DETAILS

  DISCARDS
+1
arch/hexagon/kernel/vmlinux.lds.S
···

  STABS_DEBUG
  DWARF_DEBUG
+ MODINFO
  ELF_DETAILS
  .hexagon.attributes 0 : { *(.hexagon.attributes) }

+1
arch/loongarch/kernel/vmlinux.lds.S
···

  STABS_DEBUG
  DWARF_DEBUG
+ MODINFO
  ELF_DETAILS

  #ifdef CONFIG_EFI_STUB
+1
arch/m68k/kernel/vmlinux-nommu.lds
···
  _end = .;

  STABS_DEBUG
+ MODINFO
  ELF_DETAILS

  /* Sections to be discarded */
+1
arch/m68k/kernel/vmlinux-std.lds
···
  _end = . ;

  STABS_DEBUG
+ MODINFO
  ELF_DETAILS

  /* Sections to be discarded */
+1
arch/m68k/kernel/vmlinux-sun3.lds
···
  _end = . ;

  STABS_DEBUG
+ MODINFO
  ELF_DETAILS

  /* Sections to be discarded */
+1
arch/mips/kernel/vmlinux.lds.S
···

  STABS_DEBUG
  DWARF_DEBUG
+ MODINFO
  ELF_DETAILS

  /* These must appear regardless of . */
+1
arch/nios2/kernel/vmlinux.lds.S
···

  STABS_DEBUG
  DWARF_DEBUG
+ MODINFO
  ELF_DETAILS

  DISCARDS
+1
arch/openrisc/kernel/vmlinux.lds.S
···
  /* Throw in the debugging sections */
  STABS_DEBUG
  DWARF_DEBUG
+ MODINFO
  ELF_DETAILS

  /* Sections to be discarded -- must be last */
+1
arch/parisc/boot/compressed/vmlinux.lds.S
···
  	/* Sections to be discarded */
  	DISCARDS
  	/DISCARD/ : {
+ 		*(.modinfo)
  #ifdef CONFIG_64BIT
  		/* temporary hack until binutils is fixed to not emit these
  		 * for static binaries
+1 -1
arch/parisc/include/asm/pgtable.h
···
  	printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, (unsigned long)pgd_val(e))

  /* This is the size of the initially mapped kernel memory */
- #if defined(CONFIG_64BIT)
+ #if defined(CONFIG_64BIT) || defined(CONFIG_KALLSYMS)
  #define KERNEL_INITIAL_ORDER	26	/* 1<<26 = 64MB */
  #else
  #define KERNEL_INITIAL_ORDER	25	/* 1<<25 = 32MB */
+6 -1
arch/parisc/kernel/head.S
···

  	.import __bss_start,data
  	.import __bss_stop,data
+ 	.import __end,data

  	load32		PA(__bss_start),%r3
  	load32		PA(__bss_stop),%r4
···
  	 * everything ... it will get remapped correctly later */
  	ldo		0+_PAGE_KERNEL_RWX(%r0),%r3 /* Hardwired 0 phys addr start */
  	load32		(1<<(KERNEL_INITIAL_ORDER-PAGE_SHIFT)),%r11 /* PFN count */
- 	load32		PA(pg0),%r1
+ 	load32		PA(_end),%r1
+ 	SHRREG		%r1,PAGE_SHIFT,%r1	/* %r1 is PFN count for _end symbol */
+ 	cmpb,<<,n	%r11,%r1,1f
+ 	copy		%r1,%r11	/* %r1 PFN count smaller than %r11 */
+ 1:	load32		PA(pg0),%r1

  $pgt_fill_loop:
  	STREGM		%r3,ASM_PTE_ENTRY_SIZE(%r1)
+12 -8
arch/parisc/kernel/setup.c
···
  #endif
  	printk(KERN_CONT ".\n");

- 	/*
- 	 * Check if initial kernel page mappings are sufficient.
- 	 * panic early if not, else we may access kernel functions
- 	 * and variables which can't be reached.
- 	 */
- 	if (__pa((unsigned long) &_end) >= KERNEL_INITIAL_SIZE)
- 		panic("KERNEL_INITIAL_ORDER too small!");
-
  #ifdef CONFIG_64BIT
  	if(parisc_narrow_firmware) {
  		printk(KERN_INFO "Kernel is using PDC in 32-bit mode.\n");
···
  {
  	int ret, cpunum;
  	struct pdc_coproc_cfg coproc_cfg;
+
+ 	/*
+ 	 * Check if initial kernel page mapping is sufficient.
+ 	 * Print warning if not, because we may access kernel functions and
+ 	 * variables which can't be reached yet through the initial mappings.
+ 	 * Note that the panic() and printk() functions are not functional
+ 	 * yet, so we need to use direct iodc() firmware calls instead.
+ 	 */
+ 	const char warn1[] = "CRITICAL: Kernel may crash because "
+ 			"KERNEL_INITIAL_ORDER is too small.\n";
+ 	if (__pa((unsigned long) &_end) >= KERNEL_INITIAL_SIZE)
+ 		pdc_iodc_print(warn1, sizeof(warn1) - 1);

  	/* check QEMU/SeaBIOS marker in PAGE0 */
  	running_on_qemu = (memcmp(&PAGE0->pad0, "SeaBIOS", 8) == 0);
+1
arch/parisc/kernel/vmlinux.lds.S
···
  _end = . ;

  STABS_DEBUG
+ MODINFO
  ELF_DETAILS
  .note 0 : { *(.note) }

+7
arch/powerpc/kernel/pci_of_scan.c
···
  	dev->error_state = pci_channel_io_normal;
  	dev->dma_mask = 0xffffffff;

+ 	/*
+ 	 * Assume 64-bit addresses for MSI initially. Will be changed to 32-bit
+ 	 * if MSI (rather than MSI-X) capability does not have
+ 	 * PCI_MSI_FLAGS_64BIT. Can also be overridden by driver.
+ 	 */
+ 	dev->msi_addr_mask = DMA_BIT_MASK(64);
+
  	/* Early fixups, before probing the BARs */
  	pci_fixup_device(pci_fixup_early, dev);
+1
arch/powerpc/kernel/vmlinux.lds.S
···
  _end = . ;

  DWARF_DEBUG
+ MODINFO
  ELF_DETAILS

  DISCARDS
+1
arch/riscv/kernel/vmlinux.lds.S
···

  STABS_DEBUG
  DWARF_DEBUG
+ MODINFO
  ELF_DETAILS
  .riscv.attributes 0 : { *(.riscv.attributes) }

+1 -1
arch/s390/include/asm/processor.h
··· 159 159 " j 4f\n" 160 160 "3: mvc 8(1,%[addr]),0(%[addr])\n" 161 161 "4:" 162 - : [addr] "+&a" (erase_low), [count] "+&d" (count), [tmp] "=&a" (tmp) 162 + : [addr] "+&a" (erase_low), [count] "+&a" (count), [tmp] "=&a" (tmp) 163 163 : [poison] "d" (poison) 164 164 : "memory", "cc" 165 165 );
+1
arch/s390/kernel/vmlinux.lds.S
···
  	/* Debugging sections.	*/
  	STABS_DEBUG
  	DWARF_DEBUG
+ 	MODINFO
  	ELF_DETAILS

  	/*
+5 -6
arch/s390/lib/xor.c
··· 28 28 " j 3f\n" 29 29 "2: xc 0(1,%1),0(%2)\n" 30 30 "3:" 31 - : : "d" (bytes), "a" (p1), "a" (p2) 32 - : "0", "cc", "memory"); 31 + : "+a" (bytes), "+a" (p1), "+a" (p2) 32 + : : "0", "cc", "memory"); 33 33 } 34 34 35 35 static void xor_xc_3(unsigned long bytes, unsigned long * __restrict p1, ··· 54 54 "2: xc 0(1,%1),0(%2)\n" 55 55 "3: xc 0(1,%1),0(%3)\n" 56 56 "4:" 57 - : "+d" (bytes), "+a" (p1), "+a" (p2), "+a" (p3) 57 + : "+a" (bytes), "+a" (p1), "+a" (p2), "+a" (p3) 58 58 : : "0", "cc", "memory"); 59 59 } 60 60 ··· 85 85 "3: xc 0(1,%1),0(%3)\n" 86 86 "4: xc 0(1,%1),0(%4)\n" 87 87 "5:" 88 - : "+d" (bytes), "+a" (p1), "+a" (p2), "+a" (p3), "+a" (p4) 88 + : "+a" (bytes), "+a" (p1), "+a" (p2), "+a" (p3), "+a" (p4) 89 89 : : "0", "cc", "memory"); 90 90 } 91 91 ··· 96 96 const unsigned long * __restrict p5) 97 97 { 98 98 asm volatile( 99 - " larl 1,2f\n" 100 99 " aghi %0,-1\n" 101 100 " jm 6f\n" 102 101 " srlg 0,%0,8\n" ··· 121 122 "4: xc 0(1,%1),0(%4)\n" 122 123 "5: xc 0(1,%1),0(%5)\n" 123 124 "6:" 124 - : "+d" (bytes), "+a" (p1), "+a" (p2), "+a" (p3), "+a" (p4), 125 + : "+a" (bytes), "+a" (p1), "+a" (p2), "+a" (p3), "+a" (p4), 125 126 "+a" (p5) 126 127 : : "0", "cc", "memory"); 127 128 }
+1
arch/sh/kernel/vmlinux.lds.S
···

  STABS_DEBUG
  DWARF_DEBUG
+ MODINFO
  ELF_DETAILS

  DISCARDS
+7
arch/sparc/kernel/pci.c
···
  	dev->error_state = pci_channel_io_normal;
  	dev->dma_mask = 0xffffffff;

+ 	/*
+ 	 * Assume 64-bit addresses for MSI initially. Will be changed to 32-bit
+ 	 * if MSI (rather than MSI-X) capability does not have
+ 	 * PCI_MSI_FLAGS_64BIT. Can also be overridden by driver.
+ 	 */
+ 	dev->msi_addr_mask = DMA_BIT_MASK(64);
+
  	if (of_node_name_eq(node, "pci")) {
  		/* a PCI-PCI bridge */
  		dev->hdr_type = PCI_HEADER_TYPE_BRIDGE;
+1
arch/sparc/kernel/vmlinux.lds.S
···

  STABS_DEBUG
  DWARF_DEBUG
+ MODINFO
  ELF_DETAILS

  DISCARDS
+1
arch/um/kernel/dyn.lds.S
···

  STABS_DEBUG
  DWARF_DEBUG
+ MODINFO
  ELF_DETAILS

  DISCARDS
+1
arch/um/kernel/uml.lds.S
···

  STABS_DEBUG
  DWARF_DEBUG
+ MODINFO
  ELF_DETAILS

  DISCARDS
+1
arch/x86/boot/compressed/Makefile
···

  ifdef CONFIG_EFI_SBAT
  $(obj)/sbat.o: $(CONFIG_EFI_SBAT_FILE)
+ AFLAGS_sbat.o += -I $(srctree)
  endif

  $(obj)/vmlinux: $(vmlinux-objs-y) $(vmlinux-libs-y) FORCE
+5 -4
arch/x86/boot/compressed/sev.c
···
  #include "sev.h"

  static struct ghcb boot_ghcb_page __aligned(PAGE_SIZE);
- struct ghcb *boot_ghcb;
+ struct ghcb *boot_ghcb __section(".data");

  #undef __init
  #define __init

  #define __BOOT_COMPRESSED

- u8 snp_vmpl;
- u16 ghcb_version;
+ u8 snp_vmpl __section(".data");
+ u16 ghcb_version __section(".data");

- u64 boot_svsm_caa_pa;
+ u64 boot_svsm_caa_pa __section(".data");

  /* Include code for early handlers */
  #include "../../boot/startup/sev-shared.c"
···
  					 MSR_AMD64_SNP_RESERVED_BIT13 |	\
  					 MSR_AMD64_SNP_RESERVED_BIT15 |	\
  					 MSR_AMD64_SNP_SECURE_AVIC |	\
+ 					 MSR_AMD64_SNP_RESERVED_BITS19_22 | \
  					 MSR_AMD64_SNP_RESERVED_MASK)

  #ifdef CONFIG_AMD_SECURE_AVIC
+1 -1
arch/x86/boot/compressed/vmlinux.lds.S
···
  	/DISCARD/ : {
  		*(.dynamic) *(.dynsym) *(.dynstr) *(.dynbss)
  		*(.hash) *(.gnu.hash)
- 		*(.note.*)
+ 		*(.note.*) *(.modinfo)
  	}

  	.got.plt (INFO) : {
+1 -1
arch/x86/boot/startup/sev-shared.c
···
  static u32 cpuid_hyp_range_max __ro_after_init;
  static u32 cpuid_ext_range_max __ro_after_init;

- bool sev_snp_needs_sfw;
+ bool sev_snp_needs_sfw __section(".data");

  void __noreturn
  sev_es_terminate(unsigned int set, unsigned int reason)
+1
arch/x86/coco/sev/core.c
···
  	[MSR_AMD64_SNP_VMSA_REG_PROT_BIT]	= "VMSARegProt",
  	[MSR_AMD64_SNP_SMT_PROT_BIT]		= "SMTProt",
  	[MSR_AMD64_SNP_SECURE_AVIC_BIT]		= "SecureAVIC",
+ 	[MSR_AMD64_SNP_IBPB_ON_ENTRY_BIT]	= "IBPBOnEntry",
  };

  /*
+30
arch/x86/entry/vdso/vdso32/sigreturn.S
···
  #endif
  	.endm

+ /*
+  * WARNING:
+  *
+  * A bug in the libgcc unwinder as of at least gcc 15.2 (2026) means that
+  * the unwinder fails to recognize the signal frame flag.
+  *
+  * There is a hacky legacy fallback path in libgcc which ends up
+  * getting invoked instead. It happens to work as long as BOTH of the
+  * following conditions are true:
+  *
+  * 1. There is at least one byte before each of the sigreturn
+  *    functions which falls outside any function. This is enforced by
+  *    an explicit nop instruction before the ALIGN.
+  * 2. The code sequences between the entry point up to and including
+  *    the int $0x80 below need to match EXACTLY. Do not change them
+  *    in any way. The exact byte sequences are:
+  *
+  * __kernel_sigreturn:
+  *    0:	58			pop    %eax
+  *    1:	b8 77 00 00 00		mov    $0x77,%eax
+  *    6:	cd 80			int    $0x80
+  *
+  * __kernel_rt_sigreturn:
+  *    0:	b8 ad 00 00 00		mov    $0xad,%eax
+  *    5:	cd 80			int    $0x80
+  *
+  * For details, see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=124050
+  */
  	.text
  	.globl __kernel_sigreturn
  	.type __kernel_sigreturn,@function
+ 	nop	/* libgcc hack: see comment above */
  	ALIGN
  __kernel_sigreturn:
  	STARTPROC_SIGNAL_FRAME IA32_SIGFRAME_sigcontext
···

  	.globl __kernel_rt_sigreturn
  	.type __kernel_rt_sigreturn,@function
+ 	nop	/* libgcc hack: see comment above */
  	ALIGN
  __kernel_rt_sigreturn:
  	STARTPROC_SIGNAL_FRAME IA32_RT_SIGFRAME_sigcontext
+1 -1
arch/x86/include/asm/efi.h
···
  extern int __init efi_reuse_config(u64 tables, int nr_tables);
  extern void efi_delete_dummy_variable(void);
  extern void efi_crash_gracefully_on_page_fault(unsigned long phys_addr);
- extern void efi_free_boot_services(void);
+ extern void efi_unmap_boot_services(void);

  void arch_efi_call_virt_setup(void);
  void arch_efi_call_virt_teardown(void);
+4 -1
arch/x86/include/asm/msr-index.h
···
  #define MSR_AMD64_SNP_SMT_PROT		BIT_ULL(MSR_AMD64_SNP_SMT_PROT_BIT)
  #define MSR_AMD64_SNP_SECURE_AVIC_BIT	18
  #define MSR_AMD64_SNP_SECURE_AVIC	BIT_ULL(MSR_AMD64_SNP_SECURE_AVIC_BIT)
- #define MSR_AMD64_SNP_RESV_BIT		19
+ #define MSR_AMD64_SNP_RESERVED_BITS19_22	GENMASK_ULL(22, 19)
+ #define MSR_AMD64_SNP_IBPB_ON_ENTRY_BIT	23
+ #define MSR_AMD64_SNP_IBPB_ON_ENTRY	BIT_ULL(MSR_AMD64_SNP_IBPB_ON_ENTRY_BIT)
+ #define MSR_AMD64_SNP_RESV_BIT		24
  #define MSR_AMD64_SNP_RESERVED_MASK	GENMASK_ULL(63, MSR_AMD64_SNP_RESV_BIT)
  #define MSR_AMD64_SAVIC_CONTROL		0xc0010138
  #define MSR_AMD64_SAVIC_EN_BIT		0
+6
arch/x86/include/asm/numa.h
···
   */
  extern s16 __apicid_to_node[MAX_LOCAL_APIC];
  extern nodemask_t numa_nodes_parsed __initdata;
+ extern nodemask_t numa_phys_nodes_parsed __initdata;

  static inline void set_apicid_to_node(int apicid, s16 node)
  {
···
  extern void numa_add_cpu(unsigned int cpu);
  extern void numa_remove_cpu(unsigned int cpu);
  extern void init_gi_nodes(void);
+ extern int num_phys_nodes(void);
  #else	/* CONFIG_NUMA */
  static inline void numa_set_node(int cpu, int node) { }
  static inline void numa_clear_node(int cpu) { }
···
  static inline void numa_add_cpu(unsigned int cpu) { }
  static inline void numa_remove_cpu(unsigned int cpu) { }
  static inline void init_gi_nodes(void) { }
+ static inline int num_phys_nodes(void)
+ {
+ 	return 1;
+ }
  #endif	/* CONFIG_NUMA */

  #ifdef CONFIG_DEBUG_PER_CPU_MAPS
-2
arch/x86/include/asm/pgtable_64.h
···
  extern p4d_t level4_kernel_pgt[512];
  extern p4d_t level4_ident_pgt[512];
  extern pud_t level3_kernel_pgt[512];
- extern pud_t level3_ident_pgt[512];
  extern pmd_t level2_kernel_pgt[512];
  extern pmd_t level2_fixmap_pgt[512];
- extern pmd_t level2_ident_pgt[512];
  extern pte_t level1_fixmap_pgt[512 * FIXMAP_PMD_NUM];
  extern pgd_t init_top_pgt[];
+6
arch/x86/include/asm/topology.h
···
  extern unsigned int __max_threads_per_core;
  extern unsigned int __num_threads_per_package;
  extern unsigned int __num_cores_per_package;
+ extern unsigned int __num_nodes_per_package;

  const char *get_topology_cpu_type_name(struct cpuinfo_x86 *c);
  enum x86_topology_cpu_type get_topology_cpu_type(struct cpuinfo_x86 *c);
···
  static inline unsigned int topology_num_threads_per_package(void)
  {
  	return __num_threads_per_package;
+ }
+
+ static inline unsigned int topology_num_nodes_per_package(void)
+ {
+ 	return __num_nodes_per_package;
  }

  #ifdef CONFIG_X86_LOCAL_APIC
+3
arch/x86/kernel/cpu/common.c
···
  unsigned int __max_logical_packages __ro_after_init = 1;
  EXPORT_SYMBOL(__max_logical_packages);

+ unsigned int __num_nodes_per_package __ro_after_init = 1;
+ EXPORT_SYMBOL(__num_nodes_per_package);
+
  unsigned int __num_cores_per_package __ro_after_init = 1;
  EXPORT_SYMBOL(__num_cores_per_package);
+5 -31
arch/x86/kernel/cpu/resctrl/monitor.c
···
  	msr_clear_bit(MSR_RMID_SNC_CONFIG, 0);
  }

- /* CPU models that support MSR_RMID_SNC_CONFIG */
+ /* CPU models that support SNC and MSR_RMID_SNC_CONFIG */
  static const struct x86_cpu_id snc_cpu_ids[] __initconst = {
  	X86_MATCH_VFM(INTEL_ICELAKE_X, 0),
  	X86_MATCH_VFM(INTEL_SAPPHIRERAPIDS_X, 0),
···
  	{}
  };

- /*
-  * There isn't a simple hardware bit that indicates whether a CPU is running
-  * in Sub-NUMA Cluster (SNC) mode. Infer the state by comparing the
-  * number of CPUs sharing the L3 cache with CPU0 to the number of CPUs in
-  * the same NUMA node as CPU0.
-  * It is not possible to accurately determine SNC state if the system is
-  * booted with a maxcpus=N parameter. That distorts the ratio of SNC nodes
-  * to L3 caches. It will be OK if system is booted with hyperthreading
-  * disabled (since this doesn't affect the ratio).
-  */
  static __init int snc_get_config(void)
  {
- 	struct cacheinfo *ci = get_cpu_cacheinfo_level(0, RESCTRL_L3_CACHE);
- 	const cpumask_t *node0_cpumask;
- 	int cpus_per_node, cpus_per_l3;
- 	int ret;
+ 	int ret = topology_num_nodes_per_package();

- 	if (!x86_match_cpu(snc_cpu_ids) || !ci)
+ 	if (ret > 1 && !x86_match_cpu(snc_cpu_ids)) {
+ 		pr_warn("CoD enabled system? Resctrl not supported\n");
  		return 1;
-
- 	cpus_read_lock();
- 	if (num_online_cpus() != num_present_cpus())
- 		pr_warn("Some CPUs offline, SNC detection may be incorrect\n");
- 	cpus_read_unlock();
-
- 	node0_cpumask = cpumask_of_node(cpu_to_node(0));
-
- 	cpus_per_node = cpumask_weight(node0_cpumask);
- 	cpus_per_l3 = cpumask_weight(&ci->shared_cpu_map);
-
- 	if (!cpus_per_node || !cpus_per_l3)
- 		return 1;
-
- 	ret = cpus_per_l3 / cpus_per_node;
+ 	}

  	/* sanity check: Only valid results are 1, 2, 3, 4, 6 */
  	switch (ret) {
+11 -2
arch/x86/kernel/cpu/topology.c
···
  #include <asm/mpspec.h>
  #include <asm/msr.h>
  #include <asm/smp.h>
+ #include <asm/numa.h>

  #include "cpu.h"
···
  	set_nr_cpu_ids(allowed);

  	cnta = domain_weight(TOPO_PKG_DOMAIN);
- 	cntb = domain_weight(TOPO_DIE_DOMAIN);
  	__max_logical_packages = cnta;
+
+ 	pr_info("Max. logical packages: %3u\n", __max_logical_packages);
+
+ 	cntb = num_phys_nodes();
+ 	__num_nodes_per_package = DIV_ROUND_UP(cntb, cnta);
+
+ 	pr_info("Max. logical nodes: %3u\n", cntb);
+ 	pr_info("Num. nodes per package:%3u\n", __num_nodes_per_package);
+
+ 	cntb = domain_weight(TOPO_DIE_DOMAIN);
  	__max_dies_per_package = 1U << (get_count_order(cntb) - get_count_order(cnta));

- 	pr_info("Max. logical packages: %3u\n", cnta);
  	pr_info("Max. logical dies: %3u\n", cntb);
  	pr_info("Max. dies per package: %3u\n", __max_dies_per_package);
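A worked example of the new per-package node count (hypothetical figures, editor's arithmetic): a dual-socket system booted with SNC-3 parses six physical NUMA nodes, giving three nodes per package.

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int cnta = 2;	/* packages (hypothetical dual socket) */
	unsigned int cntb = 6;	/* num_phys_nodes() with SNC-3 enabled */

	/* Mirrors: __num_nodes_per_package = DIV_ROUND_UP(cntb, cnta); */
	printf("%u\n", DIV_ROUND_UP(cntb, cnta));	/* prints 3 */
	return 0;
}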
-28
arch/x86/kernel/head_64.S
···

  	.data

- #if defined(CONFIG_XEN_PV) || defined(CONFIG_PVH)
- SYM_DATA_START_PTI_ALIGNED(init_top_pgt)
- 	.quad   level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
- 	.org    init_top_pgt + L4_PAGE_OFFSET*8, 0
- 	.quad   level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
- 	.org    init_top_pgt + L4_START_KERNEL*8, 0
- 	/* (2^48-(2*1024*1024*1024))/(2^39) = 511 */
- 	.quad   level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
- 	.fill	PTI_USER_PGD_FILL,8,0
- SYM_DATA_END(init_top_pgt)
-
- SYM_DATA_START_PAGE_ALIGNED(level3_ident_pgt)
- 	.quad	level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
- 	.fill	511, 8, 0
- SYM_DATA_END(level3_ident_pgt)
- SYM_DATA_START_PAGE_ALIGNED(level2_ident_pgt)
- 	/*
- 	 * Since I easily can, map the first 1G.
- 	 * Don't set NX because code runs from these pages.
- 	 *
- 	 * Note: This sets _PAGE_GLOBAL despite whether
- 	 * the CPU supports it or it is enabled. But,
- 	 * the CPU should ignore the bit.
- 	 */
- 	PMDS(0, __PAGE_KERNEL_IDENT_LARGE_EXEC, PTRS_PER_PMD)
- SYM_DATA_END(level2_ident_pgt)
- #else
  SYM_DATA_START_PTI_ALIGNED(init_top_pgt)
  	.fill	512,8,0
  	.fill	PTI_USER_PGD_FILL,8,0
  SYM_DATA_END(init_top_pgt)
- #endif

  SYM_DATA_START_PAGE_ALIGNED(level4_kernel_pgt)
  	.fill	511,8,0
+144 -55
arch/x86/kernel/smpboot.c
···
  }
  #endif

- /*
-  * Set if a package/die has multiple NUMA nodes inside.
-  * AMD Magny-Cours, Intel Cluster-on-Die, and Intel
-  * Sub-NUMA Clustering have this.
-  */
- static bool x86_has_numa_in_package;
-
  static struct sched_domain_topology_level x86_topology[] = {
  	SDTL_INIT(tl_smt_mask, cpu_smt_flags, SMT),
  #ifdef CONFIG_SCHED_CLUSTER
···
  	 * PKG domain since the NUMA domains will auto-magically create the
  	 * right spanning domains based on the SLIT.
  	 */
- 	if (x86_has_numa_in_package) {
+ 	if (topology_num_nodes_per_package() > 1) {
  		unsigned int pkgdom = ARRAY_SIZE(x86_topology) - 2;

  		memset(&x86_topology[pkgdom], 0, sizeof(x86_topology[pkgdom]));
···
  }

  #ifdef CONFIG_NUMA
- static int sched_avg_remote_distance;
- static int avg_remote_numa_distance(void)
+ /*
+  * Test if the on-trace cluster at (N,N) is symmetric.
+  * Uses upper triangle iteration to avoid obvious duplicates.
+  */
+ static bool slit_cluster_symmetric(int N)
  {
- 	int i, j;
- 	int distance, nr_remote, total_distance;
+ 	int u = topology_num_nodes_per_package();

- 	if (sched_avg_remote_distance > 0)
- 		return sched_avg_remote_distance;
-
- 	nr_remote = 0;
- 	total_distance = 0;
- 	for_each_node_state(i, N_CPU) {
- 		for_each_node_state(j, N_CPU) {
- 			distance = node_distance(i, j);
-
- 			if (distance >= REMOTE_DISTANCE) {
- 				nr_remote++;
- 				total_distance += distance;
- 			}
+ 	for (int k = 0; k < u; k++) {
+ 		for (int l = k; l < u; l++) {
+ 			if (node_distance(N + k, N + l) !=
+ 			    node_distance(N + l, N + k))
+ 				return false;
  		}
  	}
- 	if (nr_remote)
- 		sched_avg_remote_distance = total_distance / nr_remote;
- 	else
- 		sched_avg_remote_distance = REMOTE_DISTANCE;

- 	return sched_avg_remote_distance;
+ 	return true;
+ }
+
+ /*
+  * Return the package-id of the cluster, or ~0 if indeterminate.
+  * Each node in the on-trace cluster should have the same package-id.
+  */
+ static u32 slit_cluster_package(int N)
+ {
+ 	int u = topology_num_nodes_per_package();
+ 	u32 pkg_id = ~0;
+
+ 	for (int n = 0; n < u; n++) {
+ 		const struct cpumask *cpus = cpumask_of_node(N + n);
+ 		int cpu;
+
+ 		for_each_cpu(cpu, cpus) {
+ 			u32 id = topology_logical_package_id(cpu);
+
+ 			if (pkg_id == ~0)
+ 				pkg_id = id;
+ 			if (pkg_id != id)
+ 				return ~0;
+ 		}
+ 	}
+
+ 	return pkg_id;
+ }
+
+ /*
+  * Validate the SLIT table is of the form expected for SNC, specifically:
+  *
+  *  - each on-trace cluster should be symmetric,
+  *  - each on-trace cluster should have a unique package-id.
+  *
+  * If you NUMA_EMU on top of SNC, you get to keep the pieces.
+  */
+ static bool slit_validate(void)
+ {
+ 	int u = topology_num_nodes_per_package();
+ 	u32 pkg_id, prev_pkg_id = ~0;
+
+ 	for (int pkg = 0; pkg < topology_max_packages(); pkg++) {
+ 		int n = pkg * u;
+
+ 		/*
+ 		 * Ensure the on-trace cluster is symmetric and each cluster
+ 		 * has a different package id.
+ 		 */
+ 		if (!slit_cluster_symmetric(n))
+ 			return false;
+ 		pkg_id = slit_cluster_package(n);
+ 		if (pkg_id == ~0)
+ 			return false;
+ 		if (pkg && pkg_id == prev_pkg_id)
+ 			return false;
+
+ 		prev_pkg_id = pkg_id;
+ 	}
+
+ 	return true;
+ }
+
+ /*
+  * Compute a sanitized SLIT table for SNC; notably SNC-3 can end up with
+  * asymmetric off-trace clusters, reflecting physical asymmetries. However
+  * this leads to 'unfortunate' sched_domain configurations.
+  *
+  * For example dual socket GNR with SNC-3:
+  *
+  * node distances:
+  * node   0   1   2   3   4   5
+  *   0:  10  15  17  21  28  26
+  *   1:  15  10  15  23  26  23
+  *   2:  17  15  10  26  23  21
+  *   3:  21  28  26  10  15  17
+  *   4:  23  26  23  15  10  15
+  *   5:  26  23  21  17  15  10
+  *
+  * Fix things up by averaging out the off-trace clusters; resulting in:
+  *
+  * node   0   1   2   3   4   5
+  *   0:  10  15  17  24  24  24
+  *   1:  15  10  15  24  24  24
+  *   2:  17  15  10  24  24  24
+  *   3:  24  24  24  10  15  17
+  *   4:  24  24  24  15  10  15
+  *   5:  24  24  24  17  15  10
+  */
+ static int slit_cluster_distance(int i, int j)
+ {
+ 	static int slit_valid = -1;
+ 	int u = topology_num_nodes_per_package();
+ 	long d = 0;
+ 	int x, y;
+
+ 	if (slit_valid < 0) {
+ 		slit_valid = slit_validate();
+ 		if (!slit_valid)
+ 			pr_err(FW_BUG "SLIT table doesn't have the expected form for SNC -- fixup disabled!\n");
+ 		else
+ 			pr_info("Fixing up SNC SLIT table.\n");
+ 	}
+
+ 	/*
+ 	 * Is this a unit cluster on the trace?
+ 	 */
+ 	if ((i / u) == (j / u) || !slit_valid)
+ 		return node_distance(i, j);
+
+ 	/*
+ 	 * Off-trace cluster.
+ 	 *
+ 	 * Notably average out the symmetric pair of off-trace clusters to
+ 	 * ensure the resulting SLIT table is symmetric.
+ 	 */
+ 	x = i - (i % u);
+ 	y = j - (j % u);
+
+ 	for (i = x; i < x + u; i++) {
+ 		for (j = y; j < y + u; j++) {
+ 			d += node_distance(i, j);
+ 			d += node_distance(j, i);
+ 		}
+ 	}
+
+ 	return d / (2*u*u);
  }

  int arch_sched_node_distance(int from, int to)
···
  	switch (boot_cpu_data.x86_vfm) {
  	case INTEL_GRANITERAPIDS_X:
  	case INTEL_ATOM_DARKMONT_X:
-
- 		if (!x86_has_numa_in_package || topology_max_packages() == 1 ||
- 		    d < REMOTE_DISTANCE)
+ 		if (topology_max_packages() == 1 ||
+ 		    topology_num_nodes_per_package() < 3)
  			return d;

  		/*
- 		 * With SNC enabled, there could be too many levels of remote
- 		 * NUMA node distances, creating NUMA domain levels
- 		 * including local nodes and partial remote nodes.
- 		 *
- 		 * Trim finer distance tuning for NUMA nodes in remote package
- 		 * for the purpose of building sched domains. Group NUMA nodes
- 		 * in the remote package in the same sched group.
- 		 * Simplify NUMA domains and avoid extra NUMA levels including
- 		 * different remote NUMA nodes and local nodes.
- 		 *
- 		 * GNR and CWF don't expect systems with more than 2 packages
- 		 * and more than 2 hops between packages. Single average remote
- 		 * distance won't be appropriate if there are more than 2
- 		 * packages as average distance to different remote packages
- 		 * could be different.
+ 		 * Handle SNC-3 asymmetries.
  		 */
- 		WARN_ONCE(topology_max_packages() > 2,
- 			  "sched: Expect only up to 2 packages for GNR or CWF, "
- 			  "but saw %d packages when building sched domains.",
- 			  topology_max_packages());
-
- 		d = avg_remote_numa_distance();
+ 		return slit_cluster_distance(from, to);
  	}
  	return d;
  }
···
  		o = &cpu_data(i);

  		if (match_pkg(c, o) && !topology_same_node(c, o))
- 			x86_has_numa_in_package = true;
+ 			WARN_ON_ONCE(topology_num_nodes_per_package() == 1);

  		if ((i == cpu) || (has_smt && match_smt(c, o)))
  			link_mask(topology_sibling_cpumask, cpu, i);
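Editor's arithmetic check of the averaging in slit_cluster_distance(), using the GNR SNC-3 table quoted in the comment above: both directed sums over the off-trace block come to 217, and 434 / (2*3*3) truncates to 24, matching the sanitized table.

#include <stdio.h>

/* The example SLIT from the comment above (dual socket GNR, SNC-3). */
static const int slit[6][6] = {
	{ 10, 15, 17, 21, 28, 26 },
	{ 15, 10, 15, 23, 26, 23 },
	{ 17, 15, 10, 26, 23, 21 },
	{ 21, 28, 26, 10, 15, 17 },
	{ 23, 26, 23, 15, 10, 15 },
	{ 26, 23, 21, 17, 15, 10 },
};

int main(void)
{
	int u = 3;	/* nodes per package */
	long d = 0;

	/* Off-trace block between nodes 0-2 and 3-5, both directions. */
	for (int i = 0; i < u; i++) {
		for (int j = u; j < 2 * u; j++) {
			d += slit[i][j];
			d += slit[j][i];
		}
	}

	printf("%ld\n", d / (2 * u * u));	/* 434 / 18 -> prints 24 */
	return 0;
}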
+1
arch/x86/kernel/vmlinux.lds.S
···
  	.llvm_bb_addr_map : { *(.llvm_bb_addr_map) }
  #endif

+ 	MODINFO
  	ELF_DETAILS

  	DISCARDS
+8
arch/x86/mm/numa.c
···
  	[0 ... MAX_LOCAL_APIC-1] = NUMA_NO_NODE
  };

+ nodemask_t numa_phys_nodes_parsed __initdata;
+
  int numa_cpu_node(int cpu)
  {
  	u32 apicid = early_per_cpu(x86_cpu_to_apicid, cpu);
···
  	if (apicid != BAD_APICID)
  		return __apicid_to_node[apicid];
  	return NUMA_NO_NODE;
+ }
+
+ int __init num_phys_nodes(void)
+ {
+ 	return bitmap_weight(numa_phys_nodes_parsed.bits, MAX_NUMNODES);
  }

  cpumask_var_t node_to_cpumask_map[MAX_NUMNODES];
···
  		       0LLU, PFN_PHYS(max_pfn) - 1);

  	node_set(0, numa_nodes_parsed);
+ 	node_set(0, numa_phys_nodes_parsed);
  	numa_add_memblk(0, 0, PFN_PHYS(max_pfn));

  	return 0;
+2
arch/x86/mm/srat.c
···
  	}
  	set_apicid_to_node(apic_id, node);
  	node_set(node, numa_nodes_parsed);
+ 	node_set(node, numa_phys_nodes_parsed);
  	pr_debug("SRAT: PXM %u -> APIC 0x%04x -> Node %u\n", pxm, apic_id, node);
  }
···

  	set_apicid_to_node(apic_id, node);
  	node_set(node, numa_nodes_parsed);
+ 	node_set(node, numa_phys_nodes_parsed);
  	pr_debug("SRAT: PXM %u -> APIC 0x%02x -> Node %u\n", pxm, apic_id, node);
  }
+1 -1
arch/x86/platform/efi/efi.c
···
  	}

  	efi_check_for_embedded_firmwares();
- 	efi_free_boot_services();
+ 	efi_unmap_boot_services();

  	if (!efi_is_mixed())
  		efi_native_runtime_setup();
+52 -3
arch/x86/platform/efi/quirks.c
···

  		/*
  		 * Because the following memblock_reserve() is paired
- 		 * with memblock_free_late() for this region in
+ 		 * with free_reserved_area() for this region in
  		 * efi_free_boot_services(), we must be extremely
  		 * careful not to reserve, and subsequently free,
  		 * critical regions of memory (like the kernel image) or
···
  		pr_err("Failed to unmap VA mapping for 0x%llx\n", va);
  }

- void __init efi_free_boot_services(void)
+ struct efi_freeable_range {
+ 	u64 start;
+ 	u64 end;
+ };
+
+ static struct efi_freeable_range *ranges_to_free;
+
+ void __init efi_unmap_boot_services(void)
  {
  	struct efi_memory_map_data data = { 0 };
  	efi_memory_desc_t *md;
  	int num_entries = 0;
+ 	int idx = 0;
+ 	size_t sz;
  	void *new, *new_md;

  	/* Keep all regions for /sys/kernel/debug/efi */
  	if (efi_enabled(EFI_DBG))
  		return;
+
+ 	sz = sizeof(*ranges_to_free) * efi.memmap.nr_map + 1;
+ 	ranges_to_free = kzalloc(sz, GFP_KERNEL);
+ 	if (!ranges_to_free) {
+ 		pr_err("Failed to allocate storage for freeable EFI regions\n");
+ 		return;
+ 	}

  	for_each_efi_memory_desc(md) {
  		unsigned long long start = md->phys_addr;
···
  			start = SZ_1M;
  		}

- 		memblock_free_late(start, size);
+ 		/*
+ 		 * With CONFIG_DEFERRED_STRUCT_PAGE_INIT parts of the memory
+ 		 * map are still not initialized and we can't reliably free
+ 		 * memory here.
+ 		 * Queue the ranges to free at a later point.
+ 		 */
+ 		ranges_to_free[idx].start = start;
+ 		ranges_to_free[idx].end = start + size;
+ 		idx++;
  	}

  	if (!num_entries)
···
  		return;
  	}
  }
+
+ static int __init efi_free_boot_services(void)
+ {
+ 	struct efi_freeable_range *range = ranges_to_free;
+ 	unsigned long freed = 0;
+
+ 	if (!ranges_to_free)
+ 		return 0;
+
+ 	while (range->start) {
+ 		void *start = phys_to_virt(range->start);
+ 		void *end = phys_to_virt(range->end);
+
+ 		free_reserved_area(start, end, -1, NULL);
+ 		freed += (end - start);
+ 		range++;
+ 	}
+ 	kfree(ranges_to_free);
+
+ 	if (freed)
+ 		pr_info("Freeing EFI boot services memory: %ldK\n", freed / SZ_1K);
+
+ 	return 0;
+ }
+ arch_initcall(efi_free_boot_services);

  /*
   * A number of config table entries get remapped to virtual addresses
+1 -6
arch/x86/platform/pvh/enlighten.c
···

  const unsigned int __initconst pvh_start_info_sz = sizeof(pvh_start_info);

- static u64 __init pvh_get_root_pointer(void)
- {
- 	return pvh_start_info.rsdp_paddr;
- }
-
  /*
   * Xen guests are able to obtain the memory map from the hypervisor via the
   * HYPERVISOR_memory_op hypercall.
···
  	pvh_bootparams.hdr.version = (2 << 8) | 12;
  	pvh_bootparams.hdr.type_of_loader = ((xen_guest ? 0x9 : 0xb) << 4) | 0;

- 	x86_init.acpi.get_root_pointer = pvh_get_root_pointer;
+ 	pvh_bootparams.acpi_rsdp_addr = pvh_start_info.rsdp_paddr;
  }

  /*
+1 -1
arch/x86/xen/enlighten_pv.c
···

  	/*
  	 * Xen PV would need some work to support PCID: CR3 handling as well
- 	 * as xen_flush_tlb_others() would need updating.
+ 	 * as xen_flush_tlb_multi() would need updating.
  	 */
  	setup_clear_cpu_cap(X86_FEATURE_PCID);
+9
arch/x86/xen/mmu_pv.c
···
  static pud_t level3_user_vsyscall[PTRS_PER_PUD] __page_aligned_bss;
  #endif

+ static pud_t level3_ident_pgt[PTRS_PER_PUD] __page_aligned_bss;
+ static pmd_t level2_ident_pgt[PTRS_PER_PMD] __page_aligned_bss;
+
  /*
   * Protects atomic reservation decrease/increase against concurrent increases.
   * Also protects non-atomic updates of current_pages and balloon lists.
···

  	/* Zap identity mapping */
  	init_top_pgt[0] = __pgd(0);
+
+ 	init_top_pgt[pgd_index(__PAGE_OFFSET_BASE_L4)].pgd =
+ 		__pa_symbol(level3_ident_pgt) + _KERNPG_TABLE_NOENC;
+ 	init_top_pgt[pgd_index(__START_KERNEL_map)].pgd =
+ 		__pa_symbol(level3_kernel_pgt) + _PAGE_TABLE_NOENC;
+ 	level3_ident_pgt[0].pud = __pa_symbol(level2_ident_pgt) + _KERNPG_TABLE_NOENC;

  	/* Pre-constructed entries are in pfn, so convert to mfn */
  	/* L4[273] -> level3_ident_pgt */
+1 -2
block/blk-map.c
···
  		if (op_is_write(op))
  			memcpy(page_address(page), p, bytes);

- 		if (bio_add_page(bio, page, bytes, 0) < bytes)
- 			break;
+ 		__bio_add_page(bio, page, bytes, 0);

  		len -= bytes;
  		p += bytes;
+30 -15
block/blk-mq.c
···
  	}
  }

- static int blk_mq_realloc_tag_set_tags(struct blk_mq_tag_set *set,
- 				       int new_nr_hw_queues)
+ static struct blk_mq_tags **blk_mq_prealloc_tag_set_tags(
+ 		struct blk_mq_tag_set *set,
+ 		int new_nr_hw_queues)
  {
  	struct blk_mq_tags **new_tags;
  	int i;

  	if (set->nr_hw_queues >= new_nr_hw_queues)
- 		goto done;
+ 		return NULL;

  	new_tags = kcalloc_node(new_nr_hw_queues, sizeof(struct blk_mq_tags *),
  				GFP_KERNEL, set->numa_node);
  	if (!new_tags)
- 		return -ENOMEM;
+ 		return ERR_PTR(-ENOMEM);

  	if (set->tags)
  		memcpy(new_tags, set->tags, set->nr_hw_queues *
  		       sizeof(*set->tags));
- 	kfree(set->tags);
- 	set->tags = new_tags;

  	for (i = set->nr_hw_queues; i < new_nr_hw_queues; i++) {
- 		if (!__blk_mq_alloc_map_and_rqs(set, i)) {
- 			while (--i >= set->nr_hw_queues)
- 				__blk_mq_free_map_and_rqs(set, i);
- 			return -ENOMEM;
+ 		if (blk_mq_is_shared_tags(set->flags)) {
+ 			new_tags[i] = set->shared_tags;
+ 		} else {
+ 			new_tags[i] = blk_mq_alloc_map_and_rqs(set, i,
+ 					set->queue_depth);
+ 			if (!new_tags[i])
+ 				goto out_unwind;
  		}
  		cond_resched();
  	}

- done:
- 	set->nr_hw_queues = new_nr_hw_queues;
- 	return 0;
+ 	return new_tags;
+ out_unwind:
+ 	while (--i >= set->nr_hw_queues) {
+ 		if (!blk_mq_is_shared_tags(set->flags))
+ 			blk_mq_free_map_and_rqs(set, new_tags[i], i);
+ 	}
+ 	kfree(new_tags);
+ 	return ERR_PTR(-ENOMEM);
  }

  /*
···
  	unsigned int memflags;
  	int i;
  	struct xarray elv_tbl;
+ 	struct blk_mq_tags **new_tags;
  	bool queues_frozen = false;

  	lockdep_assert_held(&set->tag_list_lock);
···
  		if (blk_mq_elv_switch_none(q, &elv_tbl))
  			goto switch_back;

+ 	new_tags = blk_mq_prealloc_tag_set_tags(set, nr_hw_queues);
+ 	if (IS_ERR(new_tags))
+ 		goto switch_back;
+
  	list_for_each_entry(q, &set->tag_list, tag_set_list)
  		blk_mq_freeze_queue_nomemsave(q);
  	queues_frozen = true;
- 	if (blk_mq_realloc_tag_set_tags(set, nr_hw_queues) < 0)
- 		goto switch_back;
+ 	if (new_tags) {
+ 		kfree(set->tags);
+ 		set->tags = new_tags;
+ 	}
+ 	set->nr_hw_queues = nr_hw_queues;

  fallback:
  	blk_mq_update_queue_map(set);
+7 -1
block/blk-sysfs.c
··· 78 78 /* 79 79 * Serialize updating nr_requests with concurrent queue_requests_store() 80 80 * and switching elevator. 81 + * 82 + * Use trylock to avoid circular lock dependency with kernfs active 83 + * reference during concurrent disk deletion: 84 + * update_nr_hwq_lock -> kn->active (via del_gendisk -> kobject_del) 85 + * kn->active -> update_nr_hwq_lock (via this sysfs write path) 81 86 */ 82 - down_write(&set->update_nr_hwq_lock); 87 + if (!down_write_trylock(&set->update_nr_hwq_lock)) 88 + return -EBUSY; 83 89 84 90 if (nr == q->nr_requests) 85 91 goto unlock;
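The dependency cycle named in the new comment is a plain AB-BA inversion between two threads; schematically:

	/*
	 * T1: del_gendisk()                    T2: queue_requests_store()
	 *   down_write(&update_nr_hwq_lock)      kernfs takes kn->active
	 *   kobject_del()                        down_write(&update_nr_hwq_lock)
	 *     waits for kn->active                 waits for T1
	 *
	 * Neither thread can proceed. down_write_trylock() breaks the cycle
	 * by failing fast; userspace sees -EBUSY and can simply retry.
	 */
	if (!down_write_trylock(&set->update_nr_hwq_lock))
		return -EBUSY;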
+11 -1
block/elevator.c
··· 807 807 elv_iosched_load_module(ctx.name); 808 808 ctx.type = elevator_find_get(ctx.name); 809 809 810 - down_read(&set->update_nr_hwq_lock); 810 + /* 811 + * Use trylock to avoid circular lock dependency with kernfs active 812 + * reference during concurrent disk deletion: 813 + * update_nr_hwq_lock -> kn->active (via del_gendisk -> kobject_del) 814 + * kn->active -> update_nr_hwq_lock (via this sysfs write path) 815 + */ 816 + if (!down_read_trylock(&set->update_nr_hwq_lock)) { 817 + ret = -EBUSY; 818 + goto out; 819 + } 811 820 if (!blk_queue_no_elv_switch(q)) { 812 821 ret = elevator_change(q, &ctx); 813 822 if (!ret) ··· 826 817 } 827 818 up_read(&set->update_nr_hwq_lock); 828 819 820 + out: 829 821 if (ctx.type) 830 822 elevator_put(ctx.type); 831 823 return ret;
-9
crypto/Kconfig
··· 876 876 - blake2b-384 877 877 - blake2b-512 878 878 879 - Used by the btrfs filesystem. 880 - 881 879 See https://blake2.net for further information. 882 880 883 881 config CRYPTO_CMAC ··· 963 965 10118-3), including HMAC support. 964 966 965 967 This is required for IPsec AH (XFRM_AH) and IPsec ESP (XFRM_ESP). 966 - Used by the btrfs filesystem, Ceph, NFS, and SMB. 967 968 968 969 config CRYPTO_SHA512 969 970 tristate "SHA-384 and SHA-512" ··· 1036 1039 1037 1040 Extremely fast, working at speeds close to RAM limits. 1038 1041 1039 - Used by the btrfs filesystem. 1040 - 1041 1042 endmenu 1042 1043 1043 1044 menu "CRCs (cyclic redundancy checks)" ··· 1053 1058 on Communications, Vol. 41, No. 6, June 1993, selected for use with 1054 1059 iSCSI. 1055 1060 1056 - Used by btrfs, ext4, jbd2, NVMeoF/TCP, and iSCSI. 1057 - 1058 1061 config CRYPTO_CRC32 1059 1062 tristate "CRC32" 1060 1063 select CRYPTO_HASH 1061 1064 select CRC32 1062 1065 help 1063 1066 CRC32 CRC algorithm (IEEE 802.3) 1064 - 1065 - Used by RoCEv2 and f2fs. 1066 1067 1067 1068 endmenu 1068 1069
+2 -2
crypto/testmgr.c
··· 4132 4132 .fips_allowed = 1, 4133 4133 }, { 4134 4134 .alg = "authenc(hmac(sha224),cbc(aes))", 4135 - .generic_driver = "authenc(hmac-sha224-lib,cbc(aes-generic))", 4135 + .generic_driver = "authenc(hmac-sha224-lib,cbc(aes-lib))", 4136 4136 .test = alg_test_aead, 4137 4137 .suite = { 4138 4138 .aead = __VECS(hmac_sha224_aes_cbc_tv_temp) ··· 4194 4194 .fips_allowed = 1, 4195 4195 }, { 4196 4196 .alg = "authenc(hmac(sha384),cbc(aes))", 4197 - .generic_driver = "authenc(hmac-sha384-lib,cbc(aes-generic))", 4197 + .generic_driver = "authenc(hmac-sha384-lib,cbc(aes-lib))", 4198 4198 .test = alg_test_aead, 4199 4199 .suite = { 4200 4200 .aead = __VECS(hmac_sha384_aes_cbc_tv_temp)
+8 -15
drivers/accel/amdxdna/aie2_ctx.c
··· 186 186 cmd_abo = job->cmd_bo; 187 187 188 188 if (unlikely(job->job_timeout)) { 189 - amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_TIMEOUT); 189 + amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_TIMEOUT); 190 190 ret = -EINVAL; 191 191 goto out; 192 192 } 193 193 194 194 if (unlikely(!data) || unlikely(size != sizeof(u32))) { 195 - amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ABORT); 195 + amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_ABORT); 196 196 ret = -EINVAL; 197 197 goto out; 198 198 } ··· 202 202 if (status == AIE2_STATUS_SUCCESS) 203 203 amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_COMPLETED); 204 204 else 205 - amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ERROR); 205 + amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_ERROR); 206 206 207 207 out: 208 208 aie2_sched_notify(job); ··· 244 244 cmd_abo = job->cmd_bo; 245 245 246 246 if (unlikely(job->job_timeout)) { 247 - amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_TIMEOUT); 247 + amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_TIMEOUT); 248 248 ret = -EINVAL; 249 249 goto out; 250 250 } 251 251 252 252 if (unlikely(!data) || unlikely(size != sizeof(u32) * 3)) { 253 - amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ABORT); 253 + amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_ABORT); 254 254 ret = -EINVAL; 255 255 goto out; 256 256 } ··· 270 270 fail_cmd_idx, fail_cmd_status); 271 271 272 272 if (fail_cmd_status == AIE2_STATUS_SUCCESS) { 273 - amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ABORT); 273 + amdxdna_cmd_set_error(cmd_abo, job, fail_cmd_idx, ERT_CMD_STATE_ABORT); 274 274 ret = -EINVAL; 275 - goto out; 275 + } else { 276 + amdxdna_cmd_set_error(cmd_abo, job, fail_cmd_idx, ERT_CMD_STATE_ERROR); 276 277 } 277 - amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ERROR); 278 278 279 - if (amdxdna_cmd_get_op(cmd_abo) == ERT_CMD_CHAIN) { 280 - struct amdxdna_cmd_chain *cc = amdxdna_cmd_get_payload(cmd_abo, NULL); 281 - 282 - cc->error_index = fail_cmd_idx; 283 - if (cc->error_index >= cc->command_count) 284 - cc->error_index = 0; 285 - } 286 279 out: 287 280 aie2_sched_notify(job); 288 281 return ret;
+28 -8
drivers/accel/amdxdna/aie2_message.c
··· 40 40 return -ENODEV; 41 41 42 42 ret = xdna_send_msg_wait(xdna, ndev->mgmt_chann, msg); 43 - if (ret == -ETIME) { 44 - xdna_mailbox_stop_channel(ndev->mgmt_chann); 45 - xdna_mailbox_destroy_channel(ndev->mgmt_chann); 46 - ndev->mgmt_chann = NULL; 47 - } 43 + if (ret == -ETIME) 44 + aie2_destroy_mgmt_chann(ndev); 48 45 49 46 if (!ret && *hdl->status != AIE2_STATUS_SUCCESS) { 50 47 XDNA_ERR(xdna, "command opcode 0x%x failed, status 0x%x", ··· 293 296 } 294 297 295 298 intr_reg = i2x.mb_head_ptr_reg + 4; 296 - hwctx->priv->mbox_chann = xdna_mailbox_create_channel(ndev->mbox, &x2i, &i2x, 297 - intr_reg, ret); 299 + hwctx->priv->mbox_chann = xdna_mailbox_alloc_channel(ndev->mbox); 298 300 if (!hwctx->priv->mbox_chann) { 299 301 XDNA_ERR(xdna, "Not able to create channel"); 300 302 ret = -EINVAL; 301 303 goto del_ctx_req; 304 + } 305 + 306 + ret = xdna_mailbox_start_channel(hwctx->priv->mbox_chann, &x2i, &i2x, 307 + intr_reg, ret); 308 + if (ret) { 309 + XDNA_ERR(xdna, "Not able to create channel"); 310 + ret = -EINVAL; 311 + goto free_channel; 302 312 } 303 313 ndev->hwctx_num++; 304 314 ··· 314 310 315 311 return 0; 316 312 313 + free_channel: 314 + xdna_mailbox_free_channel(hwctx->priv->mbox_chann); 317 315 del_ctx_req: 318 316 aie2_destroy_context_req(ndev, hwctx->fw_ctx_id); 319 317 return ret; ··· 331 325 332 326 xdna_mailbox_stop_channel(hwctx->priv->mbox_chann); 333 327 ret = aie2_destroy_context_req(ndev, hwctx->fw_ctx_id); 334 - xdna_mailbox_destroy_channel(hwctx->priv->mbox_chann); 328 + xdna_mailbox_free_channel(hwctx->priv->mbox_chann); 335 329 XDNA_DBG(xdna, "Destroyed fw ctx %d", hwctx->fw_ctx_id); 336 330 hwctx->priv->mbox_chann = NULL; 337 331 hwctx->fw_ctx_id = -1; ··· 918 912 ndev->exec_msg_ops = &npu_exec_message_ops; 919 913 else 920 914 ndev->exec_msg_ops = &legacy_exec_message_ops; 915 + } 916 + 917 + void aie2_destroy_mgmt_chann(struct amdxdna_dev_hdl *ndev) 918 + { 919 + struct amdxdna_dev *xdna = ndev->xdna; 920 + 921 + drm_WARN_ON(&xdna->ddev, !mutex_is_locked(&xdna->dev_lock)); 922 + 923 + if (!ndev->mgmt_chann) 924 + return; 925 + 926 + xdna_mailbox_stop_channel(ndev->mgmt_chann); 927 + xdna_mailbox_free_channel(ndev->mgmt_chann); 928 + ndev->mgmt_chann = NULL; 921 929 } 922 930 923 931 static inline struct amdxdna_gem_obj *
+37 -29
drivers/accel/amdxdna/aie2_pci.c
··· 330 330 331 331 aie2_runtime_cfg(ndev, AIE2_RT_CFG_CLK_GATING, NULL); 332 332 aie2_mgmt_fw_fini(ndev); 333 - xdna_mailbox_stop_channel(ndev->mgmt_chann); 334 - xdna_mailbox_destroy_channel(ndev->mgmt_chann); 335 - ndev->mgmt_chann = NULL; 333 + aie2_destroy_mgmt_chann(ndev); 336 334 drmm_kfree(&xdna->ddev, ndev->mbox); 337 335 ndev->mbox = NULL; 338 336 aie2_psp_stop(ndev->psp_hdl); ··· 361 363 } 362 364 pci_set_master(pdev); 363 365 366 + mbox_res.ringbuf_base = ndev->sram_base; 367 + mbox_res.ringbuf_size = pci_resource_len(pdev, xdna->dev_info->sram_bar); 368 + mbox_res.mbox_base = ndev->mbox_base; 369 + mbox_res.mbox_size = MBOX_SIZE(ndev); 370 + mbox_res.name = "xdna_mailbox"; 371 + ndev->mbox = xdnam_mailbox_create(&xdna->ddev, &mbox_res); 372 + if (!ndev->mbox) { 373 + XDNA_ERR(xdna, "failed to create mailbox device"); 374 + ret = -ENODEV; 375 + goto disable_dev; 376 + } 377 + 378 + ndev->mgmt_chann = xdna_mailbox_alloc_channel(ndev->mbox); 379 + if (!ndev->mgmt_chann) { 380 + XDNA_ERR(xdna, "failed to alloc channel"); 381 + ret = -ENODEV; 382 + goto disable_dev; 383 + } 384 + 364 385 ret = aie2_smu_init(ndev); 365 386 if (ret) { 366 387 XDNA_ERR(xdna, "failed to init smu, ret %d", ret); 367 - goto disable_dev; 388 + goto free_channel; 368 389 } 369 390 370 391 ret = aie2_psp_start(ndev->psp_hdl); ··· 398 381 goto stop_psp; 399 382 } 400 383 401 - mbox_res.ringbuf_base = ndev->sram_base; 402 - mbox_res.ringbuf_size = pci_resource_len(pdev, xdna->dev_info->sram_bar); 403 - mbox_res.mbox_base = ndev->mbox_base; 404 - mbox_res.mbox_size = MBOX_SIZE(ndev); 405 - mbox_res.name = "xdna_mailbox"; 406 - ndev->mbox = xdnam_mailbox_create(&xdna->ddev, &mbox_res); 407 - if (!ndev->mbox) { 408 - XDNA_ERR(xdna, "failed to create mailbox device"); 409 - ret = -ENODEV; 410 - goto stop_psp; 411 - } 412 - 413 384 mgmt_mb_irq = pci_irq_vector(pdev, ndev->mgmt_chan_idx); 414 385 if (mgmt_mb_irq < 0) { 415 386 ret = mgmt_mb_irq; ··· 406 401 } 407 402 408 403 xdna_mailbox_intr_reg = ndev->mgmt_i2x.mb_head_ptr_reg + 4; 409 - ndev->mgmt_chann = xdna_mailbox_create_channel(ndev->mbox, 410 - &ndev->mgmt_x2i, 411 - &ndev->mgmt_i2x, 412 - xdna_mailbox_intr_reg, 413 - mgmt_mb_irq); 414 - if (!ndev->mgmt_chann) { 415 - XDNA_ERR(xdna, "failed to create management mailbox channel"); 404 + ret = xdna_mailbox_start_channel(ndev->mgmt_chann, 405 + &ndev->mgmt_x2i, 406 + &ndev->mgmt_i2x, 407 + xdna_mailbox_intr_reg, 408 + mgmt_mb_irq); 409 + if (ret) { 410 + XDNA_ERR(xdna, "failed to start management mailbox channel"); 416 411 ret = -EINVAL; 417 412 goto stop_psp; 418 413 } ··· 420 415 ret = aie2_mgmt_fw_init(ndev); 421 416 if (ret) { 422 417 XDNA_ERR(xdna, "initial mgmt firmware failed, ret %d", ret); 423 - goto destroy_mgmt_chann; 418 + goto stop_fw; 424 419 } 425 420 426 421 ret = aie2_pm_init(ndev); 427 422 if (ret) { 428 423 XDNA_ERR(xdna, "failed to init pm, ret %d", ret); 429 - goto destroy_mgmt_chann; 424 + goto stop_fw; 430 425 } 431 426 432 427 ret = aie2_mgmt_fw_query(ndev); 433 428 if (ret) { 434 429 XDNA_ERR(xdna, "failed to query fw, ret %d", ret); 435 - goto destroy_mgmt_chann; 430 + goto stop_fw; 436 431 } 437 432 438 433 ret = aie2_error_async_events_alloc(ndev); 439 434 if (ret) { 440 435 XDNA_ERR(xdna, "Allocate async events failed, ret %d", ret); 441 - goto destroy_mgmt_chann; 436 + goto stop_fw; 442 437 } 443 438 444 439 ndev->dev_status = AIE2_DEV_START; 445 440 446 441 return 0; 447 442 448 - destroy_mgmt_chann: 443 + stop_fw: 444 + aie2_suspend_fw(ndev); 449 445 xdna_mailbox_stop_channel(ndev->mgmt_chann); 450 - xdna_mailbox_destroy_channel(ndev->mgmt_chann); 451 446 stop_psp: 452 447 aie2_psp_stop(ndev->psp_hdl); 453 448 fini_smu: 454 449 aie2_smu_fini(ndev); 450 + free_channel: 451 + xdna_mailbox_free_channel(ndev->mgmt_chann); 452 + ndev->mgmt_chann = NULL; 455 453 disable_dev: 456 454 pci_disable_device(pdev); 457 455
+1
drivers/accel/amdxdna/aie2_pci.h
··· 303 303 304 304 /* aie2_message.c */ 305 305 void aie2_msg_init(struct amdxdna_dev_hdl *ndev); 306 + void aie2_destroy_mgmt_chann(struct amdxdna_dev_hdl *ndev); 306 307 int aie2_suspend_fw(struct amdxdna_dev_hdl *ndev); 307 308 int aie2_resume_fw(struct amdxdna_dev_hdl *ndev); 308 309 int aie2_set_runtime_cfg(struct amdxdna_dev_hdl *ndev, u32 type, u64 value);
+27
drivers/accel/amdxdna/amdxdna_ctx.c
··· 135 135 return INVALID_CU_IDX; 136 136 } 137 137 138 + int amdxdna_cmd_set_error(struct amdxdna_gem_obj *abo, 139 + struct amdxdna_sched_job *job, u32 cmd_idx, 140 + enum ert_cmd_state error_state) 141 + { 142 + struct amdxdna_client *client = job->hwctx->client; 143 + struct amdxdna_cmd *cmd = abo->mem.kva; 144 + struct amdxdna_cmd_chain *cc = NULL; 145 + 146 + cmd->header &= ~AMDXDNA_CMD_STATE; 147 + cmd->header |= FIELD_PREP(AMDXDNA_CMD_STATE, error_state); 148 + 149 + if (amdxdna_cmd_get_op(abo) == ERT_CMD_CHAIN) { 150 + cc = amdxdna_cmd_get_payload(abo, NULL); 151 + cc->error_index = (cmd_idx < cc->command_count) ? cmd_idx : 0; 152 + abo = amdxdna_gem_get_obj(client, cc->data[0], AMDXDNA_BO_CMD); 153 + if (!abo) 154 + return -EINVAL; 155 + cmd = abo->mem.kva; 156 + } 157 + 158 + memset(cmd->data, 0xff, abo->mem.size - sizeof(*cmd)); 159 + if (cc) 160 + amdxdna_gem_put_obj(abo); 161 + 162 + return 0; 163 + } 164 + 138 165 /* 139 166 * This should be called in close() and remove(). DO NOT call in other syscalls. 140 167 * This guarantee that when hwctx and resources will be released, if user
+3
drivers/accel/amdxdna/amdxdna_ctx.h
··· 167 167 168 168 void *amdxdna_cmd_get_payload(struct amdxdna_gem_obj *abo, u32 *size); 169 169 u32 amdxdna_cmd_get_cu_idx(struct amdxdna_gem_obj *abo); 170 + int amdxdna_cmd_set_error(struct amdxdna_gem_obj *abo, 171 + struct amdxdna_sched_job *job, u32 cmd_idx, 172 + enum ert_cmd_state error_state); 170 173 171 174 void amdxdna_sched_job_cleanup(struct amdxdna_sched_job *job); 172 175 void amdxdna_hwctx_remove_all(struct amdxdna_client *client);
+49 -50
drivers/accel/amdxdna/amdxdna_mailbox.c
··· 460 460 return ret; 461 461 } 462 462 463 - struct mailbox_channel * 464 - xdna_mailbox_create_channel(struct mailbox *mb, 465 - const struct xdna_mailbox_chann_res *x2i, 466 - const struct xdna_mailbox_chann_res *i2x, 467 - u32 iohub_int_addr, 468 - int mb_irq) 463 + struct mailbox_channel *xdna_mailbox_alloc_channel(struct mailbox *mb) 469 464 { 470 465 struct mailbox_channel *mb_chann; 471 - int ret; 472 - 473 - if (!is_power_of_2(x2i->rb_size) || !is_power_of_2(i2x->rb_size)) { 474 - pr_err("Ring buf size must be power of 2"); 475 - return NULL; 476 - } 477 466 478 467 mb_chann = kzalloc_obj(*mb_chann); 479 468 if (!mb_chann) 480 469 return NULL; 481 470 471 + INIT_WORK(&mb_chann->rx_work, mailbox_rx_worker); 472 + mb_chann->work_q = create_singlethread_workqueue(MAILBOX_NAME); 473 + if (!mb_chann->work_q) { 474 + MB_ERR(mb_chann, "Create workqueue failed"); 475 + goto free_chann; 476 + } 482 477 mb_chann->mb = mb; 478 + 479 + return mb_chann; 480 + 481 + free_chann: 482 + kfree(mb_chann); 483 + return NULL; 484 + } 485 + 486 + void xdna_mailbox_free_channel(struct mailbox_channel *mb_chann) 487 + { 488 + destroy_workqueue(mb_chann->work_q); 489 + kfree(mb_chann); 490 + } 491 + 492 + int 493 + xdna_mailbox_start_channel(struct mailbox_channel *mb_chann, 494 + const struct xdna_mailbox_chann_res *x2i, 495 + const struct xdna_mailbox_chann_res *i2x, 496 + u32 iohub_int_addr, 497 + int mb_irq) 498 + { 499 + int ret; 500 + 501 + if (!is_power_of_2(x2i->rb_size) || !is_power_of_2(i2x->rb_size)) { 502 + pr_err("Ring buf size must be power of 2"); 503 + return -EINVAL; 504 + } 505 + 483 506 mb_chann->msix_irq = mb_irq; 484 507 mb_chann->iohub_int_addr = iohub_int_addr; 485 508 memcpy(&mb_chann->res[CHAN_RES_X2I], x2i, sizeof(*x2i)); ··· 512 489 mb_chann->x2i_tail = mailbox_get_tailptr(mb_chann, CHAN_RES_X2I); 513 490 mb_chann->i2x_head = mailbox_get_headptr(mb_chann, CHAN_RES_I2X); 514 491 515 - INIT_WORK(&mb_chann->rx_work, mailbox_rx_worker); 516 - mb_chann->work_q = create_singlethread_workqueue(MAILBOX_NAME); 517 - if (!mb_chann->work_q) { 518 - MB_ERR(mb_chann, "Create workqueue failed"); 519 - goto free_and_out; 520 - } 521 - 522 492 /* Everything look good. Time to enable irq handler */ 523 493 ret = request_irq(mb_irq, mailbox_irq_handler, 0, MAILBOX_NAME, mb_chann); 524 494 if (ret) { 525 495 MB_ERR(mb_chann, "Failed to request irq %d ret %d", mb_irq, ret); 526 - goto destroy_wq; 496 + return ret; 527 497 } 528 498 529 499 mb_chann->bad_state = false; 530 500 mailbox_reg_write(mb_chann, mb_chann->iohub_int_addr, 0); 531 501 532 - MB_DBG(mb_chann, "Mailbox channel created (irq: %d)", mb_chann->msix_irq); 533 - return mb_chann; 534 - 535 - destroy_wq: 536 - destroy_workqueue(mb_chann->work_q); 537 - free_and_out: 538 - kfree(mb_chann); 539 - return NULL; 540 - } 541 - 542 - int xdna_mailbox_destroy_channel(struct mailbox_channel *mb_chann) 543 - { 544 - struct mailbox_msg *mb_msg; 545 - unsigned long msg_id; 546 - 547 - MB_DBG(mb_chann, "IRQ disabled and RX work cancelled"); 548 - free_irq(mb_chann->msix_irq, mb_chann); 549 - destroy_workqueue(mb_chann->work_q); 550 - /* We can clean up and release resources */ 551 - 552 - xa_for_each(&mb_chann->chan_xa, msg_id, mb_msg) 553 - mailbox_release_msg(mb_chann, mb_msg); 554 - 555 - xa_destroy(&mb_chann->chan_xa); 556 - 557 - MB_DBG(mb_chann, "Mailbox channel destroyed, irq: %d", mb_chann->msix_irq); 558 - kfree(mb_chann); 502 + MB_DBG(mb_chann, "Mailbox channel started (irq: %d)", mb_chann->msix_irq); 559 503 return 0; 560 504 } 561 505 562 506 void xdna_mailbox_stop_channel(struct mailbox_channel *mb_chann) 563 507 { 508 + struct mailbox_msg *mb_msg; 509 + unsigned long msg_id; 510 + 564 511 /* Disable an irq and wait. This might sleep. */ 565 - disable_irq(mb_chann->msix_irq); 512 + free_irq(mb_chann->msix_irq, mb_chann); 566 513 567 514 /* Cancel RX work and wait for it to finish */ 568 - cancel_work_sync(&mb_chann->rx_work); 569 - MB_DBG(mb_chann, "IRQ disabled and RX work cancelled"); 515 + drain_workqueue(mb_chann->work_q); 516 + 517 + /* We can clean up and release resources */ 518 + xa_for_each(&mb_chann->chan_xa, msg_id, mb_msg) 519 + mailbox_release_msg(mb_chann, mb_msg); 520 + xa_destroy(&mb_chann->chan_xa); 521 + 522 + MB_DBG(mb_chann, "Mailbox channel stopped, irq: %d", mb_chann->msix_irq); 570 523 } 571 524 572 525 struct mailbox *xdnam_mailbox_create(struct drm_device *ddev,
+17 -14
drivers/accel/amdxdna/amdxdna_mailbox.h
··· 74 74 const struct xdna_mailbox_res *res); 75 75 76 76 /* 77 - * xdna_mailbox_create_channel() -- Create a mailbox channel instance 77 + * xdna_mailbox_alloc_channel() -- alloc a mailbox channel 78 78 * 79 - * @mailbox: the handle return from xdna_mailbox_create() 79 + * @mb: mailbox handle 80 + */ 81 + struct mailbox_channel *xdna_mailbox_alloc_channel(struct mailbox *mb); 82 + 83 + /* 84 + * xdna_mailbox_start_channel() -- start a mailbox channel instance 85 + * 86 + * @mb_chann: the handle return from xdna_mailbox_alloc_channel() 80 87 * @x2i: host to firmware mailbox resources 81 88 * @i2x: firmware to host mailbox resources 82 89 * @xdna_mailbox_intr_reg: register addr of MSI-X interrupt ··· 91 84 * 92 85 * Return: If success, return a handle of mailbox channel. Otherwise, return NULL. 93 86 */ 94 - struct mailbox_channel * 95 - xdna_mailbox_create_channel(struct mailbox *mailbox, 96 - const struct xdna_mailbox_chann_res *x2i, 97 - const struct xdna_mailbox_chann_res *i2x, 98 - u32 xdna_mailbox_intr_reg, 99 - int mb_irq); 87 + int 88 + xdna_mailbox_start_channel(struct mailbox_channel *mb_chann, 89 + const struct xdna_mailbox_chann_res *x2i, 90 + const struct xdna_mailbox_chann_res *i2x, 91 + u32 xdna_mailbox_intr_reg, 92 + int mb_irq); 100 93 101 94 /* 102 - * xdna_mailbox_destroy_channel() -- destroy mailbox channel 95 + * xdna_mailbox_free_channel() -- free mailbox channel 103 96 * 104 97 * @mailbox_chann: the handle return from xdna_mailbox_create_channel() 105 - * 106 - * Return: if success, return 0. otherwise return error code 107 98 */ 108 - int xdna_mailbox_destroy_channel(struct mailbox_channel *mailbox_chann); 99 + void xdna_mailbox_free_channel(struct mailbox_channel *mailbox_chann); 109 100 110 101 /* 111 102 * xdna_mailbox_stop_channel() -- stop mailbox channel 112 103 * 113 104 * @mailbox_chann: the handle return from xdna_mailbox_create_channel() 114 - * 115 - * Return: if success, return 0. otherwise return error code 116 105 */ 117 106 void xdna_mailbox_stop_channel(struct mailbox_channel *mailbox_chann); 118 107
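Read together with the aie2 changes, the header now describes a four-stage lifecycle in place of the old create/destroy pair, which is what lets aie2_init() allocate the management channel before the SMU and PSP are brought up. A hedged usage sketch, assuming the x2i/i2x resources, intr_reg and irq values are already in scope:

	chann = xdna_mailbox_alloc_channel(mb);	/* software only: kzalloc + workqueue */
	if (!chann)
		return -ENOMEM;

	ret = xdna_mailbox_start_channel(chann, &x2i, &i2x, intr_reg, irq);
	if (ret) {				/* hardware side: ring pointers + irq */
		xdna_mailbox_free_channel(chann);
		return ret;
	}

	/* ... exchange messages ... */

	xdna_mailbox_stop_channel(chann);	/* free irq, drain work, release messages */
	xdna_mailbox_free_channel(chann);	/* destroy workqueue, kfree */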
+1 -1
drivers/accel/amdxdna/npu1_regs.c
··· 67 67 68 68 static const struct aie2_fw_feature_tbl npu1_fw_feature_table[] = { 69 69 { .major = 5, .min_minor = 7 }, 70 - { .features = BIT_U64(AIE2_NPU_COMMAND), .min_minor = 8 }, 70 + { .features = BIT_U64(AIE2_NPU_COMMAND), .major = 5, .min_minor = 8 }, 71 71 { 0 } 72 72 }; 73 73
+9 -3
drivers/accel/ethosu/ethosu_gem.c
··· 245 245 ((st->ifm.stride_kernel >> 1) & 0x1) + 1; 246 246 u32 stride_x = ((st->ifm.stride_kernel >> 5) & 0x2) + 247 247 (st->ifm.stride_kernel & 0x1) + 1; 248 - u32 ifm_height = st->ofm.height[2] * stride_y + 248 + s32 ifm_height = st->ofm.height[2] * stride_y + 249 249 st->ifm.height[2] - (st->ifm.pad_top + st->ifm.pad_bottom); 250 - u32 ifm_width = st->ofm.width * stride_x + 250 + s32 ifm_width = st->ofm.width * stride_x + 251 251 st->ifm.width - (st->ifm.pad_left + st->ifm.pad_right); 252 + 253 + if (ifm_height < 0 || ifm_width < 0) 254 + return -EINVAL; 252 255 253 256 len = feat_matrix_length(info, &st->ifm, ifm_width, 254 257 ifm_height, st->ifm.depth); ··· 420 417 return ret; 421 418 break; 422 419 case NPU_OP_ELEMENTWISE: 423 - use_ifm2 = !((st.ifm2.broadcast == 8) || (param == 5) || 420 + use_scale = ethosu_is_u65(edev) ? 421 + (st.ifm2.broadcast & 0x80) : 422 + (st.ifm2.broadcast == 8); 423 + use_ifm2 = !(use_scale || (param == 5) || 424 424 (param == 6) || (param == 7) || (param == 0x24)); 425 425 use_ifm = st.ifm.broadcast != 8; 426 426 ret = calc_sizes_elemwise(ddev, info, cmd, &st, use_ifm, use_ifm2);
+19 -9
drivers/accel/ethosu/ethosu_job.c
··· 143 143 return ret; 144 144 } 145 145 146 - static void ethosu_job_cleanup(struct kref *ref) 146 + static void ethosu_job_err_cleanup(struct ethosu_job *job) 147 147 { 148 - struct ethosu_job *job = container_of(ref, struct ethosu_job, 149 - refcount); 150 148 unsigned int i; 151 - 152 - pm_runtime_put_autosuspend(job->dev->base.dev); 153 - 154 - dma_fence_put(job->done_fence); 155 - dma_fence_put(job->inference_done_fence); 156 149 157 150 for (i = 0; i < job->region_cnt; i++) 158 151 drm_gem_object_put(job->region_bo[i]); ··· 153 160 drm_gem_object_put(job->cmd_bo); 154 161 155 162 kfree(job); 163 + } 164 + 165 + static void ethosu_job_cleanup(struct kref *ref) 166 + { 167 + struct ethosu_job *job = container_of(ref, struct ethosu_job, 168 + refcount); 169 + 170 + pm_runtime_put_autosuspend(job->dev->base.dev); 171 + 172 + dma_fence_put(job->done_fence); 173 + dma_fence_put(job->inference_done_fence); 174 + 175 + ethosu_job_err_cleanup(job); 156 176 } 157 177 158 178 static void ethosu_job_put(struct ethosu_job *job) ··· 460 454 } 461 455 } 462 456 ret = ethosu_job_push(ejob); 457 + if (!ret) { 458 + ethosu_job_put(ejob); 459 + return 0; 460 + } 463 461 464 462 out_cleanup_job: 465 463 if (ret) 466 464 drm_sched_job_cleanup(&ejob->base); 467 465 out_put_job: 468 - ethosu_job_put(ejob); 466 + ethosu_job_err_cleanup(ejob); 469 467 470 468 return ret; 471 469 }
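The split above separates two ownership states. Until ethosu_job_push() succeeds, the job has not handed its runtime-PM reference and fences over to the scheduler, so the failure path may only drop GEM references; the kref release path keeps the full cleanup. A schematic of the resulting call pattern (this reading is inferred from the hunk, not stated in it):

	/*
	 * ret = ethosu_job_push(ejob);
	 * if (!ret) {
	 *         ethosu_job_put(ejob);        // drop submitter ref; kref release
	 *         return 0;                    // later does PM put + fence puts
	 * }
	 * drm_sched_job_cleanup(&ejob->base);
	 * ethosu_job_err_cleanup(ejob);        // GEM refs + kfree only
	 */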
+3 -2
drivers/acpi/acpica/acpredef.h
··· 379 379 380 380 {{"_CPC", METHOD_0ARGS, 381 381 METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Variable-length (Ints/Bufs) */ 382 - PACKAGE_INFO(ACPI_PTYPE1_VAR, ACPI_RTYPE_INTEGER | ACPI_RTYPE_BUFFER, 0, 383 - 0, 0, 0), 382 + PACKAGE_INFO(ACPI_PTYPE1_VAR, 383 + ACPI_RTYPE_INTEGER | ACPI_RTYPE_BUFFER | 384 + ACPI_RTYPE_PACKAGE, 0, 0, 0, 0), 384 385 385 386 {{"_CR3", METHOD_0ARGS, /* ACPI 6.0 */ 386 387 METHOD_RETURNS(ACPI_RTYPE_INTEGER)}},
-9
drivers/acpi/device_pm.c
··· 1457 1457 return 0; 1458 1458 1459 1459 /* 1460 - * Skip devices whose ACPI companions don't support power management and 1461 - * don't have a wakeup GPE. 1462 - */ 1463 - if (!acpi_device_power_manageable(adev) && !acpi_device_can_wakeup(adev)) { 1464 - dev_dbg(dev, "No ACPI power management or wakeup GPE\n"); 1465 - return 0; 1466 - } 1467 - 1468 - /* 1469 1460 * Only attach the power domain to the first device if the 1470 1461 * companion is shared by multiple. This is to prevent doing power 1471 1462 * management twice.
+2
drivers/ata/libata-core.c
··· 4189 4189 ATA_QUIRK_FIRMWARE_WARN }, 4190 4190 4191 4191 /* Seagate disks with LPM issues */ 4192 + { "ST1000DM010-2EP102", NULL, ATA_QUIRK_NOLPM }, 4192 4193 { "ST2000DM008-2FR102", NULL, ATA_QUIRK_NOLPM }, 4193 4194 4194 4195 /* drives which fail FPDMA_AA activation (some may freeze afterwards) ··· 4232 4231 /* Devices that do not need bridging limits applied */ 4233 4232 { "MTRON MSP-SATA*", NULL, ATA_QUIRK_BRIDGE_OK }, 4234 4233 { "BUFFALO HD-QSU2/R5", NULL, ATA_QUIRK_BRIDGE_OK }, 4234 + { "QEMU HARDDISK", "2.5+", ATA_QUIRK_BRIDGE_OK }, 4235 4235 4236 4236 /* Devices which aren't very happy with higher link speeds */ 4237 4237 { "WD My Book", NULL, ATA_QUIRK_1_5_GBPS },
+2 -1
drivers/ata/libata-eh.c
··· 647 647 break; 648 648 } 649 649 650 - if (qc == ap->deferred_qc) { 650 + if (i < ATA_MAX_QUEUE && qc == ap->deferred_qc) { 651 651 /* 652 652 * This is a deferred command that timed out while 653 653 * waiting for the command queue to drain. Since the qc ··· 659 659 */ 660 660 WARN_ON_ONCE(qc->flags & ATA_QCFLAG_ACTIVE); 661 661 ap->deferred_qc = NULL; 662 + cancel_work(&ap->deferred_qc_work); 662 663 set_host_byte(scmd, DID_TIME_OUT); 663 664 scsi_eh_finish_cmd(scmd, &ap->eh_done_q); 664 665 } else if (i < ATA_MAX_QUEUE) {
+1
drivers/ata/libata-scsi.c
··· 1699 1699 1700 1700 scmd = qc->scsicmd; 1701 1701 ap->deferred_qc = NULL; 1702 + cancel_work(&ap->deferred_qc_work); 1702 1703 ata_qc_free(qc); 1703 1704 scmd->result = (DID_SOFT_ERROR << 16); 1704 1705 scsi_done(scmd);
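Both libata hunks enforce one invariant: whoever consumes or frees ap->deferred_qc must also cancel the queued work item, otherwise deferred_qc_work can later run against a freed or recycled qc. Condensed, with locking elided:

	/* Consume the deferred command exactly once. */
	qc = ap->deferred_qc;
	ap->deferred_qc = NULL;
	cancel_work(&ap->deferred_qc_work);	/* the worker must not see a stale qc */
	ata_qc_free(qc);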
+1 -10
drivers/base/base.h
··· 179 179 void driver_detach(const struct device_driver *drv); 180 180 void driver_deferred_probe_del(struct device *dev); 181 181 void device_set_deferred_probe_reason(const struct device *dev, struct va_format *vaf); 182 - static inline int driver_match_device_locked(const struct device_driver *drv, 183 - struct device *dev) 184 - { 185 - device_lock_assert(dev); 186 - 187 - return drv->bus->match ? drv->bus->match(dev, drv) : 1; 188 - } 189 - 190 182 static inline int driver_match_device(const struct device_driver *drv, 191 183 struct device *dev) 192 184 { 193 - guard(device)(dev); 194 - return driver_match_device_locked(drv, dev); 185 + return drv->bus->match ? drv->bus->match(dev, drv) : 1; 195 186 } 196 187 197 188 static inline void dev_sync_state(struct device *dev)
+1 -1
drivers/base/dd.c
··· 928 928 bool async_allowed; 929 929 int ret; 930 930 931 - ret = driver_match_device_locked(drv, dev); 931 + ret = driver_match_device(drv, dev); 932 932 if (ret == 0) { 933 933 /* no match */ 934 934 return 0;
+3 -2
drivers/crypto/atmel-sha204a.c
··· 52 52 rng->priv = 0; 53 53 } else { 54 54 work_data = kmalloc_obj(*work_data, GFP_ATOMIC); 55 - if (!work_data) 55 + if (!work_data) { 56 + atomic_dec(&i2c_priv->tfm_count); 56 57 return -ENOMEM; 57 - 58 + } 58 59 work_data->ctx = i2c_priv; 59 60 work_data->client = i2c_priv->client; 60 61
+1 -1
drivers/crypto/ccp/sev-dev-tsm.c
··· 378 378 return; 379 379 380 380 error_exit: 381 - kfree(t); 382 381 pr_err("Failed to enable SEV-TIO: ret=%d en=%d initdone=%d SEV=%d\n", 383 382 ret, t->tio_en, t->tio_init_done, boot_cpu_has(X86_FEATURE_SEV)); 383 + kfree(t); 384 384 } 385 385 386 386 void sev_tsm_uninit(struct sev_device *sev)
+4 -6
drivers/crypto/ccp/sev-dev.c
··· 1105 1105 { 1106 1106 struct psp_device *psp_master = psp_get_master_device(); 1107 1107 struct snp_hv_fixed_pages_entry *entry; 1108 - struct sev_device *sev; 1109 1108 unsigned int order; 1110 1109 struct page *page; 1111 1110 1112 - if (!psp_master || !psp_master->sev_data) 1111 + if (!psp_master) 1113 1112 return NULL; 1114 - 1115 - sev = psp_master->sev_data; 1116 1113 1117 1114 order = get_order(PMD_SIZE * num_2mb_pages); 1118 1115 ··· 1123 1126 * This API uses SNP_INIT_EX to transition allocated pages to HV_Fixed 1124 1127 * page state, fail if SNP is already initialized. 1125 1128 */ 1126 - if (sev->snp_initialized) 1129 + if (psp_master->sev_data && 1130 + ((struct sev_device *)psp_master->sev_data)->snp_initialized) 1127 1131 return NULL; 1128 1132 1129 1133 /* Re-use freed pages that match the request */ ··· 1160 1162 struct psp_device *psp_master = psp_get_master_device(); 1161 1163 struct snp_hv_fixed_pages_entry *entry, *nentry; 1162 1164 1163 - if (!psp_master || !psp_master->sev_data) 1165 + if (!psp_master) 1164 1166 return; 1165 1167 1166 1168 /*
+1 -1
drivers/firmware/efi/mokvar-table.c
··· 85 85 * as an alternative to ordinary EFI variables, due to platform-dependent 86 86 * limitations. The memory occupied by this table is marked as reserved. 87 87 * 88 - * This routine must be called before efi_free_boot_services() in order 88 + * This routine must be called before efi_unmap_boot_services() in order 89 89 * to guarantee that it can mark the table as reserved. 90 90 * 91 91 * Implicit inputs:
+8 -3
drivers/gpu/drm/i915/display/intel_psr.c
··· 1307 1307 u16 sink_y_granularity = crtc_state->has_panel_replay ? 1308 1308 connector->dp.panel_replay_caps.su_y_granularity : 1309 1309 connector->dp.psr_caps.su_y_granularity; 1310 - u16 sink_w_granularity = crtc_state->has_panel_replay ? 1311 - connector->dp.panel_replay_caps.su_w_granularity : 1312 - connector->dp.psr_caps.su_w_granularity; 1310 + u16 sink_w_granularity; 1311 + 1312 + if (crtc_state->has_panel_replay) 1313 + sink_w_granularity = connector->dp.panel_replay_caps.su_w_granularity == 1314 + DP_PANEL_REPLAY_FULL_LINE_GRANULARITY ? 1315 + crtc_hdisplay : connector->dp.panel_replay_caps.su_w_granularity; 1316 + else 1317 + sink_w_granularity = connector->dp.psr_caps.su_w_granularity; 1313 1318 1314 1319 /* PSR2 HW only send full lines so we only need to validate the width */ 1315 1320 if (crtc_hdisplay % sink_w_granularity)
+3
drivers/gpu/drm/nouveau/nouveau_connector.c
··· 1230 1230 u8 size = msg->size; 1231 1231 int ret; 1232 1232 1233 + if (pm_runtime_suspended(nv_connector->base.dev->dev)) 1234 + return -EBUSY; 1235 + 1233 1236 nv_encoder = find_encoder(&nv_connector->base, DCB_OUTPUT_DP); 1234 1237 if (!nv_encoder) 1235 1238 return -ENODEV;
+5 -4
drivers/gpu/drm/panthor/panthor_sched.c
··· 893 893 894 894 out_sync: 895 895 /* Make sure the CPU caches are invalidated before the seqno is read. 896 - * drm_gem_shmem_sync() is a NOP if map_wc=true, so no need to check 896 + * panthor_gem_sync() is a NOP if map_wc=true, so no need to check 897 897 * it here. 898 898 */ 899 - panthor_gem_sync(&bo->base.base, queue->syncwait.offset, 899 + panthor_gem_sync(&bo->base.base, 900 + DRM_PANTHOR_BO_SYNC_CPU_CACHE_FLUSH_AND_INVALIDATE, 901 + queue->syncwait.offset, 900 902 queue->syncwait.sync64 ? 901 903 sizeof(struct panthor_syncobj_64b) : 902 - sizeof(struct panthor_syncobj_32b), 903 - DRM_PANTHOR_BO_SYNC_CPU_CACHE_FLUSH_AND_INVALIDATE); 904 + sizeof(struct panthor_syncobj_32b)); 904 905 905 906 return queue->syncwait.kmap + queue->syncwait.offset; 906 907
+15 -1
drivers/gpu/drm/renesas/rz-du/rzg2l_mipi_dsi.c
··· 1122 1122 struct mipi_dsi_device *device) 1123 1123 { 1124 1124 struct rzg2l_mipi_dsi *dsi = host_to_rzg2l_mipi_dsi(host); 1125 + int bpp; 1125 1126 int ret; 1126 1127 1127 1128 if (device->lanes > dsi->num_data_lanes) { ··· 1132 1131 return -EINVAL; 1133 1132 } 1134 1133 1135 - switch (mipi_dsi_pixel_format_to_bpp(device->format)) { 1134 + bpp = mipi_dsi_pixel_format_to_bpp(device->format); 1135 + switch (bpp) { 1136 1136 case 24: 1137 1137 break; 1138 1138 case 18: ··· 1163 1161 } 1164 1162 1165 1163 drm_bridge_add(&dsi->bridge); 1164 + 1165 + /* 1166 + * Report the required division ratio setting for the MIPI clock dividers. 1167 + * 1168 + * vclk * bpp = hsclk * 8 * num_lanes 1169 + * 1170 + * vclk * DSI_AB_divider = hsclk * 16 1171 + * 1172 + * which simplifies to... 1173 + * DSI_AB_divider = bpp * 2 / num_lanes 1174 + */ 1175 + rzg2l_cpg_dsi_div_set_divider(bpp * 2 / dsi->lanes, PLL5_TARGET_DSI); 1166 1176 1167 1177 return 0; 1168 1178 }
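Plugging the formats accepted by the switch above into the comment's result makes the divider concrete (the lane counts here are chosen only for illustration):

	DSI_AB_divider = bpp * 2 / num_lanes
	               = 24 * 2 / 4 = 12	(RGB888 on 4 data lanes)
	               = 18 * 2 / 2 = 18	(RGB666 on 2 data lanes)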
+1
drivers/gpu/drm/scheduler/sched_main.c
··· 361 361 /** 362 362 * drm_sched_job_done - complete a job 363 363 * @s_job: pointer to the job which is done 364 + * @result: 0 on success, -ERRNO on error 364 365 * 365 366 * Finish the job's fence and resubmit the work items. 366 367 */
+2 -4
drivers/gpu/drm/solomon/ssd130x.c
··· 737 737 unsigned int height = drm_rect_height(rect); 738 738 unsigned int line_length = DIV_ROUND_UP(width, 8); 739 739 unsigned int page_height = SSD130X_PAGE_HEIGHT; 740 + u8 page_start = ssd130x->page_offset + y / page_height; 740 741 unsigned int pages = DIV_ROUND_UP(height, page_height); 741 742 struct drm_device *drm = &ssd130x->drm; 742 743 u32 array_idx = 0; ··· 775 774 */ 776 775 777 776 if (!ssd130x->page_address_mode) { 778 - u8 page_start; 779 - 780 777 /* Set address range for horizontal addressing mode */ 781 778 ret = ssd130x_set_col_range(ssd130x, ssd130x->col_offset + x, width); 782 779 if (ret < 0) 783 780 return ret; 784 781 785 - page_start = ssd130x->page_offset + y / page_height; 786 782 ret = ssd130x_set_page_range(ssd130x, page_start, pages); 787 783 if (ret < 0) 788 784 return ret; ··· 811 813 */ 812 814 if (ssd130x->page_address_mode) { 813 815 ret = ssd130x_set_page_pos(ssd130x, 814 - ssd130x->page_offset + i, 816 + page_start + i, 815 817 ssd130x->col_offset + x); 816 818 if (ret < 0) 817 819 return ret;
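The hoisted page_start is what makes the page-addressing branch honor the damage rectangle's vertical offset, which the old `page_offset + i` silently dropped. A worked case, assuming the usual 8-pixel page height and a zero page_offset:

	/* Partial update at y = 16, page_height = 8, page_offset = 0:
	 *
	 *   page_start = 0 + 16 / 8 = 2
	 *
	 * old loop: set_page_pos(0 + i, ...)  -> writes pages 0, 1, ...
	 * new loop: set_page_pos(2 + i, ...)  -> writes pages 2, 3, ...
	 *
	 * so the update now lands on the pages that actually cover the rect.
	 */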
+2 -2
drivers/gpu/drm/ttm/tests/ttm_bo_test.c
··· 222 222 KUNIT_FAIL(test, "Couldn't create ttm bo reserve task\n"); 223 223 224 224 /* Take a lock so the threaded reserve has to wait */ 225 - mutex_lock(&bo->base.resv->lock.base); 225 + dma_resv_lock(bo->base.resv, NULL); 226 226 227 227 wake_up_process(task); 228 228 msleep(20); 229 229 err = kthread_stop(task); 230 230 231 - mutex_unlock(&bo->base.resv->lock.base); 231 + dma_resv_unlock(bo->base.resv); 232 232 233 233 KUNIT_ASSERT_EQ(test, err, -ERESTARTSYS); 234 234 }
+5 -6
drivers/gpu/drm/ttm/ttm_bo.c
··· 1107 1107 static s64 1108 1108 ttm_bo_swapout_cb(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo) 1109 1109 { 1110 - struct ttm_resource *res = bo->resource; 1111 - struct ttm_place place = { .mem_type = res->mem_type }; 1110 + struct ttm_place place = { .mem_type = bo->resource->mem_type }; 1112 1111 struct ttm_bo_swapout_walk *swapout_walk = 1113 1112 container_of(walk, typeof(*swapout_walk), walk); 1114 1113 struct ttm_operation_ctx *ctx = walk->arg.ctx; ··· 1147 1148 /* 1148 1149 * Move to system cached 1149 1150 */ 1150 - if (res->mem_type != TTM_PL_SYSTEM) { 1151 + if (bo->resource->mem_type != TTM_PL_SYSTEM) { 1151 1152 struct ttm_resource *evict_mem; 1152 1153 struct ttm_place hop; 1153 1154 ··· 1179 1180 1180 1181 if (ttm_tt_is_populated(tt)) { 1181 1182 spin_lock(&bdev->lru_lock); 1182 - ttm_resource_del_bulk_move(res, bo); 1183 + ttm_resource_del_bulk_move(bo->resource, bo); 1183 1184 spin_unlock(&bdev->lru_lock); 1184 1185 1185 1186 ret = ttm_tt_swapout(bdev, tt, swapout_walk->gfp_flags); 1186 1187 1187 1188 spin_lock(&bdev->lru_lock); 1188 1189 if (ret) 1189 - ttm_resource_add_bulk_move(res, bo); 1190 - ttm_resource_move_to_lru_tail(res); 1190 + ttm_resource_add_bulk_move(bo->resource, bo); 1191 + ttm_resource_move_to_lru_tail(bo->resource); 1191 1192 spin_unlock(&bdev->lru_lock); 1192 1193 } 1193 1194
+1 -1
drivers/gpu/drm/ttm/ttm_pool_internal.h
··· 17 17 return pool->alloc_flags & TTM_ALLOCATION_POOL_USE_DMA32; 18 18 } 19 19 20 - static inline bool ttm_pool_beneficial_order(struct ttm_pool *pool) 20 + static inline unsigned int ttm_pool_beneficial_order(struct ttm_pool *pool) 21 21 { 22 22 return pool->alloc_flags & 0xff; 23 23 }
+2 -1
drivers/gpu/drm/xe/xe_vm_madvise.c
··· 453 453 madvise_range.num_vmas, 454 454 args->atomic.val)) { 455 455 err = -EINVAL; 456 - goto madv_fini; 456 + goto free_vmas; 457 457 } 458 458 } 459 459 ··· 490 490 err_fini: 491 491 if (madvise_range.has_bo_vmas) 492 492 drm_exec_fini(&exec); 493 + free_vmas: 493 494 kfree(madvise_range.vmas); 494 495 madvise_range.vmas = NULL; 495 496 madv_fini:
+14
drivers/gpu/drm/xe/xe_wa.c
··· 258 258 LSN_DIM_Z_WGT(1))) 259 259 }, 260 260 261 + /* Xe2_HPM */ 262 + 263 + { XE_RTP_NAME("16021867713"), 264 + XE_RTP_RULES(MEDIA_VERSION(1301), 265 + ENGINE_CLASS(VIDEO_DECODE)), 266 + XE_RTP_ACTIONS(SET(VDBOX_CGCTL3F1C(0), MFXPIPE_CLKGATE_DIS)), 267 + XE_RTP_ENTRY_FLAG(FOREACH_ENGINE), 268 + }, 269 + { XE_RTP_NAME("14019449301"), 270 + XE_RTP_RULES(MEDIA_VERSION(1301), ENGINE_CLASS(VIDEO_DECODE)), 271 + XE_RTP_ACTIONS(SET(VDBOX_CGCTL3F08(0), CG3DDISHRS_CLKGATE_DIS)), 272 + XE_RTP_ENTRY_FLAG(FOREACH_ENGINE), 273 + }, 274 + 261 275 /* Xe3_LPG */ 262 276 263 277 { XE_RTP_NAME("14021871409"),
+4 -3
drivers/hid/hid-apple.c
··· 365 365 { "A3R" }, 366 366 { "hfd.cn" }, 367 367 { "WKB603" }, 368 + { "TH87" }, /* EPOMAKER TH87 BT mode */ 369 + { "HFD Epomaker TH87" }, /* EPOMAKER TH87 USB mode */ 370 + { "2.4G Wireless Receiver" }, /* EPOMAKER TH87 dongle */ 368 371 }; 369 372 370 373 static bool apple_is_non_apple_keyboard(struct hid_device *hdev) ··· 689 686 hid_info(hdev, 690 687 "fixing up Magic Keyboard battery report descriptor\n"); 691 688 *rsize = *rsize - 1; 692 - rdesc = kmemdup(rdesc + 1, *rsize, GFP_KERNEL); 693 - if (!rdesc) 694 - return NULL; 689 + rdesc = rdesc + 1; 695 690 696 691 rdesc[0] = 0x05; 697 692 rdesc[1] = 0x01;
+14 -4
drivers/hid/hid-asus.c
··· 1399 1399 */ 1400 1400 if (*rsize == rsize_orig && 1401 1401 rdesc[offs] == 0x09 && rdesc[offs + 1] == 0x76) { 1402 - *rsize = rsize_orig + 1; 1403 - rdesc = kmemdup(rdesc, *rsize, GFP_KERNEL); 1404 - if (!rdesc) 1405 - return NULL; 1402 + __u8 *new_rdesc; 1403 + 1404 + new_rdesc = devm_kzalloc(&hdev->dev, rsize_orig + 1, 1405 + GFP_KERNEL); 1406 + if (!new_rdesc) 1407 + return rdesc; 1406 1408 1407 1409 hid_info(hdev, "Fixing up %s keyb report descriptor\n", 1408 1410 drvdata->quirks & QUIRK_T100CHI ? 1409 1411 "T100CHI" : "T90CHI"); 1412 + 1413 + memcpy(new_rdesc, rdesc, rsize_orig); 1414 + *rsize = rsize_orig + 1; 1415 + rdesc = new_rdesc; 1416 + 1410 1417 memmove(rdesc + offs + 4, rdesc + offs + 2, 12); 1411 1418 rdesc[offs] = 0x19; 1412 1419 rdesc[offs + 1] = 0x00; ··· 1497 1490 { HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK, 1498 1491 USB_DEVICE_ID_ASUSTEK_ROG_NKEY_ALLY_X), 1499 1492 QUIRK_USE_KBD_BACKLIGHT | QUIRK_ROG_NKEY_KEYBOARD | QUIRK_ROG_ALLY_XPAD }, 1493 + { HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK, 1494 + USB_DEVICE_ID_ASUSTEK_XGM_2023), 1495 + }, 1500 1496 { HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK, 1501 1497 USB_DEVICE_ID_ASUSTEK_ROG_CLAYMORE_II_KEYBOARD), 1502 1498 QUIRK_ROG_CLAYMORE_II_KEYBOARD },
+1 -1
drivers/hid/hid-cmedia.c
··· 99 99 { 100 100 struct cmhid *cm = hid_get_drvdata(hid); 101 101 102 - if (len != CM6533_JD_RAWEV_LEN) 102 + if (len != CM6533_JD_RAWEV_LEN || !(hid->claimed & HID_CLAIMED_INPUT)) 103 103 goto out; 104 104 if (memcmp(data+CM6533_JD_SFX_OFFSET, ji_sfx, sizeof(ji_sfx))) 105 105 goto out;
+1 -1
drivers/hid/hid-creative-sb0540.c
··· 153 153 u64 code, main_code; 154 154 int key; 155 155 156 - if (len != 6) 156 + if (len != 6 || !(hid->claimed & HID_CLAIMED_INPUT)) 157 157 return 0; 158 158 159 159 /* From daemons/hw_hiddev.c sb0540_rec() in lirc */
+1
drivers/hid/hid-ids.h
··· 229 229 #define USB_DEVICE_ID_ASUSTEK_ROG_NKEY_ALLY_X 0x1b4c 230 230 #define USB_DEVICE_ID_ASUSTEK_ROG_CLAYMORE_II_KEYBOARD 0x196b 231 231 #define USB_DEVICE_ID_ASUSTEK_FX503VD_KEYBOARD 0x1869 232 + #define USB_DEVICE_ID_ASUSTEK_XGM_2023 0x1a9a 232 233 233 234 #define USB_VENDOR_ID_ATEN 0x0557 234 235 #define USB_DEVICE_ID_ATEN_UC100KM 0x2004
+2 -4
drivers/hid/hid-magicmouse.c
··· 990 990 */ 991 991 if ((is_usb_magicmouse2(hdev->vendor, hdev->product) || 992 992 is_usb_magictrackpad2(hdev->vendor, hdev->product)) && 993 - *rsize == 83 && rdesc[46] == 0x84 && rdesc[58] == 0x85) { 993 + *rsize >= 83 && rdesc[46] == 0x84 && rdesc[58] == 0x85) { 994 994 hid_info(hdev, 995 995 "fixing up magicmouse battery report descriptor\n"); 996 996 *rsize = *rsize - 1; 997 - rdesc = kmemdup(rdesc + 1, *rsize, GFP_KERNEL); 998 - if (!rdesc) 999 - return NULL; 997 + rdesc = rdesc + 1; 1000 998 1001 999 rdesc[0] = 0x05; 1002 1000 rdesc[1] = 0x01;
+2
drivers/hid/hid-mcp2221.c
··· 353 353 usleep_range(90, 100); 354 354 retries++; 355 355 } else { 356 + usleep_range(980, 1000); 357 + mcp_cancel_last_cmd(mcp); 356 358 return ret; 357 359 } 358 360 } else {
+38 -5
drivers/hid/hid-multitouch.c
··· 77 77 #define MT_QUIRK_ORIENTATION_INVERT BIT(22) 78 78 #define MT_QUIRK_APPLE_TOUCHBAR BIT(23) 79 79 #define MT_QUIRK_YOGABOOK9I BIT(24) 80 + #define MT_QUIRK_KEEP_LATENCY_ON_CLOSE BIT(25) 80 81 81 82 #define MT_INPUTMODE_TOUCHSCREEN 0x02 82 83 #define MT_INPUTMODE_TOUCHPAD 0x03 ··· 215 214 #define MT_CLS_WIN_8_DISABLE_WAKEUP 0x0016 216 215 #define MT_CLS_WIN_8_NO_STICKY_FINGERS 0x0017 217 216 #define MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU 0x0018 218 + #define MT_CLS_WIN_8_KEEP_LATENCY_ON_CLOSE 0x0019 218 219 219 220 /* vendor specific classes */ 220 221 #define MT_CLS_3M 0x0101 ··· 235 233 #define MT_CLS_SMART_TECH 0x0113 236 234 #define MT_CLS_APPLE_TOUCHBAR 0x0114 237 235 #define MT_CLS_YOGABOOK9I 0x0115 236 + #define MT_CLS_EGALAX_P80H84 0x0116 238 237 #define MT_CLS_SIS 0x0457 239 238 240 239 #define MT_DEFAULT_MAXCONTACT 10 ··· 336 333 MT_QUIRK_HOVERING | 337 334 MT_QUIRK_CONTACT_CNT_ACCURATE | 338 335 MT_QUIRK_WIN8_PTP_BUTTONS, 336 + .export_all_inputs = true }, 337 + { .name = MT_CLS_WIN_8_KEEP_LATENCY_ON_CLOSE, 338 + .quirks = MT_QUIRK_ALWAYS_VALID | 339 + MT_QUIRK_IGNORE_DUPLICATES | 340 + MT_QUIRK_HOVERING | 341 + MT_QUIRK_CONTACT_CNT_ACCURATE | 342 + MT_QUIRK_STICKY_FINGERS | 343 + MT_QUIRK_WIN8_PTP_BUTTONS | 344 + MT_QUIRK_KEEP_LATENCY_ON_CLOSE, 339 345 .export_all_inputs = true }, 340 346 341 347 /* ··· 449 437 MT_QUIRK_HOVERING | 450 438 MT_QUIRK_YOGABOOK9I, 451 439 .export_all_inputs = true 440 + }, 441 + { .name = MT_CLS_EGALAX_P80H84, 442 + .quirks = MT_QUIRK_ALWAYS_VALID | 443 + MT_QUIRK_IGNORE_DUPLICATES | 444 + MT_QUIRK_CONTACT_CNT_ACCURATE, 445 + }, 452 446 { } 453 447 }; ··· 866 849 if ((cls->name == MT_CLS_WIN_8 || 867 850 cls->name == MT_CLS_WIN_8_FORCE_MULTI_INPUT || 868 851 cls->name == MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU || 869 - cls->name == MT_CLS_WIN_8_DISABLE_WAKEUP) && 852 + cls->name == MT_CLS_WIN_8_DISABLE_WAKEUP || 853 + cls->name == MT_CLS_WIN_8_KEEP_LATENCY_ON_CLOSE) && 870 854 (field->application == HID_DG_TOUCHPAD || 871 855 field->application == HID_DG_TOUCHSCREEN)) 872 856 app->quirks |= MT_QUIRK_CONFIDENCE; ··· 1780 1762 int ret; 1781 1763 1782 1764 if (td->is_haptic_touchpad && (td->mtclass.name == MT_CLS_WIN_8 || 1783 - td->mtclass.name == MT_CLS_WIN_8_FORCE_MULTI_INPUT)) { 1765 + td->mtclass.name == MT_CLS_WIN_8_FORCE_MULTI_INPUT || 1766 + td->mtclass.name == MT_CLS_WIN_8_KEEP_LATENCY_ON_CLOSE)) { 1784 1767 if (hid_haptic_input_configured(hdev, td->haptic, hi) == 0) 1785 1768 td->is_haptic_touchpad = false; 1786 1769 } else { ··· 2094 2075 2095 2076 static void mt_on_hid_hw_close(struct hid_device *hdev) 2096 2077 { 2097 - mt_set_modes(hdev, HID_LATENCY_HIGH, TOUCHPAD_REPORT_NONE); 2078 + struct mt_device *td = hid_get_drvdata(hdev); 2079 + 2080 + if (td->mtclass.quirks & MT_QUIRK_KEEP_LATENCY_ON_CLOSE) 2081 + mt_set_modes(hdev, HID_LATENCY_NORMAL, TOUCHPAD_REPORT_NONE); 2082 + else 2083 + mt_set_modes(hdev, HID_LATENCY_HIGH, TOUCHPAD_REPORT_NONE); 2098 2084 } 2099 2085 2100 2086 /* ··· 2239 2215 { .driver_data = MT_CLS_EGALAX_SERIAL, 2240 2216 MT_USB_DEVICE(USB_VENDOR_ID_DWAV, 2241 2217 USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C000) }, 2242 - { .driver_data = MT_CLS_EGALAX, 2243 - MT_USB_DEVICE(USB_VENDOR_ID_DWAV, 2218 + { .driver_data = MT_CLS_EGALAX_P80H84, 2219 + HID_DEVICE(HID_BUS_ANY, HID_GROUP_MULTITOUCH_WIN_8, 2220 + USB_VENDOR_ID_DWAV, 2244 2221 USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C002) }, 2245 2222 2246 2223 /* Elan devices */ ··· 2485 2460 { .driver_data = MT_CLS_NSMU, 2486 2461 MT_USB_DEVICE(USB_VENDOR_ID_UNITEC, 2487 2462 USB_DEVICE_ID_UNITEC_USB_TOUCH_0A19) }, 2463 + 2464 + /* Uniwill touchpads */ 2465 + { .driver_data = MT_CLS_WIN_8_KEEP_LATENCY_ON_CLOSE, 2466 + HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, 2467 + USB_VENDOR_ID_PIXART, 0x0255) }, 2468 + { .driver_data = MT_CLS_WIN_8_KEEP_LATENCY_ON_CLOSE, 2469 + HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, 2470 + USB_VENDOR_ID_PIXART, 0x0274) }, 2488 2471 2489 2472 /* VTL panels */ 2490 2473 { .driver_data = MT_CLS_VTL,
+1 -1
drivers/hid/hid-zydacron.c
··· 114 114 unsigned key; 115 115 unsigned short index; 116 116 117 - if (report->id == data[0]) { 117 + if (report->id == data[0] && (hdev->claimed & HID_CLAIMED_INPUT)) { 118 118 119 119 /* break keys */ 120 120 for (index = 0; index < 4; index++) {
+2
drivers/hid/intel-ish-hid/ipc/hw-ish.h
··· 39 39 #define PCI_DEVICE_ID_INTEL_ISH_PTL_H 0xE345 40 40 #define PCI_DEVICE_ID_INTEL_ISH_PTL_P 0xE445 41 41 #define PCI_DEVICE_ID_INTEL_ISH_WCL 0x4D45 42 + #define PCI_DEVICE_ID_INTEL_ISH_NVL_H 0xD354 43 + #define PCI_DEVICE_ID_INTEL_ISH_NVL_S 0x6E78 42 44 43 45 #define REVISION_ID_CHT_A0 0x6 44 46 #define REVISION_ID_CHT_Ax_SI 0x0
+12
drivers/hid/intel-ish-hid/ipc/pci-ish.c
··· 28 28 ISHTP_DRIVER_DATA_LNL_M, 29 29 ISHTP_DRIVER_DATA_PTL, 30 30 ISHTP_DRIVER_DATA_WCL, 31 + ISHTP_DRIVER_DATA_NVL_H, 32 + ISHTP_DRIVER_DATA_NVL_S, 31 33 }; 32 34 33 35 #define ISH_FW_GEN_LNL_M "lnlm" 34 36 #define ISH_FW_GEN_PTL "ptl" 35 37 #define ISH_FW_GEN_WCL "wcl" 38 + #define ISH_FW_GEN_NVL_H "nvlh" 39 + #define ISH_FW_GEN_NVL_S "nvls" 36 40 37 41 #define ISH_FIRMWARE_PATH(gen) "intel/ish/ish_" gen ".bin" 38 42 #define ISH_FIRMWARE_PATH_ALL "intel/ish/ish_*.bin" ··· 50 46 }, 51 47 [ISHTP_DRIVER_DATA_WCL] = { 52 48 .fw_generation = ISH_FW_GEN_WCL, 49 + }, 50 + [ISHTP_DRIVER_DATA_NVL_H] = { 51 + .fw_generation = ISH_FW_GEN_NVL_H, 52 + }, 53 + [ISHTP_DRIVER_DATA_NVL_S] = { 54 + .fw_generation = ISH_FW_GEN_NVL_S, 53 55 }, 54 56 }; 55 57 ··· 86 76 {PCI_DEVICE_DATA(INTEL, ISH_PTL_H, ISHTP_DRIVER_DATA_PTL)}, 87 77 {PCI_DEVICE_DATA(INTEL, ISH_PTL_P, ISHTP_DRIVER_DATA_PTL)}, 88 78 {PCI_DEVICE_DATA(INTEL, ISH_WCL, ISHTP_DRIVER_DATA_WCL)}, 79 + {PCI_DEVICE_DATA(INTEL, ISH_NVL_H, ISHTP_DRIVER_DATA_NVL_H)}, 80 + {PCI_DEVICE_DATA(INTEL, ISH_NVL_S, ISHTP_DRIVER_DATA_NVL_S)}, 89 81 {} 90 82 }; 91 83 MODULE_DEVICE_TABLE(pci, ish_pci_tbl);
+7 -4
drivers/hid/usbhid/hid-pidff.c
··· 1452 1452 hid_warn(pidff->hid, "unknown ramp effect layout\n"); 1453 1453 1454 1454 if (PIDFF_FIND_FIELDS(set_condition, PID_SET_CONDITION, 1)) { 1455 - if (test_and_clear_bit(FF_SPRING, dev->ffbit) || 1456 - test_and_clear_bit(FF_DAMPER, dev->ffbit) || 1457 - test_and_clear_bit(FF_FRICTION, dev->ffbit) || 1458 - test_and_clear_bit(FF_INERTIA, dev->ffbit)) 1455 + bool test = false; 1456 + 1457 + test |= test_and_clear_bit(FF_SPRING, dev->ffbit); 1458 + test |= test_and_clear_bit(FF_DAMPER, dev->ffbit); 1459 + test |= test_and_clear_bit(FF_FRICTION, dev->ffbit); 1460 + test |= test_and_clear_bit(FF_INERTIA, dev->ffbit); 1461 + if (test) 1459 1462 hid_warn(pidff->hid, "unknown condition effect layout\n"); 1460 1463 } 1461 1464
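The rewrite matters because || short-circuits: the first test_and_clear_bit() that returns true stops evaluation, so the later bits were never cleared and stayed advertised to userspace. Accumulating with |= forces every side effect to run. In miniature:

	/* bits 0 and 1 both set in map */
	bool hit = test_and_clear_bit(0, map) || test_and_clear_bit(1, map);
	/* hit is true, but bit 1 is still set: the second call never ran */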
-10
drivers/hwmon/Kconfig
··· 1927 1927 This driver can also be built as a module. If so, the module 1928 1928 will be called raspberrypi-hwmon. 1929 1929 1930 - config SENSORS_SA67MCU 1931 - tristate "Kontron sa67mcu hardware monitoring driver" 1932 - depends on MFD_SL28CPLD || COMPILE_TEST 1933 - help 1934 - If you say yes here you get support for the voltage and temperature 1935 - monitor of the sa67 board management controller. 1936 - 1937 - This driver can also be built as a module. If so, the module 1938 - will be called sa67mcu-hwmon. 1939 - 1940 1930 config SENSORS_SL28CPLD 1941 1931 tristate "Kontron sl28cpld hardware monitoring driver" 1942 1932 depends on MFD_SL28CPLD || COMPILE_TEST
-1
drivers/hwmon/Makefile
··· 199 199 obj-$(CONFIG_SENSORS_PWM_FAN) += pwm-fan.o 200 200 obj-$(CONFIG_SENSORS_QNAP_MCU_HWMON) += qnap-mcu-hwmon.o 201 201 obj-$(CONFIG_SENSORS_RASPBERRYPI_HWMON) += raspberrypi-hwmon.o 202 - obj-$(CONFIG_SENSORS_SA67MCU) += sa67mcu-hwmon.o 203 202 obj-$(CONFIG_SENSORS_SBTSI) += sbtsi_temp.o 204 203 obj-$(CONFIG_SENSORS_SBRMI) += sbrmi.o 205 204 obj-$(CONFIG_SENSORS_SCH56XX_COMMON)+= sch56xx-common.o
+4 -2
drivers/hwmon/aht10.c
··· 37 37 #define AHT10_CMD_MEAS 0b10101100 38 38 #define AHT10_CMD_RST 0b10111010 39 39 40 - #define DHT20_CMD_INIT 0x71 40 + #define AHT20_CMD_INIT 0b10111110 41 + 42 + #define DHT20_CMD_INIT 0b01110001 41 43 42 44 /* 43 45 * Flags in the answer byte/command ··· 343 341 data->meas_size = AHT20_MEAS_SIZE; 344 342 data->crc8 = true; 345 343 crc8_populate_msb(crc8_table, AHT20_CRC8_POLY); 346 - data->init_cmd = AHT10_CMD_INIT; 344 + data->init_cmd = AHT20_CMD_INIT; 347 345 break; 348 346 case dht20: 349 347 data->meas_size = AHT20_MEAS_SIZE;
+4 -1
drivers/hwmon/it87.c
··· 3590 3590 { 3591 3591 struct platform_device *pdev = to_platform_device(dev); 3592 3592 struct it87_data *data = dev_get_drvdata(dev); 3593 + int err; 3593 3594 3594 3595 it87_resume_sio(pdev); 3595 3596 3596 - it87_lock(data); 3597 + err = it87_lock(data); 3598 + if (err) 3599 + return err; 3597 3600 3598 3601 it87_check_pwm(dev); 3599 3602 it87_check_limit_regs(data);
+26 -25
drivers/hwmon/macsmc-hwmon.c
··· 22 22 23 23 #include <linux/bitfield.h> 24 24 #include <linux/hwmon.h> 25 + #include <linux/math64.h> 25 26 #include <linux/mfd/macsmc.h> 26 27 #include <linux/module.h> 27 28 #include <linux/of.h> ··· 131 130 if (ret < 0) 132 131 return ret; 133 132 134 - *p = mult_frac(val, scale, 65536); 133 + *p = mul_u64_u32_div(val, scale, 65536); 135 134 136 135 return 0; 137 136 } ··· 141 140 * them. 142 141 */ 143 142 static int macsmc_hwmon_read_f32_scaled(struct apple_smc *smc, smc_key key, 144 - int *p, int scale) 143 + long *p, int scale) 145 144 { 146 145 u32 fval; 147 146 u64 val; ··· 163 162 val = 0; 164 163 else if (exp < 0) 165 164 val >>= -exp; 166 - else if (exp != 0 && (val & ~((1UL << (64 - exp)) - 1))) /* overflow */ 165 + else if (exp != 0 && (val & ~((1ULL << (64 - exp)) - 1))) /* overflow */ 167 166 val = U64_MAX; 168 167 else 169 168 val <<= exp; 170 169 171 170 if (fval & FLT_SIGN_MASK) { 172 - if (val > (-(s64)INT_MIN)) 173 - *p = INT_MIN; 171 + if (val > (u64)LONG_MAX + 1) 172 + *p = LONG_MIN; 174 173 else 175 - *p = -val; 174 + *p = -(long)val; 176 175 } else { 177 - if (val > INT_MAX) 178 - *p = INT_MAX; 176 + if (val > (u64)LONG_MAX) 177 + *p = LONG_MAX; 179 178 else 180 - *p = val; 179 + *p = (long)val; 181 180 } 182 181 183 182 return 0; ··· 196 195 switch (sensor->info.type_code) { 197 196 /* 32-bit IEEE 754 float */ 198 197 case __SMC_KEY('f', 'l', 't', ' '): { 199 - u32 flt_ = 0; 198 + long flt_ = 0; 200 199 201 200 ret = macsmc_hwmon_read_f32_scaled(smc, sensor->macsmc_key, 202 201 &flt_, scale); ··· 215 214 if (ret) 216 215 return ret; 217 216 218 - *val = (long)ioft; 217 + if (ioft > LONG_MAX) 218 + *val = LONG_MAX; 219 + else 220 + *val = (long)ioft; 219 221 break; 220 222 } 221 223 default: ··· 228 224 return 0; 229 225 } 230 226 231 - static int macsmc_hwmon_write_f32(struct apple_smc *smc, smc_key key, int value) 227 + static int macsmc_hwmon_write_f32(struct apple_smc *smc, smc_key key, long value) 232 228 { 233 229 u64 val; 234 230 u32 fval = 0; 235 - int exp = 0, neg; 231 + int exp, neg; 236 232 233 + neg = value < 0; 237 234 val = abs(value); 238 - neg = val != value; 239 235 240 236 if (val) { 241 - int msb = __fls(val) - exp; 237 + exp = __fls(val); 242 238 243 - if (msb > 23) { 244 - val >>= msb - FLT_MANT_BIAS; 245 - exp -= msb - FLT_MANT_BIAS; 246 - } else if (msb < 23) { 247 - val <<= FLT_MANT_BIAS - msb; 248 - exp += msb; 249 - } 239 + if (exp > 23) 240 + val >>= exp - 23; 241 + else 242 + val <<= 23 - exp; 250 243 251 244 fval = FIELD_PREP(FLT_SIGN_MASK, neg) | 252 245 FIELD_PREP(FLT_EXP_MASK, exp + FLT_EXP_BIAS) | 253 - FIELD_PREP(FLT_MANT_MASK, val); 246 + FIELD_PREP(FLT_MANT_MASK, val & FLT_MANT_MASK); 254 247 } 255 248 256 249 return apple_smc_write_u32(smc, key, fval); ··· 664 663 if (!hwmon->volt.sensors) 665 664 return -ENOMEM; 666 665 667 - for_each_child_of_node_with_prefix(hwmon_node, key_node, "volt-") { 668 - sensor = &hwmon->temp.sensors[hwmon->temp.count]; 666 + for_each_child_of_node_with_prefix(hwmon_node, key_node, "voltage-") { 667 + sensor = &hwmon->volt.sensors[hwmon->volt.count]; 669 668 if (!macsmc_hwmon_create_sensor(hwmon->dev, hwmon->smc, key_node, sensor)) { 670 669 sensor->attrs = HWMON_I_INPUT; 671 670
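A worked pass through macsmc_hwmon_write_f32() shows the hand-rolled binary32 encoding, assuming the FLT_* fields follow IEEE 754 single precision (23 mantissa bits, exponent bias 127):

	/* value = 3:
	 *   neg = 0, val = 3, exp = __fls(3) = 1
	 *   exp <= 23, so val <<= 23 - 1       -> 0x00c00000
	 *   mantissa = val & FLT_MANT_MASK     -> 0x00400000 (implicit 1 dropped)
	 *   exponent field = 1 + 127 = 128
	 *   fval = (128 << 23) | 0x00400000    -> 0x40400000 == 3.0f
	 */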
+1 -1
drivers/hwmon/max6639.c
··· 607 607 return err; 608 608 609 609 /* Fans PWM polarity high by default */ 610 - err = regmap_write(data->regmap, MAX6639_REG_FAN_CONFIG2a(i), 0x00); 610 + err = regmap_write(data->regmap, MAX6639_REG_FAN_CONFIG2a(i), 0x02); 611 611 if (err) 612 612 return err; 613 613
+10 -9
drivers/hwmon/pmbus/q54sj108a2.c
··· 79 79 int idx = *idxp; 80 80 struct q54sj108a2_data *psu = to_psu(idxp, idx); 81 81 char data[I2C_SMBUS_BLOCK_MAX + 2] = { 0 }; 82 - char data_char[I2C_SMBUS_BLOCK_MAX + 2] = { 0 }; 82 + char data_char[I2C_SMBUS_BLOCK_MAX * 2 + 2] = { 0 }; 83 + char *out = data; 83 84 char *res; 84 85 85 86 switch (idx) { ··· 151 150 if (rc < 0) 152 151 return rc; 153 152 154 - res = bin2hex(data, data_char, 32); 155 - rc = res - data; 156 - 153 + res = bin2hex(data_char, data, rc); 154 + rc = res - data_char; 155 + out = data_char; 157 156 break; 158 157 case Q54SJ108A2_DEBUGFS_FLASH_KEY: 159 158 rc = i2c_smbus_read_block_data(psu->client, PMBUS_FLASH_KEY_WRITE, data); 160 159 if (rc < 0) 161 160 return rc; 162 161 163 - res = bin2hex(data, data_char, 4); 164 - rc = res - data; 165 - 162 + res = bin2hex(data_char, data, rc); 163 + rc = res - data_char; 164 + out = data_char; 166 165 break; 167 166 default: 168 167 return -EINVAL; 169 168 } 170 169 171 - data[rc] = '\n'; 170 + out[rc] = '\n'; 172 171 rc += 2; 173 172 174 - return simple_read_from_buffer(buf, count, ppos, data, rc); 173 + return simple_read_from_buffer(buf, count, ppos, out, rc); 175 174 } 176 175 177 176 static ssize_t q54sj108a2_debugfs_write(struct file *file, const char __user *buf,
-161
drivers/hwmon/sa67mcu-hwmon.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * sl67mcu hardware monitoring driver 4 - * 5 - * Copyright 2025 Kontron Europe GmbH 6 - */ 7 - 8 - #include <linux/bitfield.h> 9 - #include <linux/hwmon.h> 10 - #include <linux/kernel.h> 11 - #include <linux/mod_devicetable.h> 12 - #include <linux/module.h> 13 - #include <linux/platform_device.h> 14 - #include <linux/property.h> 15 - #include <linux/regmap.h> 16 - 17 - #define SA67MCU_VOLTAGE(n) (0x00 + ((n) * 2)) 18 - #define SA67MCU_TEMP(n) (0x04 + ((n) * 2)) 19 - 20 - struct sa67mcu_hwmon { 21 - struct regmap *regmap; 22 - u32 offset; 23 - }; 24 - 25 - static int sa67mcu_hwmon_read(struct device *dev, 26 - enum hwmon_sensor_types type, u32 attr, 27 - int channel, long *input) 28 - { 29 - struct sa67mcu_hwmon *hwmon = dev_get_drvdata(dev); 30 - unsigned int offset; 31 - u8 reg[2]; 32 - int ret; 33 - 34 - switch (type) { 35 - case hwmon_in: 36 - switch (attr) { 37 - case hwmon_in_input: 38 - offset = hwmon->offset + SA67MCU_VOLTAGE(channel); 39 - break; 40 - default: 41 - return -EOPNOTSUPP; 42 - } 43 - break; 44 - case hwmon_temp: 45 - switch (attr) { 46 - case hwmon_temp_input: 47 - offset = hwmon->offset + SA67MCU_TEMP(channel); 48 - break; 49 - default: 50 - return -EOPNOTSUPP; 51 - } 52 - break; 53 - default: 54 - return -EOPNOTSUPP; 55 - } 56 - 57 - /* Reading the low byte will capture the value */ 58 - ret = regmap_bulk_read(hwmon->regmap, offset, reg, ARRAY_SIZE(reg)); 59 - if (ret) 60 - return ret; 61 - 62 - *input = reg[1] << 8 | reg[0]; 63 - 64 - /* Temperatures are s16 and in 0.1degC steps. */ 65 - if (type == hwmon_temp) 66 - *input = sign_extend32(*input, 15) * 100; 67 - 68 - return 0; 69 - } 70 - 71 - static const struct hwmon_channel_info * const sa67mcu_hwmon_info[] = { 72 - HWMON_CHANNEL_INFO(in, 73 - HWMON_I_INPUT | HWMON_I_LABEL, 74 - HWMON_I_INPUT | HWMON_I_LABEL), 75 - HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT), 76 - NULL 77 - }; 78 - 79 - static const char *const sa67mcu_hwmon_in_labels[] = { 80 - "VDDIN", 81 - "VDD_RTC", 82 - }; 83 - 84 - static int sa67mcu_hwmon_read_string(struct device *dev, 85 - enum hwmon_sensor_types type, u32 attr, 86 - int channel, const char **str) 87 - { 88 - switch (type) { 89 - case hwmon_in: 90 - switch (attr) { 91 - case hwmon_in_label: 92 - *str = sa67mcu_hwmon_in_labels[channel]; 93 - return 0; 94 - default: 95 - return -EOPNOTSUPP; 96 - } 97 - default: 98 - return -EOPNOTSUPP; 99 - } 100 - } 101 - 102 - static const struct hwmon_ops sa67mcu_hwmon_ops = { 103 - .visible = 0444, 104 - .read = sa67mcu_hwmon_read, 105 - .read_string = sa67mcu_hwmon_read_string, 106 - }; 107 - 108 - static const struct hwmon_chip_info sa67mcu_hwmon_chip_info = { 109 - .ops = &sa67mcu_hwmon_ops, 110 - .info = sa67mcu_hwmon_info, 111 - }; 112 - 113 - static int sa67mcu_hwmon_probe(struct platform_device *pdev) 114 - { 115 - struct sa67mcu_hwmon *hwmon; 116 - struct device *hwmon_dev; 117 - int ret; 118 - 119 - if (!pdev->dev.parent) 120 - return -ENODEV; 121 - 122 - hwmon = devm_kzalloc(&pdev->dev, sizeof(*hwmon), GFP_KERNEL); 123 - if (!hwmon) 124 - return -ENOMEM; 125 - 126 - hwmon->regmap = dev_get_regmap(pdev->dev.parent, NULL); 127 - if (!hwmon->regmap) 128 - return -ENODEV; 129 - 130 - ret = device_property_read_u32(&pdev->dev, "reg", &hwmon->offset); 131 - if (ret) 132 - return -EINVAL; 133 - 134 - hwmon_dev = devm_hwmon_device_register_with_info(&pdev->dev, 135 - "sa67mcu_hwmon", hwmon, 136 - &sa67mcu_hwmon_chip_info, 137 - NULL); 138 - if (IS_ERR(hwmon_dev)) 139 - dev_err(&pdev->dev, "failed to register as hwmon device"); 140 - 141 - return PTR_ERR_OR_ZERO(hwmon_dev); 142 - } 143 - 144 - static const struct of_device_id sa67mcu_hwmon_of_match[] = { 145 - { .compatible = "kontron,sa67mcu-hwmon", }, 146 - {} 147 - }; 148 - MODULE_DEVICE_TABLE(of, sa67mcu_hwmon_of_match); 149 - 150 - static struct platform_driver sa67mcu_hwmon_driver = { 151 - .probe = sa67mcu_hwmon_probe, 152 - .driver = { 153 - .name = "sa67mcu-hwmon", 154 - .of_match_table = sa67mcu_hwmon_of_match, 155 - }, 156 - }; 157 - module_platform_driver(sa67mcu_hwmon_driver); 158 - 159 - MODULE_DESCRIPTION("sa67mcu Hardware Monitoring Driver"); 160 - MODULE_AUTHOR("Michael Walle <mwalle@kernel.org>"); 161 - MODULE_LICENSE("GPL");
+10 -4
drivers/i2c/busses/i2c-i801.c
··· 310 310 311 311 /* 312 312 * If set to true the host controller registers are reserved for 313 - * ACPI AML use. 313 + * ACPI AML use. Protected by acpi_lock. 314 314 */ 315 315 bool acpi_reserved; 316 + struct mutex acpi_lock; 316 317 }; 317 318 318 319 #define FEATURE_SMBUS_PEC BIT(0) ··· 895 894 int hwpec, ret; 896 895 struct i801_priv *priv = i2c_get_adapdata(adap); 897 896 898 - if (priv->acpi_reserved) 897 + mutex_lock(&priv->acpi_lock); 898 + if (priv->acpi_reserved) { 899 + mutex_unlock(&priv->acpi_lock); 899 900 return -EBUSY; 901 + } 900 902 901 903 pm_runtime_get_sync(&priv->pci_dev->dev); 902 904 ··· 939 935 iowrite8(SMBHSTSTS_INUSE_STS | STATUS_FLAGS, SMBHSTSTS(priv)); 940 936 941 937 pm_runtime_put_autosuspend(&priv->pci_dev->dev); 938 + mutex_unlock(&priv->acpi_lock); 942 939 return ret; 943 940 } ··· 1470 1465 * further access from the driver itself. This device is now owned 1471 1466 * by the system firmware. 1472 1467 */ 1473 - i2c_lock_bus(&priv->adapter, I2C_LOCK_SEGMENT); 1468 + mutex_lock(&priv->acpi_lock); 1474 1469 1475 1470 if (!priv->acpi_reserved && i801_acpi_is_smbus_ioport(priv, address)) { 1476 1471 priv->acpi_reserved = true; ··· 1490 1485 else 1491 1486 status = acpi_os_write_port(address, (u32)*value, bits); 1492 1487 1493 - i2c_unlock_bus(&priv->adapter, I2C_LOCK_SEGMENT); 1488 + mutex_unlock(&priv->acpi_lock); 1494 1489 1495 1490 return status; 1496 1491 } ··· 1550 1545 priv->adapter.dev.parent = &dev->dev; 1551 1546 acpi_use_parent_companion(&priv->adapter.dev); 1552 1547 priv->adapter.retries = 3; 1548 + mutex_init(&priv->acpi_lock); 1553 1549 1554 1550 priv->pci_dev = dev; 1555 1551 priv->features = id->driver_data;
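The locking pattern this change introduces, reduced to a minimal sketch (hypothetical function name; the real transfer path is i801_access()): the same dedicated mutex now covers both the reservation check on the transfer side and the point where the ACPI op-region handler sets acpi_reserved, so a transfer can no longer race past a reservation that happens mid-check.

static int i801_reserved_xfer_sketch(struct i801_priv *priv)
{
	int ret = 0;

	mutex_lock(&priv->acpi_lock);
	if (priv->acpi_reserved) {
		/* firmware owns the controller; refuse the transfer */
		mutex_unlock(&priv->acpi_lock);
		return -EBUSY;
	}
	/* ... SMBus transaction runs with the lock held, so the ACPI
	 * handler cannot flip acpi_reserved mid-transfer ... */
	mutex_unlock(&priv->acpi_lock);
	return ret;
}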
+3
drivers/media/dvb-core/dvb_net.c
··· 228 228 unsigned char hlen = (p->ule_sndu_type & 0x0700) >> 8; 229 229 unsigned char htype = p->ule_sndu_type & 0x00FF; 230 230 231 + if (htype >= ARRAY_SIZE(ule_mandatory_ext_handlers)) 232 + return -1; 233 + 231 234 /* Discriminate mandatory and optional extension headers. */ 232 235 if (hlen == 0) { 233 236 /* Mandatory extension header */
+7 -2
drivers/net/bonding/bond_main.c
··· 324 324 } 325 325 } 326 326 327 - bool bond_xdp_check(struct bonding *bond, int mode) 327 + bool __bond_xdp_check(int mode, int xmit_policy) 328 328 { 329 329 switch (mode) { 330 330 case BOND_MODE_ROUNDROBIN: ··· 335 335 /* vlan+srcmac is not supported with XDP as in most cases the 802.1q 336 336 * payload is not in the packet due to hardware offload. 337 337 */ 338 - if (bond->params.xmit_policy != BOND_XMIT_POLICY_VLAN_SRCMAC) 338 + if (xmit_policy != BOND_XMIT_POLICY_VLAN_SRCMAC) 339 339 return true; 340 340 fallthrough; 341 341 default: 342 342 return false; 343 343 } 344 + } 345 + 346 + bool bond_xdp_check(struct bonding *bond, int mode) 347 + { 348 + return __bond_xdp_check(mode, bond->params.xmit_policy); 344 349 } 345 350 346 351 /*---------------------------------- VLAN -----------------------------------*/
+2
drivers/net/bonding/bond_options.c
··· 1575 1575 static int bond_option_xmit_hash_policy_set(struct bonding *bond, 1576 1576 const struct bond_opt_value *newval) 1577 1577 { 1578 + if (bond->xdp_prog && !__bond_xdp_check(BOND_MODE(bond), newval->value)) 1579 + return -EOPNOTSUPP; 1578 1580 netdev_dbg(bond->dev, "Setting xmit hash policy to %s (%llu)\n", 1579 1581 newval->string, newval->value); 1580 1582 bond->params.xmit_policy = newval->value;
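The two bonding hunks are one refactor: __bond_xdp_check() is now a pure function of (mode, xmit_policy), so the option setter can test a candidate policy before storing it, whereas the old predicate could only see the already-committed bond->params. A condensed sketch of the setter's new shape (simplified from the hunk above, not the full option-handling path):

static int xmit_policy_set_sketch(struct bonding *bond, int new_policy)
{
	/* validate first: an attached XDP program restricts which
	 * xmit policies the bond may use */
	if (bond->xdp_prog && !__bond_xdp_check(BOND_MODE(bond), new_policy))
		return -EOPNOTSUPP;

	bond->params.xmit_policy = new_policy;	/* commit only on success */
	return 0;
}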
+1
drivers/net/can/dummy_can.c
··· 241 241 242 242 dev->netdev_ops = &dummy_can_netdev_ops; 243 243 dev->ethtool_ops = &dummy_can_ethtool_ops; 244 + dev->flags |= IFF_ECHO; /* enable echo handling */ 244 245 priv = netdev_priv(dev); 245 246 priv->can.bittiming_const = &dummy_can_bittiming_const; 246 247 priv->can.bitrate_max = 20 * MEGA /* BPS */;
+14 -1
drivers/net/can/spi/mcp251x.c
··· 1214 1214 { 1215 1215 struct mcp251x_priv *priv = netdev_priv(net); 1216 1216 struct spi_device *spi = priv->spi; 1217 + bool release_irq = false; 1217 1218 unsigned long flags = 0; 1218 1219 int ret; 1219 1220 ··· 1258 1257 return 0; 1259 1258 1260 1259 out_free_irq: 1261 - free_irq(spi->irq, priv); 1260 + /* The IRQ handler might be running, and if so it will be waiting 1261 + * for the lock. But free_irq() must wait for the handler to finish, 1262 + * so calling it here would deadlock. 1263 + * 1264 + * Setting priv->force_quit will let the handler exit right away 1265 + * without any access to the hardware. This makes it safe to call 1266 + * free_irq() after the lock is released. 1267 + */ 1268 + priv->force_quit = 1; 1269 + release_irq = true; 1270 + 1262 1271 mcp251x_hw_sleep(spi); 1263 1272 out_close: 1264 1273 mcp251x_power_enable(priv->transceiver, 0); 1265 1274 close_candev(net); 1266 1275 mutex_unlock(&priv->mcp_lock); 1276 + if (release_irq) 1277 + free_irq(spi->irq, priv); 1267 1278 return ret; 1268 1279 }
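The essence of the fix is an ordering rule, sketched below (error labels and the rest of the open path omitted): mark the handler dead while still holding the lock, finish the hardware teardown, and only call free_irq() after unlocking.

static void mcp251x_open_err_sketch(struct mcp251x_priv *priv,
				    struct spi_device *spi)
{
	/* still under priv->mcp_lock, which the IRQ thread may be
	 * blocked on right now */
	priv->force_quit = 1;	/* handler exits without touching hardware */
	mcp251x_hw_sleep(spi);
	mutex_unlock(&priv->mcp_lock);
	/* free_irq() waits for the handler; safe only now, because the
	 * handler can no longer be stuck waiting for mcp_lock */
	free_irq(spi->irq, priv);
}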
+6 -1
drivers/net/can/usb/ems_usb.c
··· 445 445 start = CPC_HEADER_SIZE; 446 446 447 447 while (msg_count) { 448 + if (start + CPC_MSG_HEADER_LEN > urb->actual_length) { 449 + netdev_err(netdev, "format error\n"); 450 + break; 451 + } 452 + 448 453 msg = (struct ems_cpc_msg *)&ibuf[start]; 449 454 450 455 switch (msg->type) { ··· 479 474 start += CPC_MSG_HEADER_LEN + msg->length; 480 475 msg_count--; 481 476 482 - if (start > urb->transfer_buffer_length) { 477 + if (start > urb->actual_length) { 483 478 netdev_err(netdev, "format error\n"); 484 479 break; 485 480 }
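Both added checks are instances of one bounded-parse rule: validate that the fixed header fits inside urb->actual_length before dereferencing it, and that the payload the header claims also fits before acting on it. The loop shape, condensed (a sketch, with dispatch hoisted behind both checks rather than between them as in the driver):

	while (msg_count--) {
		/* the header, including msg->length itself, must fit */
		if (start + CPC_MSG_HEADER_LEN > urb->actual_length)
			break;
		msg = (struct ems_cpc_msg *)&ibuf[start];

		start += CPC_MSG_HEADER_LEN + msg->length;
		/* the payload the header claims must also fit */
		if (start > urb->actual_length)
			break;
		/* ... dispatch msg ... */
	}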
+17 -13
drivers/net/can/usb/esd_usb.c
··· 272 272 273 273 struct usb_anchor rx_submitted; 274 274 275 + unsigned int rx_pipe; 276 + unsigned int tx_pipe; 277 + 275 278 int net_count; 276 279 u32 version; 277 280 int rxinitdone; ··· 540 537 } 541 538 542 539 resubmit_urb: 543 - usb_fill_bulk_urb(urb, dev->udev, usb_rcvbulkpipe(dev->udev, 1), 540 + usb_fill_bulk_urb(urb, dev->udev, dev->rx_pipe, 544 541 urb->transfer_buffer, ESD_USB_RX_BUFFER_SIZE, 545 542 esd_usb_read_bulk_callback, dev); 546 543 ··· 629 626 { 630 627 int actual_length; 631 628 632 - return usb_bulk_msg(dev->udev, 633 - usb_sndbulkpipe(dev->udev, 2), 634 - msg, 629 + return usb_bulk_msg(dev->udev, dev->tx_pipe, msg, 635 630 msg->hdr.len * sizeof(u32), /* convert to # of bytes */ 636 631 &actual_length, 637 632 1000); ··· 640 639 { 641 640 int actual_length; 642 641 643 - return usb_bulk_msg(dev->udev, 644 - usb_rcvbulkpipe(dev->udev, 1), 645 - msg, 646 - sizeof(*msg), 647 - &actual_length, 648 - 1000); 642 + return usb_bulk_msg(dev->udev, dev->rx_pipe, msg, 643 + sizeof(*msg), &actual_length, 1000); 649 644 } 650 645 651 646 static int esd_usb_setup_rx_urbs(struct esd_usb *dev) ··· 674 677 675 678 urb->transfer_dma = buf_dma; 676 679 677 - usb_fill_bulk_urb(urb, dev->udev, 678 - usb_rcvbulkpipe(dev->udev, 1), 680 + usb_fill_bulk_urb(urb, dev->udev, dev->rx_pipe, 679 681 buf, ESD_USB_RX_BUFFER_SIZE, 680 682 esd_usb_read_bulk_callback, dev); 681 683 urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP; ··· 899 903 /* hnd must not be 0 - MSB is stripped in txdone handling */ 900 904 msg->tx.hnd = BIT(31) | i; /* returned in TX done message */ 901 905 902 - usb_fill_bulk_urb(urb, dev->udev, usb_sndbulkpipe(dev->udev, 2), buf, 906 + usb_fill_bulk_urb(urb, dev->udev, dev->tx_pipe, buf, 903 907 msg->hdr.len * sizeof(u32), /* convert to # of bytes */ 904 908 esd_usb_write_bulk_callback, context); 905 909 ··· 1294 1298 static int esd_usb_probe(struct usb_interface *intf, 1295 1299 const struct usb_device_id *id) 1296 1300 { 1301 + struct usb_endpoint_descriptor *ep_in, *ep_out; 1297 1302 struct esd_usb *dev; 1298 1303 union esd_usb_msg *msg; 1299 1304 int i, err; 1305 + 1306 + err = usb_find_common_endpoints(intf->cur_altsetting, &ep_in, &ep_out, 1307 + NULL, NULL); 1308 + if (err) 1309 + return err; 1300 1310 1301 1311 dev = kzalloc_obj(*dev); 1302 1312 if (!dev) { ··· 1311 1309 } 1312 1310 1313 1311 dev->udev = interface_to_usbdev(intf); 1312 + dev->rx_pipe = usb_rcvbulkpipe(dev->udev, ep_in->bEndpointAddress); 1313 + dev->tx_pipe = usb_sndbulkpipe(dev->udev, ep_out->bEndpointAddress); 1314 1314 1315 1315 init_usb_anchor(&dev->rx_submitted); 1316 1316
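The probe-side change is the usual descriptor-hardening idiom: never hardcode endpoint numbers (the old usb_rcvbulkpipe(dev->udev, 1) and usb_sndbulkpipe(dev->udev, 2)), because a malformed or malicious device is under no obligation to provide them. Usage in isolation:

	struct usb_endpoint_descriptor *ep_in, *ep_out;

	/* returns -ENXIO unless the altsetting really carries at least
	 * one bulk-in and one bulk-out endpoint, so probe fails cleanly
	 * instead of submitting URBs to nonexistent pipes; the trailing
	 * NULLs would receive int-in/int-out endpoints if needed */
	err = usb_find_common_endpoints(intf->cur_altsetting,
					&ep_in, &ep_out, NULL, NULL);
	if (err)
		return err;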
+7 -1
drivers/net/can/usb/etas_es58x/es58x_core.c
··· 1461 1461 } 1462 1462 1463 1463 resubmit_urb: 1464 + usb_anchor_urb(urb, &es58x_dev->rx_urbs); 1464 1465 ret = usb_submit_urb(urb, GFP_ATOMIC); 1466 + if (!ret) 1467 + return; 1468 + 1469 + usb_unanchor_urb(urb); 1470 + 1465 1471 if (ret == -ENODEV) { 1466 1472 for (i = 0; i < es58x_dev->num_can_ch; i++) 1467 1473 if (es58x_dev->netdev[i]) 1468 1474 netif_device_detach(es58x_dev->netdev[i]); 1469 - } else if (ret) 1475 + } else 1470 1476 dev_err_ratelimited(dev, 1471 1477 "Failed resubmitting read bulk urb: %pe\n", 1472 1478 ERR_PTR(ret));
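This hunk, and the two f81604 resubmit paths below, adopt the same idiom: anchor before submitting, unanchor on failure. Done the other way around (submit, then anchor on success) there is a window in which disconnect's usb_kill_anchored_urbs() cannot see the freshly submitted URB, so it is never killed. The idiom in isolation (anchor name illustrative):

	usb_anchor_urb(urb, &dev->rx_urbs);	/* visible to kill/unlink now */
	ret = usb_submit_urb(urb, GFP_ATOMIC);
	if (ret) {
		usb_unanchor_urb(urb);		/* submit failed: back out */
		/* ... -ENODEV handling / logging ... */
	}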
+40 -5
drivers/net/can/usb/f81604.c
··· 413 413 { 414 414 struct f81604_can_frame *frame = urb->transfer_buffer; 415 415 struct net_device *netdev = urb->context; 416 + struct f81604_port_priv *priv = netdev_priv(netdev); 416 417 int ret; 417 418 418 419 if (!netif_device_present(netdev)) ··· 446 445 f81604_process_rx_packet(netdev, frame); 447 446 448 447 resubmit_urb: 448 + usb_anchor_urb(urb, &priv->urbs_anchor); 449 449 ret = usb_submit_urb(urb, GFP_ATOMIC); 450 + if (!ret) 451 + return; 452 + usb_unanchor_urb(urb); 453 + 450 454 if (ret == -ENODEV) 451 455 netif_device_detach(netdev); 452 - else if (ret) 456 + else 453 457 netdev_err(netdev, 454 458 "%s: failed to resubmit read bulk urb: %pe\n", 455 459 __func__, ERR_PTR(ret)); ··· 626 620 netdev_info(netdev, "%s: Int URB aborted: %pe\n", __func__, 627 621 ERR_PTR(urb->status)); 628 622 623 + if (urb->actual_length < sizeof(*data)) { 624 + netdev_warn(netdev, "%s: short int URB: %u < %zu\n", 625 + __func__, urb->actual_length, sizeof(*data)); 626 + goto resubmit_urb; 627 + } 628 + 629 629 switch (urb->status) { 630 630 case 0: /* success */ 631 631 break; ··· 658 646 f81604_handle_tx(priv, data); 659 647 660 648 resubmit_urb: 649 + usb_anchor_urb(urb, &priv->urbs_anchor); 661 650 ret = usb_submit_urb(urb, GFP_ATOMIC); 651 + if (!ret) 652 + return; 653 + usb_unanchor_urb(urb); 654 + 662 655 if (ret == -ENODEV) 663 656 netif_device_detach(netdev); 664 - else if (ret) 657 + else 665 658 netdev_err(netdev, "%s: failed to resubmit int urb: %pe\n", 666 659 __func__, ERR_PTR(ret)); 667 660 } ··· 891 874 if (!netif_device_present(netdev)) 892 875 return; 893 876 894 - if (urb->status) 895 - netdev_info(netdev, "%s: Tx URB error: %pe\n", __func__, 896 - ERR_PTR(urb->status)); 877 + if (!urb->status) 878 + return; 879 + 880 + switch (urb->status) { 881 + case -ENOENT: 882 + case -ECONNRESET: 883 + case -ESHUTDOWN: 884 + return; 885 + default: 886 + break; 887 + } 888 + 889 + if (net_ratelimit()) 890 + netdev_err(netdev, "%s: Tx URB error: %pe\n", __func__, 891 + ERR_PTR(urb->status)); 892 + 893 + can_free_echo_skb(netdev, 0, NULL); 894 + netdev->stats.tx_dropped++; 895 + netdev->stats.tx_errors++; 896 + 897 + netif_wake_queue(netdev); 897 898 } 898 899 899 900 static void f81604_clear_reg_work(struct work_struct *work)
+16 -6
drivers/net/can/usb/gs_usb.c
··· 772 772 } 773 773 } 774 774 775 - static int gs_usb_set_bittiming(struct net_device *netdev) 775 + static int gs_usb_set_bittiming(struct gs_can *dev) 776 776 { 777 - struct gs_can *dev = netdev_priv(netdev); 778 777 struct can_bittiming *bt = &dev->can.bittiming; 779 778 struct gs_device_bittiming dbt = { 780 779 .prop_seg = cpu_to_le32(bt->prop_seg), ··· 790 791 GFP_KERNEL); 791 792 } 792 793 793 - static int gs_usb_set_data_bittiming(struct net_device *netdev) 794 + static int gs_usb_set_data_bittiming(struct gs_can *dev) 794 795 { 795 - struct gs_can *dev = netdev_priv(netdev); 796 796 struct can_bittiming *bt = &dev->can.fd.data_bittiming; 797 797 struct gs_device_bittiming dbt = { 798 798 .prop_seg = cpu_to_le32(bt->prop_seg), ··· 1054 1056 /* if hardware supports timestamps, enable it */ 1055 1057 if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP) 1056 1058 flags |= GS_CAN_MODE_HW_TIMESTAMP; 1059 + 1060 + rc = gs_usb_set_bittiming(dev); 1061 + if (rc) { 1062 + netdev_err(netdev, "failed to set bittiming: %pe\n", ERR_PTR(rc)); 1063 + goto out_usb_kill_anchored_urbs; 1064 + } 1065 + 1066 + if (ctrlmode & CAN_CTRLMODE_FD) { 1067 + rc = gs_usb_set_data_bittiming(dev); 1068 + if (rc) { 1069 + netdev_err(netdev, "failed to set data bittiming: %pe\n", ERR_PTR(rc)); 1070 + goto out_usb_kill_anchored_urbs; 1071 + } 1072 + } 1057 1073 1058 1074 /* finally start device */ 1059 1075 dev->can.state = CAN_STATE_ERROR_ACTIVE; ··· 1382 1370 dev->can.state = CAN_STATE_STOPPED; 1383 1371 dev->can.clock.freq = le32_to_cpu(bt_const.fclk_can); 1384 1372 dev->can.bittiming_const = &dev->bt_const; 1385 - dev->can.do_set_bittiming = gs_usb_set_bittiming; 1386 1373 1387 1374 dev->can.ctrlmode_supported = CAN_CTRLMODE_CC_LEN8_DLC; 1388 1375 ··· 1405 1394 * GS_CAN_FEATURE_BT_CONST_EXT is set. 1406 1395 */ 1407 1396 dev->can.fd.data_bittiming_const = &dev->bt_const; 1408 - dev->can.fd.do_set_data_bittiming = gs_usb_set_data_bittiming; 1409 1397 } 1410 1398 1411 1399 if (feature & GS_CAN_FEATURE_TERMINATION) {
+1 -1
drivers/net/can/usb/ucan.c
··· 748 748 len = le16_to_cpu(m->len); 749 749 750 750 /* check sanity (length of content) */ 751 - if (urb->actual_length - pos < len) { 751 + if ((len == 0) || (urb->actual_length - pos < len)) { 752 752 netdev_warn(up->netdev, 753 753 "invalid message (short; no data; l:%d)\n", 754 754 urb->actual_length);
+1 -1
drivers/net/dsa/realtek/rtl8365mb.c
··· 769 769 out: 770 770 rtl83xx_unlock(priv); 771 771 772 - return 0; 772 + return ret; 773 773 } 774 774 775 775 static int rtl8365mb_phy_read(struct realtek_priv *priv, int phy, int regnum)
+1 -1
drivers/net/ethernet/amd/xgbe/xgbe-common.h
··· 431 431 #define MAC_SSIR_SSINC_INDEX 16 432 432 #define MAC_SSIR_SSINC_WIDTH 8 433 433 #define MAC_TCR_SS_INDEX 29 434 - #define MAC_TCR_SS_WIDTH 2 434 + #define MAC_TCR_SS_WIDTH 3 435 435 #define MAC_TCR_TE_INDEX 0 436 436 #define MAC_TCR_TE_WIDTH 1 437 437 #define MAC_TCR_VNE_INDEX 24
-10
drivers/net/ethernet/amd/xgbe/xgbe-drv.c
··· 1120 1120 { 1121 1121 struct xgbe_prv_data *pdata = netdev_priv(netdev); 1122 1122 struct xgbe_hw_if *hw_if = &pdata->hw_if; 1123 - unsigned long flags; 1124 1123 1125 1124 DBGPR("-->xgbe_powerdown\n"); 1126 1125 ··· 1129 1130 DBGPR("<--xgbe_powerdown\n"); 1130 1131 return -EINVAL; 1131 1132 } 1132 - 1133 - spin_lock_irqsave(&pdata->lock, flags); 1134 1133 1135 1134 if (caller == XGMAC_DRIVER_CONTEXT) 1136 1135 netif_device_detach(netdev); ··· 1145 1148 1146 1149 pdata->power_down = 1; 1147 1150 1148 - spin_unlock_irqrestore(&pdata->lock, flags); 1149 - 1150 1151 DBGPR("<--xgbe_powerdown\n"); 1151 1152 1152 1153 return 0; ··· 1154 1159 { 1155 1160 struct xgbe_prv_data *pdata = netdev_priv(netdev); 1156 1161 struct xgbe_hw_if *hw_if = &pdata->hw_if; 1157 - unsigned long flags; 1158 1162 1159 1163 DBGPR("-->xgbe_powerup\n"); 1160 1164 ··· 1163 1169 DBGPR("<--xgbe_powerup\n"); 1164 1170 return -EINVAL; 1165 1171 } 1166 - 1167 - spin_lock_irqsave(&pdata->lock, flags); 1168 1172 1169 1173 pdata->power_down = 0; 1170 1174 ··· 1177 1185 netif_tx_start_all_queues(netdev); 1178 1186 1179 1187 xgbe_start_timers(pdata); 1180 - 1181 - spin_unlock_irqrestore(&pdata->lock, flags); 1182 1188 1183 1189 DBGPR("<--xgbe_powerup\n"); 1184 1190
-1
drivers/net/ethernet/amd/xgbe/xgbe-main.c
··· 76 76 pdata->netdev = netdev; 77 77 pdata->dev = dev; 78 78 79 - spin_lock_init(&pdata->lock); 80 79 spin_lock_init(&pdata->xpcs_lock); 81 80 mutex_init(&pdata->rss_mutex); 82 81 spin_lock_init(&pdata->tstamp_lock);
-3
drivers/net/ethernet/amd/xgbe/xgbe.h
··· 1004 1004 unsigned int pp3; 1005 1005 unsigned int pp4; 1006 1006 1007 - /* Overall device lock */ 1008 - spinlock_t lock; 1009 - 1010 1007 /* XPCS indirect addressing lock */ 1011 1008 spinlock_t xpcs_lock; 1012 1009 unsigned int xpcs_window_def_reg;
+2 -1
drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
··· 1533 1533 if_id = (status & 0xFFFF0000) >> 16; 1534 1534 if (if_id >= ethsw->sw_attr.num_ifs) { 1535 1535 dev_err(dev, "Invalid if_id %d in IRQ status\n", if_id); 1536 - goto out; 1536 + goto out_clear; 1537 1537 } 1538 1538 port_priv = ethsw->ports[if_id]; 1539 1539 ··· 1553 1553 dpaa2_switch_port_connect_mac(port_priv); 1554 1554 } 1555 1555 1556 + out_clear: 1556 1557 err = dpsw_clear_irq_status(ethsw->mc_io, 0, ethsw->dpsw_handle, 1557 1558 DPSW_IRQ_INDEX_IF, status); 1558 1559 if (err)
+1 -1
drivers/net/ethernet/freescale/enetc/enetc.c
··· 3467 3467 priv->rx_ring[i] = bdr; 3468 3468 3469 3469 err = __xdp_rxq_info_reg(&bdr->xdp.rxq, priv->ndev, i, 0, 3470 - ENETC_RXB_DMA_SIZE_XDP); 3470 + ENETC_RXB_TRUESIZE); 3471 3471 if (err) 3472 3472 goto free_vector; 3473 3473
+1
drivers/net/ethernet/intel/e1000e/defines.h
··· 33 33 34 34 /* Extended Device Control */ 35 35 #define E1000_CTRL_EXT_LPCD 0x00000004 /* LCD Power Cycle Done */ 36 + #define E1000_CTRL_EXT_DPG_EN 0x00000008 /* Dynamic Power Gating Enable */ 36 37 #define E1000_CTRL_EXT_SDP3_DATA 0x00000080 /* Value of SW Definable Pin 3 */ 37 38 #define E1000_CTRL_EXT_FORCE_SMBUS 0x00000800 /* Force SMBus mode */ 38 39 #define E1000_CTRL_EXT_EE_RST 0x00002000 /* Reinitialize from EEPROM */
+3 -1
drivers/net/ethernet/intel/e1000e/e1000.h
··· 117 117 board_pch_cnp, 118 118 board_pch_tgp, 119 119 board_pch_adp, 120 - board_pch_mtp 120 + board_pch_mtp, 121 + board_pch_ptp 121 122 }; 122 123 123 124 struct e1000_ps_page { ··· 528 527 extern const struct e1000_info e1000_pch_tgp_info; 529 528 extern const struct e1000_info e1000_pch_adp_info; 530 529 extern const struct e1000_info e1000_pch_mtp_info; 530 + extern const struct e1000_info e1000_pch_ptp_info; 531 531 extern const struct e1000_info e1000_es2_info; 532 532 533 533 void e1000e_ptp_init(struct e1000_adapter *adapter);
-2
drivers/net/ethernet/intel/e1000e/hw.h
··· 118 118 #define E1000_DEV_ID_PCH_ARL_I219_V24 0x57A1 119 119 #define E1000_DEV_ID_PCH_PTP_I219_LM25 0x57B3 120 120 #define E1000_DEV_ID_PCH_PTP_I219_V25 0x57B4 121 - #define E1000_DEV_ID_PCH_PTP_I219_LM26 0x57B5 122 - #define E1000_DEV_ID_PCH_PTP_I219_V26 0x57B6 123 121 #define E1000_DEV_ID_PCH_PTP_I219_LM27 0x57B7 124 122 #define E1000_DEV_ID_PCH_PTP_I219_V27 0x57B8 125 123 #define E1000_DEV_ID_PCH_NVL_I219_LM29 0x57B9
+30 -1
drivers/net/ethernet/intel/e1000e/ich8lan.c
··· 528 528 529 529 phy->id = e1000_phy_unknown; 530 530 531 - if (hw->mac.type == e1000_pch_mtp) { 531 + if (hw->mac.type == e1000_pch_mtp || hw->mac.type == e1000_pch_ptp) { 532 532 phy->retry_count = 2; 533 533 e1000e_enable_phy_retry(hw); 534 534 } ··· 4932 4932 reg |= E1000_KABGTXD_BGSQLBIAS; 4933 4933 ew32(KABGTXD, reg); 4934 4934 4935 + /* The hardware reset value of the DPG_EN bit is 1. 4936 + * Clear DPG_EN to prevent unexpected autonomous power gating. 4937 + */ 4938 + if (hw->mac.type >= e1000_pch_ptp) { 4939 + reg = er32(CTRL_EXT); 4940 + reg &= ~E1000_CTRL_EXT_DPG_EN; 4941 + ew32(CTRL_EXT, reg); 4942 + } 4943 + 4935 4944 return 0; 4936 4945 } 4937 4946 ··· 6200 6191 6201 6192 const struct e1000_info e1000_pch_mtp_info = { 6202 6193 .mac = e1000_pch_mtp, 6194 + .flags = FLAG_IS_ICH 6195 + | FLAG_HAS_WOL 6196 + | FLAG_HAS_HW_TIMESTAMP 6197 + | FLAG_HAS_CTRLEXT_ON_LOAD 6198 + | FLAG_HAS_AMT 6199 + | FLAG_HAS_FLASH 6200 + | FLAG_HAS_JUMBO_FRAMES 6201 + | FLAG_APME_IN_WUC, 6202 + .flags2 = FLAG2_HAS_PHY_STATS 6203 + | FLAG2_HAS_EEE, 6204 + .pba = 26, 6205 + .max_hw_frame_size = 9022, 6206 + .get_variants = e1000_get_variants_ich8lan, 6207 + .mac_ops = &ich8_mac_ops, 6208 + .phy_ops = &ich8_phy_ops, 6209 + .nvm_ops = &spt_nvm_ops, 6210 + }; 6211 + 6212 + const struct e1000_info e1000_pch_ptp_info = { 6213 + .mac = e1000_pch_ptp, 6203 6214 .flags = FLAG_IS_ICH 6204 6215 | FLAG_HAS_WOL 6205 6216 | FLAG_HAS_HW_TIMESTAMP
+7 -8
drivers/net/ethernet/intel/e1000e/netdev.c
··· 55 55 [board_pch_tgp] = &e1000_pch_tgp_info, 56 56 [board_pch_adp] = &e1000_pch_adp_info, 57 57 [board_pch_mtp] = &e1000_pch_mtp_info, 58 + [board_pch_ptp] = &e1000_pch_ptp_info, 58 59 }; 59 60 60 61 struct e1000_reg_info { ··· 7923 7922 { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V21), board_pch_mtp }, 7924 7923 { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ARL_I219_LM24), board_pch_mtp }, 7925 7924 { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ARL_I219_V24), board_pch_mtp }, 7926 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_LM25), board_pch_mtp }, 7927 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_V25), board_pch_mtp }, 7928 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_LM26), board_pch_mtp }, 7929 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_V26), board_pch_mtp }, 7930 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_LM27), board_pch_mtp }, 7931 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_V27), board_pch_mtp }, 7932 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_NVL_I219_LM29), board_pch_mtp }, 7933 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_NVL_I219_V29), board_pch_mtp }, 7925 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_LM25), board_pch_ptp }, 7926 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_V25), board_pch_ptp }, 7927 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_LM27), board_pch_ptp }, 7928 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_V27), board_pch_ptp }, 7929 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_NVL_I219_LM29), board_pch_ptp }, 7930 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_NVL_I219_V29), board_pch_ptp }, 7934 7931 7935 7932 { 0, 0, 0, 0, 0, 0, 0 } /* terminate list */ 7936 7933 };
+24 -17
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 3569 3569 u16 pf_q = vsi->base_queue + ring->queue_index; 3570 3570 struct i40e_hw *hw = &vsi->back->hw; 3571 3571 struct i40e_hmc_obj_rxq rx_ctx; 3572 + u32 xdp_frame_sz; 3572 3573 int err = 0; 3573 3574 bool ok; 3574 3575 ··· 3579 3578 memset(&rx_ctx, 0, sizeof(rx_ctx)); 3580 3579 3581 3580 ring->rx_buf_len = vsi->rx_buf_len; 3581 + xdp_frame_sz = i40e_rx_pg_size(ring) / 2; 3582 3582 3583 3583 /* XDP RX-queue info only needed for RX rings exposed to XDP */ 3584 3584 if (ring->vsi->type != I40E_VSI_MAIN) 3585 3585 goto skip; 3586 3586 3587 - if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) { 3588 - err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev, 3589 - ring->queue_index, 3590 - ring->q_vector->napi.napi_id, 3591 - ring->rx_buf_len); 3592 - if (err) 3593 - return err; 3594 - } 3595 - 3596 3587 ring->xsk_pool = i40e_xsk_pool(ring); 3597 3588 if (ring->xsk_pool) { 3598 - xdp_rxq_info_unreg(&ring->xdp_rxq); 3589 + xdp_frame_sz = xsk_pool_get_rx_frag_step(ring->xsk_pool); 3599 3590 ring->rx_buf_len = xsk_pool_get_rx_frame_size(ring->xsk_pool); 3600 3591 err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev, 3601 3592 ring->queue_index, 3602 3593 ring->q_vector->napi.napi_id, 3603 - ring->rx_buf_len); 3594 + xdp_frame_sz); 3604 3595 if (err) 3605 3596 return err; 3606 3597 err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq, 3607 3598 MEM_TYPE_XSK_BUFF_POOL, 3608 3599 NULL); 3609 3600 if (err) 3610 - return err; 3601 + goto unreg_xdp; 3611 3602 dev_info(&vsi->back->pdev->dev, 3612 3603 "Registered XDP mem model MEM_TYPE_XSK_BUFF_POOL on Rx ring %d\n", 3613 3604 ring->queue_index); 3614 3605 3615 3606 } else { 3607 + err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev, 3608 + ring->queue_index, 3609 + ring->q_vector->napi.napi_id, 3610 + xdp_frame_sz); 3611 + if (err) 3612 + return err; 3616 3613 err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq, 3617 3614 MEM_TYPE_PAGE_SHARED, 3618 3615 NULL); 3619 3616 if (err) 3620 - return err; 3617 + goto unreg_xdp; 3621 3618 } 3622 3619 3623 3620 skip: 3624 - xdp_init_buff(&ring->xdp, i40e_rx_pg_size(ring) / 2, &ring->xdp_rxq); 3621 + xdp_init_buff(&ring->xdp, xdp_frame_sz, &ring->xdp_rxq); 3625 3622 3626 3623 rx_ctx.dbuff = DIV_ROUND_UP(ring->rx_buf_len, 3627 3624 BIT_ULL(I40E_RXQ_CTX_DBUFF_SHIFT)); ··· 3653 3654 dev_info(&vsi->back->pdev->dev, 3654 3655 "Failed to clear LAN Rx queue context on Rx ring %d (pf_q %d), error: %d\n", 3655 3656 ring->queue_index, pf_q, err); 3656 - return -ENOMEM; 3657 + err = -ENOMEM; 3658 + goto unreg_xdp; 3657 3659 } 3658 3660 3659 3661 /* set the context in the HMC */ ··· 3663 3663 dev_info(&vsi->back->pdev->dev, 3664 3664 "Failed to set LAN Rx queue context on Rx ring %d (pf_q %d), error: %d\n", 3665 3665 ring->queue_index, pf_q, err); 3666 - return -ENOMEM; 3666 + err = -ENOMEM; 3667 + goto unreg_xdp; 3667 3668 } 3668 3669 3669 3670 /* configure Rx buffer alignment */ ··· 3672 3671 if (I40E_2K_TOO_SMALL_WITH_PADDING) { 3673 3672 dev_info(&vsi->back->pdev->dev, 3674 3673 "2k Rx buffer is too small to fit standard MTU and skb_shared_info\n"); 3675 - return -EOPNOTSUPP; 3674 + err = -EOPNOTSUPP; 3675 + goto unreg_xdp; 3676 3676 } 3677 3677 clear_ring_build_skb_enabled(ring); 3678 3678 } else { ··· 3703 3701 } 3704 3702 3705 3703 return 0; 3704 + unreg_xdp: 3705 + if (ring->vsi->type == I40E_VSI_MAIN) 3706 + xdp_rxq_info_unreg(&ring->xdp_rxq); 3707 + 3708 + return err; 3706 3709 } 3707 3710 3708 3711 /**
+1 -1
drivers/net/ethernet/intel/i40e/i40e_trace.h
··· 88 88 __entry->rx_clean_complete = rx_clean_complete; 89 89 __entry->tx_clean_complete = tx_clean_complete; 90 90 __entry->irq_num = q->irq_num; 91 - __entry->curr_cpu = get_cpu(); 91 + __entry->curr_cpu = smp_processor_id(); 92 92 __assign_str(qname); 93 93 __assign_str(dev_name); 94 94 __assign_bitmask(irq_affinity, cpumask_bits(&q->affinity_mask),
+3 -2
drivers/net/ethernet/intel/i40e/i40e_txrx.c
··· 1470 1470 if (!rx_ring->rx_bi) 1471 1471 return; 1472 1472 1473 + if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq)) 1474 + xdp_rxq_info_unreg(&rx_ring->xdp_rxq); 1475 + 1473 1476 if (rx_ring->xsk_pool) { 1474 1477 i40e_xsk_clean_rx_ring(rx_ring); 1475 1478 goto skip_free; ··· 1530 1527 void i40e_free_rx_resources(struct i40e_ring *rx_ring) 1531 1528 { 1532 1529 i40e_clean_rx_ring(rx_ring); 1533 - if (rx_ring->vsi->type == I40E_VSI_MAIN) 1534 - xdp_rxq_info_unreg(&rx_ring->xdp_rxq); 1535 1530 rx_ring->xdp_prog = NULL; 1536 1531 kfree(rx_ring->rx_bi); 1537 1532 rx_ring->rx_bi = NULL;
+16 -1
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 2793 2793 netdev->watchdog_timeo = 5 * HZ; 2794 2794 2795 2795 netdev->min_mtu = ETH_MIN_MTU; 2796 - netdev->max_mtu = LIBIE_MAX_MTU; 2796 + 2797 + /* PF/VF API: vf_res->max_mtu is max frame size (not MTU). 2798 + * Convert to MTU. 2799 + */ 2800 + if (!adapter->vf_res->max_mtu) { 2801 + netdev->max_mtu = LIBIE_MAX_MTU; 2802 + } else if (adapter->vf_res->max_mtu < LIBETH_RX_LL_LEN + ETH_MIN_MTU || 2803 + adapter->vf_res->max_mtu > 2804 + LIBETH_RX_LL_LEN + LIBIE_MAX_MTU) { 2805 + netdev_warn_once(adapter->netdev, 2806 + "invalid max frame size %d from PF, using default MTU %d", 2807 + adapter->vf_res->max_mtu, LIBIE_MAX_MTU); 2808 + netdev->max_mtu = LIBIE_MAX_MTU; 2809 + } else { 2810 + netdev->max_mtu = adapter->vf_res->max_mtu - LIBETH_RX_LL_LEN; 2811 + } 2797 2812 2798 2813 if (!is_valid_ether_addr(adapter->hw.mac.addr)) { 2799 2814 dev_info(&pdev->dev, "Invalid MAC address %pM, using random\n",
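Worked numbers (illustrative, not from the patch): LIBETH_RX_LL_LEN is the link-layer overhead a frame carries but an MTU does not, i.e. Ethernet header plus FCS, 14 + 4 = 18 bytes. A PF advertising max_mtu = 9234 as a maximum frame size therefore yields:

	/* 9234-byte max frame  ->  9234 - 18 = 9216-byte MTU */
	netdev->max_mtu = adapter->vf_res->max_mtu - LIBETH_RX_LL_LEN;

	/* and the sanity window the hunk enforces, in the same units:
	 * ETH_MIN_MTU + 18 <= max_mtu <= LIBIE_MAX_MTU + 18 */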
+1
drivers/net/ethernet/intel/ice/ice.h
··· 987 987 void ice_print_link_msg(struct ice_vsi *vsi, bool isup); 988 988 int ice_plug_aux_dev(struct ice_pf *pf); 989 989 void ice_unplug_aux_dev(struct ice_pf *pf); 990 + void ice_rdma_finalize_setup(struct ice_pf *pf); 990 991 int ice_init_rdma(struct ice_pf *pf); 991 992 void ice_deinit_rdma(struct ice_pf *pf); 992 993 bool ice_is_wol_supported(struct ice_hw *hw);
+14 -24
drivers/net/ethernet/intel/ice/ice_base.c
··· 124 124 if (vsi->type == ICE_VSI_VF) { 125 125 ice_calc_vf_reg_idx(vsi->vf, q_vector); 126 126 goto out; 127 + } else if (vsi->type == ICE_VSI_LB) { 128 + goto skip_alloc; 127 129 } else if (vsi->type == ICE_VSI_CTRL && vsi->vf) { 128 130 struct ice_vsi *ctrl_vsi = ice_get_vf_ctrl_vsi(pf, vsi); 129 131 ··· 661 659 { 662 660 struct device *dev = ice_pf_to_dev(ring->vsi->back); 663 661 u32 num_bufs = ICE_DESC_UNUSED(ring); 664 - u32 rx_buf_len; 665 662 int err; 666 663 667 - if (ring->vsi->type == ICE_VSI_PF || ring->vsi->type == ICE_VSI_SF) { 668 - if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) { 669 - err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev, 670 - ring->q_index, 671 - ring->q_vector->napi.napi_id, 672 - ring->rx_buf_len); 673 - if (err) 674 - return err; 675 - } 676 - 664 + if (ring->vsi->type == ICE_VSI_PF || ring->vsi->type == ICE_VSI_SF || 665 + ring->vsi->type == ICE_VSI_LB) { 677 666 ice_rx_xsk_pool(ring); 678 667 err = ice_realloc_rx_xdp_bufs(ring, ring->xsk_pool); 679 668 if (err) 680 669 return err; 681 670 682 671 if (ring->xsk_pool) { 683 - xdp_rxq_info_unreg(&ring->xdp_rxq); 684 - 685 - rx_buf_len = 686 - xsk_pool_get_rx_frame_size(ring->xsk_pool); 672 + u32 frag_size = 673 + xsk_pool_get_rx_frag_step(ring->xsk_pool); 687 674 err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev, 688 675 ring->q_index, 689 676 ring->q_vector->napi.napi_id, 690 - rx_buf_len); 677 + frag_size); 691 678 if (err) 692 679 return err; 693 680 err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq, ··· 693 702 if (err) 694 703 return err; 695 704 696 - if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) { 697 - err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev, 698 - ring->q_index, 699 - ring->q_vector->napi.napi_id, 700 - ring->rx_buf_len); 701 - if (err) 702 - goto err_destroy_fq; 703 - } 705 + err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev, 706 + ring->q_index, 707 + ring->q_vector->napi.napi_id, 708 + ring->truesize); 709 + if (err) 710 + goto err_destroy_fq; 711 + 704 712 xdp_rxq_info_attach_page_pool(&ring->xdp_rxq, 705 713 ring->pp); 706 714 }
+11 -4
drivers/net/ethernet/intel/ice/ice_common.c
··· 1816 1816 case ice_aqc_opc_lldp_stop: 1817 1817 case ice_aqc_opc_lldp_start: 1818 1818 case ice_aqc_opc_lldp_filter_ctrl: 1819 + case ice_aqc_opc_sff_eeprom: 1819 1820 return true; 1820 1821 } 1821 1822 ··· 1842 1841 { 1843 1842 struct libie_aq_desc desc_cpy; 1844 1843 bool is_cmd_for_retry; 1844 + u8 *buf_cpy = NULL; 1845 1845 u8 idx = 0; 1846 1846 u16 opcode; 1847 1847 int status; ··· 1852 1850 memset(&desc_cpy, 0, sizeof(desc_cpy)); 1853 1851 1854 1852 if (is_cmd_for_retry) { 1855 - /* All retryable cmds are direct, without buf. */ 1856 - WARN_ON(buf); 1853 + if (buf) { 1854 + buf_cpy = kmemdup(buf, buf_size, GFP_KERNEL); 1855 + if (!buf_cpy) 1856 + return -ENOMEM; 1857 + } 1857 1858 1858 1859 memcpy(&desc_cpy, desc, sizeof(desc_cpy)); 1859 1860 } ··· 1868 1863 hw->adminq.sq_last_status != LIBIE_AQ_RC_EBUSY) 1869 1864 break; 1870 1865 1866 + if (buf_cpy) 1867 + memcpy(buf, buf_cpy, buf_size); 1871 1868 memcpy(desc, &desc_cpy, sizeof(desc_cpy)); 1872 - 1873 1869 msleep(ICE_SQ_SEND_DELAY_TIME_MS); 1874 1870 1875 1871 } while (++idx < ICE_SQ_SEND_MAX_EXECUTE); 1876 1872 1873 + kfree(buf_cpy); 1877 1874 return status; 1878 1875 } 1879 1876 ··· 6398 6391 struct ice_aqc_lldp_filter_ctrl *cmd; 6399 6392 struct libie_aq_desc desc; 6400 6393 6401 - if (vsi->type != ICE_VSI_PF || !ice_fw_supports_lldp_fltr_ctrl(hw)) 6394 + if (!ice_fw_supports_lldp_fltr_ctrl(hw)) 6402 6395 return -EOPNOTSUPP; 6403 6396 6404 6397 cmd = libie_aq_raw(&desc);
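The retry loop after this change, condensed into one sketch (assuming the usual ice_sq_send_cmd() signature; not the function verbatim): a failed attempt can clobber both the descriptor and the caller's indirect buffer, so both are snapshotted up front and restored before each retry.

	struct libie_aq_desc desc_cpy = *desc;
	u8 *buf_cpy = NULL;

	if (buf) {
		buf_cpy = kmemdup(buf, buf_size, GFP_KERNEL);
		if (!buf_cpy)
			return -ENOMEM;
	}
	do {
		status = ice_sq_send_cmd(hw, &hw->adminq, desc, buf,
					 buf_size, NULL);
		if (!status || hw->adminq.sq_last_status != LIBIE_AQ_RC_EBUSY)
			break;
		if (buf_cpy)
			memcpy(buf, buf_cpy, buf_size);	/* restore request */
		*desc = desc_cpy;
		msleep(ICE_SQ_SEND_DELAY_TIME_MS);
	} while (++idx < ICE_SQ_SEND_MAX_EXECUTE);
	kfree(buf_cpy);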
+28 -23
drivers/net/ethernet/intel/ice/ice_ethtool.c
··· 1289 1289 test_vsi->netdev = netdev; 1290 1290 tx_ring = test_vsi->tx_rings[0]; 1291 1291 rx_ring = test_vsi->rx_rings[0]; 1292 + /* Dummy q_vector and napi. Fill the minimum required for 1293 + * ice_rxq_pp_create(). 1294 + */ 1295 + rx_ring->q_vector->napi.dev = netdev; 1292 1296 1293 1297 if (ice_lbtest_prepare_rings(test_vsi)) { 1294 1298 ret = 2; ··· 3332 3328 rx_rings = kzalloc_objs(*rx_rings, vsi->num_rxq); 3333 3329 if (!rx_rings) { 3334 3330 err = -ENOMEM; 3335 - goto done; 3331 + goto free_xdp; 3336 3332 } 3337 3333 3338 3334 ice_for_each_rxq(vsi, i) { ··· 3342 3338 rx_rings[i].cached_phctime = pf->ptp.cached_phc_time; 3343 3339 rx_rings[i].desc = NULL; 3344 3340 rx_rings[i].xdp_buf = NULL; 3341 + rx_rings[i].xdp_rxq = (struct xdp_rxq_info){ }; 3345 3342 3346 3343 /* this is to allow wr32 to have something to write to 3347 3344 * during early allocation of Rx buffers ··· 3360 3355 } 3361 3356 kfree(rx_rings); 3362 3357 err = -ENOMEM; 3363 - goto free_tx; 3358 + goto free_xdp; 3364 3359 } 3365 3360 } 3366 3361 ··· 3411 3406 ice_up(vsi); 3412 3407 } 3413 3408 goto done; 3409 + 3410 + free_xdp: 3411 + if (xdp_rings) { 3412 + ice_for_each_xdp_txq(vsi, i) 3413 + ice_free_tx_ring(&xdp_rings[i]); 3414 + kfree(xdp_rings); 3415 + } 3414 3416 3415 3417 free_tx: 3416 3418 /* error cleanup if the Rx allocations failed after getting Tx */ ··· 4517 4505 u8 addr = ICE_I2C_EEPROM_DEV_ADDR; 4518 4506 struct ice_hw *hw = &pf->hw; 4519 4507 bool is_sfp = false; 4520 - unsigned int i, j; 4508 + unsigned int i; 4521 4509 u16 offset = 0; 4522 4510 u8 page = 0; 4523 4511 int status; ··· 4559 4547 if (page == 0 || !(data[0x2] & 0x4)) { 4560 4548 u32 copy_len; 4561 4549 4562 - /* If i2c bus is busy due to slow page change or 4563 - * link management access, call can fail. This is normal. 4564 - * So we retry this a few times. 4565 - */ 4566 - for (j = 0; j < 4; j++) { 4567 - status = ice_aq_sff_eeprom(hw, 0, addr, offset, page, 4568 - !is_sfp, value, 4569 - SFF_READ_BLOCK_SIZE, 4570 - 0, NULL); 4571 - netdev_dbg(netdev, "SFF %02X %02X %02X %X = %02X%02X%02X%02X.%02X%02X%02X%02X (%X)\n", 4572 - addr, offset, page, is_sfp, 4573 - value[0], value[1], value[2], value[3], 4574 - value[4], value[5], value[6], value[7], 4575 - status); 4576 - if (status) { 4577 - usleep_range(1500, 2500); 4578 - memset(value, 0, SFF_READ_BLOCK_SIZE); 4579 - continue; 4580 - } 4581 - break; 4550 + status = ice_aq_sff_eeprom(hw, 0, addr, offset, page, 4551 + !is_sfp, value, 4552 + SFF_READ_BLOCK_SIZE, 4553 + 0, NULL); 4554 + netdev_dbg(netdev, "SFF %02X %02X %02X %X = %02X%02X%02X%02X.%02X%02X%02X%02X (%pe)\n", 4555 + addr, offset, page, is_sfp, 4556 + value[0], value[1], value[2], value[3], 4557 + value[4], value[5], value[6], value[7], 4558 + ERR_PTR(status)); 4559 + if (status) { 4560 + netdev_err(netdev, "%s: error reading module EEPROM: status %pe\n", 4561 + __func__, ERR_PTR(status)); 4562 + return status; 4582 4563 } 4583 4564 4584 4565 /* Make sure we have enough room for the new block */
+34 -10
drivers/net/ethernet/intel/ice/ice_idc.c
··· 361 361 } 362 362 363 363 /** 364 + * ice_rdma_finalize_setup - Complete RDMA setup after VSI is ready 365 + * @pf: ptr to ice_pf 366 + * 367 + * Sets VSI-dependent information and plugs aux device. 368 + * Must be called after ice_init_rdma(), ice_vsi_rebuild(), and 369 + * ice_dcb_rebuild() complete. 370 + */ 371 + void ice_rdma_finalize_setup(struct ice_pf *pf) 372 + { 373 + struct device *dev = ice_pf_to_dev(pf); 374 + struct iidc_rdma_priv_dev_info *privd; 375 + int ret; 376 + 377 + if (!ice_is_rdma_ena(pf) || !pf->cdev_info) 378 + return; 379 + 380 + privd = pf->cdev_info->iidc_priv; 381 + if (!privd || !pf->vsi || !pf->vsi[0] || !pf->vsi[0]->netdev) 382 + return; 383 + 384 + /* Assign VSI info now that VSI is valid */ 385 + privd->netdev = pf->vsi[0]->netdev; 386 + privd->vport_id = pf->vsi[0]->vsi_num; 387 + 388 + /* Update QoS info after DCB has been rebuilt */ 389 + ice_setup_dcb_qos_info(pf, &privd->qos_info); 390 + 391 + ret = ice_plug_aux_dev(pf); 392 + if (ret) 393 + dev_warn(dev, "Failed to plug RDMA aux device: %d\n", ret); 394 + } 395 + 396 + /** 364 397 * ice_init_rdma - initializes PF for RDMA use 365 398 * @pf: ptr to ice_pf 366 399 */ ··· 431 398 } 432 399 433 400 cdev->iidc_priv = privd; 434 - privd->netdev = pf->vsi[0]->netdev; 435 401 436 402 privd->hw_addr = (u8 __iomem *)pf->hw.hw_addr; 437 403 cdev->pdev = pf->pdev; 438 - privd->vport_id = pf->vsi[0]->vsi_num; 439 404 440 405 pf->cdev_info->rdma_protocol |= IIDC_RDMA_PROTOCOL_ROCEV2; 441 - ice_setup_dcb_qos_info(pf, &privd->qos_info); 442 - ret = ice_plug_aux_dev(pf); 443 - if (ret) 444 - goto err_plug_aux_dev; 406 + 445 407 return 0; 446 408 447 - err_plug_aux_dev: 448 - pf->cdev_info->adev = NULL; 449 - xa_erase(&ice_aux_id, pf->aux_idx); 450 409 err_alloc_xa: 451 410 kfree(privd); 452 411 err_privd_alloc: ··· 457 432 if (!ice_is_rdma_ena(pf)) 458 433 return; 459 434 460 - ice_unplug_aux_dev(pf); 461 435 xa_erase(&ice_aux_id, pf->aux_idx); 462 436 kfree(pf->cdev_info->iidc_priv); 463 437 kfree(pf->cdev_info);
+10 -5
drivers/net/ethernet/intel/ice/ice_lib.c
··· 107 107 if (!vsi->rxq_map) 108 108 goto err_rxq_map; 109 109 110 - /* There is no need to allocate q_vectors for a loopback VSI. */ 111 - if (vsi->type == ICE_VSI_LB) 112 - return 0; 113 - 114 110 /* allocate memory for q_vector pointers */ 115 111 vsi->q_vectors = devm_kcalloc(dev, vsi->num_q_vectors, 116 112 sizeof(*vsi->q_vectors), GFP_KERNEL); ··· 237 241 case ICE_VSI_LB: 238 242 vsi->alloc_txq = 1; 239 243 vsi->alloc_rxq = 1; 244 + /* A dummy q_vector, no actual IRQ. */ 245 + vsi->num_q_vectors = 1; 240 246 break; 241 247 default: 242 248 dev_warn(ice_pf_to_dev(pf), "Unknown VSI type %d\n", vsi_type); ··· 2424 2426 } 2425 2427 break; 2426 2428 case ICE_VSI_LB: 2427 - ret = ice_vsi_alloc_rings(vsi); 2429 + ret = ice_vsi_alloc_q_vectors(vsi); 2428 2430 if (ret) 2429 2431 goto unroll_vsi_init; 2432 + 2433 + ret = ice_vsi_alloc_rings(vsi); 2434 + if (ret) 2435 + goto unroll_alloc_q_vector; 2430 2436 2431 2437 ret = ice_vsi_alloc_ring_stats(vsi); 2432 2438 if (ret) 2433 2439 goto unroll_vector_base; 2440 + 2441 + /* Simply map the dummy q_vector to the only rx_ring */ 2442 + vsi->rx_rings[0]->q_vector = vsi->q_vectors[0]; 2434 2443 2435 2444 break; 2436 2445 default:
+6 -1
drivers/net/ethernet/intel/ice/ice_main.c
··· 5138 5138 if (err) 5139 5139 goto err_init_rdma; 5140 5140 5141 + /* Finalize RDMA: VSI already created, assign info and plug device */ 5142 + ice_rdma_finalize_setup(pf); 5143 + 5141 5144 ice_service_task_restart(pf); 5142 5145 5143 5146 clear_bit(ICE_DOWN, pf->state); ··· 5172 5169 5173 5170 devl_assert_locked(priv_to_devlink(pf)); 5174 5171 5172 + ice_unplug_aux_dev(pf); 5175 5173 ice_deinit_rdma(pf); 5176 5174 ice_deinit_features(pf); 5177 5175 ice_tc_indir_block_unregister(vsi); ··· 5599 5595 */ 5600 5596 disabled = ice_service_task_stop(pf); 5601 5597 5598 + ice_unplug_aux_dev(pf); 5602 5599 ice_deinit_rdma(pf); 5603 5600 5604 5601 /* Already suspended?, then there is nothing to do */ ··· 7864 7859 7865 7860 ice_health_clear(pf); 7866 7861 7867 - ice_plug_aux_dev(pf); 7862 + ice_rdma_finalize_setup(pf); 7868 7863 if (ice_is_feature_supported(pf, ICE_F_SRIOV_LAG)) 7869 7864 ice_lag_rebuild(pf); 7870 7865
+3 -1
drivers/net/ethernet/intel/ice/ice_txrx.c
··· 560 560 i = 0; 561 561 } 562 562 563 - if (rx_ring->vsi->type == ICE_VSI_PF && 563 + if ((rx_ring->vsi->type == ICE_VSI_PF || 564 + rx_ring->vsi->type == ICE_VSI_SF || 565 + rx_ring->vsi->type == ICE_VSI_LB) && 564 566 xdp_rxq_info_is_reg(&rx_ring->xdp_rxq)) { 565 567 xdp_rxq_info_detach_mem_model(&rx_ring->xdp_rxq); 566 568 xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
+3
drivers/net/ethernet/intel/ice/ice_xsk.c
··· 899 899 u16 ntc = rx_ring->next_to_clean; 900 900 u16 ntu = rx_ring->next_to_use; 901 901 902 + if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq)) 903 + xdp_rxq_info_unreg(&rx_ring->xdp_rxq); 904 + 902 905 while (ntc != ntu) { 903 906 struct xdp_buff *xdp = *ice_xdp_buf(rx_ring, ntc); 904 907
-3
drivers/net/ethernet/intel/idpf/idpf_ethtool.c
··· 307 307 vport_config = vport->adapter->vport_config[np->vport_idx]; 308 308 user_config = &vport_config->user_config; 309 309 310 - if (!idpf_sideband_action_ena(vport, fsp)) 311 - return -EOPNOTSUPP; 312 - 313 310 rule = kzalloc_flex(*rule, rule_info, 1); 314 311 if (!rule) 315 312 return -ENOMEM;
+1
drivers/net/ethernet/intel/idpf/idpf_lib.c
··· 1318 1318 1319 1319 free_rss_key: 1320 1320 kfree(rss_data->rss_key); 1321 + rss_data->rss_key = NULL; 1321 1322 free_qreg_chunks: 1322 1323 idpf_vport_deinit_queue_reg_chunks(adapter->vport_config[idx]); 1323 1324 free_vector_idxs:
+10 -4
drivers/net/ethernet/intel/idpf/idpf_txrx.c
··· 1314 1314 struct idpf_txq_group *txq_grp = &rsrc->txq_grps[i]; 1315 1315 1316 1316 for (unsigned int j = 0; j < txq_grp->num_txq; j++) { 1317 + if (!txq_grp->txqs[j]) 1318 + continue; 1319 + 1317 1320 if (idpf_queue_has(FLOW_SCH_EN, txq_grp->txqs[j])) { 1318 1321 kfree(txq_grp->txqs[j]->refillq); 1319 1322 txq_grp->txqs[j]->refillq = NULL; ··· 1342 1339 */ 1343 1340 static void idpf_rxq_sw_queue_rel(struct idpf_rxq_group *rx_qgrp) 1344 1341 { 1342 + if (!rx_qgrp->splitq.bufq_sets) 1343 + return; 1344 + 1345 1345 for (unsigned int i = 0; i < rx_qgrp->splitq.num_bufq_sets; i++) { 1346 1346 struct idpf_bufq_set *bufq_set = &rx_qgrp->splitq.bufq_sets[i]; 1347 1347 ··· 2342 2336 2343 2337 do { 2344 2338 struct idpf_splitq_4b_tx_compl_desc *tx_desc; 2345 - struct idpf_tx_queue *target; 2339 + struct idpf_tx_queue *target = NULL; 2346 2340 u32 ctype_gen, id; 2347 2341 2348 2342 tx_desc = flow ? &complq->comp[ntc].common : ··· 2362 2356 target = complq->txq_grp->txqs[id]; 2363 2357 2364 2358 idpf_queue_clear(SW_MARKER, target); 2365 - if (target == txq) 2366 - break; 2367 2359 2368 2360 next: 2369 2361 if (unlikely(++ntc == complq->desc_count)) { 2370 2362 ntc = 0; 2371 2363 gen_flag = !gen_flag; 2372 2364 } 2365 + if (target == txq) 2366 + break; 2373 2367 } while (time_before(jiffies, timeout)); 2374 2368 2375 2369 idpf_queue_assign(GEN_CHK, complq, gen_flag); ··· 4065 4059 continue; 4066 4060 4067 4061 name = kasprintf(GFP_KERNEL, "%s-%s-%s-%d", drv_name, if_name, 4068 - vec_name, vidx); 4062 + vec_name, vector); 4069 4063 4070 4064 err = request_irq(irq_num, idpf_vport_intr_clean_queues, 0, 4071 4065 name, q_vector);
+5 -1
drivers/net/ethernet/intel/idpf/xdp.c
··· 47 47 { 48 48 const struct idpf_vport *vport = rxq->q_vector->vport; 49 49 const struct idpf_q_vec_rsrc *rsrc; 50 + u32 frag_size = 0; 50 51 bool split; 51 52 int err; 52 53 54 + if (idpf_queue_has(XSK, rxq)) 55 + frag_size = rxq->bufq_sets[0].bufq.truesize; 56 + 53 57 err = __xdp_rxq_info_reg(&rxq->xdp_rxq, vport->netdev, rxq->idx, 54 58 rxq->q_vector->napi.napi_id, 55 - rxq->rx_buf_size); 59 + frag_size); 56 60 if (err) 57 61 return err; 58 62
+1
drivers/net/ethernet/intel/idpf/xsk.c
··· 403 403 bufq->pending = fq.pending; 404 404 bufq->thresh = fq.thresh; 405 405 bufq->rx_buf_size = fq.buf_len; 406 + bufq->truesize = fq.truesize; 406 407 407 408 if (!idpf_xskfq_refill(bufq)) 408 409 netdev_err(bufq->pool->netdev,
+30 -8
drivers/net/ethernet/intel/igb/igb_xsk.c
··· 524 524 return nb_pkts < budget; 525 525 } 526 526 527 + static u32 igb_sw_irq_prep(struct igb_q_vector *q_vector) 528 + { 529 + u32 eics = 0; 530 + 531 + if (!napi_if_scheduled_mark_missed(&q_vector->napi)) 532 + eics = q_vector->eims_value; 533 + 534 + return eics; 535 + } 536 + 527 537 int igb_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags) 528 538 { 529 539 struct igb_adapter *adapter = netdev_priv(dev); ··· 552 542 553 543 ring = adapter->tx_ring[qid]; 554 544 555 - if (test_bit(IGB_RING_FLAG_TX_DISABLED, &ring->flags)) 556 - return -ENETDOWN; 557 - 558 545 if (!READ_ONCE(ring->xsk_pool)) 559 546 return -EINVAL; 560 547 561 - if (!napi_if_scheduled_mark_missed(&ring->q_vector->napi)) { 548 + if (flags & XDP_WAKEUP_TX) { 549 + if (test_bit(IGB_RING_FLAG_TX_DISABLED, &ring->flags)) 550 + return -ENETDOWN; 551 + 552 + eics |= igb_sw_irq_prep(ring->q_vector); 553 + } 554 + 555 + if (flags & XDP_WAKEUP_RX) { 556 + /* If IGB_FLAG_QUEUE_PAIRS is active, the q_vector 557 + * and NAPI is shared between RX and TX. 558 + * If NAPI is already running it would be marked as missed 559 + * from the TX path, making this RX call a NOP 560 + */ 561 + ring = adapter->rx_ring[qid]; 562 + eics |= igb_sw_irq_prep(ring->q_vector); 563 + } 564 + 565 + if (eics) { 562 566 /* Cause software interrupt */ 563 - if (adapter->flags & IGB_FLAG_HAS_MSIX) { 564 - eics |= ring->q_vector->eims_value; 567 + if (adapter->flags & IGB_FLAG_HAS_MSIX) 565 568 wr32(E1000_EICS, eics); 566 - } else { 569 + else 567 570 wr32(E1000_ICS, E1000_ICS_RXDMT0); 568 - } 569 571 } 570 572 571 573 return 0;
+24 -10
drivers/net/ethernet/intel/igc/igc_main.c
··· 6906 6906 return nxmit; 6907 6907 } 6908 6908 6909 - static void igc_trigger_rxtxq_interrupt(struct igc_adapter *adapter, 6910 - struct igc_q_vector *q_vector) 6909 + static u32 igc_sw_irq_prep(struct igc_q_vector *q_vector) 6911 6910 { 6912 - struct igc_hw *hw = &adapter->hw; 6913 6911 u32 eics = 0; 6914 6912 6915 - eics |= q_vector->eims_value; 6916 - wr32(IGC_EICS, eics); 6913 + if (!napi_if_scheduled_mark_missed(&q_vector->napi)) 6914 + eics = q_vector->eims_value; 6915 + 6916 + return eics; 6917 6917 } 6918 6918 6919 6919 int igc_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags) 6920 6920 { 6921 6921 struct igc_adapter *adapter = netdev_priv(dev); 6922 - struct igc_q_vector *q_vector; 6922 + struct igc_hw *hw = &adapter->hw; 6923 6923 struct igc_ring *ring; 6924 + u32 eics = 0; 6924 6925 6925 6926 if (test_bit(__IGC_DOWN, &adapter->state)) 6926 6927 return -ENETDOWN; 6927 6928 6928 6929 if (!igc_xdp_is_enabled(adapter)) 6929 6930 return -ENXIO; 6930 - 6931 + /* Check if queue_id is valid. Tx and Rx queue numbers are always same */ 6931 6932 if (queue_id >= adapter->num_rx_queues) 6932 6933 return -EINVAL; 6933 6934 ··· 6937 6936 if (!ring->xsk_pool) 6938 6937 return -ENXIO; 6939 6938 6940 - q_vector = adapter->q_vector[queue_id]; 6941 - if (!napi_if_scheduled_mark_missed(&q_vector->napi)) 6942 - igc_trigger_rxtxq_interrupt(adapter, q_vector); 6939 + if (flags & XDP_WAKEUP_RX) 6940 + eics |= igc_sw_irq_prep(ring->q_vector); 6941 + 6942 + if (flags & XDP_WAKEUP_TX) { 6943 + /* If IGC_FLAG_QUEUE_PAIRS is active, the q_vector 6944 + * and NAPI is shared between RX and TX. 6945 + * If NAPI is already running it would be marked as missed 6946 + * from the RX path, making this TX call a NOP 6947 + */ 6948 + ring = adapter->tx_ring[queue_id]; 6949 + eics |= igc_sw_irq_prep(ring->q_vector); 6950 + } 6951 + 6952 + if (eics) 6953 + /* Cause software interrupt */ 6954 + wr32(IGC_EICS, eics); 6943 6955 6944 6956 return 0; 6945 6957 }
+2 -1
drivers/net/ethernet/intel/igc/igc_ptp.c
··· 550 550 tstamp->buffer_type = 0; 551 551 552 552 /* Trigger txrx interrupt for transmit completion */ 553 - igc_xsk_wakeup(adapter->netdev, tstamp->xsk_queue_index, 0); 553 + igc_xsk_wakeup(adapter->netdev, tstamp->xsk_queue_index, 554 + XDP_WAKEUP_TX); 554 555 555 556 return; 556 557 }
+2 -1
drivers/net/ethernet/intel/ixgbevf/vf.c
··· 852 852 if (!mac->get_link_status) 853 853 goto out; 854 854 855 - if (hw->mac.type == ixgbe_mac_e610_vf) { 855 + if (hw->mac.type == ixgbe_mac_e610_vf && 856 + hw->api_version >= ixgbe_mbox_api_16) { 856 857 ret_val = ixgbevf_get_pf_link_state(hw, speed, link_up); 857 858 if (ret_val) 858 859 goto out;
+1
drivers/net/ethernet/intel/libeth/xsk.c
··· 167 167 fq->pending = fq->count; 168 168 fq->thresh = libeth_xdp_queue_threshold(fq->count); 169 169 fq->buf_len = xsk_pool_get_rx_frame_size(fq->pool); 170 + fq->truesize = xsk_pool_get_rx_frag_step(fq->pool); 170 171 171 172 return 0; 172 173 }
+4
drivers/net/ethernet/intel/libie/fwlog.c
··· 1049 1049 { 1050 1050 int status; 1051 1051 1052 + /* if FW logging isn't supported it means no configuration was done */ 1053 + if (!libie_fwlog_supported(fwlog)) 1054 + return; 1055 + 1052 1056 /* make sure FW logging is disabled to not put the FW in a weird state 1053 1057 * for the next driver load 1054 1058 */
+32 -16
drivers/net/ethernet/marvell/octeon_ep/octep_main.c
··· 554 554 } 555 555 556 556 /** 557 + * octep_update_pkt() - Update IQ/OQ IN/OUT_CNT registers. 558 + * 559 + * @iq: Octeon Tx queue data structure. 560 + * @oq: Octeon Rx queue data structure. 561 + */ 562 + static void octep_update_pkt(struct octep_iq *iq, struct octep_oq *oq) 563 + { 564 + u32 pkts_pend = READ_ONCE(oq->pkts_pending); 565 + u32 last_pkt_count = READ_ONCE(oq->last_pkt_count); 566 + u32 pkts_processed = READ_ONCE(iq->pkts_processed); 567 + u32 pkt_in_done = READ_ONCE(iq->pkt_in_done); 568 + 569 + netdev_dbg(iq->netdev, "enabling intr for Q-%u\n", iq->q_no); 570 + if (pkts_processed) { 571 + writel(pkts_processed, iq->inst_cnt_reg); 572 + readl(iq->inst_cnt_reg); 573 + WRITE_ONCE(iq->pkt_in_done, (pkt_in_done - pkts_processed)); 574 + WRITE_ONCE(iq->pkts_processed, 0); 575 + } 576 + if (last_pkt_count - pkts_pend) { 577 + writel(last_pkt_count - pkts_pend, oq->pkts_sent_reg); 578 + readl(oq->pkts_sent_reg); 579 + WRITE_ONCE(oq->last_pkt_count, pkts_pend); 580 + } 581 + 582 + /* Flush the previous writes before writing to RESEND bit */ 583 + smp_wmb(); 584 + } 585 + 586 + /** 557 587 * octep_enable_ioq_irq() - Enable MSI-x interrupt of a Tx/Rx queue. 558 588 * 559 589 * @iq: Octeon Tx queue data structure. ··· 591 561 */ 592 562 static void octep_enable_ioq_irq(struct octep_iq *iq, struct octep_oq *oq) 593 563 { 594 - u32 pkts_pend = oq->pkts_pending; 595 - 596 - netdev_dbg(iq->netdev, "enabling intr for Q-%u\n", iq->q_no); 597 - if (iq->pkts_processed) { 598 - writel(iq->pkts_processed, iq->inst_cnt_reg); 599 - iq->pkt_in_done -= iq->pkts_processed; 600 - iq->pkts_processed = 0; 601 - } 602 - if (oq->last_pkt_count - pkts_pend) { 603 - writel(oq->last_pkt_count - pkts_pend, oq->pkts_sent_reg); 604 - oq->last_pkt_count = pkts_pend; 605 - } 606 - 607 - /* Flush the previous wrties before writing to RESEND bit */ 608 - wmb(); 609 - 564 writeq(1UL << OCTEP_OQ_INTR_RESEND_BIT, oq->pkts_sent_reg); 610 565 writeq(1UL << OCTEP_IQ_INTR_RESEND_BIT, iq->inst_cnt_reg); 611 566 } ··· 616 601 if (tx_pending || rx_done >= budget) 617 602 return budget; 618 603 619 - napi_complete(napi); 604 + octep_update_pkt(ioq_vector->iq, ioq_vector->oq); 605 + napi_complete_done(napi, rx_done); 620 606 octep_enable_ioq_irq(ioq_vector->iq, ioq_vector->oq); 621 607 return rx_done; 622 608 }
+19 -8
drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
··· 324 324 struct octep_oq *oq) 325 325 { 326 326 u32 pkt_count, new_pkts; 327 + u32 last_pkt_count, pkts_pending; 327 328 328 329 pkt_count = readl(oq->pkts_sent_reg); 329 - new_pkts = pkt_count - oq->last_pkt_count; 330 + last_pkt_count = READ_ONCE(oq->last_pkt_count); 331 + new_pkts = pkt_count - last_pkt_count; 330 332 333 + if (pkt_count < last_pkt_count) { 334 + dev_err(oq->dev, "OQ-%u pkt_count(%u) < oq->last_pkt_count(%u)\n", 335 + oq->q_no, pkt_count, last_pkt_count); 336 + } 331 337 /* Clear the hardware packets counter register if the rx queue is 332 338 * being processed continuously with-in a single interrupt and 333 339 * reached half its max value. ··· 344 338 pkt_count = readl(oq->pkts_sent_reg); 345 339 new_pkts += pkt_count; 346 340 } 347 - oq->last_pkt_count = pkt_count; 348 - oq->pkts_pending += new_pkts; 341 + WRITE_ONCE(oq->last_pkt_count, pkt_count); 342 + pkts_pending = READ_ONCE(oq->pkts_pending); 343 + WRITE_ONCE(oq->pkts_pending, (pkts_pending + new_pkts)); 349 344 return new_pkts; 350 345 } 351 346 ··· 421 414 u16 rx_ol_flags; 422 415 u32 read_idx; 423 416 424 - read_idx = oq->host_read_idx; 417 + read_idx = READ_ONCE(oq->host_read_idx); 425 418 rx_bytes = 0; 426 419 desc_used = 0; 427 420 for (pkt = 0; pkt < pkts_to_process; pkt++) { ··· 506 499 napi_gro_receive(oq->napi, skb); 507 500 } 508 501 509 - oq->host_read_idx = read_idx; 502 + WRITE_ONCE(oq->host_read_idx, read_idx); 510 503 oq->refill_count += desc_used; 511 504 oq->stats->packets += pkt; 512 505 oq->stats->bytes += rx_bytes; ··· 529 522 { 530 523 u32 pkts_available, pkts_processed, total_pkts_processed; 531 524 struct octep_device *oct = oq->octep_dev; 525 + u32 pkts_pending; 532 526 533 527 pkts_available = 0; 534 528 pkts_processed = 0; 535 529 total_pkts_processed = 0; 536 530 while (total_pkts_processed < budget) { 537 531 /* update pending count only when current one exhausted */ 538 - if (oq->pkts_pending == 0) 532 + pkts_pending = READ_ONCE(oq->pkts_pending); 533 + if (pkts_pending == 0) 539 534 octep_oq_check_hw_for_pkts(oct, oq); 535 + pkts_pending = READ_ONCE(oq->pkts_pending); 540 536 pkts_available = min(budget - total_pkts_processed, 541 - oq->pkts_pending); 537 + pkts_pending); 542 538 if (!pkts_available) 543 539 break; 544 540 545 541 pkts_processed = __octep_oq_process_rx(oct, oq, 546 542 pkts_available); 547 - oq->pkts_pending -= pkts_processed; 543 + pkts_pending = READ_ONCE(oq->pkts_pending); 544 + WRITE_ONCE(oq->pkts_pending, (pkts_pending - pkts_processed)); 548 545 total_pkts_processed += pkts_processed; 549 546 } 550 547
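Every counter touched here is shared between the NAPI poll loop and the new octep_update_pkt() path, so each access is converted to the same two-step shape: one marked load, arithmetic on the local copy, one marked store. READ_ONCE()/WRITE_ONCE() stop the compiler from tearing or re-fetching the value; they impose no ordering of their own, which is why the smp_wmb() before the RESEND write remains. The shape of each conversion:

	/* was: oq->pkts_pending -= pkts_processed; */
	u32 pkts_pending = READ_ONCE(oq->pkts_pending);

	WRITE_ONCE(oq->pkts_pending, pkts_pending - pkts_processed);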
+34 -16
drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
··· 286 286 } 287 287 288 288 /** 289 + * octep_vf_update_pkt() - Update IQ/OQ IN/OUT_CNT registers. 290 + * 291 + * @iq: Octeon Tx queue data structure. 292 + * @oq: Octeon Rx queue data structure. 293 + */ 294 + 295 + static void octep_vf_update_pkt(struct octep_vf_iq *iq, struct octep_vf_oq *oq) 296 + { 297 + u32 pkts_pend = READ_ONCE(oq->pkts_pending); 298 + u32 last_pkt_count = READ_ONCE(oq->last_pkt_count); 299 + u32 pkts_processed = READ_ONCE(iq->pkts_processed); 300 + u32 pkt_in_done = READ_ONCE(iq->pkt_in_done); 301 + 302 + netdev_dbg(iq->netdev, "enabling intr for Q-%u\n", iq->q_no); 303 + if (pkts_processed) { 304 + writel(pkts_processed, iq->inst_cnt_reg); 305 + readl(iq->inst_cnt_reg); 306 + WRITE_ONCE(iq->pkt_in_done, (pkt_in_done - pkts_processed)); 307 + WRITE_ONCE(iq->pkts_processed, 0); 308 + } 309 + if (last_pkt_count - pkts_pend) { 310 + writel(last_pkt_count - pkts_pend, oq->pkts_sent_reg); 311 + readl(oq->pkts_sent_reg); 312 + WRITE_ONCE(oq->last_pkt_count, pkts_pend); 313 + } 314 + 315 + /* Flush the previous writes before writing to RESEND bit */ 316 + smp_wmb(); 317 + } 318 + 319 + /** 289 320 * octep_vf_enable_ioq_irq() - Enable MSI-x interrupt of a Tx/Rx queue. 290 321 * 291 322 * @iq: Octeon Tx queue data structure. 292 323 * @oq: Octeon Rx queue data structure. 293 324 */ 294 - static void octep_vf_enable_ioq_irq(struct octep_vf_iq *iq, struct octep_vf_oq *oq) 325 + static void octep_vf_enable_ioq_irq(struct octep_vf_iq *iq, 326 + struct octep_vf_oq *oq) 295 327 { 296 - u32 pkts_pend = oq->pkts_pending; 297 - 298 - netdev_dbg(iq->netdev, "enabling intr for Q-%u\n", iq->q_no); 299 - if (iq->pkts_processed) { 300 - writel(iq->pkts_processed, iq->inst_cnt_reg); 301 - iq->pkt_in_done -= iq->pkts_processed; 302 - iq->pkts_processed = 0; 303 - } 304 - if (oq->last_pkt_count - pkts_pend) { 305 - writel(oq->last_pkt_count - pkts_pend, oq->pkts_sent_reg); 306 - oq->last_pkt_count = pkts_pend; 307 - } 308 - 309 - /* Flush the previous wrties before writing to RESEND bit */ 310 - smp_wmb(); 311 328 writeq(1UL << OCTEP_VF_OQ_INTR_RESEND_BIT, oq->pkts_sent_reg); 312 329 writeq(1UL << OCTEP_VF_IQ_INTR_RESEND_BIT, iq->inst_cnt_reg); 313 330 } ··· 350 333 if (tx_pending || rx_done >= budget) 351 334 return budget; 352 335 336 + octep_vf_update_pkt(ioq_vector->iq, ioq_vector->oq); 353 337 if (likely(napi_complete_done(napi, rx_done))) 354 338 octep_vf_enable_ioq_irq(ioq_vector->iq, ioq_vector->oq); 355 339
+20 -8
drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.c
··· 325 325 struct octep_vf_oq *oq) 326 326 { 327 327 u32 pkt_count, new_pkts; 328 + u32 last_pkt_count, pkts_pending; 328 329 329 330 pkt_count = readl(oq->pkts_sent_reg); 330 - new_pkts = pkt_count - oq->last_pkt_count; 331 + last_pkt_count = READ_ONCE(oq->last_pkt_count); 332 + new_pkts = pkt_count - last_pkt_count; 333 + 334 + if (pkt_count < last_pkt_count) { 335 + dev_err(oq->dev, "OQ-%u pkt_count(%u) < oq->last_pkt_count(%u)\n", 336 + oq->q_no, pkt_count, last_pkt_count); 337 + } 331 338 332 339 /* Clear the hardware packets counter register if the rx queue is 333 340 * being processed continuously with-in a single interrupt and ··· 346 339 pkt_count = readl(oq->pkts_sent_reg); 347 340 new_pkts += pkt_count; 348 341 } 349 - oq->last_pkt_count = pkt_count; 350 - oq->pkts_pending += new_pkts; 342 + WRITE_ONCE(oq->last_pkt_count, pkt_count); 343 + pkts_pending = READ_ONCE(oq->pkts_pending); 344 + WRITE_ONCE(oq->pkts_pending, (pkts_pending + new_pkts)); 351 345 return new_pkts; 352 346 } 353 347 ··· 377 369 struct sk_buff *skb; 378 370 u32 read_idx; 379 371 380 - read_idx = oq->host_read_idx; 372 + read_idx = READ_ONCE(oq->host_read_idx); 381 373 rx_bytes = 0; 382 374 desc_used = 0; 383 375 for (pkt = 0; pkt < pkts_to_process; pkt++) { ··· 471 463 napi_gro_receive(oq->napi, skb); 472 464 } 473 465 474 - oq->host_read_idx = read_idx; 466 + WRITE_ONCE(oq->host_read_idx, read_idx); 475 467 oq->refill_count += desc_used; 476 468 oq->stats->packets += pkt; 477 469 oq->stats->bytes += rx_bytes; ··· 494 486 { 495 487 u32 pkts_available, pkts_processed, total_pkts_processed; 496 488 struct octep_vf_device *oct = oq->octep_vf_dev; 489 + u32 pkts_pending; 497 490 498 491 pkts_available = 0; 499 492 pkts_processed = 0; 500 493 total_pkts_processed = 0; 501 494 while (total_pkts_processed < budget) { 502 495 /* update pending count only when current one exhausted */ 503 - if (oq->pkts_pending == 0) 496 + pkts_pending = READ_ONCE(oq->pkts_pending); 497 + if (pkts_pending == 0) 504 498 octep_vf_oq_check_hw_for_pkts(oct, oq); 499 + pkts_pending = READ_ONCE(oq->pkts_pending); 505 500 pkts_available = min(budget - total_pkts_processed, 506 - oq->pkts_pending); 501 + pkts_pending); 507 502 if (!pkts_available) 508 503 break; 509 504 510 505 pkts_processed = __octep_vf_oq_process_rx(oct, oq, 511 506 pkts_available); 512 - oq->pkts_pending -= pkts_processed; 507 + pkts_pending = READ_ONCE(oq->pkts_pending); 508 + WRITE_ONCE(oq->pkts_pending, (pkts_pending - pkts_processed)); 513 509 total_pkts_processed += pkts_processed; 514 510 } 515 511
+12 -3
drivers/net/ethernet/mediatek/mtk_eth_soc.c
··· 3748 3748 mtk_stop(dev); 3749 3749 3750 3750 old_prog = rcu_replace_pointer(eth->prog, prog, lockdep_rtnl_is_held()); 3751 + 3752 + if (netif_running(dev) && need_update) { 3753 + int err; 3754 + 3755 + err = mtk_open(dev); 3756 + if (err) { 3757 + rcu_assign_pointer(eth->prog, old_prog); 3758 + 3759 + return err; 3760 + } 3761 + } 3762 + 3751 3763 if (old_prog) 3752 3764 bpf_prog_put(old_prog); 3753 - 3754 - if (netif_running(dev) && need_update) 3755 - return mtk_open(dev); 3756 3765 3757 3766 return 0; 3758 3767 }
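The reordering above is the fix: bpf_prog_put() on the old program now happens only after mtk_open() has succeeded, and a failed reopen restores eth->prog to the old program instead of returning with a half-installed replacement. A small userspace model of that rollback rule (all names illustrative):

#include <stdio.h>

struct prog { const char *name; };

static struct prog *installed;

static int restart_device(int should_fail)
{
	return should_fail ? -1 : 0;
}

/* Install the new program; if the device fails to restart, put the
 * old pointer back and report the error. The old program may be
 * released only after a successful restart. */
static int replace_prog(struct prog *new_prog, int should_fail)
{
	struct prog *old = installed;

	installed = new_prog;
	if (restart_device(should_fail)) {
		installed = old;	/* roll back, keep old alive */
		return -1;
	}
	if (old)
		printf("released %s\n", old->name);
	return 0;
}

int main(void)
{
	struct prog a = { "old" }, b = { "new" };

	replace_prog(&a, 0);
	printf("swap ok: %d\n", replace_prog(&b, 0));
	return 0;
}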
+18 -5
drivers/net/ethernet/microsoft/mana/mana_en.c
··· 1770 1770 ndev = txq->ndev; 1771 1771 apc = netdev_priv(ndev); 1772 1772 1773 + /* Limit CQEs polled to 4 wraparounds of the CQ to ensure the 1774 + * doorbell can be rung in time for the hardware's requirement 1775 + * of at least one doorbell ring every 8 wraparounds. 1776 + */ 1773 1777 comp_read = mana_gd_poll_cq(cq->gdma_cq, completions, 1774 - CQE_POLLING_BUFFER); 1778 + min((cq->gdma_cq->queue_size / 1779 + COMP_ENTRY_SIZE) * 4, 1780 + CQE_POLLING_BUFFER)); 1775 1781 1776 1782 if (comp_read < 1) 1777 1783 return; ··· 2162 2156 struct mana_rxq *rxq = cq->rxq; 2163 2157 int comp_read, i; 2164 2158 2165 - comp_read = mana_gd_poll_cq(cq->gdma_cq, comp, CQE_POLLING_BUFFER); 2159 + /* Limit CQEs polled to 4 wraparounds of the CQ to ensure the 2160 + * doorbell can be rung in time for the hardware's requirement 2161 + * of at least one doorbell ring every 8 wraparounds. 2162 + */ 2163 + comp_read = mana_gd_poll_cq(cq->gdma_cq, comp, 2164 + min((cq->gdma_cq->queue_size / 2165 + COMP_ENTRY_SIZE) * 4, 2166 + CQE_POLLING_BUFFER)); 2166 2167 WARN_ON_ONCE(comp_read > CQE_POLLING_BUFFER); 2167 2168 2168 2169 rxq->xdp_flush = false; ··· 2214 2201 mana_gd_ring_cq(gdma_queue, SET_ARM_BIT); 2215 2202 cq->work_done_since_doorbell = 0; 2216 2203 napi_complete_done(&cq->napi, w); 2217 - } else if (cq->work_done_since_doorbell > 2218 - cq->gdma_cq->queue_size / COMP_ENTRY_SIZE * 4) { 2204 + } else if (cq->work_done_since_doorbell >= 2205 + (cq->gdma_cq->queue_size / COMP_ENTRY_SIZE) * 4) { 2219 2206 /* MANA hardware requires at least one doorbell ring every 8 2220 2207 * wraparounds of CQ even if there is no need to arm the CQ. 2221 - * This driver rings the doorbell as soon as we have exceeded 2208 + * This driver rings the doorbell as soon as it has processed 2222 2209 * 4 wraparounds. 2223 2210 */ 2224 2211 mana_gd_ring_cq(gdma_queue, 0);
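The new cap is plain arithmetic on the CQ geometry: one wraparound holds queue_size / COMP_ENTRY_SIZE completions, polling is limited to four wraparounds' worth, and the on-stack buffer size still bounds the batch. Worked through with illustrative numbers (the sizes below are examples, not values from the mana headers):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t queue_size = 16384;	/* CQ bytes (example) */
	uint32_t comp_entry_size = 64;	/* bytes per CQE (example) */
	uint32_t polling_buffer = 512;	/* on-stack CQE array (example) */

	uint32_t per_wrap = queue_size / comp_entry_size;	/* 256 */
	uint32_t cap = per_wrap * 4;	/* ring well before 8 wraps */

	if (cap > polling_buffer)
		cap = polling_buffer;
	printf("poll at most %u CQEs per pass\n", cap);	/* 512 */
	return 0;
}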
+1
drivers/net/ethernet/stmicro/stmmac/stmmac.h
··· 323 323 void __iomem *ptpaddr; 324 324 void __iomem *estaddr; 325 325 unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)]; 326 + unsigned int num_double_vlans; 326 327 int sfty_irq; 327 328 int sfty_ce_irq; 328 329 int sfty_ue_irq;
+47 -6
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 156 156 static void stmmac_flush_tx_descriptors(struct stmmac_priv *priv, int queue); 157 157 static void stmmac_set_dma_operation_mode(struct stmmac_priv *priv, u32 txmode, 158 158 u32 rxmode, u32 chan); 159 + static int stmmac_vlan_restore(struct stmmac_priv *priv); 159 160 160 161 #ifdef CONFIG_DEBUG_FS 161 162 static const struct net_device_ops stmmac_netdev_ops; ··· 4108 4107 4109 4108 phylink_start(priv->phylink); 4110 4109 4110 + stmmac_vlan_restore(priv); 4111 + 4111 4112 ret = stmmac_request_irq(dev); 4112 4113 if (ret) 4113 4114 goto irq_error; ··· 6769 6766 hash = 0; 6770 6767 } 6771 6768 6769 + if (!netif_running(priv->dev)) 6770 + return 0; 6771 + 6772 6772 return stmmac_update_vlan_hash(priv, priv->hw, hash, pmatch, is_double); 6773 6773 } 6774 6774 ··· 6781 6775 static int stmmac_vlan_rx_add_vid(struct net_device *ndev, __be16 proto, u16 vid) 6782 6776 { 6783 6777 struct stmmac_priv *priv = netdev_priv(ndev); 6778 + unsigned int num_double_vlans; 6784 6779 bool is_double = false; 6785 6780 int ret; 6786 6781 ··· 6793 6786 is_double = true; 6794 6787 6795 6788 set_bit(vid, priv->active_vlans); 6796 - ret = stmmac_vlan_update(priv, is_double); 6789 + num_double_vlans = priv->num_double_vlans + is_double; 6790 + ret = stmmac_vlan_update(priv, num_double_vlans); 6797 6791 if (ret) { 6798 6792 clear_bit(vid, priv->active_vlans); 6799 6793 goto err_pm_put; ··· 6802 6794 6803 6795 if (priv->hw->num_vlan) { 6804 6796 ret = stmmac_add_hw_vlan_rx_fltr(priv, ndev, priv->hw, proto, vid); 6805 - if (ret) 6797 + if (ret) { 6798 + clear_bit(vid, priv->active_vlans); 6799 + stmmac_vlan_update(priv, priv->num_double_vlans); 6806 6800 goto err_pm_put; 6801 + } 6807 6802 } 6803 + 6804 + priv->num_double_vlans = num_double_vlans; 6805 + 6808 6806 err_pm_put: 6809 6807 pm_runtime_put(priv->device); 6810 6808 ··· 6823 6809 static int stmmac_vlan_rx_kill_vid(struct net_device *ndev, __be16 proto, u16 vid) 6824 6810 { 6825 6811 struct stmmac_priv *priv = netdev_priv(ndev); 6812 + unsigned int num_double_vlans; 6826 6813 bool is_double = false; 6827 6814 int ret; 6828 6815 ··· 6835 6820 is_double = true; 6836 6821 6837 6822 clear_bit(vid, priv->active_vlans); 6823 + num_double_vlans = priv->num_double_vlans - is_double; 6824 + ret = stmmac_vlan_update(priv, num_double_vlans); 6825 + if (ret) { 6826 + set_bit(vid, priv->active_vlans); 6827 + goto del_vlan_error; 6828 + } 6838 6829 6839 6830 if (priv->hw->num_vlan) { 6840 6831 ret = stmmac_del_hw_vlan_rx_fltr(priv, ndev, priv->hw, proto, vid); 6841 - if (ret) 6832 + if (ret) { 6833 + set_bit(vid, priv->active_vlans); 6834 + stmmac_vlan_update(priv, priv->num_double_vlans); 6842 6835 goto del_vlan_error; 6836 + } 6843 6837 } 6844 6838 6845 - ret = stmmac_vlan_update(priv, is_double); 6839 + priv->num_double_vlans = num_double_vlans; 6846 6840 6847 6841 del_vlan_error: 6848 6842 pm_runtime_put(priv->device); 6843 + 6844 + return ret; 6845 + } 6846 + 6847 + static int stmmac_vlan_restore(struct stmmac_priv *priv) 6848 + { 6849 + int ret; 6850 + 6851 + if (!(priv->dev->features & NETIF_F_VLAN_FEATURES)) 6852 + return 0; 6853 + 6854 + if (priv->hw->num_vlan) 6855 + stmmac_restore_hw_vlan_rx_fltr(priv, priv->dev, priv->hw); 6856 + 6857 + ret = stmmac_vlan_update(priv, priv->num_double_vlans); 6858 + if (ret) 6859 + netdev_err(priv->dev, "Failed to restore VLANs\n"); 6849 6860 6850 6861 return ret; 6851 6862 } ··· 8300 8259 stmmac_init_coalesce(priv); 8301 8260 phylink_rx_clk_stop_block(priv->phylink); 8302 8261 stmmac_set_rx_mode(ndev); 8303 - 8304 - 
stmmac_restore_hw_vlan_rx_fltr(priv, ndev, priv->hw); 8305 8262 phylink_rx_clk_stop_unblock(priv->phylink); 8263 + 8264 + stmmac_vlan_restore(priv); 8306 8265 8307 8266 stmmac_enable_all_queues(priv); 8308 8267 stmmac_enable_all_dma_irq(priv);
+31 -29
drivers/net/ethernet/stmicro/stmmac/stmmac_vlan.c
··· 76 76 } 77 77 78 78 hw->vlan_filter[0] = vid; 79 - vlan_write_single(dev, vid); 79 + 80 + if (netif_running(dev)) 81 + vlan_write_single(dev, vid); 80 82 81 83 return 0; 82 84 } ··· 99 97 return -EPERM; 100 98 } 101 99 102 - ret = vlan_write_filter(dev, hw, index, val); 100 + if (netif_running(dev)) { 101 + ret = vlan_write_filter(dev, hw, index, val); 102 + if (ret) 103 + return ret; 104 + } 103 105 104 - if (!ret) 105 - hw->vlan_filter[index] = val; 106 + hw->vlan_filter[index] = val; 106 107 107 - return ret; 108 + return 0; 108 109 } 109 110 110 111 static int vlan_del_hw_rx_fltr(struct net_device *dev, ··· 120 115 if (hw->num_vlan == 1) { 121 116 if ((hw->vlan_filter[0] & VLAN_TAG_VID) == vid) { 122 117 hw->vlan_filter[0] = 0; 123 - vlan_write_single(dev, 0); 118 + 119 + if (netif_running(dev)) 120 + vlan_write_single(dev, 0); 124 121 } 125 122 return 0; 126 123 } ··· 131 124 for (i = 0; i < hw->num_vlan; i++) { 132 125 if ((hw->vlan_filter[i] & VLAN_TAG_DATA_VEN) && 133 126 ((hw->vlan_filter[i] & VLAN_TAG_DATA_VID) == vid)) { 134 - ret = vlan_write_filter(dev, hw, i, 0); 135 127 136 - if (!ret) 137 - hw->vlan_filter[i] = 0; 138 - else 139 - return ret; 128 + if (netif_running(dev)) { 129 + ret = vlan_write_filter(dev, hw, i, 0); 130 + if (ret) 131 + return ret; 132 + } 133 + 134 + hw->vlan_filter[i] = 0; 140 135 } 141 136 } 142 137 143 - return ret; 138 + return 0; 144 139 } 145 140 146 141 static void vlan_restore_hw_rx_fltr(struct net_device *dev, 147 142 struct mac_device_info *hw) 148 143 { 149 - void __iomem *ioaddr = hw->pcsr; 150 - u32 value; 151 - u32 hash; 152 - u32 val; 153 144 int i; 154 145 155 146 /* Single Rx VLAN Filter */ ··· 157 152 } 158 153 159 154 /* Extended Rx VLAN Filter Enable */ 160 - for (i = 0; i < hw->num_vlan; i++) { 161 - if (hw->vlan_filter[i] & VLAN_TAG_DATA_VEN) { 162 - val = hw->vlan_filter[i]; 163 - vlan_write_filter(dev, hw, i, val); 164 - } 165 - } 166 - 167 - hash = readl(ioaddr + VLAN_HASH_TABLE); 168 - if (hash & VLAN_VLHT) { 169 - value = readl(ioaddr + VLAN_TAG); 170 - value |= VLAN_VTHM; 171 - writel(value, ioaddr + VLAN_TAG); 172 - } 155 + for (i = 0; i < hw->num_vlan; i++) 156 + vlan_write_filter(dev, hw, i, hw->vlan_filter[i]); 173 157 } 174 158 175 159 static void vlan_update_hash(struct mac_device_info *hw, u32 hash, ··· 177 183 value |= VLAN_EDVLP; 178 184 value |= VLAN_ESVL; 179 185 value |= VLAN_DOVLTC; 186 + } else { 187 + value &= ~VLAN_EDVLP; 188 + value &= ~VLAN_ESVL; 189 + value &= ~VLAN_DOVLTC; 180 190 } 181 191 182 192 writel(value, ioaddr + VLAN_TAG); ··· 191 193 value |= VLAN_EDVLP; 192 194 value |= VLAN_ESVL; 193 195 value |= VLAN_DOVLTC; 196 + } else { 197 + value &= ~VLAN_EDVLP; 198 + value &= ~VLAN_ESVL; 199 + value &= ~VLAN_DOVLTC; 194 200 } 195 201 196 202 writel(value | perfect_match, ioaddr + VLAN_TAG);
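The netif_running() checks above turn hw->vlan_filter[] into a software shadow of the hardware filter: the shadow is updated unconditionally, the registers are written only while the interface is up, and vlan_restore_hw_rx_fltr() replays the shadow when it comes back up. A compact userspace model of that shadow-register pattern (illustrative):

#include <stdio.h>
#include <stdbool.h>

#define NUM_FILTERS 4

static unsigned int shadow[NUM_FILTERS];	/* source of truth */
static bool running;

static void hw_write(int idx, unsigned int val)
{
	printf("hw filter[%d] <- 0x%x\n", idx, val);
}

/* Always update the shadow; touch hardware only while running. */
static void set_filter(int idx, unsigned int val)
{
	shadow[idx] = val;
	if (running)
		hw_write(idx, val);
}

/* Replay the shadow after the device comes back up. */
static void restore_filters(void)
{
	for (int i = 0; i < NUM_FILTERS; i++)
		hw_write(i, shadow[i]);
}

int main(void)
{
	set_filter(0, 0x64);	/* interface down: shadow only */
	running = true;
	restore_filters();	/* replay on open/resume */
	set_filter(1, 0x65);	/* interface up: shadow + hardware */
	return 0;
}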
+1 -1
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 391 391 cpsw_ale_set_allmulti(common->ale, 392 392 ndev->flags & IFF_ALLMULTI, port->port_id); 393 393 394 - port_mask = ALE_PORT_HOST; 394 + port_mask = BIT(port->port_id) | ALE_PORT_HOST; 395 395 /* Clear all mcast from ALE */ 396 396 cpsw_ale_flush_multicast(common->ale, port_mask, -1); 397 397
+4 -5
drivers/net/ethernet/ti/cpsw_ale.c
··· 450 450 ale->port_mask_bits); 451 451 if ((mask & port_mask) == 0) 452 452 return; /* ports dont intersect, not interested */ 453 - mask &= ~port_mask; 453 + mask &= (~port_mask | ALE_PORT_HOST); 454 454 455 - /* free if only remaining port is host port */ 456 - if (mask) 455 + if (mask == 0x0 || mask == ALE_PORT_HOST) 456 + cpsw_ale_set_entry_type(ale_entry, ALE_TYPE_FREE); 457 + else 457 458 cpsw_ale_set_port_mask(ale_entry, mask, 458 459 ale->port_mask_bits); 459 - else 460 - cpsw_ale_set_entry_type(ale_entry, ALE_TYPE_FREE); 461 460 } 462 461 463 462 int cpsw_ale_flush_multicast(struct cpsw_ale *ale, int port_mask, int vid)
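The corrected flush logic never clears the host-port bit while removing ports from a multicast entry, and frees the entry only once nothing but the host port (or nothing at all) remains. Worked through on small masks, assuming, as in cpsw_ale, that the host port is bit 0:

#include <stdio.h>

#define ALE_PORT_HOST 0x1	/* host port bit, as in cpsw_ale */

static void flush_entry(unsigned int mask, unsigned int port_mask)
{
	/* clear flushed ports but never drop the host bit */
	mask &= (~port_mask | ALE_PORT_HOST);

	if (mask == 0 || mask == ALE_PORT_HOST)
		printf("free entry\n");
	else
		printf("keep entry, mask=0x%x\n", mask);
}

int main(void)
{
	flush_entry(0x7, 0x3);	/* host+p1+p2, flush host+p1 -> keep 0x5 */
	flush_entry(0x3, 0x3);	/* host+p1, flush host+p1 -> free */
	return 0;
}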
+8
drivers/net/ethernet/ti/icssg/icssg_prueth.c
··· 273 273 if (ret) 274 274 goto disable_class; 275 275 276 + /* Reset link state to force reconfiguration in 277 + * emac_adjust_link(). Without this, if the link was already up 278 + * before restart, emac_adjust_link() won't detect any state 279 + * change and will skip critical configuration like writing 280 + * speed to firmware. 281 + */ 282 + emac->link = 0; 283 + 276 284 mutex_lock(&emac->ndev->phydev->lock); 277 285 emac_adjust_link(emac->ndev); 278 286 mutex_unlock(&emac->ndev->phydev->lock);
+1 -1
drivers/net/netconsole.c
··· 617 617 bool release_enabled; 618 618 619 619 dynamic_netconsole_mutex_lock(); 620 - release_enabled = !!(nt->sysdata_fields & SYSDATA_TASKNAME); 620 + release_enabled = !!(nt->sysdata_fields & SYSDATA_RELEASE); 621 621 dynamic_netconsole_mutex_unlock(); 622 622 623 623 return sysfs_emit(buf, "%d\n", release_enabled);
+1
drivers/net/usb/r8152.c
··· 10054 10054 { USB_DEVICE(VENDOR_ID_DLINK, 0xb301) }, 10055 10055 { USB_DEVICE(VENDOR_ID_DELL, 0xb097) }, 10056 10056 { USB_DEVICE(VENDOR_ID_ASUS, 0x1976) }, 10057 + { USB_DEVICE(VENDOR_ID_TRENDNET, 0xe02b) }, 10057 10058 {} 10058 10059 }; 10059 10060
+5
drivers/net/vxlan/vxlan_core.c
··· 2130 2130 { 2131 2131 struct ipv6hdr *pip6; 2132 2132 2133 + /* check if nd_tbl is not initialized due to 2134 + * ipv6.disable=1 being set during boot 2135 + */ 2136 + if (!ipv6_stub->nd_tbl) 2137 + return false; 2133 2138 if (!pskb_may_pull(skb, sizeof(struct ipv6hdr))) 2134 2139 return false; 2135 2140 pip6 = ipv6_hdr(skb);

+3 -3
drivers/net/wireless/ath/ath12k/mac.c
··· 5430 5430 ar->last_tx_power_update)) 5431 5431 goto send_tx_power; 5432 5432 5433 - params.pdev_id = ar->pdev->pdev_id; 5433 + params.pdev_id = ath12k_mac_get_target_pdev_id(ar); 5434 5434 params.vdev_id = arvif->vdev_id; 5435 5435 params.stats_id = WMI_REQUEST_PDEV_STAT; 5436 5436 ret = ath12k_mac_get_fw_stats(ar, &params); ··· 13452 13452 /* TODO: Use real NF instead of default one. */ 13453 13453 signal = rate_info.rssi_comb; 13454 13454 13455 - params.pdev_id = ar->pdev->pdev_id; 13455 + params.pdev_id = ath12k_mac_get_target_pdev_id(ar); 13456 13456 params.vdev_id = 0; 13457 13457 params.stats_id = WMI_REQUEST_VDEV_STAT; 13458 13458 ··· 13580 13580 spin_unlock_bh(&ar->ab->dp->dp_lock); 13581 13581 13582 13582 if (!signal && ahsta->ahvif->vdev_type == WMI_VDEV_TYPE_STA) { 13583 - params.pdev_id = ar->pdev->pdev_id; 13583 + params.pdev_id = ath12k_mac_get_target_pdev_id(ar); 13584 13584 params.vdev_id = 0; 13585 13585 params.stats_id = WMI_REQUEST_VDEV_STAT; 13586 13586
+13 -23
drivers/net/wireless/ath/ath12k/wmi.c
··· 8241 8241 struct ath12k_fw_stats *stats = parse->stats; 8242 8242 struct ath12k *ar; 8243 8243 struct ath12k_link_vif *arvif; 8244 - struct ieee80211_sta *sta; 8245 - struct ath12k_sta *ahsta; 8246 8244 struct ath12k_link_sta *arsta; 8247 8245 int i, ret = 0; 8248 8246 const void *data = ptr; ··· 8276 8278 8277 8279 arvif = ath12k_mac_get_arvif(ar, le32_to_cpu(src->vdev_id)); 8278 8280 if (arvif) { 8279 - sta = ieee80211_find_sta_by_ifaddr(ath12k_ar_to_hw(ar), 8280 - arvif->bssid, 8281 - NULL); 8282 - if (sta) { 8283 - ahsta = ath12k_sta_to_ahsta(sta); 8284 - arsta = &ahsta->deflink; 8281 + spin_lock_bh(&ab->base_lock); 8282 + arsta = ath12k_link_sta_find_by_addr(ab, arvif->bssid); 8283 + if (arsta) { 8285 8284 arsta->rssi_beacon = le32_to_cpu(src->beacon_snr); 8286 8285 ath12k_dbg(ab, ATH12K_DBG_WMI, 8287 8286 "wmi stats vdev id %d snr %d\n", 8288 8287 src->vdev_id, src->beacon_snr); 8289 8288 } else { 8290 - ath12k_dbg(ab, ATH12K_DBG_WMI, 8291 - "not found station bssid %pM for vdev stat\n", 8292 - arvif->bssid); 8289 + ath12k_warn(ab, 8290 + "not found link sta with bssid %pM for vdev stat\n", 8291 + arvif->bssid); 8293 8292 } 8293 + spin_unlock_bh(&ab->base_lock); 8294 8294 } 8295 8295 8296 8296 data += sizeof(*src); ··· 8359 8363 struct ath12k_fw_stats *stats = parse->stats; 8360 8364 struct ath12k_link_vif *arvif; 8361 8365 struct ath12k_link_sta *arsta; 8362 - struct ieee80211_sta *sta; 8363 - struct ath12k_sta *ahsta; 8364 8366 struct ath12k *ar; 8365 8367 int vdev_id; 8366 8368 int j; ··· 8394 8400 "stats bssid %pM vif %p\n", 8395 8401 arvif->bssid, arvif->ahvif->vif); 8396 8402 8397 - sta = ieee80211_find_sta_by_ifaddr(ath12k_ar_to_hw(ar), 8398 - arvif->bssid, 8399 - NULL); 8400 - if (!sta) { 8401 - ath12k_dbg(ab, ATH12K_DBG_WMI, 8402 - "not found station of bssid %pM for rssi chain\n", 8403 - arvif->bssid); 8403 + guard(spinlock_bh)(&ab->base_lock); 8404 + arsta = ath12k_link_sta_find_by_addr(ab, arvif->bssid); 8405 + if (!arsta) { 8406 + ath12k_warn(ab, 8407 + "not found link sta with bssid %pM for rssi chain\n", 8408 + arvif->bssid); 8404 8409 return -EPROTO; 8405 8410 } 8406 - 8407 - ahsta = ath12k_sta_to_ahsta(sta); 8408 - arsta = &ahsta->deflink; 8409 8411 8410 8412 BUILD_BUG_ON(ARRAY_SIZE(arsta->chain_signal) > 8411 8413 ARRAY_SIZE(stats_rssi->rssi_avg_beacon));
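The second hunk above switches to guard(spinlock_bh)(), the scope-based lock guard from linux/cleanup.h: the lock is dropped automatically on every exit from the scope, including the early -EPROTO return. A userspace analog of the same mechanism built on the compiler's cleanup attribute (GCC/Clang; illustrative, with a mutex standing in for the spinlock):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t base_lock = PTHREAD_MUTEX_INITIALIZER;

static void unlockp(pthread_mutex_t **m)
{
	pthread_mutex_unlock(*m);
}

/* The cleanup handler runs when 'guard' leaves scope, so early
 * returns cannot leak the lock. */
static int lookup_sta(int found)
{
	pthread_mutex_t *guard __attribute__((cleanup(unlockp))) =
		&base_lock;

	pthread_mutex_lock(guard);
	if (!found)
		return -71;	/* -EPROTO; lock still released */
	return 0;
}

int main(void)
{
	printf("%d %d\n", lookup_sta(1), lookup_sta(0));
	return 0;
}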
+1
drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
··· 413 413 u32 val; 414 414 415 415 if (ieee80211_is_action(fc) && 416 + skb->len >= IEEE80211_MIN_ACTION_SIZE + 1 + 1 + 2 && 416 417 mgmt->u.action.category == WLAN_CATEGORY_BACK && 417 418 mgmt->u.action.u.addba_req.action_code == WLAN_ACTION_ADDBA_REQ) { 418 419 u16 capab = le16_to_cpu(mgmt->u.action.u.addba_req.capab);
+1
drivers/net/wireless/mediatek/mt76/mt7925/mac.c
··· 668 668 u32 val; 669 669 670 670 if (ieee80211_is_action(fc) && 671 + skb->len >= IEEE80211_MIN_ACTION_SIZE + 1 && 671 672 mgmt->u.action.category == WLAN_CATEGORY_BACK && 672 673 mgmt->u.action.u.addba_req.action_code == WLAN_ACTION_ADDBA_REQ) 673 674 tid = MT_TX_ADDBA;
+1
drivers/net/wireless/mediatek/mt76/mt7996/mac.c
··· 800 800 u32 val; 801 801 802 802 if (ieee80211_is_action(fc) && 803 + skb->len >= IEEE80211_MIN_ACTION_SIZE + 1 && 803 804 mgmt->u.action.category == WLAN_CATEGORY_BACK && 804 805 mgmt->u.action.u.addba_req.action_code == WLAN_ACTION_ADDBA_REQ) { 805 806 if (is_mt7990(&dev->mt76))
+1 -1
drivers/net/wireless/rsi/rsi_91x_mac80211.c
··· 668 668 struct rsi_hw *adapter = hw->priv; 669 669 struct rsi_common *common = adapter->priv; 670 670 struct ieee80211_conf *conf = &hw->conf; 671 - int status = -EOPNOTSUPP; 671 + int status = 0; 672 672 673 673 mutex_lock(&common->mutex); 674 674
+2
drivers/net/wireless/st/cw1200/pm.c
··· 264 264 wiphy_err(priv->hw->wiphy, 265 265 "PM request failed: %d. WoW is disabled.\n", ret); 266 266 cw1200_wow_resume(hw); 267 + mutex_unlock(&priv->conf_mutex); 267 268 return -EBUSY; 268 269 } 269 270 270 271 /* Force resume if event is coming from the device. */ 271 272 if (atomic_read(&priv->bh_rx)) { 272 273 cw1200_wow_resume(hw); 274 + mutex_unlock(&priv->conf_mutex); 273 275 return -EAGAIN; 274 276 } 275 277
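The two added mutex_unlock() calls close error paths that previously returned with conf_mutex still held. The equivalent structure that makes such leaks hard to reintroduce funnels every exit through one unlock point; a userspace model (error codes illustrative):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t conf_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Every error exit from the locked region goes through the single
 * label that drops conf_mutex. */
static int do_suspend(int fail_step)
{
	int ret = 0;

	pthread_mutex_lock(&conf_mutex);
	if (fail_step == 1) {
		ret = -16;	/* -EBUSY */
		goto out;
	}
	if (fail_step == 2) {
		ret = -11;	/* -EAGAIN */
		goto out;
	}
	/* ... suspend work ... */
out:
	pthread_mutex_unlock(&conf_mutex);
	return ret;
}

int main(void)
{
	printf("%d %d %d\n", do_suspend(0), do_suspend(1), do_suspend(2));
	return 0;
}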
+2 -2
drivers/net/wireless/ti/wlcore/main.c
··· 1875 1875 wl->wow_enabled); 1876 1876 WARN_ON(!wl->wow_enabled); 1877 1877 1878 + mutex_lock(&wl->mutex); 1879 + 1878 1880 ret = pm_runtime_force_resume(wl->dev); 1879 1881 if (ret < 0) { 1880 1882 wl1271_error("ELP wakeup failure!"); ··· 1892 1890 if (test_and_clear_bit(WL1271_FLAG_PENDING_WORK, &wl->flags)) 1893 1891 run_irq_work = true; 1894 1892 spin_unlock_irqrestore(&wl->wl_lock, flags); 1895 - 1896 - mutex_lock(&wl->mutex); 1897 1893 1898 1894 /* test the recovery flag before calling any SDIO functions */ 1899 1895 pending_recovery = test_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS,
+17 -17
drivers/net/xen-netfront.c
··· 1646 1646 1647 1647 /* avoid the race with XDP headroom adjustment */ 1648 1648 wait_event(module_wq, 1649 - xenbus_read_driver_state(np->xbdev->otherend) == 1649 + xenbus_read_driver_state(np->xbdev, np->xbdev->otherend) == 1650 1650 XenbusStateReconfigured); 1651 1651 np->netfront_xdp_enabled = true; 1652 1652 ··· 1764 1764 do { 1765 1765 xenbus_switch_state(dev, XenbusStateInitialising); 1766 1766 err = wait_event_timeout(module_wq, 1767 - xenbus_read_driver_state(dev->otherend) != 1767 + xenbus_read_driver_state(dev, dev->otherend) != 1768 1768 XenbusStateClosed && 1769 - xenbus_read_driver_state(dev->otherend) != 1769 + xenbus_read_driver_state(dev, dev->otherend) != 1770 1770 XenbusStateUnknown, XENNET_TIMEOUT); 1771 1771 } while (!err); 1772 1772 ··· 2626 2626 { 2627 2627 int ret; 2628 2628 2629 - if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed) 2629 + if (xenbus_read_driver_state(dev, dev->otherend) == XenbusStateClosed) 2630 2630 return; 2631 2631 do { 2632 2632 xenbus_switch_state(dev, XenbusStateClosing); 2633 2633 ret = wait_event_timeout(module_wq, 2634 - xenbus_read_driver_state(dev->otherend) == 2635 - XenbusStateClosing || 2636 - xenbus_read_driver_state(dev->otherend) == 2637 - XenbusStateClosed || 2638 - xenbus_read_driver_state(dev->otherend) == 2639 - XenbusStateUnknown, 2640 - XENNET_TIMEOUT); 2634 + xenbus_read_driver_state(dev, dev->otherend) == 2635 + XenbusStateClosing || 2636 + xenbus_read_driver_state(dev, dev->otherend) == 2637 + XenbusStateClosed || 2638 + xenbus_read_driver_state(dev, dev->otherend) == 2639 + XenbusStateUnknown, 2640 + XENNET_TIMEOUT); 2641 2641 } while (!ret); 2642 2642 2643 - if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed) 2643 + if (xenbus_read_driver_state(dev, dev->otherend) == XenbusStateClosed) 2644 2644 return; 2645 2645 2646 2646 do { 2647 2647 xenbus_switch_state(dev, XenbusStateClosed); 2648 2648 ret = wait_event_timeout(module_wq, 2649 - xenbus_read_driver_state(dev->otherend) == 2650 - XenbusStateClosed || 2651 - xenbus_read_driver_state(dev->otherend) == 2652 - XenbusStateUnknown, 2653 - XENNET_TIMEOUT); 2649 + xenbus_read_driver_state(dev, dev->otherend) == 2650 + XenbusStateClosed || 2651 + xenbus_read_driver_state(dev, dev->otherend) == 2652 + XenbusStateUnknown, 2653 + XENNET_TIMEOUT); 2654 2654 } while (!ret); 2655 2655 } 2656 2656
+12 -16
drivers/nvme/host/core.c
··· 2046 2046 if (id->nabspf) 2047 2047 boundary = (le16_to_cpu(id->nabspf) + 1) * bs; 2048 2048 } else { 2049 - /* 2050 - * Use the controller wide atomic write unit. This sucks 2051 - * because the limit is defined in terms of logical blocks while 2052 - * namespaces can have different formats, and because there is 2053 - * no clear language in the specification prohibiting different 2054 - * values for different controllers in the subsystem. 2055 - */ 2056 - atomic_bs = (1 + ns->ctrl->subsys->awupf) * bs; 2049 + if (ns->ctrl->awupf) 2050 + dev_info_once(ns->ctrl->device, 2051 + "AWUPF ignored, only NAWUPF accepted\n"); 2052 + atomic_bs = bs; 2057 2053 } 2058 2054 2059 2055 lim->atomic_write_hw_max = atomic_bs; ··· 3218 3222 memcpy(subsys->model, id->mn, sizeof(subsys->model)); 3219 3223 subsys->vendor_id = le16_to_cpu(id->vid); 3220 3224 subsys->cmic = id->cmic; 3221 - subsys->awupf = le16_to_cpu(id->awupf); 3222 3225 3223 3226 /* Versions prior to 1.4 don't necessarily report a valid type */ 3224 3227 if (id->cntrltype == NVME_CTRL_DISC || ··· 3650 3655 dev_pm_qos_expose_latency_tolerance(ctrl->device); 3651 3656 else if (!ctrl->apst_enabled && prev_apst_enabled) 3652 3657 dev_pm_qos_hide_latency_tolerance(ctrl->device); 3658 + ctrl->awupf = le16_to_cpu(id->awupf); 3653 3659 out_free: 3654 3660 kfree(id); 3655 3661 return ret; ··· 4181 4185 4182 4186 nvme_mpath_add_disk(ns, info->anagrpid); 4183 4187 nvme_fault_inject_init(&ns->fault_inject, ns->disk->disk_name); 4184 - 4185 - /* 4186 - * Set ns->disk->device->driver_data to ns so we can access 4187 - * ns->head->passthru_err_log_enabled in 4188 - * nvme_io_passthru_err_log_enabled_[store | show](). 4189 - */ 4190 - dev_set_drvdata(disk_to_dev(ns->disk), ns); 4191 4188 4192 4189 return; 4193 4190 ··· 4853 4864 ret = blk_mq_alloc_tag_set(set); 4854 4865 if (ret) 4855 4866 return ret; 4867 + 4868 + /* 4869 + * If a previous admin queue exists (e.g., from before a reset), 4870 + * put it now before allocating a new one to avoid orphaning it. 4871 + */ 4872 + if (ctrl->admin_q) 4873 + blk_put_queue(ctrl->admin_q); 4856 4874 4857 4875 ctrl->admin_q = blk_mq_alloc_queue(set, &lim, NULL); 4858 4876 if (IS_ERR(ctrl->admin_q)) {
+2 -2
drivers/nvme/host/fabrics.c
··· 1290 1290 kfree(opts->subsysnqn); 1291 1291 kfree(opts->host_traddr); 1292 1292 kfree(opts->host_iface); 1293 - kfree(opts->dhchap_secret); 1294 - kfree(opts->dhchap_ctrl_secret); 1293 + kfree_sensitive(opts->dhchap_secret); 1294 + kfree_sensitive(opts->dhchap_ctrl_secret); 1295 1295 kfree(opts); 1296 1296 } 1297 1297 EXPORT_SYMBOL_GPL(nvmf_free_options);
+6 -8
drivers/nvme/host/multipath.c
··· 1300 1300 mutex_lock(&head->subsys->lock); 1301 1301 /* 1302 1302 * We are called when all paths have been removed, and at that point 1303 - * head->list is expected to be empty. However, nvme_remove_ns() and 1303 + * head->list is expected to be empty. However, nvme_ns_remove() and 1304 1304 * nvme_init_ns_head() can run concurrently and so if head->delayed_ 1305 1305 * removal_secs is configured, it is possible that by the time we reach 1306 1306 * this point, head->list may no longer be empty. Therefore, we recheck ··· 1310 1310 if (!list_empty(&head->list)) 1311 1311 goto out; 1312 1312 1313 - if (head->delayed_removal_secs) { 1314 - /* 1315 - * Ensure that no one could remove this module while the head 1316 - * remove work is pending. 1317 - */ 1318 - if (!try_module_get(THIS_MODULE)) 1319 - goto out; 1313 + /* 1314 + * Ensure that no one could remove this module while the head 1315 + * remove work is pending. 1316 + */ 1317 + if (head->delayed_removal_secs && try_module_get(THIS_MODULE)) { 1320 1318 mod_delayed_work(nvme_wq, &head->remove_work, 1321 1319 head->delayed_removal_secs * HZ); 1322 1320 } else {
+56 -1
drivers/nvme/host/nvme.h
··· 180 180 NVME_QUIRK_DMAPOOL_ALIGN_512 = (1 << 22), 181 181 }; 182 182 183 + static inline char *nvme_quirk_name(enum nvme_quirks q) 184 + { 185 + switch (q) { 186 + case NVME_QUIRK_STRIPE_SIZE: 187 + return "stripe_size"; 188 + case NVME_QUIRK_IDENTIFY_CNS: 189 + return "identify_cns"; 190 + case NVME_QUIRK_DEALLOCATE_ZEROES: 191 + return "deallocate_zeroes"; 192 + case NVME_QUIRK_DELAY_BEFORE_CHK_RDY: 193 + return "delay_before_chk_rdy"; 194 + case NVME_QUIRK_NO_APST: 195 + return "no_apst"; 196 + case NVME_QUIRK_NO_DEEPEST_PS: 197 + return "no_deepest_ps"; 198 + case NVME_QUIRK_QDEPTH_ONE: 199 + return "qdepth_one"; 200 + case NVME_QUIRK_MEDIUM_PRIO_SQ: 201 + return "medium_prio_sq"; 202 + case NVME_QUIRK_IGNORE_DEV_SUBNQN: 203 + return "ignore_dev_subnqn"; 204 + case NVME_QUIRK_DISABLE_WRITE_ZEROES: 205 + return "disable_write_zeroes"; 206 + case NVME_QUIRK_SIMPLE_SUSPEND: 207 + return "simple_suspend"; 208 + case NVME_QUIRK_SINGLE_VECTOR: 209 + return "single_vector"; 210 + case NVME_QUIRK_128_BYTES_SQES: 211 + return "128_bytes_sqes"; 212 + case NVME_QUIRK_SHARED_TAGS: 213 + return "shared_tags"; 214 + case NVME_QUIRK_NO_TEMP_THRESH_CHANGE: 215 + return "no_temp_thresh_change"; 216 + case NVME_QUIRK_NO_NS_DESC_LIST: 217 + return "no_ns_desc_list"; 218 + case NVME_QUIRK_DMA_ADDRESS_BITS_48: 219 + return "dma_address_bits_48"; 220 + case NVME_QUIRK_SKIP_CID_GEN: 221 + return "skip_cid_gen"; 222 + case NVME_QUIRK_BOGUS_NID: 223 + return "bogus_nid"; 224 + case NVME_QUIRK_NO_SECONDARY_TEMP_THRESH: 225 + return "no_secondary_temp_thresh"; 226 + case NVME_QUIRK_FORCE_NO_SIMPLE_SUSPEND: 227 + return "force_no_simple_suspend"; 228 + case NVME_QUIRK_BROKEN_MSI: 229 + return "broken_msi"; 230 + case NVME_QUIRK_DMAPOOL_ALIGN_512: 231 + return "dmapool_align_512"; 232 + } 233 + 234 + return "unknown"; 235 + } 236 + 183 237 /* 184 238 * Common request structure for NVMe passthrough. All drivers must have 185 239 * this structure as the first member of their request-private data. ··· 464 410 465 411 enum nvme_ctrl_type cntrltype; 466 412 enum nvme_dctype dctype; 413 + 414 + u16 awupf; /* 0's based value. */ 467 415 }; 468 416 469 417 static inline enum nvme_ctrl_state nvme_ctrl_state(struct nvme_ctrl *ctrl) ··· 498 442 u8 cmic; 499 443 enum nvme_subsys_type subtype; 500 444 u16 vendor_id; 501 - u16 awupf; /* 0's based value. */ 502 445 struct ida ns_ida; 503 446 #ifdef CONFIG_NVME_MULTIPATH 504 447 enum nvme_iopolicy iopolicy;
+184 -2
drivers/nvme/host/pci.c
··· 72 72 static_assert(MAX_PRP_RANGE / NVME_CTRL_PAGE_SIZE <= 73 73 (1 /* prp1 */ + NVME_MAX_NR_DESCRIPTORS * PRPS_PER_PAGE)); 74 74 75 + struct quirk_entry { 76 + u16 vendor_id; 77 + u16 dev_id; 78 + u32 enabled_quirks; 79 + u32 disabled_quirks; 80 + }; 81 + 75 82 static int use_threaded_interrupts; 76 83 module_param(use_threaded_interrupts, int, 0444); 77 84 ··· 108 101 static unsigned int io_queue_depth = 1024; 109 102 module_param_cb(io_queue_depth, &io_queue_depth_ops, &io_queue_depth, 0644); 110 103 MODULE_PARM_DESC(io_queue_depth, "set io queue depth, should >= 2 and < 4096"); 104 + 105 + static struct quirk_entry *nvme_pci_quirk_list; 106 + static unsigned int nvme_pci_quirk_count; 107 + 108 + /* Helper to parse individual quirk names */ 109 + static int nvme_parse_quirk_names(char *quirk_str, struct quirk_entry *entry) 110 + { 111 + int i; 112 + size_t field_len; 113 + bool disabled, found; 114 + char *p = quirk_str, *field; 115 + 116 + while ((field = strsep(&p, ",")) && *field) { 117 + disabled = false; 118 + found = false; 119 + 120 + if (*field == '^') { 121 + /* Skip the '^' character */ 122 + disabled = true; 123 + field++; 124 + } 125 + 126 + field_len = strlen(field); 127 + for (i = 0; i < 32; i++) { 128 + unsigned int bit = 1U << i; 129 + char *q_name = nvme_quirk_name(bit); 130 + size_t q_len = strlen(q_name); 131 + 132 + if (!strcmp(q_name, "unknown")) 133 + break; 134 + 135 + if (!strcmp(q_name, field) && 136 + q_len == field_len) { 137 + if (disabled) 138 + entry->disabled_quirks |= bit; 139 + else 140 + entry->enabled_quirks |= bit; 141 + found = true; 142 + break; 143 + } 144 + } 145 + 146 + if (!found) { 147 + pr_err("nvme: unrecognized quirk %s\n", field); 148 + return -EINVAL; 149 + } 150 + } 151 + return 0; 152 + } 153 + 154 + /* Helper to parse a single VID:DID:quirk_names entry */ 155 + static int nvme_parse_quirk_entry(char *s, struct quirk_entry *entry) 156 + { 157 + char *field; 158 + 159 + field = strsep(&s, ":"); 160 + if (!field || kstrtou16(field, 16, &entry->vendor_id)) 161 + return -EINVAL; 162 + 163 + field = strsep(&s, ":"); 164 + if (!field || kstrtou16(field, 16, &entry->dev_id)) 165 + return -EINVAL; 166 + 167 + field = strsep(&s, ":"); 168 + if (!field) 169 + return -EINVAL; 170 + 171 + return nvme_parse_quirk_names(field, entry); 172 + } 173 + 174 + static int quirks_param_set(const char *value, const struct kernel_param *kp) 175 + { 176 + int count, err, i; 177 + struct quirk_entry *qlist; 178 + char *field, *val, *sep_ptr; 179 + 180 + err = param_set_copystring(value, kp); 181 + if (err) 182 + return err; 183 + 184 + val = kstrdup(value, GFP_KERNEL); 185 + if (!val) 186 + return -ENOMEM; 187 + 188 + if (!*val) 189 + goto out_free_val; 190 + 191 + count = 1; 192 + for (i = 0; val[i]; i++) { 193 + if (val[i] == '-') 194 + count++; 195 + } 196 + 197 + qlist = kcalloc(count, sizeof(*qlist), GFP_KERNEL); 198 + if (!qlist) { 199 + err = -ENOMEM; 200 + goto out_free_val; 201 + } 202 + 203 + i = 0; 204 + sep_ptr = val; 205 + while ((field = strsep(&sep_ptr, "-"))) { 206 + if (nvme_parse_quirk_entry(field, &qlist[i])) { 207 + pr_err("nvme: failed to parse quirk string %s\n", 208 + value); 209 + goto out_free_qlist; 210 + } 211 + 212 + i++; 213 + } 214 + 215 + kfree(nvme_pci_quirk_list); 216 + nvme_pci_quirk_count = count; 217 + nvme_pci_quirk_list = qlist; 218 + goto out_free_val; 219 + 220 + out_free_qlist: 221 + kfree(qlist); 222 + out_free_val: 223 + kfree(val); 224 + return err; 225 + } 226 + 227 + static char quirks_param[128]; 228 + static 
const struct kernel_param_ops quirks_param_ops = { 229 + .set = quirks_param_set, 230 + .get = param_get_string, 231 + }; 232 + 233 + static struct kparam_string quirks_param_string = { 234 + .maxlen = sizeof(quirks_param), 235 + .string = quirks_param, 236 + }; 237 + 238 + module_param_cb(quirks, &quirks_param_ops, &quirks_param_string, 0444); 239 + MODULE_PARM_DESC(quirks, "Enable/disable NVMe quirks by specifying " 240 + "quirks=VID:DID:quirk_names"); 111 241 112 242 static int io_queue_count_set(const char *val, const struct kernel_param *kp) 113 243 { ··· 1640 1496 struct nvme_queue *nvmeq = hctx->driver_data; 1641 1497 bool found; 1642 1498 1643 - if (!nvme_cqe_pending(nvmeq)) 1499 + if (!test_bit(NVMEQ_POLLED, &nvmeq->flags) || 1500 + !nvme_cqe_pending(nvmeq)) 1644 1501 return 0; 1645 1502 1646 1503 spin_lock(&nvmeq->cq_poll_lock); ··· 2919 2774 dev->nr_write_queues = write_queues; 2920 2775 dev->nr_poll_queues = poll_queues; 2921 2776 2922 - nr_io_queues = dev->nr_allocated_queues - 1; 2777 + if (dev->ctrl.tagset) { 2778 + /* 2779 + * The set's maps are allocated only once at initialization 2780 + * time. We can't add special queues later if their mq_map 2781 + * wasn't preallocated. 2782 + */ 2783 + if (dev->ctrl.tagset->nr_maps < 3) 2784 + dev->nr_poll_queues = 0; 2785 + if (dev->ctrl.tagset->nr_maps < 2) 2786 + dev->nr_write_queues = 0; 2787 + } 2788 + 2789 + /* 2790 + * The initial number of allocated queue slots may be too large if the 2791 + * user reduced the special queue parameters. Cap the value to the 2792 + * number we need for this round. 2793 + */ 2794 + nr_io_queues = min(nvme_max_io_queues(dev), 2795 + dev->nr_allocated_queues - 1); 2923 2796 result = nvme_set_queue_count(&dev->ctrl, &nr_io_queues); 2924 2797 if (result < 0) 2925 2798 return result; ··· 3621 3458 return 0; 3622 3459 } 3623 3460 3461 + static struct quirk_entry *detect_dynamic_quirks(struct pci_dev *pdev) 3462 + { 3463 + int i; 3464 + 3465 + for (i = 0; i < nvme_pci_quirk_count; i++) 3466 + if (pdev->vendor == nvme_pci_quirk_list[i].vendor_id && 3467 + pdev->device == nvme_pci_quirk_list[i].dev_id) 3468 + return &nvme_pci_quirk_list[i]; 3469 + 3470 + return NULL; 3471 + } 3472 + 3624 3473 static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev, 3625 3474 const struct pci_device_id *id) 3626 3475 { 3627 3476 unsigned long quirks = id->driver_data; 3628 3477 int node = dev_to_node(&pdev->dev); 3629 3478 struct nvme_dev *dev; 3479 + struct quirk_entry *qentry; 3630 3480 int ret = -ENOMEM; 3631 3481 3632 3482 dev = kzalloc_node(struct_size(dev, descriptor_pools, nr_node_ids), ··· 3670 3494 dev_info(&pdev->dev, 3671 3495 "platform quirk: setting simple suspend\n"); 3672 3496 quirks |= NVME_QUIRK_SIMPLE_SUSPEND; 3497 + } 3498 + qentry = detect_dynamic_quirks(pdev); 3499 + if (qentry) { 3500 + quirks |= qentry->enabled_quirks; 3501 + quirks &= ~qentry->disabled_quirks; 3673 3502 } 3674 3503 ret = nvme_init_ctrl(&dev->ctrl, &pdev->dev, &nvme_pci_ctrl_ops, 3675 3504 quirks); ··· 4276 4095 4277 4096 static void __exit nvme_exit(void) 4278 4097 { 4098 + kfree(nvme_pci_quirk_list); 4279 4099 pci_unregister_driver(&nvme_driver); 4280 4100 flush_workqueue(nvme_wq); 4281 4101 }
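The grammar accepted by the new quirks= parameter: entries are separated by '-', the three fields of an entry by ':', quirk names by ',', and a '^' prefix disables a quirk instead of enabling it. A standalone sketch of one entry going through the same strsep() structure (the IDs and quirk names below are made up for illustration):

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* one "VID:DID:name1,^name2" entry, values illustrative */
	char buf[] = "1234:abcd:bogus_nid,^simple_suspend";
	char *s = buf, *field;

	printf("vendor: %s\n", strsep(&s, ":"));
	printf("device: %s\n", strsep(&s, ":"));

	while ((field = strsep(&s, ",")) && *field) {
		int disable = (*field == '^');

		printf("%s '%s'\n", disable ? "disable" : "enable",
		       field + disable);
	}
	return 0;
}

On a kernel command line the same entry would look like nvme.quirks=1234:abcd:bogus_nid,^simple_suspend (hypothetical IDs), with further VID:DID entries chained by '-'.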
+2 -2
drivers/nvme/host/pr.c
··· 242 242 if (rse_len > U32_MAX) 243 243 return -EINVAL; 244 244 245 - rse = kzalloc(rse_len, GFP_KERNEL); 245 + rse = kvzalloc(rse_len, GFP_KERNEL); 246 246 if (!rse) 247 247 return -ENOMEM; 248 248 ··· 267 267 } 268 268 269 269 free_rse: 270 - kfree(rse); 270 + kvfree(rse); 271 271 return ret; 272 272 } 273 273
+23
drivers/nvme/host/sysfs.c
··· 601 601 } 602 602 static DEVICE_ATTR_RO(dctype); 603 603 604 + static ssize_t quirks_show(struct device *dev, struct device_attribute *attr, 605 + char *buf) 606 + { 607 + int count = 0, i; 608 + struct nvme_ctrl *ctrl = dev_get_drvdata(dev); 609 + unsigned long quirks = ctrl->quirks; 610 + 611 + if (!quirks) 612 + return sysfs_emit(buf, "none\n"); 613 + 614 + for (i = 0; quirks; ++i) { 615 + if (quirks & 1) { 616 + count += sysfs_emit_at(buf, count, "%s\n", 617 + nvme_quirk_name(BIT(i))); 618 + } 619 + quirks >>= 1; 620 + } 621 + 622 + return count; 623 + } 624 + static DEVICE_ATTR_RO(quirks); 625 + 604 626 #ifdef CONFIG_NVME_HOST_AUTH 605 627 static ssize_t nvme_ctrl_dhchap_secret_show(struct device *dev, 606 628 struct device_attribute *attr, char *buf) ··· 764 742 &dev_attr_kato.attr, 765 743 &dev_attr_cntrltype.attr, 766 744 &dev_attr_dctype.attr, 745 + &dev_attr_quirks.attr, 767 746 #ifdef CONFIG_NVME_HOST_AUTH 768 747 &dev_attr_dhchap_secret.attr, 769 748 &dev_attr_dhchap_ctrl_secret.attr,
+3 -2
drivers/nvme/host/tcp.c
··· 25 25 26 26 struct nvme_tcp_queue; 27 27 28 - /* Define the socket priority to use for connections were it is desirable 28 + /* 29 + * Define the socket priority to use for connections where it is desirable 29 30 * that the NIC consider performing optimized packet processing or filtering. 30 31 * A non-zero value being sufficient to indicate general consideration of any 31 32 * possible optimization. Making it a module param allows for alternative ··· 927 926 req->curr_bio = req->curr_bio->bi_next; 928 927 929 928 /* 930 - * If we don`t have any bios it means that controller 929 + * If we don't have any bios it means the controller 931 930 * sent more data than we requested, hence error 932 931 */ 933 932 if (!req->curr_bio) {
+11 -4
drivers/nvme/target/fcloop.c
··· 491 491 struct fcloop_rport *rport = remoteport->private; 492 492 struct nvmet_fc_target_port *targetport = rport->targetport; 493 493 struct fcloop_tport *tport; 494 + int ret = 0; 494 495 495 496 if (!targetport) { 496 497 /* ··· 501 500 * We end up here from delete association exchange: 502 501 * nvmet_fc_xmt_disconnect_assoc sends an async request. 503 502 * 504 - * Return success because this is what LLDDs do; silently 505 - * drop the response. 503 + * Return success when the remoteport is still online, since 504 + * silently dropping the response is what LLDDs do. Otherwise, 505 + * return an error to signal the upper layer to perform the 506 + * lsrsp resource cleanup. 506 507 */ 507 - lsrsp->done(lsrsp); 508 + if (remoteport->port_state == FC_OBJSTATE_ONLINE) 509 + lsrsp->done(lsrsp); 510 + else 511 + ret = -ENODEV; 512 + 508 513 kmem_cache_free(lsreq_cache, tls_req); 509 - return 0; 514 + return ret; 510 515 } 511 516 512 517 memcpy(lsreq->rspaddr, lsrsp->rspbuf,
+4 -4
drivers/pci/xen-pcifront.c
··· 856 856 int err; 857 857 858 858 /* Only connect once */ 859 - if (xenbus_read_driver_state(pdev->xdev->nodename) != 859 + if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename) != 860 860 XenbusStateInitialised) 861 861 return; 862 862 ··· 876 876 enum xenbus_state prev_state; 877 877 878 878 879 - prev_state = xenbus_read_driver_state(pdev->xdev->nodename); 879 + prev_state = xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename); 880 880 881 881 if (prev_state >= XenbusStateClosing) 882 882 goto out; ··· 895 895 896 896 static void pcifront_attach_devices(struct pcifront_device *pdev) 897 897 { 898 - if (xenbus_read_driver_state(pdev->xdev->nodename) == 898 + if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename) == 899 899 XenbusStateReconfiguring) 900 900 pcifront_connect(pdev); 901 901 } ··· 909 909 struct pci_dev *pci_dev; 910 910 char str[64]; 911 911 912 - state = xenbus_read_driver_state(pdev->xdev->nodename); 912 + state = xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename); 913 913 if (state == XenbusStateInitialised) { 914 914 dev_dbg(&pdev->xdev->dev, "Handle skipped connect.\n"); 915 915 /* We missed Connected and need to initialize. */
+2 -3
drivers/pinctrl/cirrus/pinctrl-cs42l43.c
··· 574 574 if (child) { 575 575 ret = devm_add_action_or_reset(&pdev->dev, 576 576 cs42l43_fwnode_put, child); 577 - if (ret) { 578 - fwnode_handle_put(child); 577 + if (ret) 579 578 return ret; 580 - } 579 + 581 580 if (!child->dev) 582 581 child->dev = priv->dev; 583 582 fwnode = child;
+1 -2
drivers/pinctrl/cix/pinctrl-sky1.c
··· 522 522 return pinctrl_force_default(spctl->pctl); 523 523 } 524 524 525 - const struct dev_pm_ops sky1_pinctrl_pm_ops = { 525 + static const struct dev_pm_ops sky1_pinctrl_pm_ops = { 526 526 SET_LATE_SYSTEM_SLEEP_PM_OPS(sky1_pinctrl_suspend, 527 527 sky1_pinctrl_resume) 528 528 }; 529 - EXPORT_SYMBOL_GPL(sky1_pinctrl_pm_ops); 530 529 531 530 static int sky1_pinctrl_probe(struct platform_device *pdev) 532 531 {
+1 -2
drivers/pinctrl/meson/pinctrl-amlogic-a4.c
··· 679 679 unsigned int *num_maps) 680 680 { 681 681 struct device *dev = pctldev->dev; 682 - struct device_node *pnode; 683 682 unsigned long *configs = NULL; 684 683 unsigned int num_configs = 0; 685 684 struct property *prop; ··· 692 693 return -ENOENT; 693 694 } 694 695 695 - pnode = of_get_parent(np); 696 + struct device_node *pnode __free(device_node) = of_get_parent(np); 696 697 if (!pnode) { 697 698 dev_info(dev, "Missing function node\n"); 698 699 return -EINVAL;
+2 -2
drivers/pinctrl/pinconf-generic.c
··· 351 351 352 352 ret = parse_dt_cfg(np, dt_params, ARRAY_SIZE(dt_params), cfg, &ncfg); 353 353 if (ret) 354 - return ret; 354 + goto out; 355 355 if (pctldev && pctldev->desc->num_custom_params && 356 356 pctldev->desc->custom_params) { 357 357 ret = parse_dt_cfg(np, pctldev->desc->custom_params, 358 358 pctldev->desc->num_custom_params, cfg, &ncfg); 359 359 if (ret) 360 - return ret; 360 + goto out; 361 361 } 362 362 363 363 /* no configs found at all */
+1 -1
drivers/pinctrl/pinctrl-amdisp.c
··· 80 80 return 0; 81 81 } 82 82 83 - const struct pinctrl_ops amdisp_pinctrl_ops = { 83 + static const struct pinctrl_ops amdisp_pinctrl_ops = { 84 84 .get_groups_count = amdisp_get_groups_count, 85 85 .get_group_name = amdisp_get_group_name, 86 86 .get_group_pins = amdisp_get_group_pins,
+2 -2
drivers/pinctrl/pinctrl-cy8c95x0.c
··· 627 627 bitmap_scatter(tmask, mask, chip->map, MAX_LINE); 628 628 bitmap_scatter(tval, val, chip->map, MAX_LINE); 629 629 630 - for_each_set_clump8(offset, bits, tmask, chip->tpin) { 630 + for_each_set_clump8(offset, bits, tmask, chip->nport * BANK_SZ) { 631 631 unsigned int i = offset / 8; 632 632 633 633 write_val = bitmap_get_value8(tval, offset); ··· 655 655 bitmap_scatter(tmask, mask, chip->map, MAX_LINE); 656 656 bitmap_scatter(tval, val, chip->map, MAX_LINE); 657 657 658 - for_each_set_clump8(offset, bits, tmask, chip->tpin) { 658 + for_each_set_clump8(offset, bits, tmask, chip->nport * BANK_SZ) { 659 659 unsigned int i = offset / 8; 660 660 661 661 ret = cy8c95x0_regmap_read_bits(chip, reg, i, bits, &read_val);
+19 -12
drivers/pinctrl/pinctrl-equilibrium.c
··· 23 23 #define PIN_NAME_LEN 10 24 24 #define PAD_REG_OFF 0x100 25 25 26 - static void eqbr_gpio_disable_irq(struct irq_data *d) 26 + static void eqbr_irq_mask(struct irq_data *d) 27 27 { 28 28 struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 29 29 struct eqbr_gpio_ctrl *gctrl = gpiochip_get_data(gc); ··· 36 36 gpiochip_disable_irq(gc, offset); 37 37 } 38 38 39 - static void eqbr_gpio_enable_irq(struct irq_data *d) 39 + static void eqbr_irq_unmask(struct irq_data *d) 40 40 { 41 41 struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 42 42 struct eqbr_gpio_ctrl *gctrl = gpiochip_get_data(gc); ··· 50 50 raw_spin_unlock_irqrestore(&gctrl->lock, flags); 51 51 } 52 52 53 - static void eqbr_gpio_ack_irq(struct irq_data *d) 53 + static void eqbr_irq_ack(struct irq_data *d) 54 54 { 55 55 struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 56 56 struct eqbr_gpio_ctrl *gctrl = gpiochip_get_data(gc); ··· 62 62 raw_spin_unlock_irqrestore(&gctrl->lock, flags); 63 63 } 64 64 65 - static void eqbr_gpio_mask_ack_irq(struct irq_data *d) 65 + static void eqbr_irq_mask_ack(struct irq_data *d) 66 66 { 67 - eqbr_gpio_disable_irq(d); 68 - eqbr_gpio_ack_irq(d); 67 + struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 68 + struct eqbr_gpio_ctrl *gctrl = gpiochip_get_data(gc); 69 + unsigned int offset = irqd_to_hwirq(d); 70 + unsigned long flags; 71 + 72 + raw_spin_lock_irqsave(&gctrl->lock, flags); 73 + writel(BIT(offset), gctrl->membase + GPIO_IRNENCLR); 74 + writel(BIT(offset), gctrl->membase + GPIO_IRNCR); 75 + raw_spin_unlock_irqrestore(&gctrl->lock, flags); 69 76 } 70 77 71 78 static inline void eqbr_cfg_bit(void __iomem *addr, ··· 99 92 return 0; 100 93 } 101 94 102 - static int eqbr_gpio_set_irq_type(struct irq_data *d, unsigned int type) 95 + static int eqbr_irq_set_type(struct irq_data *d, unsigned int type) 103 96 { 104 97 struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 105 98 struct eqbr_gpio_ctrl *gctrl = gpiochip_get_data(gc); ··· 173 166 174 167 static const struct irq_chip eqbr_irq_chip = { 175 168 .name = "gpio_irq", 176 - .irq_mask = eqbr_gpio_disable_irq, 177 - .irq_unmask = eqbr_gpio_enable_irq, 178 - .irq_ack = eqbr_gpio_ack_irq, 179 - .irq_mask_ack = eqbr_gpio_mask_ack_irq, 180 - .irq_set_type = eqbr_gpio_set_irq_type, 169 + .irq_ack = eqbr_irq_ack, 170 + .irq_mask = eqbr_irq_mask, 171 + .irq_mask_ack = eqbr_irq_mask_ack, 172 + .irq_unmask = eqbr_irq_unmask, 173 + .irq_set_type = eqbr_irq_set_type, 181 174 .flags = IRQCHIP_IMMUTABLE, 182 175 GPIOCHIP_IRQ_RESOURCE_HELPERS, 183 176 };
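Besides the renames, eqbr_irq_mask_ack() no longer chains the mask and ack helpers (two separate lock/unlock pairs); it performs both register writes inside a single locked section, so nothing can slip in between disabling and acknowledging the interrupt. A userspace model of merging the two critical sections (register behavior simplified to plain bitmasks):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int irq_enable, irq_status;

/* Both updates happen under one lock acquisition, mirroring the
 * combined IRNENCLR + IRNCR writes above. */
static void irq_mask_ack(unsigned int bit)
{
	pthread_mutex_lock(&lock);
	irq_enable &= ~bit;	/* mask (IRNENCLR) */
	irq_status &= ~bit;	/* ack (IRNCR) */
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	irq_enable = irq_status = 0xf;
	irq_mask_ack(0x4);
	printf("enable=0x%x status=0x%x\n", irq_enable, irq_status);
	return 0;
}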
+4 -8
drivers/pinctrl/pinctrl-rockchip.c
··· 3640 3640 * or the gpio driver hasn't probed yet. 3641 3641 */ 3642 3642 scoped_guard(mutex, &bank->deferred_lock) { 3643 - if (!gpio || !gpio->direction_output) { 3644 - rc = rockchip_pinconf_defer_pin(bank, 3645 - pin - bank->pin_base, 3646 - param, arg); 3647 - if (rc) 3648 - return rc; 3649 - break; 3650 - } 3643 + if (!gpio || !gpio->direction_output) 3644 + return rockchip_pinconf_defer_pin(bank, 3645 + pin - bank->pin_base, 3646 + param, arg); 3651 3647 } 3652 3648 } 3653 3649
+1
drivers/pinctrl/qcom/pinctrl-qcs615.c
··· 1067 1067 .ntiles = ARRAY_SIZE(qcs615_tiles), 1068 1068 .wakeirq_map = qcs615_pdc_map, 1069 1069 .nwakeirq_map = ARRAY_SIZE(qcs615_pdc_map), 1070 + .wakeirq_dual_edge_errata = true, 1070 1071 }; 1071 1072 1072 1073 static const struct of_device_id qcs615_tlmm_of_match[] = {
+2 -2
drivers/pinctrl/qcom/pinctrl-sdm660-lpass-lpi.c
··· 76 76 static const char * const pdm_rx_groups[] = { "gpio21", "gpio23", "gpio25" }; 77 77 static const char * const pdm_sync_groups[] = { "gpio19" }; 78 78 79 - const struct lpi_pingroup sdm660_lpi_pinctrl_groups[] = { 79 + static const struct lpi_pingroup sdm660_lpi_pinctrl_groups[] = { 80 80 LPI_PINGROUP_OFFSET(0, LPI_NO_SLEW, _, _, _, _, 0x0000), 81 81 LPI_PINGROUP_OFFSET(1, LPI_NO_SLEW, _, _, _, _, 0x1000), 82 82 LPI_PINGROUP_OFFSET(2, LPI_NO_SLEW, _, _, _, _, 0x2000), ··· 113 113 LPI_PINGROUP_OFFSET(31, LPI_NO_SLEW, _, _, _, _, 0xb010), 114 114 }; 115 115 116 - const struct lpi_function sdm660_lpi_pinctrl_functions[] = { 116 + static const struct lpi_function sdm660_lpi_pinctrl_functions[] = { 117 117 LPI_FUNCTION(comp_rx), 118 118 LPI_FUNCTION(dmic1_clk), 119 119 LPI_FUNCTION(dmic1_data),
+51
drivers/pinctrl/sunxi/pinctrl-sunxi.c
··· 204 204 return NULL; 205 205 } 206 206 207 + static struct sunxi_desc_function * 208 + sunxi_pinctrl_desc_find_function_by_pin_and_mux(struct sunxi_pinctrl *pctl, 209 + const u16 pin_num, 210 + const u8 muxval) 211 + { 212 + for (unsigned int i = 0; i < pctl->desc->npins; i++) { 213 + const struct sunxi_desc_pin *pin = pctl->desc->pins + i; 214 + struct sunxi_desc_function *func = pin->functions; 215 + 216 + if (pin->pin.number != pin_num) 217 + continue; 218 + 219 + if (pin->variant && !(pctl->variant & pin->variant)) 220 + continue; 221 + 222 + while (func->name) { 223 + if (func->muxval == muxval) 224 + return func; 225 + 226 + func++; 227 + } 228 + } 229 + 230 + return NULL; 231 + } 232 + 207 233 static int sunxi_pctrl_get_groups_count(struct pinctrl_dev *pctldev) 208 234 { 209 235 struct sunxi_pinctrl *pctl = pinctrl_dev_get_drvdata(pctldev); ··· 956 930 .strict = true, 957 931 }; 958 932 933 + static int sunxi_pinctrl_gpio_get_direction(struct gpio_chip *chip, 934 + unsigned int offset) 935 + { 936 + struct sunxi_pinctrl *pctl = gpiochip_get_data(chip); 937 + const struct sunxi_desc_function *func; 938 + u32 pin = offset + chip->base; 939 + u32 reg, shift, mask; 940 + u8 muxval; 941 + 942 + sunxi_mux_reg(pctl, offset, &reg, &shift, &mask); 943 + 944 + muxval = (readl(pctl->membase + reg) & mask) >> shift; 945 + 946 + func = sunxi_pinctrl_desc_find_function_by_pin_and_mux(pctl, pin, muxval); 947 + if (!func) 948 + return -ENODEV; 949 + 950 + if (!strcmp(func->name, "gpio_out")) 951 + return GPIO_LINE_DIRECTION_OUT; 952 + if (!strcmp(func->name, "gpio_in") || !strcmp(func->name, "irq")) 953 + return GPIO_LINE_DIRECTION_IN; 954 + return -EINVAL; 955 + } 956 + 959 957 static int sunxi_pinctrl_gpio_direction_input(struct gpio_chip *chip, 960 958 unsigned offset) 961 959 { ··· 1649 1599 pctl->chip->request = gpiochip_generic_request; 1650 1600 pctl->chip->free = gpiochip_generic_free; 1651 1601 pctl->chip->set_config = gpiochip_generic_config; 1602 + pctl->chip->get_direction = sunxi_pinctrl_gpio_get_direction; 1652 1603 pctl->chip->direction_input = sunxi_pinctrl_gpio_direction_input; 1653 1604 pctl->chip->direction_output = sunxi_pinctrl_gpio_direction_output; 1654 1605 pctl->chip->get = sunxi_pinctrl_gpio_get;
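The new get_direction hook reads the pin's mux field and maps the selected function name back to a GPIO direction, with "irq" treated as input. A standalone model of that reverse lookup (table contents illustrative):

#include <stdio.h>
#include <string.h>

struct func { const char *name; unsigned char muxval; };

static const struct func pin_funcs[] = {
	{ "gpio_in",  0 },
	{ "gpio_out", 1 },
	{ "uart0_tx", 2 },
	{ "irq",      6 },
};

/* Find the function selected by the mux value and classify it. */
static const char *direction(unsigned char muxval)
{
	for (size_t i = 0; i < sizeof(pin_funcs) / sizeof(pin_funcs[0]); i++) {
		if (pin_funcs[i].muxval != muxval)
			continue;
		if (!strcmp(pin_funcs[i].name, "gpio_out"))
			return "out";
		if (!strcmp(pin_funcs[i].name, "gpio_in") ||
		    !strcmp(pin_funcs[i].name, "irq"))
			return "in";
		return "not a gpio function";
	}
	return "unknown mux value";
}

int main(void)
{
	printf("mux 1 -> %s\n", direction(1));
	printf("mux 2 -> %s\n", direction(2));
	return 0;
}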
+75
drivers/platform/x86/asus-armoury.h
··· 348 348 static const struct dmi_system_id power_limits[] = { 349 349 { 350 350 .matches = { 351 + DMI_MATCH(DMI_BOARD_NAME, "FA401UM"), 352 + }, 353 + .driver_data = &(struct power_data) { 354 + .ac_data = &(struct power_limits) { 355 + .ppt_pl1_spl_min = 15, 356 + .ppt_pl1_spl_max = 80, 357 + .ppt_pl2_sppt_min = 35, 358 + .ppt_pl2_sppt_max = 80, 359 + .ppt_pl3_fppt_min = 35, 360 + .ppt_pl3_fppt_max = 80, 361 + .nv_dynamic_boost_min = 5, 362 + .nv_dynamic_boost_max = 15, 363 + .nv_temp_target_min = 75, 364 + .nv_temp_target_max = 87, 365 + }, 366 + .dc_data = &(struct power_limits) { 367 + .ppt_pl1_spl_min = 25, 368 + .ppt_pl1_spl_max = 35, 369 + .ppt_pl2_sppt_min = 31, 370 + .ppt_pl2_sppt_max = 44, 371 + .ppt_pl3_fppt_min = 45, 372 + .ppt_pl3_fppt_max = 65, 373 + .nv_temp_target_min = 75, 374 + .nv_temp_target_max = 87, 375 + }, 376 + }, 377 + }, 378 + { 379 + .matches = { 351 380 DMI_MATCH(DMI_BOARD_NAME, "FA401UV"), 352 381 }, 353 382 .driver_data = &(struct power_data) { ··· 1488 1459 }, 1489 1460 { 1490 1461 .matches = { 1462 + DMI_MATCH(DMI_BOARD_NAME, "GX650RX"), 1463 + }, 1464 + .driver_data = &(struct power_data) { 1465 + .ac_data = &(struct power_limits) { 1466 + .ppt_pl1_spl_min = 28, 1467 + .ppt_pl1_spl_def = 70, 1468 + .ppt_pl1_spl_max = 90, 1469 + .ppt_pl2_sppt_min = 28, 1470 + .ppt_pl2_sppt_def = 70, 1471 + .ppt_pl2_sppt_max = 100, 1472 + .ppt_pl3_fppt_min = 28, 1473 + .ppt_pl3_fppt_def = 110, 1474 + .ppt_pl3_fppt_max = 125, 1475 + .nv_dynamic_boost_min = 5, 1476 + .nv_dynamic_boost_max = 25, 1477 + .nv_temp_target_min = 76, 1478 + .nv_temp_target_max = 87, 1479 + }, 1480 + .dc_data = &(struct power_limits) { 1481 + .ppt_pl1_spl_min = 28, 1482 + .ppt_pl1_spl_max = 50, 1483 + .ppt_pl2_sppt_min = 28, 1484 + .ppt_pl2_sppt_max = 50, 1485 + .ppt_pl3_fppt_min = 28, 1486 + .ppt_pl3_fppt_max = 65, 1487 + .nv_temp_target_min = 76, 1488 + .nv_temp_target_max = 87, 1489 + }, 1490 + }, 1491 + }, 1492 + { 1493 + .matches = { 1491 1494 DMI_MATCH(DMI_BOARD_NAME, "G513I"), 1492 1495 }, 1493 1496 .driver_data = &(struct power_data) { ··· 1767 1706 .nv_temp_target_max = 87, 1768 1707 }, 1769 1708 .requires_fan_curve = true, 1709 + }, 1710 + }, 1711 + { 1712 + .matches = { 1713 + DMI_MATCH(DMI_BOARD_NAME, "G733QS"), 1714 + }, 1715 + .driver_data = &(struct power_data) { 1716 + .ac_data = &(struct power_limits) { 1717 + .ppt_pl1_spl_min = 15, 1718 + .ppt_pl1_spl_max = 80, 1719 + .ppt_pl2_sppt_min = 15, 1720 + .ppt_pl2_sppt_max = 80, 1721 + }, 1722 + .requires_fan_curve = false, 1770 1723 }, 1771 1724 }, 1772 1725 {
+1 -1
drivers/platform/x86/dell/alienware-wmi-wmax.c
··· 175 175 DMI_MATCH(DMI_SYS_VENDOR, "Alienware"), 176 176 DMI_MATCH(DMI_PRODUCT_NAME, "Alienware m18"), 177 177 }, 178 - .driver_data = &generic_quirks, 178 + .driver_data = &g_series_quirks, 179 179 }, 180 180 { 181 181 .ident = "Alienware x15",
+6
drivers/platform/x86/dell/dell-wmi-base.c
··· 80 80 static const struct key_entry dell_wmi_keymap_type_0000[] = { 81 81 { KE_IGNORE, 0x003a, { KEY_CAPSLOCK } }, 82 82 83 + /* Audio mute toggle */ 84 + { KE_KEY, 0x0109, { KEY_MUTE } }, 85 + 86 + /* Mic mute toggle */ 87 + { KE_KEY, 0x0150, { KEY_MICMUTE } }, 88 + 83 89 /* Meta key lock */ 84 90 { KE_IGNORE, 0xe000, { KEY_RIGHTMETA } }, 85 91
-1
drivers/platform/x86/dell/dell-wmi-sysman/passwordattr-interface.c
··· 93 93 if (ret < 0) 94 94 goto out; 95 95 96 - print_hex_dump_bytes("set new password data: ", DUMP_PREFIX_NONE, buffer, buffer_size); 97 96 ret = call_password_interface(wmi_priv.password_attr_wdev, buffer, buffer_size); 98 97 /* on success copy the new password to current password */ 99 98 if (!ret)
+6 -3
drivers/platform/x86/hp/hp-bioscfg/enum-attributes.c
··· 94 94 bioscfg_drv.enumeration_instances_count = 95 95 hp_get_instance_count(HP_WMI_BIOS_ENUMERATION_GUID); 96 96 97 - bioscfg_drv.enumeration_data = kzalloc_objs(*bioscfg_drv.enumeration_data, 98 - bioscfg_drv.enumeration_instances_count); 97 + if (!bioscfg_drv.enumeration_instances_count) 98 + return -EINVAL; 99 + bioscfg_drv.enumeration_data = kvcalloc(bioscfg_drv.enumeration_instances_count, 100 + sizeof(*bioscfg_drv.enumeration_data), GFP_KERNEL); 101 + 99 102 if (!bioscfg_drv.enumeration_data) { 100 103 bioscfg_drv.enumeration_instances_count = 0; 101 104 return -ENOMEM; ··· 447 444 } 448 445 bioscfg_drv.enumeration_instances_count = 0; 449 446 450 - kfree(bioscfg_drv.enumeration_data); 447 + kvfree(bioscfg_drv.enumeration_data); 451 448 bioscfg_drv.enumeration_data = NULL; 452 449 }
+11 -1
drivers/platform/x86/hp/hp-wmi.c
··· 146 146 "8900", "8901", "8902", "8912", "8917", "8918", "8949", "894A", "89EB", 147 147 "8A15", "8A42", 148 148 "8BAD", 149 + "8E41", 149 150 }; 150 151 151 152 /* DMI Board names of Omen laptops that are specifically set to be thermal ··· 167 166 "8BAD", 168 167 }; 169 168 170 - /* DMI Board names of Victus 16-d1xxx laptops */ 169 + /* DMI Board names of Victus 16-d laptops */ 171 170 static const char * const victus_thermal_profile_boards[] = { 171 + "88F8", 172 172 "8A25", 173 173 }; 174 174 175 175 /* DMI Board names of Victus 16-r and Victus 16-s laptops */ 176 176 static const struct dmi_system_id victus_s_thermal_profile_boards[] __initconst = { 177 177 { 178 + .matches = { DMI_MATCH(DMI_BOARD_NAME, "8BAB") }, 179 + .driver_data = (void *)&omen_v1_thermal_params, 180 + }, 181 + { 178 182 .matches = { DMI_MATCH(DMI_BOARD_NAME, "8BBE") }, 179 183 .driver_data = (void *)&victus_s_thermal_params, 184 + }, 185 + { 186 + .matches = { DMI_MATCH(DMI_BOARD_NAME, "8BCD") }, 187 + .driver_data = (void *)&omen_v1_thermal_params, 180 188 }, 181 189 { 182 190 .matches = { DMI_MATCH(DMI_BOARD_NAME, "8BD4") },
+19
drivers/platform/x86/intel/hid.c
··· 136 136 }, 137 137 }, 138 138 { 139 + .ident = "Lenovo ThinkPad X1 Fold 16 Gen 1", 140 + .matches = { 141 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 142 + DMI_MATCH(DMI_PRODUCT_FAMILY, "ThinkPad X1 Fold 16 Gen 1"), 143 + }, 144 + }, 145 + { 139 146 .ident = "Microsoft Surface Go 3", 140 147 .matches = { 141 148 DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"), ··· 194 187 .matches = { 195 188 DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 196 189 DMI_MATCH(DMI_PRODUCT_NAME, "Dell Pro Rugged 12 Tablet RA02260"), 190 + }, 191 + }, 192 + { 193 + .matches = { 194 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 195 + DMI_MATCH(DMI_PRODUCT_NAME, "Dell 14 Plus 2-in-1 DB04250"), 196 + }, 197 + }, 198 + { 199 + .matches = { 200 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 201 + DMI_MATCH(DMI_PRODUCT_NAME, "Dell 16 Plus 2-in-1 DB06250"), 197 202 }, 198 203 }, 199 204 { }
+7
drivers/platform/x86/intel/int3472/discrete.c
··· 223 223 *con_id = "avdd"; 224 224 *gpio_flags = GPIO_ACTIVE_HIGH; 225 225 break; 226 + case INT3472_GPIO_TYPE_DOVDD: 227 + *con_id = "dovdd"; 228 + *gpio_flags = GPIO_ACTIVE_HIGH; 229 + break; 226 230 case INT3472_GPIO_TYPE_HANDSHAKE: 227 231 *con_id = "dvdd"; 228 232 *gpio_flags = GPIO_ACTIVE_HIGH; ··· 255 251 * 0x0b Power enable 256 252 * 0x0c Clock enable 257 253 * 0x0d Privacy LED 254 + * 0x10 DOVDD (digital I/O voltage) 258 255 * 0x13 Hotplug detect 259 256 * 260 257 * There are some known platform specific quirks where that does not quite ··· 337 332 case INT3472_GPIO_TYPE_CLK_ENABLE: 338 333 case INT3472_GPIO_TYPE_PRIVACY_LED: 339 334 case INT3472_GPIO_TYPE_POWER_ENABLE: 335 + case INT3472_GPIO_TYPE_DOVDD: 340 336 case INT3472_GPIO_TYPE_HANDSHAKE: 341 337 gpio = skl_int3472_gpiod_get_from_temp_lookup(int3472, agpio, con_id, gpio_flags); 342 338 if (IS_ERR(gpio)) { ··· 362 356 case INT3472_GPIO_TYPE_POWER_ENABLE: 363 357 second_sensor = int3472->quirks.avdd_second_sensor; 364 358 fallthrough; 359 + case INT3472_GPIO_TYPE_DOVDD: 365 360 case INT3472_GPIO_TYPE_HANDSHAKE: 366 361 ret = skl_int3472_register_regulator(int3472, gpio, enable_time_us, 367 362 con_id, second_sensor);
+4 -2
drivers/platform/x86/lenovo/thinkpad_acpi.c
··· 9525 9525 { 9526 9526 switch (what) { 9527 9527 case THRESHOLD_START: 9528 - if ACPI_FAILURE(tpacpi_battery_acpi_eval(GET_START, ret, battery)) 9528 + if (!battery_info.batteries[battery].start_support || 9529 + ACPI_FAILURE(tpacpi_battery_acpi_eval(GET_START, ret, battery))) 9529 9530 return -ENODEV; 9530 9531 9531 9532 /* The value is in the low 8 bits of the response */ 9532 9533 *ret = *ret & 0xFF; 9533 9534 return 0; 9534 9535 case THRESHOLD_STOP: 9535 - if ACPI_FAILURE(tpacpi_battery_acpi_eval(GET_STOP, ret, battery)) 9536 + if (!battery_info.batteries[battery].stop_support || 9537 + ACPI_FAILURE(tpacpi_battery_acpi_eval(GET_STOP, ret, battery))) 9536 9538 return -ENODEV; 9537 9539 /* Value is in lower 8 bits */ 9538 9540 *ret = *ret & 0xFF;
+29 -1
drivers/platform/x86/oxpec.c
··· 11 11 * 12 12 * Copyright (C) 2022 Joaquín I. Aramendía <samsagax@gmail.com> 13 13 * Copyright (C) 2024 Derek J. Clark <derekjohn.clark@gmail.com> 14 - * Copyright (C) 2025 Antheas Kapenekakis <lkml@antheas.dev> 14 + * Copyright (C) 2025-2026 Antheas Kapenekakis <lkml@antheas.dev> 15 15 */ 16 16 17 17 #include <linux/acpi.h> ··· 117 117 { 118 118 .matches = { 119 119 DMI_MATCH(DMI_BOARD_VENDOR, "AOKZOE"), 120 + DMI_EXACT_MATCH(DMI_BOARD_NAME, "AOKZOE A2 Pro"), 121 + }, 122 + .driver_data = (void *)aok_zoe_a1, 123 + }, 124 + { 125 + .matches = { 126 + DMI_MATCH(DMI_BOARD_VENDOR, "AOKZOE"), 120 127 DMI_EXACT_MATCH(DMI_BOARD_NAME, "AOKZOE A1X"), 121 128 }, 122 129 .driver_data = (void *)oxp_fly, ··· 148 141 DMI_MATCH(DMI_BOARD_NAME, "ONEXPLAYER 2"), 149 142 }, 150 143 .driver_data = (void *)oxp_2, 144 + }, 145 + { 146 + .matches = { 147 + DMI_MATCH(DMI_BOARD_VENDOR, "ONE-NETBOOK"), 148 + DMI_EXACT_MATCH(DMI_BOARD_NAME, "ONEXPLAYER APEX"), 149 + }, 150 + .driver_data = (void *)oxp_fly, 151 151 }, 152 152 { 153 153 .matches = { ··· 229 215 { 230 216 .matches = { 231 217 DMI_MATCH(DMI_BOARD_VENDOR, "ONE-NETBOOK"), 218 + DMI_EXACT_MATCH(DMI_BOARD_NAME, "ONEXPLAYER X1z"), 219 + }, 220 + .driver_data = (void *)oxp_x1, 221 + }, 222 + { 223 + .matches = { 224 + DMI_MATCH(DMI_BOARD_VENDOR, "ONE-NETBOOK"), 232 225 DMI_EXACT_MATCH(DMI_BOARD_NAME, "ONEXPLAYER X1 A"), 233 226 }, 234 227 .driver_data = (void *)oxp_x1, ··· 244 223 .matches = { 245 224 DMI_MATCH(DMI_BOARD_VENDOR, "ONE-NETBOOK"), 246 225 DMI_EXACT_MATCH(DMI_BOARD_NAME, "ONEXPLAYER X1 i"), 226 + }, 227 + .driver_data = (void *)oxp_x1, 228 + }, 229 + { 230 + .matches = { 231 + DMI_MATCH(DMI_BOARD_VENDOR, "ONE-NETBOOK"), 232 + DMI_EXACT_MATCH(DMI_BOARD_NAME, "ONEXPLAYER X1Air"), 247 233 }, 248 234 .driver_data = (void *)oxp_x1, 249 235 },
+24 -1
drivers/platform/x86/redmi-wmi.c
··· 20 20 static const struct key_entry redmi_wmi_keymap[] = { 21 21 {KE_KEY, 0x00000201, {KEY_SELECTIVE_SCREENSHOT}}, 22 22 {KE_KEY, 0x00000301, {KEY_ALL_APPLICATIONS}}, 23 - {KE_KEY, 0x00001b01, {KEY_SETUP}}, 23 + {KE_KEY, 0x00001b01, {KEY_CONFIG}}, 24 + {KE_KEY, 0x00011b01, {KEY_CONFIG}}, 25 + {KE_KEY, 0x00010101, {KEY_SWITCHVIDEOMODE}}, 26 + {KE_KEY, 0x00001a01, {KEY_REFRESH_RATE_TOGGLE}}, 24 27 25 28 /* AI button has code for each position */ 26 29 {KE_KEY, 0x00011801, {KEY_ASSISTANT}}, ··· 34 31 {KE_IGNORE, 0x00800501, {}}, 35 32 {KE_IGNORE, 0x00050501, {}}, 36 33 {KE_IGNORE, 0x000a0501, {}}, 34 + 35 + /* Xiaomi G Command Center */ 36 + {KE_KEY, 0x00010a01, {KEY_VENDOR}}, 37 + 38 + /* OEM preset power mode */ 39 + {KE_IGNORE, 0x00011601, {}}, 40 + {KE_IGNORE, 0x00021601, {}}, 41 + {KE_IGNORE, 0x00031601, {}}, 42 + {KE_IGNORE, 0x00041601, {}}, 43 + 44 + /* Fn Lock state */ 45 + {KE_IGNORE, 0x00000701, {}}, 46 + {KE_IGNORE, 0x00010701, {}}, 47 + 48 + /* Fn+`/1/2/3/4 */ 49 + {KE_KEY, 0x00011101, {KEY_F13}}, 50 + {KE_KEY, 0x00011201, {KEY_F14}}, 51 + {KE_KEY, 0x00011301, {KEY_F15}}, 52 + {KE_KEY, 0x00011401, {KEY_F16}}, 53 + {KE_KEY, 0x00011501, {KEY_F17}}, 37 54 38 55 {KE_END} 39 56 };
+18
drivers/platform/x86/touchscreen_dmi.c
··· 410 410 .properties = gdix1001_upside_down_props, 411 411 }; 412 412 413 + static const struct property_entry gdix1001_y_inverted_props[] = { 414 + PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 415 + { } 416 + }; 417 + 418 + static const struct ts_dmi_data gdix1001_y_inverted_data = { 419 + .acpi_name = "GDIX1001", 420 + .properties = gdix1001_y_inverted_props, 421 + }; 422 + 413 423 static const struct property_entry gp_electronic_t701_props[] = { 414 424 PROPERTY_ENTRY_U32("touchscreen-size-x", 960), 415 425 PROPERTY_ENTRY_U32("touchscreen-size-y", 640), ··· 1666 1656 DMI_MATCH(DMI_SYS_VENDOR, "Globalspace Tech Pvt Ltd"), 1667 1657 DMI_MATCH(DMI_PRODUCT_NAME, "SolTIVW"), 1668 1658 DMI_MATCH(DMI_PRODUCT_SKU, "PN20170413488"), 1659 + }, 1660 + }, 1661 + { 1662 + /* SUPI S10 */ 1663 + .driver_data = (void *)&gdix1001_y_inverted_data, 1664 + .matches = { 1665 + DMI_MATCH(DMI_SYS_VENDOR, "SUPI"), 1666 + DMI_MATCH(DMI_PRODUCT_NAME, "S10"), 1669 1667 }, 1670 1668 }, 1671 1669 {
+75 -35
drivers/platform/x86/uniwill/uniwill-acpi.c
··· 314 314 #define LED_CHANNELS 3 315 315 #define LED_MAX_BRIGHTNESS 200 316 316 317 - #define UNIWILL_FEATURE_FN_LOCK_TOGGLE BIT(0) 318 - #define UNIWILL_FEATURE_SUPER_KEY_TOGGLE BIT(1) 317 + #define UNIWILL_FEATURE_FN_LOCK BIT(0) 318 + #define UNIWILL_FEATURE_SUPER_KEY BIT(1) 319 319 #define UNIWILL_FEATURE_TOUCHPAD_TOGGLE BIT(2) 320 320 #define UNIWILL_FEATURE_LIGHTBAR BIT(3) 321 321 #define UNIWILL_FEATURE_BATTERY BIT(4) ··· 330 330 struct acpi_battery_hook hook; 331 331 unsigned int last_charge_ctrl; 332 332 struct mutex battery_lock; /* Protects the list of currently registered batteries */ 333 + unsigned int last_status; 333 334 unsigned int last_switch_status; 334 335 struct mutex super_key_lock; /* Protects the toggling of the super key lock state */ 335 336 struct list_head batteries; ··· 378 377 { KE_IGNORE, UNIWILL_OSD_CAPSLOCK, { KEY_CAPSLOCK }}, 379 378 { KE_IGNORE, UNIWILL_OSD_NUMLOCK, { KEY_NUMLOCK }}, 380 379 381 - /* Reported when the user locks/unlocks the super key */ 382 - { KE_IGNORE, UNIWILL_OSD_SUPER_KEY_LOCK_ENABLE, { KEY_UNKNOWN }}, 383 - { KE_IGNORE, UNIWILL_OSD_SUPER_KEY_LOCK_DISABLE, { KEY_UNKNOWN }}, 380 + /* 381 + * Reported when the user enables/disables the super key. 382 + * Those events might even be reported when the change was done 383 + * using the sysfs attribute! 384 + */ 385 + { KE_IGNORE, UNIWILL_OSD_SUPER_KEY_DISABLE, { KEY_UNKNOWN }}, 386 + { KE_IGNORE, UNIWILL_OSD_SUPER_KEY_ENABLE, { KEY_UNKNOWN }}, 384 387 /* Optional, might not be reported by all devices */ 385 - { KE_IGNORE, UNIWILL_OSD_SUPER_KEY_LOCK_CHANGED, { KEY_UNKNOWN }}, 388 + { KE_IGNORE, UNIWILL_OSD_SUPER_KEY_STATE_CHANGED, { KEY_UNKNOWN }}, 386 389 387 390 /* Reported in manual mode when toggling the airplane mode status */ 388 391 { KE_KEY, UNIWILL_OSD_RFKILL, { KEY_RFKILL }}, ··· 405 400 406 401 /* Reported when the user wants to toggle the mute status */ 407 402 { KE_IGNORE, UNIWILL_OSD_MUTE, { KEY_MUTE }}, 408 - 409 - /* Reported when the user locks/unlocks the Fn key */ 410 - { KE_IGNORE, UNIWILL_OSD_FN_LOCK, { KEY_FN_ESC }}, 411 403 412 404 /* Reported when the user wants to toggle the brightness of the keyboard */ 413 405 { KE_KEY, UNIWILL_OSD_KBDILLUMTOGGLE, { KEY_KBDILLUMTOGGLE }}, ··· 578 576 case EC_ADDR_SECOND_FAN_RPM_1: 579 577 case EC_ADDR_SECOND_FAN_RPM_2: 580 578 case EC_ADDR_BAT_ALERT: 579 + case EC_ADDR_BIOS_OEM: 581 580 case EC_ADDR_PWM_1: 582 581 case EC_ADDR_PWM_2: 583 582 case EC_ADDR_TRIGGER: ··· 603 600 .use_single_write = true, 604 601 }; 605 602 606 - static ssize_t fn_lock_toggle_enable_store(struct device *dev, struct device_attribute *attr, 607 - const char *buf, size_t count) 603 + static ssize_t fn_lock_store(struct device *dev, struct device_attribute *attr, const char *buf, 604 + size_t count) 608 605 { 609 606 struct uniwill_data *data = dev_get_drvdata(dev); 610 607 unsigned int value; ··· 627 624 return count; 628 625 } 629 626 630 - static ssize_t fn_lock_toggle_enable_show(struct device *dev, struct device_attribute *attr, 631 - char *buf) 627 + static ssize_t fn_lock_show(struct device *dev, struct device_attribute *attr, char *buf) 632 628 { 633 629 struct uniwill_data *data = dev_get_drvdata(dev); 634 630 unsigned int value; ··· 640 638 return sysfs_emit(buf, "%d\n", !!(value & FN_LOCK_STATUS)); 641 639 } 642 640 643 - static DEVICE_ATTR_RW(fn_lock_toggle_enable); 641 + static DEVICE_ATTR_RW(fn_lock); 644 642 645 - static ssize_t super_key_toggle_enable_store(struct device *dev, struct device_attribute *attr, 646 - const char *buf, size_t count) 643 + static ssize_t super_key_enable_store(struct device *dev, struct device_attribute *attr, 644 + const char *buf, size_t count) 647 645 { 648 646 struct uniwill_data *data = dev_get_drvdata(dev); 649 647 unsigned int value; ··· 675 673 return count; 676 674 } 677 675 678 - static ssize_t super_key_toggle_enable_show(struct device *dev, struct device_attribute *attr, 679 - char *buf) 676 + static ssize_t super_key_enable_show(struct device *dev, struct device_attribute *attr, char *buf) 680 677 { 681 678 struct uniwill_data *data = dev_get_drvdata(dev); 682 679 unsigned int value; ··· 688 687 return sysfs_emit(buf, "%d\n", !(value & SUPER_KEY_LOCK_STATUS)); 689 688 } 690 689 691 - static DEVICE_ATTR_RW(super_key_toggle_enable); 690 + static DEVICE_ATTR_RW(super_key_enable); 692 691 693 692 static ssize_t touchpad_toggle_enable_store(struct device *dev, struct device_attribute *attr, 694 693 const char *buf, size_t count) ··· 882 881 883 882 static struct attribute *uniwill_attrs[] = { 884 883 /* Keyboard-related */ 885 - &dev_attr_fn_lock_toggle_enable.attr, 886 - &dev_attr_super_key_toggle_enable.attr, 884 + &dev_attr_fn_lock.attr, 885 + &dev_attr_super_key_enable.attr, 887 886 &dev_attr_touchpad_toggle_enable.attr, 888 887 /* Lightbar-related */ 889 888 &dev_attr_rainbow_animation.attr, ··· 898 897 struct device *dev = kobj_to_dev(kobj); 899 898 struct uniwill_data *data = dev_get_drvdata(dev); 900 899 901 - if (attr == &dev_attr_fn_lock_toggle_enable.attr) { 902 - if (uniwill_device_supports(data, UNIWILL_FEATURE_FN_LOCK_TOGGLE)) 900 + if (attr == &dev_attr_fn_lock.attr) { 901 + if (uniwill_device_supports(data, UNIWILL_FEATURE_FN_LOCK)) 903 902 return attr->mode; 904 903 } 905 904 906 - if (attr == &dev_attr_super_key_toggle_enable.attr) { 907 - if (uniwill_device_supports(data, UNIWILL_FEATURE_SUPER_KEY_TOGGLE)) 905 + if (attr == &dev_attr_super_key_enable.attr) { 906 + if (uniwill_device_supports(data, UNIWILL_FEATURE_SUPER_KEY)) 908 907 return attr->mode; 909 908 } ··· 1358 1357 1359 1358 switch (action) { 1360 1359 case UNIWILL_OSD_BATTERY_ALERT: 1360 + if (!uniwill_device_supports(data, UNIWILL_FEATURE_BATTERY)) 1361 + return NOTIFY_DONE; 1362 + 1361 1363 mutex_lock(&data->battery_lock); 1362 1364 list_for_each_entry(entry, &data->batteries, head) { 1363 1365 power_supply_changed(entry->battery); ··· 1372 1368 /* noop for the time being, will change once charging priority 1373 1369 * gets implemented. 1374 1370 */ 1371 + 1372 + return NOTIFY_OK; 1373 + case UNIWILL_OSD_FN_LOCK: 1374 + if (!uniwill_device_supports(data, UNIWILL_FEATURE_FN_LOCK)) 1375 + return NOTIFY_DONE; 1376 + 1377 + sysfs_notify(&data->dev->kobj, NULL, "fn_lock"); 1375 1378 1376 1379 return NOTIFY_OK; 1377 1380 default: ··· 1514 1503 regmap_clear_bits(data->regmap, EC_ADDR_AP_OEM, ENABLE_MANUAL_CTRL); 1515 1504 } 1516 1505 1517 - static int uniwill_suspend_keyboard(struct uniwill_data *data) 1506 + static int uniwill_suspend_fn_lock(struct uniwill_data *data) 1518 1507 { 1519 - if (!uniwill_device_supports(data, UNIWILL_FEATURE_SUPER_KEY_TOGGLE)) 1508 + if (!uniwill_device_supports(data, UNIWILL_FEATURE_FN_LOCK)) 1509 + return 0; 1510 + 1511 + /* 1512 + * The EC_ADDR_BIOS_OEM is marked as volatile, so we have to restore it 1513 + * ourselves.
1514 + */ 1515 + return regmap_read(data->regmap, EC_ADDR_BIOS_OEM, &data->last_status); 1516 + } 1517 + 1518 + static int uniwill_suspend_super_key(struct uniwill_data *data) 1519 + { 1520 + if (!uniwill_device_supports(data, UNIWILL_FEATURE_SUPER_KEY)) 1520 1521 return 0; 1521 1522 1522 1523 /* ··· 1565 1542 struct uniwill_data *data = dev_get_drvdata(dev); 1566 1543 int ret; 1567 1544 1568 - ret = uniwill_suspend_keyboard(data); 1545 + ret = uniwill_suspend_fn_lock(data); 1546 + if (ret < 0) 1547 + return ret; 1548 + 1549 + ret = uniwill_suspend_super_key(data); 1569 1550 if (ret < 0) 1570 1551 return ret; 1571 1552 ··· 1587 1560 return 0; 1588 1561 } 1589 1562 1590 - static int uniwill_resume_keyboard(struct uniwill_data *data) 1563 + static int uniwill_resume_fn_lock(struct uniwill_data *data) 1564 + { 1565 + if (!uniwill_device_supports(data, UNIWILL_FEATURE_FN_LOCK)) 1566 + return 0; 1567 + 1568 + return regmap_update_bits(data->regmap, EC_ADDR_BIOS_OEM, FN_LOCK_STATUS, 1569 + data->last_status); 1570 + } 1571 + 1572 + static int uniwill_resume_super_key(struct uniwill_data *data) 1591 1573 { 1592 1574 unsigned int value; 1593 1575 int ret; 1594 1576 1595 - if (!uniwill_device_supports(data, UNIWILL_FEATURE_SUPER_KEY_TOGGLE)) 1577 + if (!uniwill_device_supports(data, UNIWILL_FEATURE_SUPER_KEY)) 1596 1578 return 0; 1597 1579 1598 1580 ret = regmap_read(data->regmap, EC_ADDR_SWITCH_STATUS, &value); ··· 1644 1608 if (ret < 0) 1645 1609 return ret; 1646 1610 1647 - ret = uniwill_resume_keyboard(data); 1611 + ret = uniwill_resume_fn_lock(data); 1612 + if (ret < 0) 1613 + return ret; 1614 + 1615 + ret = uniwill_resume_super_key(data); 1648 1616 if (ret < 0) 1649 1617 return ret; 1650 1618 ··· 1683 1643 }; 1684 1644 1685 1645 static struct uniwill_device_descriptor lapac71h_descriptor __initdata = { 1686 - .features = UNIWILL_FEATURE_FN_LOCK_TOGGLE | 1687 - UNIWILL_FEATURE_SUPER_KEY_TOGGLE | 1646 + .features = UNIWILL_FEATURE_FN_LOCK | 1647 + UNIWILL_FEATURE_SUPER_KEY | 1688 1648 UNIWILL_FEATURE_TOUCHPAD_TOGGLE | 1689 1649 UNIWILL_FEATURE_BATTERY | 1690 1650 UNIWILL_FEATURE_HWMON, 1691 1651 }; 1692 1652 1693 1653 static struct uniwill_device_descriptor lapkc71f_descriptor __initdata = { 1694 - .features = UNIWILL_FEATURE_FN_LOCK_TOGGLE | 1695 - UNIWILL_FEATURE_SUPER_KEY_TOGGLE | 1654 + .features = UNIWILL_FEATURE_FN_LOCK | 1655 + UNIWILL_FEATURE_SUPER_KEY | 1696 1656 UNIWILL_FEATURE_TOUCHPAD_TOGGLE | 1697 1657 UNIWILL_FEATURE_LIGHTBAR | 1698 1658 UNIWILL_FEATURE_BATTERY |
+3 -3
drivers/platform/x86/uniwill/uniwill-wmi.h
··· 64 64 #define UNIWILL_OSD_KB_LED_LEVEL3 0x3E 65 65 #define UNIWILL_OSD_KB_LED_LEVEL4 0x3F 66 66 67 - #define UNIWILL_OSD_SUPER_KEY_LOCK_ENABLE 0x40 68 - #define UNIWILL_OSD_SUPER_KEY_LOCK_DISABLE 0x41 67 + #define UNIWILL_OSD_SUPER_KEY_DISABLE 0x40 68 + #define UNIWILL_OSD_SUPER_KEY_ENABLE 0x41 69 69 70 70 #define UNIWILL_OSD_MENU_JP 0x42 71 71 ··· 74 74 75 75 #define UNIWILL_OSD_RFKILL 0xA4 76 76 77 - #define UNIWILL_OSD_SUPER_KEY_LOCK_CHANGED 0xA5 77 + #define UNIWILL_OSD_SUPER_KEY_STATE_CHANGED 0xA5 78 78 79 79 #define UNIWILL_OSD_LIGHTBAR_STATE_CHANGED 0xA6 80 80
+3 -3
drivers/pmdomain/bcm/bcm2835-power.c
··· 580 580 581 581 switch (id) { 582 582 case BCM2835_RESET_V3D: 583 - return !PM_READ(PM_GRAFX & PM_V3DRSTN); 583 + return !(PM_READ(PM_GRAFX) & PM_V3DRSTN); 584 584 case BCM2835_RESET_H264: 585 - return !PM_READ(PM_IMAGE & PM_H264RSTN); 585 + return !(PM_READ(PM_IMAGE) & PM_H264RSTN); 586 586 case BCM2835_RESET_ISP: 587 - return !PM_READ(PM_IMAGE & PM_ISPRSTN); 587 + return !(PM_READ(PM_IMAGE) & PM_ISPRSTN); 588 588 default: 589 589 return -EINVAL; 590 590 }
+1 -1
drivers/pmdomain/rockchip/pm-domains.c
··· 1311 1311 static const struct rockchip_domain_info rk3588_pm_domains[] = { 1312 1312 [RK3588_PD_GPU] = DOMAIN_RK3588("gpu", 0x0, BIT(0), 0, 0x0, 0, BIT(1), 0x0, BIT(0), BIT(0), false, true), 1313 1313 [RK3588_PD_NPU] = DOMAIN_RK3588("npu", 0x0, BIT(1), BIT(1), 0x0, 0, 0, 0x0, 0, 0, false, true), 1314 - [RK3588_PD_VCODEC] = DOMAIN_RK3588("vcodec", 0x0, BIT(2), BIT(2), 0x0, 0, 0, 0x0, 0, 0, false, false), 1314 + [RK3588_PD_VCODEC] = DOMAIN_RK3588("vcodec", 0x0, BIT(2), BIT(2), 0x0, 0, 0, 0x0, 0, 0, false, true), 1315 1315 [RK3588_PD_NPUTOP] = DOMAIN_RK3588("nputop", 0x0, BIT(3), 0, 0x0, BIT(11), BIT(2), 0x0, BIT(1), BIT(1), false, false), 1316 1316 [RK3588_PD_NPU1] = DOMAIN_RK3588("npu1", 0x0, BIT(4), 0, 0x0, BIT(12), BIT(3), 0x0, BIT(2), BIT(2), false, false), 1317 1317 [RK3588_PD_NPU2] = DOMAIN_RK3588("npu2", 0x0, BIT(5), 0, 0x0, BIT(13), BIT(4), 0x0, BIT(3), BIT(3), false, false),
+1 -3
drivers/regulator/mt6363-regulator.c
··· 899 899 "Failed to map IRQ%d\n", info->hwirq); 900 900 901 901 ret = devm_add_action_or_reset(dev, mt6363_irq_remove, &info->virq); 902 - if (ret) { 903 - irq_dispose_mapping(info->hwirq); 902 + if (ret) 904 903 return ret; 905 - } 906 904 907 905 config.driver_data = info; 908 906 INIT_DELAYED_WORK(&info->oc_work, mt6363_oc_irq_enable_work);
+1 -1
drivers/regulator/pf9453-regulator.c
··· 809 809 } 810 810 811 811 ret = devm_request_threaded_irq(pf9453->dev, pf9453->irq, NULL, pf9453_irq_handler, 812 - (IRQF_TRIGGER_FALLING | IRQF_ONESHOT), 812 + IRQF_ONESHOT, 813 813 "pf9453-irq", pf9453); 814 814 if (ret) 815 815 return dev_err_probe(pf9453->dev, ret, "Failed to request IRQ: %d\n", pf9453->irq);
+10
drivers/scsi/mpi3mr/mpi3mr_fw.c
··· 1618 1618 ioc_info(mrioc, 1619 1619 "successfully transitioned to %s state\n", 1620 1620 mpi3mr_iocstate_name(ioc_state)); 1621 + mpi3mr_clear_reset_history(mrioc); 1621 1622 return 0; 1622 1623 } 1623 1624 ioc_status = readl(&mrioc->sysif_regs->ioc_status); ··· 1637 1636 msleep(100); 1638 1637 elapsed_time_sec = jiffies_to_msecs(jiffies - start_time)/1000; 1639 1638 } while (elapsed_time_sec < mrioc->ready_timeout); 1639 + 1640 + ioc_state = mpi3mr_get_iocstate(mrioc); 1641 + if (ioc_state == MRIOC_STATE_READY) { 1642 + ioc_info(mrioc, 1643 + "successfully transitioned to %s state after %llu seconds\n", 1644 + mpi3mr_iocstate_name(ioc_state), elapsed_time_sec); 1645 + mpi3mr_clear_reset_history(mrioc); 1646 + return 0; 1647 + } 1640 1648 1641 1649 out_failed: 1642 1650 elapsed_time_sec = jiffies_to_msecs(jiffies - start_time)/1000;
+1 -1
drivers/scsi/scsi_devinfo.c
··· 190 190 {"IBM", "2076", NULL, BLIST_NO_VPD_SIZE}, 191 191 {"IBM", "2105", NULL, BLIST_RETRY_HWERROR}, 192 192 {"iomega", "jaz 1GB", "J.86", BLIST_NOTQ | BLIST_NOLUN}, 193 - {"IOMEGA", "ZIP", NULL, BLIST_NOTQ | BLIST_NOLUN}, 193 + {"IOMEGA", "ZIP", NULL, BLIST_NOTQ | BLIST_NOLUN | BLIST_SKIP_IO_HINTS}, 194 194 {"IOMEGA", "Io20S *F", NULL, BLIST_KEY}, 195 195 {"INSITE", "Floptical F*8I", NULL, BLIST_KEY}, 196 196 {"INSITE", "I325VM", NULL, BLIST_KEY},
+1
drivers/scsi/scsi_scan.c
··· 361 361 * since we use this queue depth most of times. 362 362 */ 363 363 if (scsi_realloc_sdev_budget_map(sdev, depth)) { 364 + kref_put(&sdev->host->tagset_refcnt, scsi_mq_free_tags); 364 365 put_device(&starget->dev); 365 366 kfree(sdev); 366 367 goto out;
+1 -1
drivers/scsi/xen-scsifront.c
··· 1175 1175 return; 1176 1176 } 1177 1177 1178 - if (xenbus_read_driver_state(dev->nodename) == 1178 + if (xenbus_read_driver_state(dev, dev->nodename) == 1179 1179 XenbusStateInitialised) 1180 1180 scsifront_do_lun_hotplug(info, VSCSIFRONT_OP_ADD_LUN); 1181 1181
+1 -1
drivers/spi/spi-dw-dma.c
··· 271 271 msecs_to_jiffies(ms)); 272 272 273 273 if (ms == 0) { 274 - dev_err(&dws->ctlr->cur_msg->spi->dev, 274 + dev_err(&dws->ctlr->dev, 275 275 "DMA transaction timed out\n"); 276 276 return -ETIMEDOUT; 277 277 }
+6 -9
drivers/target/target_core_configfs.c
··· 108 108 const char *page, size_t count) 109 109 { 110 110 ssize_t read_bytes; 111 - struct file *fp; 112 111 ssize_t r = -EINVAL; 112 + struct path path = {}; 113 113 114 114 mutex_lock(&target_devices_lock); 115 115 if (target_devices) { ··· 131 131 db_root_stage[read_bytes - 1] = '\0'; 132 132 133 133 /* validate new db root before accepting it */ 134 - fp = filp_open(db_root_stage, O_RDONLY, 0); 135 - if (IS_ERR(fp)) { 134 + r = kern_path(db_root_stage, LOOKUP_FOLLOW | LOOKUP_DIRECTORY, &path); 135 + if (r) { 136 136 pr_err("db_root: cannot open: %s\n", db_root_stage); 137 + if (r == -ENOTDIR) 138 + pr_err("db_root: not a directory: %s\n", db_root_stage); 137 139 goto unlock; 138 140 } 139 - if (!S_ISDIR(file_inode(fp)->i_mode)) { 140 - filp_close(fp, NULL); 141 - pr_err("db_root: not a directory: %s\n", db_root_stage); 142 - goto unlock; 143 - } 144 - filp_close(fp, NULL); 141 + path_put(&path); 145 142 146 143 strscpy(db_root, db_root_stage); 147 144 pr_debug("Target_Core_ConfigFS: db_root set to %s\n", db_root);
+6 -2
drivers/video/fbdev/au1100fb.c
··· 380 380 #define panel_is_color(panel) (panel->control_base & LCD_CONTROL_PC) 381 381 #define panel_swap_rgb(panel) (panel->control_base & LCD_CONTROL_CCO) 382 382 383 - #if defined(CONFIG_COMPILE_TEST) && !defined(CONFIG_MIPS) 384 - /* This is only defined to be able to compile this driver on non-mips platforms */ 383 + #if defined(CONFIG_COMPILE_TEST) && (!defined(CONFIG_MIPS) || defined(CONFIG_64BIT)) 384 + /* 385 + * KSEG1ADDR() is defined in arch/mips/include/asm/addrspace.h 386 + * for 32 bit configurations. Provide a stub for compile testing 387 + * on other platforms. 388 + */ 385 389 #define KSEG1ADDR(x) (x) 386 390 #endif 387 391
+2 -5
drivers/xen/xen-acpi-processor.c
··· 378 378 acpi_psd[acpi_id].domain); 379 379 } 380 380 381 - status = acpi_evaluate_object(handle, "_CST", NULL, &buffer); 382 - if (ACPI_FAILURE(status)) { 383 - if (!pblk) 384 - return AE_OK; 385 - } 381 + if (!pblk && !acpi_has_method(handle, "_CST")) 382 + return AE_OK; 386 383 /* .. and it has a C-state */ 387 384 __set_bit(acpi_id, acpi_id_cst_present); 388 385
+5 -5
drivers/xen/xen-pciback/xenbus.c
··· 149 149 150 150 mutex_lock(&pdev->dev_lock); 151 151 /* Make sure we only do this setup once */ 152 - if (xenbus_read_driver_state(pdev->xdev->nodename) != 152 + if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename) != 153 153 XenbusStateInitialised) 154 154 goto out; 155 155 156 156 /* Wait for frontend to state that it has published the configuration */ 157 - if (xenbus_read_driver_state(pdev->xdev->otherend) != 157 + if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->otherend) != 158 158 XenbusStateInitialised) 159 159 goto out; 160 160 ··· 374 374 dev_dbg(&pdev->xdev->dev, "Reconfiguring device ...\n"); 375 375 376 376 mutex_lock(&pdev->dev_lock); 377 - if (xenbus_read_driver_state(pdev->xdev->nodename) != state) 377 + if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename) != state) 378 378 goto out; 379 379 380 380 err = xenbus_scanf(XBT_NIL, pdev->xdev->nodename, "num_devs", "%d", ··· 572 572 /* It's possible we could get the call to setup twice, so make sure 573 573 * we're not already connected. 574 574 */ 575 - if (xenbus_read_driver_state(pdev->xdev->nodename) != 575 + if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename) != 576 576 XenbusStateInitWait) 577 577 goto out; 578 578 ··· 662 662 struct xen_pcibk_device *pdev = 663 663 container_of(watch, struct xen_pcibk_device, be_watch); 664 664 665 - switch (xenbus_read_driver_state(pdev->xdev->nodename)) { 665 + switch (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename)) { 666 666 case XenbusStateInitWait: 667 667 xen_pcibk_setup_backend(pdev); 668 668 break;
+14 -3
drivers/xen/xenbus/xenbus_client.c
··· 226 226 struct xenbus_transaction xbt; 227 227 int current_state; 228 228 int err, abort; 229 + bool vanished = false; 229 230 230 - if (state == dev->state) 231 + if (state == dev->state || dev->vanished) 231 232 return 0; 232 233 233 234 again: ··· 243 242 err = xenbus_scanf(xbt, dev->nodename, "state", "%d", &current_state); 244 243 if (err != 1) 245 244 goto abort; 245 + if (current_state != dev->state && current_state == XenbusStateInitialising) { 246 + vanished = true; 247 + goto abort; 248 + } 246 249 247 250 err = xenbus_printf(xbt, dev->nodename, "state", "%d", state); 248 251 if (err) { ··· 261 256 if (err == -EAGAIN && !abort) 262 257 goto again; 263 258 xenbus_switch_fatal(dev, depth, err, "ending transaction"); 264 - } else 259 + } else if (!vanished) 265 260 dev->state = state; 266 261 267 262 return 0; ··· 936 931 937 932 /** 938 933 * xenbus_read_driver_state - read state from a store path 934 + * @dev: xenbus device pointer 939 935 * @path: path for driver 940 936 * 941 937 * Returns: the state of the driver rooted at the given store path, or 942 938 * XenbusStateUnknown if no state can be read. 943 939 */ 944 - enum xenbus_state xenbus_read_driver_state(const char *path) 940 + enum xenbus_state xenbus_read_driver_state(const struct xenbus_device *dev, 941 + const char *path) 945 942 { 946 943 enum xenbus_state result; 944 + 945 + if (dev && dev->vanished) 946 + return XenbusStateUnknown; 947 + 947 948 int err = xenbus_gather(XBT_NIL, path, "state", "%d", &result, NULL); 948 949 if (err) 949 950 result = XenbusStateUnknown;
+39 -3
drivers/xen/xenbus/xenbus_probe.c
··· 191 191 return; 192 192 } 193 193 194 - state = xenbus_read_driver_state(dev->otherend); 194 + state = xenbus_read_driver_state(dev, dev->otherend); 195 195 196 196 dev_dbg(&dev->dev, "state is %d, (%s), %s, %s\n", 197 197 state, xenbus_strstate(state), dev->otherend_watch.node, path); ··· 364 364 * closed. 365 365 */ 366 366 if (!drv->allow_rebind || 367 - xenbus_read_driver_state(dev->nodename) == XenbusStateClosing) 367 + xenbus_read_driver_state(dev, dev->nodename) == XenbusStateClosing) 368 368 xenbus_switch_state(dev, XenbusStateClosed); 369 369 } 370 370 EXPORT_SYMBOL_GPL(xenbus_dev_remove); ··· 444 444 info.dev = NULL; 445 445 bus_for_each_dev(bus, NULL, &info, cleanup_dev); 446 446 if (info.dev) { 447 + dev_warn(&info.dev->dev, 448 + "device forcefully removed from xenstore\n"); 449 + info.dev->vanished = true; 447 450 device_unregister(&info.dev->dev); 448 451 put_device(&info.dev->dev); 449 452 } ··· 517 514 size_t stringlen; 518 515 char *tmpstring; 519 516 520 - enum xenbus_state state = xenbus_read_driver_state(nodename); 517 + enum xenbus_state state = xenbus_read_driver_state(NULL, nodename); 521 518 522 519 if (state != XenbusStateInitialising) { 523 520 /* Device is not new, so ignore it. This can happen if a ··· 662 659 return; 663 660 664 661 dev = xenbus_device_find(root, &bus->bus); 662 + /* 663 + * A backend domain crash results in an uncoordinated frontend removal 664 + * that does not go through XenbusStateClosing. If this is a new 665 + * instance of the same device, Xen tools will have reset the state to 666 + * XenbusStateInitialising. 667 + * It might be that the backend crashed early during the init phase of 668 + * device setup, in which case the known state would have been 669 + * XenbusStateInitialising. So check whether the backend domid matches 670 + * the saved one. If the new backend happens to have the same domid as 671 + * the old one, we can just carry on, as there is no inconsistency in 672 + * this case. 673 + */ 674 + if (dev && !strcmp(bus->root, "device")) { 675 + enum xenbus_state state = xenbus_read_driver_state(dev, dev->nodename); 676 + unsigned int backend = xenbus_read_unsigned(root, "backend-id", 677 + dev->otherend_id); 678 + 679 + if (state == XenbusStateInitialising && 680 + (state != dev->state || backend != dev->otherend_id)) { 681 + /* 682 + * State has been reset, assume the old one vanished 683 + * and a new one needs to be probed. 684 + */ 685 + dev_warn(&dev->dev, 686 + "state reset occurred, reconnecting\n"); 687 + dev->vanished = true; 688 + } 689 + if (dev->vanished) { 690 + device_unregister(&dev->dev); 691 + put_device(&dev->dev); 692 + dev = NULL; 693 + } 694 + } 665 695 if (!dev) 666 696 xenbus_probe_node(bus, type, root); 667 697 else
+1 -1
drivers/xen/xenbus/xenbus_probe_frontend.c
··· 253 253 } else if (xendev->state < XenbusStateConnected) { 254 254 enum xenbus_state rstate = XenbusStateUnknown; 255 255 if (xendev->otherend) 256 - rstate = xenbus_read_driver_state(xendev->otherend); 256 + rstate = xenbus_read_driver_state(xendev, xendev->otherend); 257 257 pr_warn("Timeout connecting to device: %s (local state %d, remote state %d)\n", 258 258 xendev->nodename, xendev->state, rstate); 259 259 }
-1
fs/btrfs/block-group.c
··· 3340 3340 btrfs_abort_transaction(trans, ret); 3341 3341 goto out_put; 3342 3342 } 3343 - WARN_ON(ret); 3344 3343 3345 3344 /* We've already setup this transaction, go ahead and exit */ 3346 3345 if (block_group->cache_generation == trans->transid &&
+1 -1
fs/btrfs/delayed-inode.c
··· 1657 1657 if (unlikely(ret)) { 1658 1658 btrfs_err(trans->fs_info, 1659 1659 "failed to add delayed dir index item, root: %llu, inode: %llu, index: %llu, error: %d", 1660 - index, btrfs_root_id(node->root), node->inode_id, ret); 1660 + btrfs_root_id(node->root), node->inode_id, index, ret); 1661 1661 btrfs_delayed_item_release_metadata(dir->root, item); 1662 1662 btrfs_release_delayed_item(item); 1663 1663 }
+21 -15
fs/btrfs/disk-io.c
··· 1994 1994 int level = btrfs_super_log_root_level(disk_super); 1995 1995 1996 1996 if (unlikely(fs_devices->rw_devices == 0)) { 1997 - btrfs_warn(fs_info, "log replay required on RO media"); 1997 + btrfs_err(fs_info, "log replay required on RO media"); 1998 1998 return -EIO; 1999 1999 } ··· 2008 2008 check.owner_root = BTRFS_TREE_LOG_OBJECTID; 2009 2009 log_tree_root->node = read_tree_block(fs_info, bytenr, &check); 2010 2010 if (IS_ERR(log_tree_root->node)) { 2011 - btrfs_warn(fs_info, "failed to read log tree"); 2012 2011 ret = PTR_ERR(log_tree_root->node); 2013 2012 log_tree_root->node = NULL; 2013 + btrfs_err(fs_info, "failed to read log tree with error: %d", ret); 2014 2014 btrfs_put_root(log_tree_root); 2015 2015 return ret; 2016 2016 } ··· 2023 2023 /* returns with log_tree_root freed on success */ 2024 2024 ret = btrfs_recover_log_trees(log_tree_root); 2025 2025 btrfs_put_root(log_tree_root); 2026 - if (ret) { 2027 - btrfs_handle_fs_error(fs_info, ret, 2028 - "Failed to recover log tree"); 2026 + if (unlikely(ret)) { 2027 + ASSERT(BTRFS_FS_ERROR(fs_info) != 0); 2028 + btrfs_err(fs_info, "failed to recover log trees with error: %d", ret); 2029 2029 return ret; 2030 2030 } ··· 2972 2972 task = kthread_run(btrfs_uuid_rescan_kthread, fs_info, "btrfs-uuid"); 2973 2973 if (IS_ERR(task)) { 2974 2974 /* fs_info->update_uuid_tree_gen remains 0 in all error case */ 2975 - btrfs_warn(fs_info, "failed to start uuid_rescan task"); 2976 2975 up(&fs_info->uuid_tree_rescan_sem); 2977 2976 return PTR_ERR(task); 2978 2977 } ··· 3187 3188 if (incompat & ~BTRFS_FEATURE_INCOMPAT_SUPP) { 3188 3189 btrfs_err(fs_info, 3189 3190 "cannot mount because of unknown incompat features (0x%llx)", 3190 - incompat); 3191 + incompat & ~BTRFS_FEATURE_INCOMPAT_SUPP); 3191 3192 return -EINVAL; 3192 3193 } ··· 3219 3220 if (compat_ro_unsupp && is_rw_mount) { 3220 3221 btrfs_err(fs_info, 3221 3222 "cannot mount read-write because of unknown compat_ro features (0x%llx)", 3222 - compat_ro); 3223 + compat_ro_unsupp); 3223 3224 return -EINVAL; 3224 3225 } ··· 3232 3233 !btrfs_test_opt(fs_info, NOLOGREPLAY)) { 3233 3234 btrfs_err(fs_info, 3234 3235 "cannot replay dirty log with unsupported compat_ro features (0x%llx), try rescue=nologreplay", 3235 - compat_ro); 3236 + compat_ro_unsupp); 3236 3237 return -EINVAL; 3237 3238 } ··· 3641 3642 fs_info->fs_root = btrfs_get_fs_root(fs_info, BTRFS_FS_TREE_OBJECTID, true); 3642 3643 if (IS_ERR(fs_info->fs_root)) { 3643 3644 ret = PTR_ERR(fs_info->fs_root); 3644 - btrfs_warn(fs_info, "failed to read fs tree: %d", ret); 3645 + btrfs_err(fs_info, "failed to read fs tree: %d", ret); 3645 3646 fs_info->fs_root = NULL; 3646 3647 goto fail_qgroup; 3647 3648 } ··· 3662 3663 btrfs_info(fs_info, "checking UUID tree"); 3663 3664 ret = btrfs_check_uuid_tree(fs_info); 3664 3665 if (ret) { 3665 - btrfs_warn(fs_info, 3666 - "failed to check the UUID tree: %d", ret); 3666 + btrfs_err(fs_info, "failed to check the UUID tree: %d", ret); 3667 3667 close_ctree(fs_info); 3668 3668 return ret; 3669 3669 } ··· 4397 4399 */ 4398 4400 btrfs_flush_workqueue(fs_info->delayed_workers); 4399 4401 4400 - ret = btrfs_commit_super(fs_info); 4401 - if (ret) 4402 - btrfs_err(fs_info, "commit super ret %d", ret); 4402 + /* 4403 + * If the filesystem is shutdown, then an attempt to commit the 4404 + * super block (or any write) will just fail. Since we freeze 4405 + * the filesystem before shutting it down, the filesystem is in 4406 + * a consistent state and we don't need to commit super blocks. 4407 + */ 4408 + if (!btrfs_is_shutdown(fs_info)) { 4409 + ret = btrfs_commit_super(fs_info); 4410 + if (ret) 4411 + btrfs_err(fs_info, "commit super block returned %d", ret); 4412 + } 4403 4413 4404 4414 4405 4415 kthread_stop(fs_info->transaction_kthread);
+7 -1
fs/btrfs/extent-tree.c
··· 2933 2933 while (!TRANS_ABORTED(trans) && cached_state) { 2934 2934 struct extent_state *next_state; 2935 2935 2936 - if (btrfs_test_opt(fs_info, DISCARD_SYNC)) 2936 + if (btrfs_test_opt(fs_info, DISCARD_SYNC)) { 2937 2937 ret = btrfs_discard_extent(fs_info, start, 2938 2938 end + 1 - start, NULL, true); 2939 + if (ret) { 2940 + btrfs_warn(fs_info, 2941 + "discard failed for extent [%llu, %llu]: errno=%d %s", 2942 + start, end, ret, btrfs_decode_error(ret)); 2943 + } 2944 + } 2939 2945 2940 2946 next_state = btrfs_next_extent_state(unpin, cached_state); 2941 2947 btrfs_clear_extent_dirty(unpin, start, end, &cached_state);
+17 -2
fs/btrfs/inode.c
··· 1392 1392 return ret; 1393 1393 1394 1394 free_reserved: 1395 + /* 1396 + * If we have reserved an extent for the current range and failed to 1397 + * create the respective extent map or ordered extent, it means that 1398 + * when we reserved the extent we decremented the extent's size from 1399 + * the data space_info's bytes_may_use counter and 1400 + * incremented the space_info's bytes_reserved counter by the same 1401 + * amount. 1402 + * 1403 + * We must make sure extent_clear_unlock_delalloc() does not try 1404 + * to decrement again the data space_info's bytes_may_use counter, which 1405 + * will be handled by btrfs_free_reserved_extent(). 1406 + * 1407 + * Therefore we do not pass it the flag EXTENT_CLEAR_DATA_RESV, but only 1408 + * EXTENT_CLEAR_META_RESV. 1409 + */ 1395 1410 extent_clear_unlock_delalloc(inode, file_offset, cur_end, locked_folio, cached, 1396 1411 EXTENT_LOCKED | EXTENT_DELALLOC | 1397 1412 EXTENT_DELALLOC_NEW | 1398 - EXTENT_DEFRAG | EXTENT_DO_ACCOUNTING, 1413 + EXTENT_DEFRAG | EXTENT_CLEAR_META_RESV, 1399 1414 PAGE_UNLOCK | PAGE_START_WRITEBACK | 1400 1415 PAGE_END_WRITEBACK); 1401 1416 btrfs_qgroup_free_data(inode, NULL, file_offset, cur_len, NULL); ··· 4779 4764 spin_unlock(&dest->root_item_lock); 4780 4765 btrfs_warn(fs_info, 4781 4766 "attempt to delete subvolume %llu with active swapfile", 4782 - btrfs_root_id(root)); 4767 + btrfs_root_id(dest)); 4783 4768 ret = -EPERM; 4784 4769 goto out_up_write; 4785 4770 }
+6 -1
fs/btrfs/ioctl.c
··· 4581 4581 { 4582 4582 struct btrfs_inode *inode = BTRFS_I(file_inode(iocb->ki_filp)); 4583 4583 struct extent_io_tree *io_tree = &inode->io_tree; 4584 - struct page **pages; 4584 + struct page **pages = NULL; 4585 4585 struct btrfs_uring_priv *priv = NULL; 4586 4586 unsigned long nr_pages; 4587 4587 int ret; ··· 4639 4639 btrfs_unlock_extent(io_tree, start, lockend, &cached_state); 4640 4640 btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED); 4641 4641 kfree(priv); 4642 + for (int i = 0; i < nr_pages; i++) { 4643 + if (pages[i]) 4644 + __free_page(pages[i]); 4645 + } 4646 + kfree(pages); 4642 4647 return ret; 4643 4648 } 4644 4649
+1 -1
fs/btrfs/qgroup.c
··· 370 370 nr_members++; 371 371 } 372 372 mismatch = (parent->excl != excl_sum || parent->rfer != rfer_sum || 373 - parent->excl_cmpr != excl_cmpr_sum || parent->rfer_cmpr != excl_cmpr_sum); 373 + parent->excl_cmpr != excl_cmpr_sum || parent->rfer_cmpr != rfer_cmpr_sum); 374 374 375 375 WARN(mismatch, 376 376 "parent squota qgroup %hu/%llu has mismatched usage from its %d members. "
+6
fs/btrfs/relocation.c
··· 4723 4723 ret = btrfs_remove_dev_extents(trans, chunk_map); 4724 4724 if (unlikely(ret)) { 4725 4725 btrfs_abort_transaction(trans, ret); 4726 + btrfs_end_transaction(trans); 4726 4727 return ret; 4727 4728 } 4728 4729 ··· 4733 4732 if (unlikely(ret)) { 4734 4733 mutex_unlock(&trans->fs_info->chunk_mutex); 4735 4734 btrfs_abort_transaction(trans, ret); 4735 + btrfs_end_transaction(trans); 4736 4736 return ret; 4737 4737 } 4738 4738 } ··· 4752 4750 ret = remove_chunk_stripes(trans, chunk_map, path); 4753 4751 if (unlikely(ret)) { 4754 4752 btrfs_abort_transaction(trans, ret); 4753 + btrfs_end_transaction(trans); 4755 4754 return ret; 4756 4755 } 4757 4756 ··· 5985 5982 struct btrfs_block_group *dest_bg; 5986 5983 5987 5984 dest_bg = btrfs_lookup_block_group(fs_info, new_addr); 5985 + if (unlikely(!dest_bg)) 5986 + return -EUCLEAN; 5987 + 5988 5988 adjust_block_group_remap_bytes(trans, dest_bg, -overlap_length); 5989 5989 btrfs_put_block_group(dest_bg); 5990 5990 ret = btrfs_add_to_free_space_tree(trans,
+1 -1
fs/btrfs/scrub.c
··· 743 743 btrfs_warn_rl(fs_info, 744 744 "scrub: tree block %llu mirror %u has bad fsid, has %pU want %pU", 745 745 logical, stripe->mirror_num, 746 - header->fsid, fs_info->fs_devices->fsid); 746 + header->fsid, fs_info->fs_devices->metadata_uuid); 747 747 return; 748 748 } 749 749 if (memcmp(header->chunk_tree_uuid, fs_info->chunk_tree_uuid,
+2 -2
fs/btrfs/tree-checker.c
··· 1740 1740 objectid > BTRFS_LAST_FREE_OBJECTID)) { 1741 1741 extent_err(leaf, slot, 1742 1742 "invalid extent data backref objectid value %llu", 1743 - root); 1743 + objectid); 1744 1744 return -EUCLEAN; 1745 1745 } 1746 1746 if (unlikely(!IS_ALIGNED(offset, leaf->fs_info->sectorsize))) { ··· 1921 1921 if (unlikely(prev_key->offset + prev_len > key->offset)) { 1922 1922 generic_err(leaf, slot, 1923 1923 "dev extent overlap, prev offset %llu len %llu current offset %llu", 1924 - prev_key->objectid, prev_len, key->offset); 1924 + prev_key->offset, prev_len, key->offset); 1925 1925 return -EUCLEAN; 1926 1926 } 1927 1927 }
+5 -3
fs/btrfs/volumes.c
··· 6907 6907 6908 6908 ret = btrfs_translate_remap(fs_info, &new_logical, length); 6909 6909 if (ret) 6910 - return ret; 6910 + goto out; 6911 6911 6912 6912 if (new_logical != logical) { 6913 6913 btrfs_free_chunk_map(map); ··· 6921 6921 } 6922 6922 6923 6923 num_copies = btrfs_chunk_map_num_copies(map); 6924 - if (io_geom.mirror_num > num_copies) 6925 - return -EINVAL; 6924 + if (io_geom.mirror_num > num_copies) { 6925 + ret = -EINVAL; 6926 + goto out; 6927 + } 6926 6928 6927 6929 map_offset = logical - map->start; 6928 6930 io_geom.raid56_full_stripe_start = (u64)-1;
+12 -3
fs/iomap/buffered-io.c
··· 80 80 { 81 81 struct iomap_folio_state *ifs = folio->private; 82 82 unsigned long flags; 83 - bool uptodate = true; 83 + bool mark_uptodate = true; 84 84 85 85 if (folio_test_uptodate(folio)) 86 86 return; 87 87 88 88 if (ifs) { 89 89 spin_lock_irqsave(&ifs->state_lock, flags); 90 - uptodate = ifs_set_range_uptodate(folio, ifs, off, len); 90 + /* 91 + * If a read with bytes pending is in progress, we must not call 92 + * folio_mark_uptodate(). The read completion path 93 + * (iomap_read_end()) will call folio_end_read(), which uses XOR 94 + * semantics to set the uptodate bit. If we set it here, the XOR 95 + * in folio_end_read() will clear it, leaving the folio not 96 + * uptodate. 97 + */ 98 + mark_uptodate = ifs_set_range_uptodate(folio, ifs, off, len) && 99 + !ifs->read_bytes_pending; 91 100 spin_unlock_irqrestore(&ifs->state_lock, flags); 92 101 } 93 102 94 - if (uptodate) 103 + if (mark_uptodate) 95 104 folio_mark_uptodate(folio); 96 105 } 97 106
+14 -1
fs/iomap/direct-io.c
··· 87 87 return FSERR_DIRECTIO_READ; 88 88 } 89 89 90 + static inline bool should_report_dio_fserror(const struct iomap_dio *dio) 91 + { 92 + switch (dio->error) { 93 + case 0: 94 + case -EAGAIN: 95 + case -ENOTBLK: 96 + /* don't send fsnotify for success or magic retry codes */ 97 + return false; 98 + default: 99 + return true; 100 + } 101 + } 102 + 90 103 ssize_t iomap_dio_complete(struct iomap_dio *dio) 91 104 { 92 105 const struct iomap_dio_ops *dops = dio->dops; ··· 109 96 110 97 if (dops && dops->end_io) 111 98 ret = dops->end_io(iocb, dio->size, ret, dio->flags); 112 - if (dio->error) 99 + if (should_report_dio_fserror(dio)) 113 100 fserror_report_io(file_inode(iocb->ki_filp), 114 101 iomap_dio_err_type(dio), offset, dio->size, 115 102 dio->error, GFP_NOFS);
+7 -6
fs/iomap/ioend.c
··· 215 215 WARN_ON_ONCE(!folio->private && map_len < dirty_len); 216 216 217 217 switch (wpc->iomap.type) { 218 - case IOMAP_INLINE: 219 - WARN_ON_ONCE(1); 220 - return -EIO; 218 + case IOMAP_UNWRITTEN: 219 + ioend_flags |= IOMAP_IOEND_UNWRITTEN; 220 + break; 221 + case IOMAP_MAPPED: 222 + break; 221 223 case IOMAP_HOLE: 222 224 return map_len; 223 225 default: 224 - break; 226 + WARN_ON_ONCE(1); 227 + return -EIO; 225 228 } 226 229 227 - if (wpc->iomap.type == IOMAP_UNWRITTEN) 228 - ioend_flags |= IOMAP_IOEND_UNWRITTEN; 229 230 if (wpc->iomap.flags & IOMAP_F_SHARED) 230 231 ioend_flags |= IOMAP_IOEND_SHARED; 231 232 if (folio_test_dropbehind(folio))
+212 -16
fs/netfs/direct_write.c
··· 10 10 #include "internal.h" 11 11 12 12 /* 13 + * Perform the cleanup rituals after an unbuffered write is complete. 14 + */ 15 + static void netfs_unbuffered_write_done(struct netfs_io_request *wreq) 16 + { 17 + struct netfs_inode *ictx = netfs_inode(wreq->inode); 18 + 19 + _enter("R=%x", wreq->debug_id); 20 + 21 + /* Okay, declare that all I/O is complete. */ 22 + trace_netfs_rreq(wreq, netfs_rreq_trace_write_done); 23 + 24 + if (!wreq->error) 25 + netfs_update_i_size(ictx, &ictx->inode, wreq->start, wreq->transferred); 26 + 27 + if (wreq->origin == NETFS_DIO_WRITE && 28 + wreq->mapping->nrpages) { 29 + /* mmap may have got underfoot and we may now have folios 30 + * locally covering the region we just wrote. Attempt to 31 + * discard the folios, but leave in place any modified locally. 32 + * ->write_iter() is prevented from interfering by the DIO 33 + * counter. 34 + */ 35 + pgoff_t first = wreq->start >> PAGE_SHIFT; 36 + pgoff_t last = (wreq->start + wreq->transferred - 1) >> PAGE_SHIFT; 37 + 38 + invalidate_inode_pages2_range(wreq->mapping, first, last); 39 + } 40 + 41 + if (wreq->origin == NETFS_DIO_WRITE) 42 + inode_dio_end(wreq->inode); 43 + 44 + _debug("finished"); 45 + netfs_wake_rreq_flag(wreq, NETFS_RREQ_IN_PROGRESS, netfs_rreq_trace_wake_ip); 46 + /* As we cleared NETFS_RREQ_IN_PROGRESS, we acquired its ref. */ 47 + 48 + if (wreq->iocb) { 49 + size_t written = umin(wreq->transferred, wreq->len); 50 + 51 + wreq->iocb->ki_pos += written; 52 + if (wreq->iocb->ki_complete) { 53 + trace_netfs_rreq(wreq, netfs_rreq_trace_ki_complete); 54 + wreq->iocb->ki_complete(wreq->iocb, wreq->error ?: written); 55 + } 56 + wreq->iocb = VFS_PTR_POISON; 57 + } 58 + 59 + netfs_clear_subrequests(wreq); 60 + } 61 + 62 + /* 63 + * Collect the subrequest results of unbuffered write subrequests. 64 + */ 65 + static void netfs_unbuffered_write_collect(struct netfs_io_request *wreq, 66 + struct netfs_io_stream *stream, 67 + struct netfs_io_subrequest *subreq) 68 + { 69 + trace_netfs_collect_sreq(wreq, subreq); 70 + 71 + spin_lock(&wreq->lock); 72 + list_del_init(&subreq->rreq_link); 73 + spin_unlock(&wreq->lock); 74 + 75 + wreq->transferred += subreq->transferred; 76 + iov_iter_advance(&wreq->buffer.iter, subreq->transferred); 77 + 78 + stream->collected_to = subreq->start + subreq->transferred; 79 + wreq->collected_to = stream->collected_to; 80 + netfs_put_subrequest(subreq, netfs_sreq_trace_put_done); 81 + 82 + trace_netfs_collect_stream(wreq, stream); 83 + trace_netfs_collect_state(wreq, wreq->collected_to, 0); 84 + } 85 + 86 + /* 87 + * Write data to the server without going through the pagecache and without 88 + * writing it to the local cache. We dispatch the subrequests serially and 89 + * wait for each to complete before dispatching the next, lest we leave a gap 90 + * in the data written due to a failure such as ENOSPC. We could, however 91 + * attempt to do preparation such as content encryption for the next subreq 92 + * whilst the current is in progress. 
93 + */ 94 + static int netfs_unbuffered_write(struct netfs_io_request *wreq) 95 + { 96 + struct netfs_io_subrequest *subreq = NULL; 97 + struct netfs_io_stream *stream = &wreq->io_streams[0]; 98 + int ret; 99 + 100 + _enter("%llx", wreq->len); 101 + 102 + if (wreq->origin == NETFS_DIO_WRITE) 103 + inode_dio_begin(wreq->inode); 104 + 105 + stream->collected_to = wreq->start; 106 + 107 + for (;;) { 108 + bool retry = false; 109 + 110 + if (!subreq) { 111 + netfs_prepare_write(wreq, stream, wreq->start + wreq->transferred); 112 + subreq = stream->construct; 113 + stream->construct = NULL; 114 + stream->front = NULL; 115 + } 116 + 117 + /* Check if (re-)preparation failed. */ 118 + if (unlikely(test_bit(NETFS_SREQ_FAILED, &subreq->flags))) { 119 + netfs_write_subrequest_terminated(subreq, subreq->error); 120 + wreq->error = subreq->error; 121 + break; 122 + } 123 + 124 + iov_iter_truncate(&subreq->io_iter, wreq->len - wreq->transferred); 125 + if (!iov_iter_count(&subreq->io_iter)) 126 + break; 127 + 128 + subreq->len = netfs_limit_iter(&subreq->io_iter, 0, 129 + stream->sreq_max_len, 130 + stream->sreq_max_segs); 131 + iov_iter_truncate(&subreq->io_iter, subreq->len); 132 + stream->submit_extendable_to = subreq->len; 133 + 134 + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); 135 + stream->issue_write(subreq); 136 + 137 + /* Async, need to wait. */ 138 + netfs_wait_for_in_progress_stream(wreq, stream); 139 + 140 + if (test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) { 141 + retry = true; 142 + } else if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) { 143 + ret = subreq->error; 144 + wreq->error = ret; 145 + netfs_see_subrequest(subreq, netfs_sreq_trace_see_failed); 146 + subreq = NULL; 147 + break; 148 + } 149 + ret = 0; 150 + 151 + if (!retry) { 152 + netfs_unbuffered_write_collect(wreq, stream, subreq); 153 + subreq = NULL; 154 + if (wreq->transferred >= wreq->len) 155 + break; 156 + if (!wreq->iocb && signal_pending(current)) { 157 + ret = wreq->transferred ? -EINTR : -ERESTARTSYS; 158 + trace_netfs_rreq(wreq, netfs_rreq_trace_intr); 159 + break; 160 + } 161 + continue; 162 + } 163 + 164 + /* We need to retry the last subrequest, so first reset the 165 + * iterator, taking into account what, if anything, we managed 166 + * to transfer. 
167 + */ 168 + subreq->error = -EAGAIN; 169 + trace_netfs_sreq(subreq, netfs_sreq_trace_retry); 170 + if (subreq->transferred > 0) 171 + iov_iter_advance(&wreq->buffer.iter, subreq->transferred); 172 + 173 + if (stream->source == NETFS_UPLOAD_TO_SERVER && 174 + wreq->netfs_ops->retry_request) 175 + wreq->netfs_ops->retry_request(wreq, stream); 176 + 177 + __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 178 + __clear_bit(NETFS_SREQ_BOUNDARY, &subreq->flags); 179 + __clear_bit(NETFS_SREQ_FAILED, &subreq->flags); 180 + subreq->io_iter = wreq->buffer.iter; 181 + subreq->start = wreq->start + wreq->transferred; 182 + subreq->len = wreq->len - wreq->transferred; 183 + subreq->transferred = 0; 184 + subreq->retry_count += 1; 185 + stream->sreq_max_len = UINT_MAX; 186 + stream->sreq_max_segs = INT_MAX; 187 + 188 + netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); 189 + stream->prepare_write(subreq); 190 + 191 + __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags); 192 + netfs_stat(&netfs_n_wh_retry_write_subreq); 193 + } 194 + 195 + netfs_unbuffered_write_done(wreq); 196 + _leave(" = %d", ret); 197 + return ret; 198 + } 199 + 200 + static void netfs_unbuffered_write_async(struct work_struct *work) 201 + { 202 + struct netfs_io_request *wreq = container_of(work, struct netfs_io_request, work); 203 + 204 + netfs_unbuffered_write(wreq); 205 + netfs_put_request(wreq, netfs_rreq_trace_put_complete); 206 + } 207 + 208 + /* 13 209 * Perform an unbuffered write where we may have to do an RMW operation on an 14 210 * encrypted file. This can also be used for direct I/O writes. 15 211 */ ··· 266 70 */ 267 71 wreq->buffer.iter = *iter; 268 72 } 73 + 74 + wreq->len = iov_iter_count(&wreq->buffer.iter); 269 75 } 270 76 271 77 __set_bit(NETFS_RREQ_USE_IO_ITER, &wreq->flags); 272 - if (async) 273 - __set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags); 274 78 275 79 /* Copy the data into the bounce buffer and encrypt it. */ 276 80 // TODO 277 81 278 82 /* Dispatch the write. */ 279 83 __set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags); 280 - if (async) 84 + 85 + if (async) { 86 + INIT_WORK(&wreq->work, netfs_unbuffered_write_async); 281 87 wreq->iocb = iocb; 282 - wreq->len = iov_iter_count(&wreq->buffer.iter); 283 - ret = netfs_unbuffered_write(wreq, is_sync_kiocb(iocb), wreq->len); 284 - if (ret < 0) { 285 - _debug("begin = %zd", ret); 286 - goto out; 287 - } 288 - 289 - if (!async) { 290 - ret = netfs_wait_for_write(wreq); 291 - if (ret > 0) 292 - iocb->ki_pos += ret; 293 - } else { 88 + queue_work(system_dfl_wq, &wreq->work); 294 89 ret = -EIOCBQUEUED; 90 + } else { 91 + ret = netfs_unbuffered_write(wreq); 92 + if (ret < 0) { 93 + _debug("begin = %zd", ret); 94 + } else { 95 + iocb->ki_pos += wreq->transferred; 96 + ret = wreq->transferred ?: wreq->error; 97 + } 98 + 99 + netfs_put_request(wreq, netfs_rreq_trace_put_complete); 295 100 } 296 101 297 - out: 298 102 netfs_put_request(wreq, netfs_rreq_trace_put_return); 299 103 return ret; 300 104
+3 -1
fs/netfs/internal.h
··· 198 198 struct file *file, 199 199 loff_t start, 200 200 enum netfs_io_origin origin); 201 + void netfs_prepare_write(struct netfs_io_request *wreq, 202 + struct netfs_io_stream *stream, 203 + loff_t start); 201 204 void netfs_reissue_write(struct netfs_io_stream *stream, 202 205 struct netfs_io_subrequest *subreq, 203 206 struct iov_iter *source); ··· 215 212 struct folio **writethrough_cache); 216 213 ssize_t netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc, 217 214 struct folio *writethrough_cache); 218 - int netfs_unbuffered_write(struct netfs_io_request *wreq, bool may_wait, size_t len); 219 215 220 216 /* 221 217 * write_retry.c
-21
fs/netfs/write_collect.c
··· 399 399 ictx->ops->invalidate_cache(wreq); 400 400 } 401 401 402 - if ((wreq->origin == NETFS_UNBUFFERED_WRITE || 403 - wreq->origin == NETFS_DIO_WRITE) && 404 - !wreq->error) 405 - netfs_update_i_size(ictx, &ictx->inode, wreq->start, wreq->transferred); 406 - 407 - if (wreq->origin == NETFS_DIO_WRITE && 408 - wreq->mapping->nrpages) { 409 - /* mmap may have got underfoot and we may now have folios 410 - * locally covering the region we just wrote. Attempt to 411 - * discard the folios, but leave in place any modified locally. 412 - * ->write_iter() is prevented from interfering by the DIO 413 - * counter. 414 - */ 415 - pgoff_t first = wreq->start >> PAGE_SHIFT; 416 - pgoff_t last = (wreq->start + wreq->transferred - 1) >> PAGE_SHIFT; 417 - invalidate_inode_pages2_range(wreq->mapping, first, last); 418 - } 419 - 420 - if (wreq->origin == NETFS_DIO_WRITE) 421 - inode_dio_end(wreq->inode); 422 - 423 402 _debug("finished"); 424 403 netfs_wake_rreq_flag(wreq, NETFS_RREQ_IN_PROGRESS, netfs_rreq_trace_wake_ip); 425 404 /* As we cleared NETFS_RREQ_IN_PROGRESS, we acquired its ref. */
+3 -38
fs/netfs/write_issue.c
··· 154 154 * Prepare a write subrequest. We need to allocate a new subrequest 155 155 * if we don't have one. 156 156 */ 157 - static void netfs_prepare_write(struct netfs_io_request *wreq, 158 - struct netfs_io_stream *stream, 159 - loff_t start) 157 + void netfs_prepare_write(struct netfs_io_request *wreq, 158 + struct netfs_io_stream *stream, 159 + loff_t start) 160 160 { 161 161 struct netfs_io_subrequest *subreq; 162 162 struct iov_iter *wreq_iter = &wreq->buffer.iter; ··· 696 696 ret = netfs_wait_for_write(wreq); 697 697 netfs_put_request(wreq, netfs_rreq_trace_put_return); 698 698 return ret; 699 - } 700 - 701 - /* 702 - * Write data to the server without going through the pagecache and without 703 - * writing it to the local cache. 704 - */ 705 - int netfs_unbuffered_write(struct netfs_io_request *wreq, bool may_wait, size_t len) 706 - { 707 - struct netfs_io_stream *upload = &wreq->io_streams[0]; 708 - ssize_t part; 709 - loff_t start = wreq->start; 710 - int error = 0; 711 - 712 - _enter("%zx", len); 713 - 714 - if (wreq->origin == NETFS_DIO_WRITE) 715 - inode_dio_begin(wreq->inode); 716 - 717 - while (len) { 718 - // TODO: Prepare content encryption 719 - 720 - _debug("unbuffered %zx", len); 721 - part = netfs_advance_write(wreq, upload, start, len, false); 722 - start += part; 723 - len -= part; 724 - rolling_buffer_advance(&wreq->buffer, part); 725 - if (test_bit(NETFS_RREQ_PAUSE, &wreq->flags)) 726 - netfs_wait_for_paused_write(wreq); 727 - if (test_bit(NETFS_RREQ_FAILED, &wreq->flags)) 728 - break; 729 - } 730 - 731 - netfs_end_issue_write(wreq); 732 - _leave(" = %d", error); 733 - return error; 734 699 } 735 700 736 701 /*
+11 -11
fs/nfsd/nfsctl.c
··· 377 377 } 378 378 379 379 /* 380 - * write_threads - Start NFSD, or report the current number of running threads 380 + * write_threads - Start NFSD, or report the configured number of threads 381 381 * 382 382 * Input: 383 383 * buf: ignored 384 384 * size: zero 385 385 * Output: 386 386 * On success: passed-in buffer filled with '\n'-terminated C 387 - * string numeric value representing the number of 388 - * running NFSD threads; 387 + * string numeric value representing the configured 388 + * number of NFSD threads; 389 389 * return code is the size in bytes of the string 390 390 * On error: return code is zero 391 391 * ··· 399 399 * Output: 400 400 * On success: NFS service is started; 401 401 * passed-in buffer filled with '\n'-terminated C 402 - * string numeric value representing the number of 403 - * running NFSD threads; 402 + * string numeric value representing the configured 403 + * number of NFSD threads; 404 404 * return code is the size in bytes of the string 405 405 * On error: return code is zero or a negative errno value 406 406 */ ··· 430 430 } 431 431 432 432 /* 433 - * write_pool_threads - Set or report the current number of threads per pool 433 + * write_pool_threads - Set or report the configured number of threads per pool 434 434 * 435 435 * Input: 436 436 * buf: ignored ··· 447 447 * Output: 448 448 * On success: passed-in buffer filled with '\n'-terminated C 449 449 * string containing integer values representing the 450 - * number of NFSD threads in each pool; 450 + * configured number of NFSD threads in each pool; 451 451 * return code is the size in bytes of the string 452 452 * On error: return code is zero or a negative errno value 453 453 */ ··· 1647 1647 if (attr) 1648 1648 nn->min_threads = nla_get_u32(attr); 1649 1649 1650 - ret = nfsd_svc(nrpools, nthreads, net, get_current_cred(), scope); 1650 + ret = nfsd_svc(nrpools, nthreads, net, current_cred(), scope); 1651 1651 if (ret > 0) 1652 1652 ret = 0; 1653 1653 out_unlock: ··· 1657 1657 } 1658 1658 1659 1659 /** 1660 - * nfsd_nl_threads_get_doit - get the number of running threads 1660 + * nfsd_nl_threads_get_doit - get the maximum number of running threads 1661 1661 * @skb: reply buffer 1662 1662 * @info: netlink metadata and command arguments 1663 1663 * ··· 1700 1700 struct svc_pool *sp = &nn->nfsd_serv->sv_pools[i]; 1701 1701 1702 1702 err = nla_put_u32(skb, NFSD_A_SERVER_THREADS, 1703 - sp->sp_nrthreads); 1703 + sp->sp_nrthrmax); 1704 1704 if (err) 1705 1705 goto err_unlock; 1706 1706 } ··· 2000 2000 } 2001 2001 2002 2002 ret = svc_xprt_create_from_sa(serv, xcl_name, net, sa, 0, 2003 - get_current_cred()); 2003 + current_cred()); 2004 2004 /* always save the latest error */ 2005 2005 if (ret < 0) 2006 2006 err = ret;
+4 -3
fs/nfsd/nfssvc.c
··· 239 239 240 240 int nfsd_nrthreads(struct net *net) 241 241 { 242 - int rv = 0; 242 + int i, rv = 0; 243 243 struct nfsd_net *nn = net_generic(net, nfsd_net_id); 244 244 245 245 mutex_lock(&nfsd_mutex); 246 246 if (nn->nfsd_serv) 247 - rv = nn->nfsd_serv->sv_nrthreads; 247 + for (i = 0; i < nn->nfsd_serv->sv_nrpools; ++i) 248 + rv += nn->nfsd_serv->sv_pools[i].sp_nrthrmax; 248 249 mutex_unlock(&nfsd_mutex); 249 250 return rv; 250 251 } ··· 660 659 661 660 if (serv) 662 661 for (i = 0; i < serv->sv_nrpools && i < n; i++) 663 - nthreads[i] = serv->sv_pools[i].sp_nrthreads; 662 + nthreads[i] = serv->sv_pools[i].sp_nrthrmax; 664 663 return 0; 665 664 } 666 665
+14 -1
fs/nsfs.c
··· 199 199 return false; 200 200 } 201 201 202 + static bool may_use_nsfs_ioctl(unsigned int cmd) 203 + { 204 + switch (_IOC_NR(cmd)) { 205 + case _IOC_NR(NS_MNT_GET_NEXT): 206 + fallthrough; 207 + case _IOC_NR(NS_MNT_GET_PREV): 208 + return may_see_all_namespaces(); 209 + } 210 + return true; 211 + } 212 + 202 213 static long ns_ioctl(struct file *filp, unsigned int ioctl, 203 214 unsigned long arg) 204 215 { ··· 225 214 226 215 if (!nsfs_ioctl_valid(ioctl)) 227 216 return -ENOIOCTLCMD; 217 + if (!may_use_nsfs_ioctl(ioctl)) 218 + return -EPERM; 228 219 229 220 ns = get_proc_ns(file_inode(filp)); 230 221 switch (ioctl) { ··· 627 614 return ERR_PTR(-EOPNOTSUPP); 628 615 } 629 616 630 - if (owning_ns && !ns_capable(owning_ns, CAP_SYS_ADMIN)) { 617 + if (owning_ns && !may_see_all_namespaces()) { 631 618 ns->ops->put(ns); 632 619 return ERR_PTR(-EPERM); 633 620 }
+2
fs/smb/client/Makefile
··· 56 56 quiet_cmd_gen_smb2_mapping = GEN $@ 57 57 cmd_gen_smb2_mapping = perl $(src)/gen_smb2_mapping $< $@ 58 58 59 + obj-$(CONFIG_SMB_KUNIT_TESTS) += smb2maperror_test.o 60 + 59 61 clean-files += smb2_mapping_table.c
+5 -2
fs/smb/client/cifsfs.c
··· 332 332 333 333 /* 334 334 * We need to release all dentries for the cached directories 335 - * before we kill the sb. 335 + * and close all deferred file handles before we kill the sb. 336 336 */ 337 337 if (cifs_sb->root) { 338 338 close_all_cached_dirs(cifs_sb); 339 + cifs_close_all_deferred_files_sb(cifs_sb); 340 + 341 + /* Wait for all pending oplock breaks to complete */ 342 + flush_workqueue(cifsoplockd_wq); 339 343 340 344 /* finally release root dentry */ 341 345 dput(cifs_sb->root); ··· 872 868 spin_unlock(&tcon->tc_lock); 873 869 spin_unlock(&cifs_tcp_ses_lock); 874 870 875 - cifs_close_all_deferred_files(tcon); 876 871 /* cancel_brl_requests(tcon); */ /* BB mark all brl mids as exiting */ 877 872 /* cancel_notify_requests(tcon); */ 878 873 if (tcon->ses && tcon->ses->server) {
+1
fs/smb/client/cifsproto.h
··· 261 261 262 262 void cifs_close_all_deferred_files(struct cifs_tcon *tcon); 263 263 264 + void cifs_close_all_deferred_files_sb(struct cifs_sb_info *cifs_sb); 264 265 void cifs_close_deferred_file_under_dentry(struct cifs_tcon *tcon, 265 266 struct dentry *dentry); 266 267
-11
fs/smb/client/file.c
··· 711 711 mutex_init(&cfile->fh_mutex); 712 712 spin_lock_init(&cfile->file_info_lock); 713 713 714 - cifs_sb_active(inode->i_sb); 715 - 716 714 /* 717 715 * If the server returned a read oplock and we have mandatory brlocks, 718 716 * set oplock level to None. ··· 765 767 struct inode *inode = d_inode(cifs_file->dentry); 766 768 struct cifsInodeInfo *cifsi = CIFS_I(inode); 767 769 struct cifsLockInfo *li, *tmp; 768 - struct super_block *sb = inode->i_sb; 769 770 770 771 /* 771 772 * Delete any outstanding lock records. We'll lose them when the file ··· 782 785 783 786 cifs_put_tlink(cifs_file->tlink); 784 787 dput(cifs_file->dentry); 785 - cifs_sb_deactive(sb); 786 788 kfree(cifs_file->symlink_target); 787 789 kfree(cifs_file); 788 790 } ··· 3159 3163 __u64 persistent_fid, volatile_fid; 3160 3164 __u16 net_fid; 3161 3165 3162 - /* 3163 - * Hold a reference to the superblock to prevent it and its inodes from 3164 - * being freed while we are accessing cinode. Otherwise, _cifsFileInfo_put() 3165 - * may release the last reference to the sb and trigger inode eviction. 3166 - */ 3167 - cifs_sb_active(sb); 3168 3166 wait_on_bit(&cinode->flags, CIFS_INODE_PENDING_WRITERS, 3169 3167 TASK_UNINTERRUPTIBLE); 3170 3168 ··· 3243 3253 cifs_put_tlink(tlink); 3244 3254 out: 3245 3255 cifs_done_oplock_break(cinode); 3246 - cifs_sb_deactive(sb); 3247 3256 } 3248 3257 3249 3258 static int cifs_swap_activate(struct swap_info_struct *sis,
+42
fs/smb/client/misc.c
··· 28 28 #include "fs_context.h" 29 29 #include "cached_dir.h" 30 30 31 + struct tcon_list { 32 + struct list_head entry; 33 + struct cifs_tcon *tcon; 34 + }; 35 + 31 36 /* The xid serves as a useful identifier for each incoming vfs request, 32 37 in a similar way to the mid which is useful to track each sent smb, 33 38 and CurrentXid can also provide a running counter (although it ··· 555 550 list_for_each_entry_safe(tmp_list, tmp_next_list, &file_head, list) { 556 551 _cifsFileInfo_put(tmp_list->cfile, true, false); 557 552 list_del(&tmp_list->list); 553 + kfree(tmp_list); 554 + } 555 + } 556 + 557 + void cifs_close_all_deferred_files_sb(struct cifs_sb_info *cifs_sb) 558 + { 559 + struct rb_root *root = &cifs_sb->tlink_tree; 560 + struct rb_node *node; 561 + struct cifs_tcon *tcon; 562 + struct tcon_link *tlink; 563 + struct tcon_list *tmp_list, *q; 564 + LIST_HEAD(tcon_head); 565 + 566 + spin_lock(&cifs_sb->tlink_tree_lock); 567 + for (node = rb_first(root); node; node = rb_next(node)) { 568 + tlink = rb_entry(node, struct tcon_link, tl_rbnode); 569 + tcon = tlink_tcon(tlink); 570 + if (IS_ERR(tcon)) 571 + continue; 572 + tmp_list = kmalloc_obj(struct tcon_list, GFP_ATOMIC); 573 + if (tmp_list == NULL) 574 + break; 575 + tmp_list->tcon = tcon; 576 + /* Take a reference on tcon to prevent it from being freed */ 577 + spin_lock(&tcon->tc_lock); 578 + ++tcon->tc_count; 579 + trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count, 580 + netfs_trace_tcon_ref_get_close_defer_files); 581 + spin_unlock(&tcon->tc_lock); 582 + list_add_tail(&tmp_list->entry, &tcon_head); 583 + } 584 + spin_unlock(&cifs_sb->tlink_tree_lock); 585 + 586 + list_for_each_entry_safe(tmp_list, q, &tcon_head, entry) { 587 + cifs_close_all_deferred_files(tmp_list->tcon); 588 + list_del(&tmp_list->entry); 589 + cifs_put_tcon(tmp_list->tcon, netfs_trace_tcon_ref_put_close_defer_files); 558 590 kfree(tmp_list); 559 591 } 560 592 }
+2 -1
fs/smb/client/smb1encrypt.c
··· 11 11 12 12 #include <linux/fips.h> 13 13 #include <crypto/md5.h> 14 + #include <crypto/utils.h> 14 15 #include "cifsproto.h" 15 16 #include "smb1proto.h" 16 17 #include "cifs_debug.h" ··· 132 131 /* cifs_dump_mem("what we think it should be: ", 133 132 what_we_think_sig_should_be, 16); */ 134 133 135 - if (memcmp(server_response_sig, what_we_think_sig_should_be, 8)) 134 + if (crypto_memneq(server_response_sig, what_we_think_sig_should_be, 8)) 136 135 return -EACCES; 137 136 else 138 137 return 0;
+12
fs/smb/client/smb2glob.h
··· 46 46 #define END_OF_CHAIN 4 47 47 #define RELATED_REQUEST 8 48 48 49 + /* 50 + ***************************************************************** 51 + * Struct definitions go here 52 + ***************************************************************** 53 + */ 54 + 55 + struct status_to_posix_error { 56 + __u32 smb2_status; 57 + int posix_error; 58 + char *status_string; 59 + }; 60 + 49 61 #endif /* _SMB2_GLOB_H */
+5 -3
fs/smb/client/smb2inode.c
··· 325 325 cfile->fid.volatile_fid, 326 326 SMB_FIND_FILE_POSIX_INFO, 327 327 SMB2_O_INFO_FILE, 0, 328 - sizeof(struct smb311_posix_qinfo *) + 328 + sizeof(struct smb311_posix_qinfo) + 329 329 (PATH_MAX * 2) + 330 330 (sizeof(struct smb_sid) * 2), 0, NULL); 331 331 } else { ··· 335 335 COMPOUND_FID, 336 336 SMB_FIND_FILE_POSIX_INFO, 337 337 SMB2_O_INFO_FILE, 0, 338 - sizeof(struct smb311_posix_qinfo *) + 338 + sizeof(struct smb311_posix_qinfo) + 339 339 (PATH_MAX * 2) + 340 340 (sizeof(struct smb_sid) * 2), 0, NULL); 341 341 } ··· 1216 1216 memset(resp_buftype, 0, sizeof(resp_buftype)); 1217 1217 memset(rsp_iov, 0, sizeof(rsp_iov)); 1218 1218 1219 + memset(open_iov, 0, sizeof(open_iov)); 1219 1220 rqst[0].rq_iov = open_iov; 1220 1221 rqst[0].rq_nvec = ARRAY_SIZE(open_iov); 1221 1222 ··· 1241 1240 creq = rqst[0].rq_iov[0].iov_base; 1242 1241 creq->ShareAccess = FILE_SHARE_DELETE_LE; 1243 1242 1243 + memset(&close_iov, 0, sizeof(close_iov)); 1244 1244 rqst[1].rq_iov = &close_iov; 1245 1245 rqst[1].rq_nvec = 1; 1246 1246 1247 1247 rc = SMB2_close_init(tcon, server, &rqst[1], 1248 1248 COMPOUND_FID, COMPOUND_FID, false); 1249 - smb2_set_related(&rqst[1]); 1250 1249 if (rc) 1251 1250 goto err_free; 1251 + smb2_set_related(&rqst[1]); 1252 1252 1253 1253 if (retries) { 1254 1254 /* Back-off before retry */
+15 -13
fs/smb/client/smb2maperror.c
··· 8 8 * 9 9 */ 10 10 #include <linux/errno.h> 11 - #include "cifsglob.h" 12 11 #include "cifsproto.h" 13 12 #include "cifs_debug.h" 14 13 #include "smb2proto.h" 15 14 #include "smb2glob.h" 16 15 #include "../common/smb2status.h" 17 16 #include "trace.h" 18 - 19 - struct status_to_posix_error { 20 - __u32 smb2_status; 21 - int posix_error; 22 - char *status_string; 23 - }; 24 17 25 18 static const struct status_to_posix_error smb2_error_map_table[] = { 26 19 /* ··· 108 115 return 0; 109 116 } 110 117 111 - #define SMB_CLIENT_KUNIT_AVAILABLE \ 112 - ((IS_MODULE(CONFIG_CIFS) && IS_ENABLED(CONFIG_KUNIT)) || \ 113 - (IS_BUILTIN(CONFIG_CIFS) && IS_BUILTIN(CONFIG_KUNIT))) 118 + #if IS_ENABLED(CONFIG_SMB_KUNIT_TESTS) 119 + /* Previous prototype for eliminating the build warning. */ 120 + const struct status_to_posix_error *smb2_get_err_map_test(__u32 smb2_status); 114 121 115 - #if SMB_CLIENT_KUNIT_AVAILABLE && IS_ENABLED(CONFIG_SMB_KUNIT_TESTS) 116 - #include "smb2maperror_test.c" 117 - #endif /* CONFIG_SMB_KUNIT_TESTS */ 122 + const struct status_to_posix_error *smb2_get_err_map_test(__u32 smb2_status) 123 + { 124 + return smb2_get_err_map(smb2_status); 125 + } 126 + EXPORT_SYMBOL_GPL(smb2_get_err_map_test); 127 + 128 + const struct status_to_posix_error *smb2_error_map_table_test = smb2_error_map_table; 129 + EXPORT_SYMBOL_GPL(smb2_error_map_table_test); 130 + 131 + unsigned int smb2_error_map_num = ARRAY_SIZE(smb2_error_map_table); 132 + EXPORT_SYMBOL_GPL(smb2_error_map_num); 133 + #endif
+9 -3
fs/smb/client/smb2maperror_test.c
··· 9 9 */ 10 10 11 11 #include <kunit/test.h> 12 + #include "smb2glob.h" 13 + 14 + const struct status_to_posix_error *smb2_get_err_map_test(__u32 smb2_status); 15 + extern const struct status_to_posix_error *smb2_error_map_table_test; 16 + extern unsigned int smb2_error_map_num; 12 17 13 18 static void 14 19 test_cmp_map(struct kunit *test, const struct status_to_posix_error *expect) 15 20 { 16 21 const struct status_to_posix_error *result; 17 22 18 - result = smb2_get_err_map(expect->smb2_status); 23 + result = smb2_get_err_map_test(expect->smb2_status); 19 24 KUNIT_EXPECT_PTR_NE(test, NULL, result); 20 25 KUNIT_EXPECT_EQ(test, expect->smb2_status, result->smb2_status); 21 26 KUNIT_EXPECT_EQ(test, expect->posix_error, result->posix_error); ··· 31 26 { 32 27 unsigned int i; 33 28 34 - for (i = 0; i < ARRAY_SIZE(smb2_error_map_table); i++) 35 - test_cmp_map(test, &smb2_error_map_table[i]); 29 + for (i = 0; i < smb2_error_map_num; i++) 30 + test_cmp_map(test, &smb2_error_map_table_test[i]); 36 31 } 37 32 38 33 static struct kunit_case maperror_test_cases[] = { ··· 48 43 kunit_test_suite(maperror_suite); 49 44 50 45 MODULE_LICENSE("GPL"); 46 + MODULE_DESCRIPTION("KUnit tests of SMB2 maperror");
-18
fs/smb/client/smb2pdu.c
··· 3989 3989 NULL); 3990 3990 } 3991 3991 3992 - #if 0 3993 - /* currently unused, as now we are doing compounding instead (see smb311_posix_query_path_info) */ 3994 - int 3995 - SMB311_posix_query_info(const unsigned int xid, struct cifs_tcon *tcon, 3996 - u64 persistent_fid, u64 volatile_fid, 3997 - struct smb311_posix_qinfo *data, u32 *plen) 3998 - { 3999 - size_t output_len = sizeof(struct smb311_posix_qinfo *) + 4000 - (sizeof(struct smb_sid) * 2) + (PATH_MAX * 2); 4001 - *plen = 0; 4002 - 4003 - return query_info(xid, tcon, persistent_fid, volatile_fid, 4004 - SMB_FIND_FILE_POSIX_INFO, SMB2_O_INFO_FILE, 0, 4005 - output_len, sizeof(struct smb311_posix_qinfo), (void **)&data, plen); 4006 - /* Note caller must free "data" (passed in above). It may be allocated in query_info call */ 4007 - } 4008 - #endif 4009 - 4010 3992 int 4011 3993 SMB2_query_acl(const unsigned int xid, struct cifs_tcon *tcon, 4012 3994 u64 persistent_fid, u64 volatile_fid,
+5 -2
fs/smb/client/smb2pdu.h
··· 224 224 __le32 Tag; 225 225 } __packed; 226 226 227 - /* See MS-FSCC 2.4.21 */ 227 + /* See MS-FSCC 2.4.26 */ 228 228 struct smb2_file_id_information { 229 229 __le64 VolumeSerialNumber; 230 230 __u64 PersistentFileId; /* opaque endianness */ ··· 251 251 252 252 extern char smb2_padding[7]; 253 253 254 - /* equivalent of the contents of SMB3.1.1 POSIX open context response */ 254 + /* 255 + * See POSIX-SMB2 2.2.14.2.16 256 + * Link: https://gitlab.com/samba-team/smb3-posix-spec/-/blob/master/smb3_posix_extensions.md 257 + */ 255 258 struct create_posix_rsp { 256 259 u32 nlink; 257 260 u32 reparse_tag;
-3
fs/smb/client/smb2proto.h
··· 167 167 struct cifs_tcon *tcon, struct TCP_Server_Info *server, 168 168 u64 persistent_fid, u64 volatile_fid); 169 169 void SMB2_flush_free(struct smb_rqst *rqst); 170 - int SMB311_posix_query_info(const unsigned int xid, struct cifs_tcon *tcon, 171 - u64 persistent_fid, u64 volatile_fid, 172 - struct smb311_posix_qinfo *data, u32 *plen); 173 170 int SMB2_query_info(const unsigned int xid, struct cifs_tcon *tcon, 174 171 u64 persistent_fid, u64 volatile_fid, 175 172 struct smb2_file_all_info *data);
+3 -1
fs/smb/client/smb2transport.c
··· 20 20 #include <linux/highmem.h> 21 21 #include <crypto/aead.h> 22 22 #include <crypto/sha2.h> 23 + #include <crypto/utils.h> 23 24 #include "cifsglob.h" 24 25 #include "cifsproto.h" 25 26 #include "smb2proto.h" ··· 618 617 if (rc) 619 618 return rc; 620 619 621 - if (memcmp(server_response_sig, shdr->Signature, SMB2_SIGNATURE_SIZE)) { 620 + if (crypto_memneq(server_response_sig, shdr->Signature, 621 + SMB2_SIGNATURE_SIZE)) { 622 622 cifs_dbg(VFS, "sign fail cmd 0x%x message id 0x%llx\n", 623 623 shdr->Command, shdr->MessageId); 624 624 return -EACCES;
+2
fs/smb/client/trace.h
··· 176 176 EM(netfs_trace_tcon_ref_get_cached_laundromat, "GET Ch-Lau") \ 177 177 EM(netfs_trace_tcon_ref_get_cached_lease_break, "GET Ch-Lea") \ 178 178 EM(netfs_trace_tcon_ref_get_cancelled_close, "GET Cn-Cls") \ 179 + EM(netfs_trace_tcon_ref_get_close_defer_files, "GET Cl-Def") \ 179 180 EM(netfs_trace_tcon_ref_get_dfs_refer, "GET DfsRef") \ 180 181 EM(netfs_trace_tcon_ref_get_find, "GET Find ") \ 181 182 EM(netfs_trace_tcon_ref_get_find_sess_tcon, "GET FndSes") \ ··· 188 187 EM(netfs_trace_tcon_ref_put_cancelled_close, "PUT Cn-Cls") \ 189 188 EM(netfs_trace_tcon_ref_put_cancelled_close_fid, "PUT Cn-Fid") \ 190 189 EM(netfs_trace_tcon_ref_put_cancelled_mid, "PUT Cn-Mid") \ 190 + EM(netfs_trace_tcon_ref_put_close_defer_files, "PUT Cl-Def") \ 191 191 EM(netfs_trace_tcon_ref_put_mnt_ctx, "PUT MntCtx") \ 192 192 EM(netfs_trace_tcon_ref_put_dfs_refer, "PUT DfsRfr") \ 193 193 EM(netfs_trace_tcon_ref_put_reconnect_server, "PUT Reconn") \
+4 -1
fs/smb/server/smb2pdu.h
··· 83 83 } Data; 84 84 } __packed; 85 85 86 - /* equivalent of the contents of SMB3.1.1 POSIX open context response */ 86 + /* 87 + * See POSIX-SMB2 2.2.14.2.16 88 + * Link: https://gitlab.com/samba-team/smb3-posix-spec/-/blob/master/smb3_posix_extensions.md 89 + */ 87 90 struct create_posix_rsp { 88 91 struct create_context_hdr ccontext; 89 92 __u8 Name[16];
+3
fs/verity/Kconfig
··· 2 2 3 3 config FS_VERITY 4 4 bool "FS Verity (read-only file-based authenticity protection)" 5 + # Filesystems cache the Merkle tree at a 64K aligned offset in the 6 + # pagecache. That approach assumes the page size is at most 64K. 7 + depends on PAGE_SHIFT <= 16 5 8 select CRYPTO_HASH_INFO 6 9 select CRYPTO_LIB_SHA256 7 10 select CRYPTO_LIB_SHA512
+3 -1
include/asm-generic/vmlinux.lds.h
··· 848 848 849 849 /* Required sections not related to debugging. */ 850 850 #define ELF_DETAILS \ 851 - .modinfo : { *(.modinfo) . = ALIGN(8); } \ 852 851 .comment 0 : { *(.comment) } \ 853 852 .symtab 0 : { *(.symtab) } \ 854 853 .strtab 0 : { *(.strtab) } \ 855 854 .shstrtab 0 : { *(.shstrtab) } 855 + 856 + #define MODINFO \ 857 + .modinfo : { *(.modinfo) . = ALIGN(8); } 856 858 857 859 #ifdef CONFIG_GENERIC_BUG 858 860 #define BUG_TABLE \
+2
include/drm/display/drm_dp.h
··· 571 571 # define DP_PANEL_REPLAY_LINK_OFF_SUPPORTED_IN_PR_AFTER_ADAPTIVE_SYNC_SDP (1 << 7) 572 572 573 573 #define DP_PANEL_REPLAY_CAP_X_GRANULARITY 0xb2 574 + # define DP_PANEL_REPLAY_FULL_LINE_GRANULARITY 0xffff 575 + 574 576 #define DP_PANEL_REPLAY_CAP_Y_GRANULARITY 0xb4 575 577 576 578 /* Link Configuration */
+28 -16
include/kunit/run-in-irq-context.h
··· 12 12 #include <linux/hrtimer.h> 13 13 #include <linux/workqueue.h> 14 14 15 - #define KUNIT_IRQ_TEST_HRTIMER_INTERVAL us_to_ktime(5) 16 - 17 15 struct kunit_irq_test_state { 18 16 bool (*func)(void *test_specific_state); 19 17 void *test_specific_state; 20 18 bool task_func_reported_failure; 21 19 bool hardirq_func_reported_failure; 22 20 bool softirq_func_reported_failure; 21 + atomic_t task_func_calls; 23 22 atomic_t hardirq_func_calls; 24 23 atomic_t softirq_func_calls; 24 + ktime_t interval; 25 25 struct hrtimer timer; 26 26 struct work_struct bh_work; 27 27 }; ··· 30 30 { 31 31 struct kunit_irq_test_state *state = 32 32 container_of(timer, typeof(*state), timer); 33 + int task_calls, hardirq_calls, softirq_calls; 33 34 34 35 WARN_ON_ONCE(!in_hardirq()); 35 - atomic_inc(&state->hardirq_func_calls); 36 + task_calls = atomic_read(&state->task_func_calls); 37 + hardirq_calls = atomic_inc_return(&state->hardirq_func_calls); 38 + softirq_calls = atomic_read(&state->softirq_func_calls); 39 + 40 + /* 41 + * If the timer is firing too often for the softirq or task to ever have 42 + * a chance to run, increase the timer interval. This is needed on very 43 + * slow systems. 44 + */ 45 + if (hardirq_calls >= 20 && (softirq_calls == 0 || task_calls == 0)) 46 + state->interval = ktime_add_ns(state->interval, 250); 36 47 37 48 if (!state->func(state->test_specific_state)) 38 49 state->hardirq_func_reported_failure = true; 39 50 40 - hrtimer_forward_now(&state->timer, KUNIT_IRQ_TEST_HRTIMER_INTERVAL); 51 + hrtimer_forward_now(&state->timer, state->interval); 41 52 queue_work(system_bh_wq, &state->bh_work); 42 53 return HRTIMER_RESTART; 43 54 } ··· 97 86 struct kunit_irq_test_state state = { 98 87 .func = func, 99 88 .test_specific_state = test_specific_state, 89 + /* 90 + * Start with a 5us timer interval. If the system can't keep 91 + * up, kunit_irq_test_timer_func() will increase it. 92 + */ 93 + .interval = us_to_ktime(5), 100 94 }; 101 95 unsigned long end_jiffies; 102 - int hardirq_calls, softirq_calls; 103 - bool allctx = false; 96 + int task_calls, hardirq_calls, softirq_calls; 104 97 105 98 /* 106 99 * Set up a hrtimer (the way we access hardirq context) and a work ··· 119 104 * and hardirq), or 1 second, whichever comes first. 120 105 */ 121 106 end_jiffies = jiffies + HZ; 122 - hrtimer_start(&state.timer, KUNIT_IRQ_TEST_HRTIMER_INTERVAL, 123 - HRTIMER_MODE_REL_HARD); 124 - for (int task_calls = 0, calls = 0; 125 - ((calls < max_iterations) || !allctx) && 126 - !time_after(jiffies, end_jiffies); 127 - task_calls++) { 107 + hrtimer_start(&state.timer, state.interval, HRTIMER_MODE_REL_HARD); 108 + do { 128 109 if (!func(test_specific_state)) 129 110 state.task_func_reported_failure = true; 130 111 112 + task_calls = atomic_inc_return(&state.task_func_calls); 131 113 hardirq_calls = atomic_read(&state.hardirq_func_calls); 132 114 softirq_calls = atomic_read(&state.softirq_func_calls); 133 - calls = task_calls + hardirq_calls + softirq_calls; 134 - allctx = (task_calls > 0) && (hardirq_calls > 0) && 135 - (softirq_calls > 0); 136 - } 115 + } while ((task_calls + hardirq_calls + softirq_calls < max_iterations || 116 + (task_calls == 0 || hardirq_calls == 0 || 117 + softirq_calls == 0)) && 118 + !time_after(jiffies, end_jiffies)); 137 119 138 120 /* Cancel the timer and work. */ 139 121 hrtimer_cancel(&state.timer);
+2
include/linux/device/bus.h
··· 35 35 * otherwise. It may also return error code if determining that 36 36 * the driver supports the device is not possible. In case of 37 37 * -EPROBE_DEFER it will queue the device for deferred probing. 38 + * Note: This callback may be invoked with or without the device 39 + * lock held. 38 40 * @uevent: Called when a device is added, removed, or a few other things 39 41 * that generate uevents to add the environment variables. 40 42 * @probe: Called when a new device or driver add to this bus, and callback
+7 -4
include/linux/eventpoll.h
··· 82 82 epoll_put_uevent(__poll_t revents, __u64 data, 83 83 struct epoll_event __user *uevent) 84 84 { 85 - if (__put_user(revents, &uevent->events) || 86 - __put_user(data, &uevent->data)) 87 - return NULL; 88 - 85 + scoped_user_write_access_size(uevent, sizeof(*uevent), efault) { 86 + unsafe_put_user(revents, &uevent->events, efault); 87 + unsafe_put_user(data, &uevent->data, efault); 88 + } 89 89 return uevent+1; 90 + 91 + efault: 92 + return NULL; 90 93 } 91 94 #endif 92 95
+6
include/linux/hid.h
··· 836 836 * raw_event and event should return negative on error, any other value will 837 837 * pass the event on to .event() typically return 0 for success. 838 838 * 839 + * report_fixup must return a report descriptor pointer whose lifetime is at 840 + * least that of the input rdesc. This is usually done by mutating the input 841 + * rdesc and returning it or a sub-portion of it. In case a new buffer is 842 + * allocated and returned, the implementation of report_fixup is responsible for 843 + * freeing it later. 844 + * 839 845 * input_mapping shall return a negative value to completely ignore this usage 840 846 * (e.g. doubled or invalid usage), zero to continue with parsing of this 841 847 * usage by generic code (no special handling needed) or positive to skip
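A minimal sketch of a ->report_fixup() that satisfies the lifetime rule spelled out above, by patching the descriptor in place and returning the same buffer; the offset and usage bytes below are invented for illustration:

static const __u8 *example_report_fixup(struct hid_device *hdev, __u8 *rdesc,
					unsigned int *rsize)
{
	/* Mutate the input buffer rather than allocating a new one, so the
	 * returned pointer trivially lives at least as long as rdesc. */
	if (*rsize > 42 && rdesc[42] == 0x09)
		rdesc[42] = 0x0b;
	return rdesc;
}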
+11 -7
include/linux/indirect_call_wrapper.h
··· 16 16 */ 17 17 #define INDIRECT_CALL_1(f, f1, ...) \ 18 18 ({ \ 19 - likely(f == f1) ? f1(__VA_ARGS__) : f(__VA_ARGS__); \ 19 + typeof(f) __f1 = (f); \ 20 + likely(__f1 == f1) ? f1(__VA_ARGS__) : __f1(__VA_ARGS__); \ 20 21 }) 21 22 #define INDIRECT_CALL_2(f, f2, f1, ...) \ 22 23 ({ \ 23 - likely(f == f2) ? f2(__VA_ARGS__) : \ 24 - INDIRECT_CALL_1(f, f1, __VA_ARGS__); \ 24 + typeof(f) __f2 = (f); \ 25 + likely(__f2 == f2) ? f2(__VA_ARGS__) : \ 26 + INDIRECT_CALL_1(__f2, f1, __VA_ARGS__); \ 25 27 }) 26 28 #define INDIRECT_CALL_3(f, f3, f2, f1, ...) \ 27 29 ({ \ 28 - likely(f == f3) ? f3(__VA_ARGS__) : \ 29 - INDIRECT_CALL_2(f, f2, f1, __VA_ARGS__); \ 30 + typeof(f) __f3 = (f); \ 31 + likely(__f3 == f3) ? f3(__VA_ARGS__) : \ 32 + INDIRECT_CALL_2(__f3, f2, f1, __VA_ARGS__); \ 30 33 }) 31 34 #define INDIRECT_CALL_4(f, f4, f3, f2, f1, ...) \ 32 35 ({ \ 33 - likely(f == f4) ? f4(__VA_ARGS__) : \ 34 - INDIRECT_CALL_3(f, f3, f2, f1, __VA_ARGS__); \ 36 + typeof(f) __f4 = (f); \ 37 + likely(__f4 == f4) ? f4(__VA_ARGS__) : \ 38 + INDIRECT_CALL_3(__f4, f3, f2, f1, __VA_ARGS__); \ 35 39 }) 36 40 37 41 #define INDIRECT_CALLABLE_DECLARE(f) f
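The temporaries matter because the function pointer expression is now evaluated exactly once: the equality test and the fallback call can no longer observe two different values if the pointer is updated concurrently. A sketch of a call site under that reading (handler_a is an illustrative symbol, not an existing one):

struct sk_buff;
int handler_a(struct sk_buff *skb);	/* the expected hot target */

static int dispatch(struct sk_buff *skb, int (*handler)(struct sk_buff *))
{
	/* Expands roughly to:
	 *	typeof(handler) __f1 = handler;
	 *	likely(__f1 == handler_a) ? handler_a(skb) : __f1(skb);
	 * turning the common case into a direct, retpoline-free call. */
	return INDIRECT_CALL_1(handler, handler_a, skb);
}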
+20 -1
include/linux/kthread.h
··· 7 7 8 8 struct mm_struct; 9 9 10 + /* opaque kthread data */ 11 + struct kthread; 12 + 13 + /* 14 + * When "(p->flags & PF_KTHREAD)" is set the task is a kthread and will 15 + * always remain a kthread. For kthreads p->worker_private always 16 + * points to a struct kthread. For tasks that are not kthreads 17 + * p->worker_private is used to point to other things. 18 + * 19 + * Return NULL for any task that is not a kthread. 20 + */ 21 + static inline struct kthread *tsk_is_kthread(struct task_struct *p) 22 + { 23 + if (p->flags & PF_KTHREAD) 24 + return p->worker_private; 25 + return NULL; 26 + } 27 + 10 28 __printf(4, 5) 11 29 struct task_struct *kthread_create_on_node(int (*threadfn)(void *data), 12 30 void *data, ··· 116 98 int kthread_park(struct task_struct *k); 117 99 void kthread_unpark(struct task_struct *k); 118 100 void kthread_parkme(void); 119 - void kthread_exit(long result) __noreturn; 101 + #define kthread_exit(result) do_exit(result) 120 102 void kthread_complete_and_exit(struct completion *, long) __noreturn; 121 103 int kthreads_update_housekeeping(void); 104 + void kthread_do_exit(struct kthread *, long); 122 105 123 106 int kthreadd(void *unused); 124 107 extern struct task_struct *kthreadd_task;
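Sketch of the intended calling pattern for the new helper: the NULL return doubles as the not-a-kthread answer, so a caller can test and fetch the per-kthread state in one step.

static void example(struct task_struct *p)
{
	struct kthread *kt = tsk_is_kthread(p);

	if (!kt)
		return;	/* regular task; worker_private means something else */
	/* kt now safely refers to this kthread's opaque struct kthread,
	 * e.g. for handing to kthread_do_exit(). */
}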
+22 -5
include/linux/netdevice.h
··· 4711 4711 static inline void __netif_tx_lock(struct netdev_queue *txq, int cpu) 4712 4712 { 4713 4713 spin_lock(&txq->_xmit_lock); 4714 - /* Pairs with READ_ONCE() in __dev_queue_xmit() */ 4714 + /* Pairs with READ_ONCE() in netif_tx_owned() */ 4715 4715 WRITE_ONCE(txq->xmit_lock_owner, cpu); 4716 4716 } 4717 4717 ··· 4729 4729 static inline void __netif_tx_lock_bh(struct netdev_queue *txq) 4730 4730 { 4731 4731 spin_lock_bh(&txq->_xmit_lock); 4732 - /* Pairs with READ_ONCE() in __dev_queue_xmit() */ 4732 + /* Pairs with READ_ONCE() in netif_tx_owned() */ 4733 4733 WRITE_ONCE(txq->xmit_lock_owner, smp_processor_id()); 4734 4734 } 4735 4735 ··· 4738 4738 bool ok = spin_trylock(&txq->_xmit_lock); 4739 4739 4740 4740 if (likely(ok)) { 4741 - /* Pairs with READ_ONCE() in __dev_queue_xmit() */ 4741 + /* Pairs with READ_ONCE() in netif_tx_owned() */ 4742 4742 WRITE_ONCE(txq->xmit_lock_owner, smp_processor_id()); 4743 4743 } 4744 4744 return ok; ··· 4746 4746 4747 4747 static inline void __netif_tx_unlock(struct netdev_queue *txq) 4748 4748 { 4749 - /* Pairs with READ_ONCE() in __dev_queue_xmit() */ 4749 + /* Pairs with READ_ONCE() in netif_tx_owned() */ 4750 4750 WRITE_ONCE(txq->xmit_lock_owner, -1); 4751 4751 spin_unlock(&txq->_xmit_lock); 4752 4752 } 4753 4753 4754 4754 static inline void __netif_tx_unlock_bh(struct netdev_queue *txq) 4755 4755 { 4756 - /* Pairs with READ_ONCE() in __dev_queue_xmit() */ 4756 + /* Pairs with READ_ONCE() in netif_tx_owned() */ 4757 4757 WRITE_ONCE(txq->xmit_lock_owner, -1); 4758 4758 spin_unlock_bh(&txq->_xmit_lock); 4759 4759 } ··· 4845 4845 spin_unlock(&dev->tx_global_lock); 4846 4846 local_bh_enable(); 4847 4847 } 4848 + 4849 + #ifndef CONFIG_PREEMPT_RT 4850 + static inline bool netif_tx_owned(struct netdev_queue *txq, unsigned int cpu) 4851 + { 4852 + /* Other cpus might concurrently change txq->xmit_lock_owner 4853 + * to -1 or to their cpu id, but not to our id. 4854 + */ 4855 + return READ_ONCE(txq->xmit_lock_owner) == cpu; 4856 + } 4857 + 4858 + #else 4859 + static inline bool netif_tx_owned(struct netdev_queue *txq, unsigned int cpu) 4860 + { 4861 + return rt_mutex_owner(&txq->_xmit_lock.lock) == current; 4862 + } 4863 + 4864 + #endif 4848 4865 4849 4866 static inline void netif_addr_lock(struct net_device *dev) 4850 4867 {
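A sketch of the recursion check the new helper is presumably meant to centralize, modeled on the xmit_lock_owner test in __dev_queue_xmit (the wrapper below is illustrative, not the real call site):

static bool example_tx_recursion(struct netdev_queue *txq)
{
	/* Callers run with BH disabled, so smp_processor_id() is stable.
	 * On !PREEMPT_RT this is the lockless owner-id read; on PREEMPT_RT
	 * it asks the underlying rtmutex for its owner instead. */
	return netif_tx_owned(txq, smp_processor_id());
}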
+2
include/linux/ns_common.h
··· 55 55 56 56 #define ns_common_free(__ns) __ns_common_free(to_ns_common((__ns))) 57 57 58 + bool may_see_all_namespaces(void); 59 + 58 60 static __always_inline __must_check int __ns_ref_active_read(const struct ns_common *ns) 59 61 { 60 62 return atomic_read(&ns->__ns_ref_active);
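Only the declaration is added here, so the policy behind may_see_all_namespaces() is not visible in this hunk. Judging by the nsfs callers above, a capability check of roughly this shape would fit, but this is an assumption, not the actual body:

/* Hedged sketch only; the real implementation may be more involved. */
static bool example_may_see_all_namespaces(void)
{
	return ns_capable(&init_user_ns, CAP_SYS_ADMIN);
}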
+7 -7
include/linux/platform_data/mlxreg.h
··· 13 13 /** 14 14 * enum mlxreg_wdt_type - type of HW watchdog 15 15 * 16 - * TYPE1 HW watchdog implementation exist in old systems. 17 - * All new systems have TYPE2 HW watchdog. 18 - * TYPE3 HW watchdog can exist on all systems with new CPLD. 19 - * TYPE3 is selected by WD capability bit. 16 + * @MLX_WDT_TYPE1: HW watchdog implementation in old systems. 17 + * @MLX_WDT_TYPE2: All new systems have TYPE2 HW watchdog. 18 + * @MLX_WDT_TYPE3: HW watchdog that can exist on all systems with new CPLD. 19 + * TYPE3 is selected by WD capability bit. 20 20 */ 21 21 enum mlxreg_wdt_type { 22 22 MLX_WDT_TYPE1, ··· 35 35 * @MLXREG_HOTPLUG_LC_SYNCED: entry for line card synchronization events, coming 36 36 * after hardware-firmware synchronization handshake; 37 37 * @MLXREG_HOTPLUG_LC_READY: entry for line card ready events, indicating line card 38 - PHYs ready / unready state; 38 + * PHYs ready / unready state; 39 39 * @MLXREG_HOTPLUG_LC_ACTIVE: entry for line card active events, indicating firmware 40 40 * availability / unavailability for the ports on line card; 41 41 * @MLXREG_HOTPLUG_LC_THERMAL: entry for line card thermal shutdown events, positive ··· 123 123 * @reg_pwr: attribute power register; 124 124 * @reg_ena: attribute enable register; 125 125 * @mode: access mode; 126 - * @np - pointer to node platform associated with attribute; 127 - * @hpdev - hotplug device data; 126 + * @np: pointer to node platform associated with attribute; 127 + * @hpdev: hotplug device data; 128 128 * @notifier: pointer to event notifier block; 129 129 * @health_cntr: dynamic device health indication counter; 130 130 * @attached: true if device has been attached after good health indication;
+3 -2
include/linux/platform_data/x86/int3472.h
··· 26 26 #define INT3472_GPIO_TYPE_POWER_ENABLE 0x0b 27 27 #define INT3472_GPIO_TYPE_CLK_ENABLE 0x0c 28 28 #define INT3472_GPIO_TYPE_PRIVACY_LED 0x0d 29 + #define INT3472_GPIO_TYPE_DOVDD 0x10 29 30 #define INT3472_GPIO_TYPE_HANDSHAKE 0x12 30 31 #define INT3472_GPIO_TYPE_HOTPLUG_DETECT 0x13 31 32 ··· 34 33 #define INT3472_MAX_SENSOR_GPIOS 3 35 34 #define INT3472_MAX_REGULATORS 3 36 35 37 - /* E.g. "avdd\0" */ 38 - #define GPIO_SUPPLY_NAME_LENGTH 5 36 + /* E.g. "dovdd\0" */ 37 + #define GPIO_SUPPLY_NAME_LENGTH 6 39 38 /* 12 chars for acpi_dev_name() + "-", e.g. "ABCD1234:00-" */ 40 39 #define GPIO_REGULATOR_NAME_LENGTH (12 + GPIO_SUPPLY_NAME_LENGTH) 41 40 /* lower- and upper-case mapping */
+1
include/linux/ring_buffer.h
··· 248 248 249 249 int ring_buffer_map(struct trace_buffer *buffer, int cpu, 250 250 struct vm_area_struct *vma); 251 + void ring_buffer_map_dup(struct trace_buffer *buffer, int cpu); 251 252 int ring_buffer_unmap(struct trace_buffer *buffer, int cpu); 252 253 int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu); 253 254 #endif /* _LINUX_RING_BUFFER_H */
+20 -34
include/linux/uaccess.h
··· 647 647 /* Define RW variant so the below _mode macro expansion works */ 648 648 #define masked_user_rw_access_begin(u) masked_user_access_begin(u) 649 649 #define user_rw_access_begin(u, s) user_access_begin(u, s) 650 - #define user_rw_access_end() user_access_end() 651 650 652 651 /* Scoped user access */ 653 - #define USER_ACCESS_GUARD(_mode) \ 654 - static __always_inline void __user * \ 655 - class_user_##_mode##_begin(void __user *ptr) \ 656 - { \ 657 - return ptr; \ 658 - } \ 659 - \ 660 - static __always_inline void \ 661 - class_user_##_mode##_end(void __user *ptr) \ 662 - { \ 663 - user_##_mode##_access_end(); \ 664 - } \ 665 - \ 666 - DEFINE_CLASS(user_ ##_mode## _access, void __user *, \ 667 - class_user_##_mode##_end(_T), \ 668 - class_user_##_mode##_begin(ptr), void __user *ptr) \ 669 - \ 670 - static __always_inline class_user_##_mode##_access_t \ 671 - class_user_##_mode##_access_ptr(void __user *scope) \ 672 - { \ 673 - return scope; \ 674 - } 675 652 676 - USER_ACCESS_GUARD(read) 677 - USER_ACCESS_GUARD(write) 678 - USER_ACCESS_GUARD(rw) 679 - #undef USER_ACCESS_GUARD 653 + /* Cleanup wrapper functions */ 654 + static __always_inline void __scoped_user_read_access_end(const void *p) 655 + { 656 + user_read_access_end(); 657 + }; 658 + static __always_inline void __scoped_user_write_access_end(const void *p) 659 + { 660 + user_write_access_end(); 661 + }; 662 + static __always_inline void __scoped_user_rw_access_end(const void *p) 663 + { 664 + user_access_end(); 665 + }; 680 666 681 667 /** 682 668 * __scoped_user_access_begin - Start a scoped user access ··· 736 750 * 737 751 * Don't use directly. Use scoped_masked_user_$MODE_access() instead. 738 752 */ 739 - #define __scoped_user_access(mode, uptr, size, elbl) \ 740 - for (bool done = false; !done; done = true) \ 741 - for (void __user *_tmpptr = __scoped_user_access_begin(mode, uptr, size, elbl); \ 742 - !done; done = true) \ 743 - for (CLASS(user_##mode##_access, scope)(_tmpptr); !done; done = true) \ 744 - /* Force modified pointer usage within the scope */ \ 745 - for (const typeof(uptr) uptr = _tmpptr; !done; done = true) 753 + #define __scoped_user_access(mode, uptr, size, elbl) \ 754 + for (bool done = false; !done; done = true) \ 755 + for (auto _tmpptr = __scoped_user_access_begin(mode, uptr, size, elbl); \ 756 + !done; done = true) \ 757 + /* Force modified pointer usage within the scope */ \ 758 + for (const auto uptr __cleanup(__scoped_user_##mode##_access_end) = \ 759 + _tmpptr; !done; done = true) 746 760 747 761 /** 748 762 * scoped_user_read_access_size - Start a scoped user read access with given size
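A minimal read-side sketch mirroring the epoll_put_uevent() conversion earlier in this diff: scoped_user_read_access_size() (documented in this header) brackets the unsafe accessor, and the __cleanup-based scope ends the user access on every exit path, including the fault label.

static int example_fetch(u64 __user *uptr, u64 *val)
{
	scoped_user_read_access_size(uptr, sizeof(*uptr), efault) {
		unsafe_get_user(*val, uptr, efault);
	}
	return 0;

efault:
	return -EFAULT;
}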
+1
include/linux/usb/r8152.h
··· 32 32 #define VENDOR_ID_DLINK 0x2001 33 33 #define VENDOR_ID_DELL 0x413c 34 34 #define VENDOR_ID_ASUS 0x0b05 35 + #define VENDOR_ID_TRENDNET 0x20f4 35 36 36 37 #if IS_REACHABLE(CONFIG_USB_RTL8152) 37 38 extern u8 rtl8152_get_version(struct usb_interface *intf);
+1
include/net/act_api.h
··· 70 70 #define TCA_ACT_FLAGS_REPLACE (1U << (TCA_ACT_FLAGS_USER_BITS + 2)) 71 71 #define TCA_ACT_FLAGS_NO_RTNL (1U << (TCA_ACT_FLAGS_USER_BITS + 3)) 72 72 #define TCA_ACT_FLAGS_AT_INGRESS (1U << (TCA_ACT_FLAGS_USER_BITS + 4)) 73 + #define TCA_ACT_FLAGS_AT_INGRESS_OR_CLSACT (1U << (TCA_ACT_FLAGS_USER_BITS + 5)) 73 74 74 75 /* Update lastuse only if needed, to avoid dirtying a cache line. 75 76 * We use a temp variable to avoid fetching jiffies twice.
+1
include/net/bonding.h
··· 699 699 void bond_debug_unregister(struct bonding *bond); 700 700 void bond_debug_reregister(struct bonding *bond); 701 701 const char *bond_mode_name(int mode); 702 + bool __bond_xdp_check(int mode, int xmit_policy); 702 703 bool bond_xdp_check(struct bonding *bond, int mode); 703 704 void bond_setup(struct net_device *bond_dev); 704 705 unsigned int bond_get_num_tx_queues(void);
+1 -1
include/net/inet6_hashtables.h
··· 175 175 { 176 176 if (!net_eq(sock_net(sk), net) || 177 177 sk->sk_family != AF_INET6 || 178 - sk->sk_portpair != ports || 178 + READ_ONCE(sk->sk_portpair) != ports || 179 179 !ipv6_addr_equal(&sk->sk_v6_daddr, saddr) || 180 180 !ipv6_addr_equal(&sk->sk_v6_rcv_saddr, daddr)) 181 181 return false;
+1 -1
include/net/inet_hashtables.h
··· 345 345 int dif, int sdif) 346 346 { 347 347 if (!net_eq(sock_net(sk), net) || 348 - sk->sk_portpair != ports || 348 + READ_ONCE(sk->sk_portpair) != ports || 349 349 sk->sk_addrpair != cookie) 350 350 return false; 351 351
+1 -1
include/net/ip.h
··· 101 101 102 102 ipcm->oif = READ_ONCE(inet->sk.sk_bound_dev_if); 103 103 ipcm->addr = inet->inet_saddr; 104 - ipcm->protocol = inet->inet_num; 104 + ipcm->protocol = READ_ONCE(inet->inet_num); 105 105 } 106 106 107 107 #define IPCB(skb) ((struct inet_skb_parm*)((skb)->cb))
+1 -1
include/net/ip_fib.h
··· 559 559 siphash_aligned_key_t hash_key; 560 560 u32 mp_seed; 561 561 562 - mp_seed = READ_ONCE(net->ipv4.sysctl_fib_multipath_hash_seed).mp_seed; 562 + mp_seed = READ_ONCE(net->ipv4.sysctl_fib_multipath_hash_seed.mp_seed); 563 563 fib_multipath_hash_construct_key(&hash_key, mp_seed); 564 564 565 565 return flow_hash_from_keys_seed(keys, &hash_key);
+3
include/net/libeth/xsk.h
··· 597 597 * @pending: current number of XSkFQEs to refill 598 598 * @thresh: threshold below which the queue is refilled 599 599 * @buf_len: HW-writeable length per each buffer 600 + * @truesize: step between consecutive buffers, 0 if none exists 600 601 * @nid: ID of the closest NUMA node with memory 601 602 */ 602 603 struct libeth_xskfq { ··· 615 614 u32 thresh; 616 615 617 616 u32 buf_len; 617 + u32 truesize; 618 + 618 619 int nid; 619 620 }; 620 621
+7
include/net/netfilter/nf_tables.h
··· 320 320 * @NFT_ITER_UNSPEC: unspecified, to catch errors 321 321 * @NFT_ITER_READ: read-only iteration over set elements 322 322 * @NFT_ITER_UPDATE: iteration under mutex to update set element state 323 + * @NFT_ITER_UPDATE_CLONE: clone set before iteration under mutex to update element 323 324 */ 324 325 enum nft_iter_type { 325 326 NFT_ITER_UNSPEC, 326 327 NFT_ITER_READ, 327 328 NFT_ITER_UPDATE, 329 + NFT_ITER_UPDATE_CLONE, 328 330 }; 329 331 330 332 struct nft_set; ··· 1862 1860 struct nft_elem_priv *priv[NFT_TRANS_GC_BATCHCOUNT]; 1863 1861 struct rcu_head rcu; 1864 1862 }; 1863 + 1864 + static inline int nft_trans_gc_space(const struct nft_trans_gc *trans) 1865 + { 1866 + return NFT_TRANS_GC_BATCHCOUNT - trans->count; 1867 + } 1865 1868 1866 1869 static inline void nft_ctx_update(struct nft_ctx *ctx, 1867 1870 const struct nft_trans *trans)
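A one-line sketch of how the new space helper is presumably consumed by GC batching callers (the wrapper is illustrative):

static bool example_gc_has_room(const struct nft_trans_gc *gc)
{
	/* Queue another expired element only while the batch has space. */
	return nft_trans_gc_space(gc) > 0;
}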
+10
include/net/sch_generic.h
··· 778 778 static inline void qdisc_reset_all_tx_gt(struct net_device *dev, unsigned int i) 779 779 { 780 780 struct Qdisc *qdisc; 781 + bool nolock; 781 782 782 783 for (; i < dev->num_tx_queues; i++) { 783 784 qdisc = rtnl_dereference(netdev_get_tx_queue(dev, i)->qdisc); 784 785 if (qdisc) { 786 + nolock = qdisc->flags & TCQ_F_NOLOCK; 787 + 788 + if (nolock) 789 + spin_lock_bh(&qdisc->seqlock); 785 790 spin_lock_bh(qdisc_lock(qdisc)); 786 791 qdisc_reset(qdisc); 787 792 spin_unlock_bh(qdisc_lock(qdisc)); 793 + if (nolock) { 794 + clear_bit(__QDISC_STATE_MISSED, &qdisc->state); 795 + clear_bit(__QDISC_STATE_DRAINING, &qdisc->state); 796 + spin_unlock_bh(&qdisc->seqlock); 797 + } 788 798 } 789 799 } 790 800 }
+38 -7
include/net/secure_seq.h
··· 5 5 #include <linux/types.h> 6 6 7 7 struct net; 8 + extern struct net init_net; 9 + 10 + union tcp_seq_and_ts_off { 11 + struct { 12 + u32 seq; 13 + u32 ts_off; 14 + }; 15 + u64 hash64; 16 + }; 8 17 9 18 u64 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport); 10 19 u64 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr, 11 20 __be16 dport); 12 - u32 secure_tcp_seq(__be32 saddr, __be32 daddr, 13 - __be16 sport, __be16 dport); 14 - u32 secure_tcp_ts_off(const struct net *net, __be32 saddr, __be32 daddr); 15 - u32 secure_tcpv6_seq(const __be32 *saddr, const __be32 *daddr, 16 - __be16 sport, __be16 dport); 17 - u32 secure_tcpv6_ts_off(const struct net *net, 18 - const __be32 *saddr, const __be32 *daddr); 21 + union tcp_seq_and_ts_off 22 + secure_tcp_seq_and_ts_off(const struct net *net, __be32 saddr, __be32 daddr, 23 + __be16 sport, __be16 dport); 19 24 25 + static inline u32 secure_tcp_seq(__be32 saddr, __be32 daddr, 26 + __be16 sport, __be16 dport) 27 + { 28 + union tcp_seq_and_ts_off ts; 29 + 30 + ts = secure_tcp_seq_and_ts_off(&init_net, saddr, daddr, 31 + sport, dport); 32 + 33 + return ts.seq; 34 + } 35 + 36 + union tcp_seq_and_ts_off 37 + secure_tcpv6_seq_and_ts_off(const struct net *net, const __be32 *saddr, 38 + const __be32 *daddr, 39 + __be16 sport, __be16 dport); 40 + 41 + static inline u32 secure_tcpv6_seq(const __be32 *saddr, const __be32 *daddr, 42 + __be16 sport, __be16 dport) 43 + { 44 + union tcp_seq_and_ts_off ts; 45 + 46 + ts = secure_tcpv6_seq_and_ts_off(&init_net, saddr, daddr, 47 + sport, dport); 48 + 49 + return ts.seq; 50 + } 20 51 #endif /* _NET_SECURE_SEQ */
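Sketch of the caller pattern the union enables: one hash computation now yields both the initial sequence number and the timestamp offset, replacing the separate secure_tcp_seq()/secure_tcp_ts_off() lookups that used to hash twice (the wrapper below is illustrative):

static void example_isn_and_ts(const struct net *net, __be32 saddr,
			       __be32 daddr, __be16 sport, __be16 dport,
			       u32 *isn, u32 *ts_off)
{
	union tcp_seq_and_ts_off v =
		secure_tcp_seq_and_ts_off(net, saddr, daddr, sport, dport);

	*isn = v.seq;		/* both fields come from the same hash64 */
	*ts_off = v.ts_off;
}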
+26 -7
include/net/tc_act/tc_gate.h
··· 32 32 s32 tcfg_clockid; 33 33 size_t num_entries; 34 34 struct list_head entries; 35 + struct rcu_head rcu; 35 36 }; 36 37 37 38 #define GATE_ACT_GATE_OPEN BIT(0) ··· 40 39 41 40 struct tcf_gate { 42 41 struct tc_action common; 43 - struct tcf_gate_params param; 42 + struct tcf_gate_params __rcu *param; 44 43 u8 current_gate_status; 45 44 ktime_t current_close_time; 46 45 u32 current_entry_octets; ··· 52 51 53 52 #define to_gate(a) ((struct tcf_gate *)a) 54 53 54 + static inline struct tcf_gate_params *tcf_gate_params_locked(const struct tc_action *a) 55 + { 56 + struct tcf_gate *gact = to_gate(a); 57 + 58 + return rcu_dereference_protected(gact->param, 59 + lockdep_is_held(&gact->tcf_lock)); 60 + } 61 + 55 62 static inline s32 tcf_gate_prio(const struct tc_action *a) 56 63 { 64 + struct tcf_gate_params *p; 57 65 s32 tcfg_prio; 58 66 59 - tcfg_prio = to_gate(a)->param.tcfg_priority; 67 + p = tcf_gate_params_locked(a); 68 + tcfg_prio = p->tcfg_priority; 60 69 61 70 return tcfg_prio; 62 71 } 63 72 64 73 static inline u64 tcf_gate_basetime(const struct tc_action *a) 65 74 { 75 + struct tcf_gate_params *p; 66 76 u64 tcfg_basetime; 67 77 68 - tcfg_basetime = to_gate(a)->param.tcfg_basetime; 78 + p = tcf_gate_params_locked(a); 79 + tcfg_basetime = p->tcfg_basetime; 69 80 70 81 return tcfg_basetime; 71 82 } 72 83 73 84 static inline u64 tcf_gate_cycletime(const struct tc_action *a) 74 85 { 86 + struct tcf_gate_params *p; 75 87 u64 tcfg_cycletime; 76 88 77 - tcfg_cycletime = to_gate(a)->param.tcfg_cycletime; 89 + p = tcf_gate_params_locked(a); 90 + tcfg_cycletime = p->tcfg_cycletime; 78 91 79 92 return tcfg_cycletime; 80 93 } 81 94 82 95 static inline u64 tcf_gate_cycletimeext(const struct tc_action *a) 83 96 { 97 + struct tcf_gate_params *p; 84 98 u64 tcfg_cycletimeext; 85 99 86 - tcfg_cycletimeext = to_gate(a)->param.tcfg_cycletime_ext; 100 + p = tcf_gate_params_locked(a); 101 + tcfg_cycletimeext = p->tcfg_cycletime_ext; 87 102 88 103 return tcfg_cycletimeext; 89 104 } 90 105 91 106 static inline u32 tcf_gate_num_entries(const struct tc_action *a) 92 107 { 108 + struct tcf_gate_params *p; 93 109 u32 num_entries; 94 110 95 - num_entries = to_gate(a)->param.num_entries; 111 + p = tcf_gate_params_locked(a); 112 + num_entries = p->num_entries; 96 113 97 114 return num_entries; 98 115 } ··· 124 105 u32 num_entries; 125 106 int i = 0; 126 107 127 - p = &to_gate(a)->param; 108 + p = tcf_gate_params_locked(a); 128 109 num_entries = p->num_entries; 129 110 130 111 list_for_each_entry(entry, &p->entries, list)
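The helpers above serve control-path readers that hold tcf_lock; a datapath reader that cannot take the lock would dereference the now-__rcu parameter block under RCU instead. A hedged sketch (this helper does not exist in the header):

static u64 example_gate_cycletime_rcu(const struct tc_action *a)
{
	struct tcf_gate *gact = to_gate(a);
	const struct tcf_gate_params *p;
	u64 cycletime;

	rcu_read_lock();
	p = rcu_dereference(gact->param);
	cycletime = p->tcfg_cycletime;
	rcu_read_unlock();

	return cycletime;
}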
+1 -3
include/net/tc_act/tc_ife.h
··· 13 13 u8 eth_src[ETH_ALEN]; 14 14 u16 eth_type; 15 15 u16 flags; 16 - 16 + struct list_head metalist; 17 17 struct rcu_head rcu; 18 18 }; 19 19 20 20 struct tcf_ife_info { 21 21 struct tc_action common; 22 22 struct tcf_ife_params __rcu *params; 23 - /* list of metaids allowed */ 24 - struct list_head metalist; 25 23 }; 26 24 #define to_ife(a) ((struct tcf_ife_info *)a) 27 25
+4 -2
include/net/tcp.h
··· 43 43 #include <net/dst.h> 44 44 #include <net/mptcp.h> 45 45 #include <net/xfrm.h> 46 + #include <net/secure_seq.h> 46 47 47 48 #include <linux/seq_file.h> 48 49 #include <linux/memcontrol.h> ··· 2465 2464 struct flowi *fl, 2466 2465 struct request_sock *req, 2467 2466 u32 tw_isn); 2468 - u32 (*init_seq)(const struct sk_buff *skb); 2469 - u32 (*init_ts_off)(const struct net *net, const struct sk_buff *skb); 2467 + union tcp_seq_and_ts_off (*init_seq_and_ts_off)( 2468 + const struct net *net, 2469 + const struct sk_buff *skb); 2470 2470 int (*send_synack)(const struct sock *sk, struct dst_entry *dst, 2471 2471 struct flowi *fl, struct request_sock *req, 2472 2472 struct tcp_fastopen_cookie *foc,
+13 -3
include/net/xdp_sock_drv.h
··· 51 51 return xsk_pool_get_chunk_size(pool) - xsk_pool_get_headroom(pool); 52 52 } 53 53 54 + static inline u32 xsk_pool_get_rx_frag_step(struct xsk_buff_pool *pool) 55 + { 56 + return pool->unaligned ? 0 : xsk_pool_get_chunk_size(pool); 57 + } 58 + 54 59 static inline void xsk_pool_set_rxq_info(struct xsk_buff_pool *pool, 55 60 struct xdp_rxq_info *rxq) 56 61 { ··· 127 122 goto out; 128 123 129 124 list_for_each_entry_safe(pos, tmp, xskb_list, list_node) { 130 - list_del(&pos->list_node); 125 + list_del_init(&pos->list_node); 131 126 xp_free(pos); 132 127 } 133 128 ··· 162 157 frag = list_first_entry_or_null(&xskb->pool->xskb_list, 163 158 struct xdp_buff_xsk, list_node); 164 159 if (frag) { 165 - list_del(&frag->list_node); 160 + list_del_init(&frag->list_node); 166 161 ret = &frag->xdp; 167 162 } 168 163 ··· 173 168 { 174 169 struct xdp_buff_xsk *xskb = container_of(xdp, struct xdp_buff_xsk, xdp); 175 170 176 - list_del(&xskb->list_node); 171 + list_del_init(&xskb->list_node); 177 172 } 178 173 179 174 static inline struct xdp_buff *xsk_buff_get_head(struct xdp_buff *first) ··· 338 333 } 339 334 340 335 static inline u32 xsk_pool_get_rx_frame_size(struct xsk_buff_pool *pool) 336 + { 337 + return 0; 338 + } 339 + 340 + static inline u32 xsk_pool_get_rx_frag_step(struct xsk_buff_pool *pool) 341 341 { 342 342 return 0; 343 343 }
+1
include/sound/cs35l56.h
··· 406 406 extern const char * const cs35l56_tx_input_texts[CS35L56_NUM_INPUT_SRC]; 407 407 extern const unsigned int cs35l56_tx_input_values[CS35L56_NUM_INPUT_SRC]; 408 408 409 + int cs35l56_set_asp_patch(struct cs35l56_base *cs35l56_base); 409 410 int cs35l56_set_patch(struct cs35l56_base *cs35l56_base); 410 411 int cs35l56_mbox_send(struct cs35l56_base *cs35l56_base, unsigned int command); 411 412 int cs35l56_firmware_shutdown(struct cs35l56_base *cs35l56_base);
+1
include/sound/tas2781.h
··· 151 151 struct bulk_reg_val *cali_data_backup; 152 152 struct bulk_reg_val alp_cali_bckp; 153 153 struct tasdevice_fw *cali_data_fmw; 154 + void *cali_specific; 154 155 unsigned int dev_addr; 155 156 unsigned int err_code; 156 157 unsigned char cur_book;
+3 -1
include/trace/events/netfs.h
··· 57 57 EM(netfs_rreq_trace_done, "DONE ") \ 58 58 EM(netfs_rreq_trace_end_copy_to_cache, "END-C2C") \ 59 59 EM(netfs_rreq_trace_free, "FREE ") \ 60 + EM(netfs_rreq_trace_intr, "INTR ") \ 60 61 EM(netfs_rreq_trace_ki_complete, "KI-CMPL") \ 61 62 EM(netfs_rreq_trace_recollect, "RECLLCT") \ 62 63 EM(netfs_rreq_trace_redirty, "REDIRTY") \ ··· 170 169 EM(netfs_sreq_trace_put_oom, "PUT OOM ") \ 171 170 EM(netfs_sreq_trace_put_wip, "PUT WIP ") \ 172 171 EM(netfs_sreq_trace_put_work, "PUT WORK ") \ 173 - E_(netfs_sreq_trace_put_terminated, "PUT TERM ") 172 + EM(netfs_sreq_trace_put_terminated, "PUT TERM ") \ 173 + E_(netfs_sreq_trace_see_failed, "SEE FAILED ") 174 174 175 175 #define netfs_folio_traces \ 176 176 EM(netfs_folio_is_uptodate, "mod-uptodate") \
+1
include/uapi/linux/dma-buf.h
··· 20 20 #ifndef _DMA_BUF_UAPI_H_ 21 21 #define _DMA_BUF_UAPI_H_ 22 22 23 + #include <linux/ioctl.h> 23 24 #include <linux/types.h> 24 25 25 26 /**
+2 -1
include/uapi/linux/io_uring.h
··· 188 188 /* 189 189 * If COOP_TASKRUN is set, get notified if task work is available for 190 190 * running and a kernel transition would be needed to run it. This sets 191 - * IORING_SQ_TASKRUN in the sq ring flags. Not valid with COOP_TASKRUN. 191 + * IORING_SQ_TASKRUN in the sq ring flags. Not valid without COOP_TASKRUN 192 + * or DEFER_TASKRUN. 192 193 */ 193 194 #define IORING_SETUP_TASKRUN_FLAG (1U << 9) 194 195 #define IORING_SETUP_SQE128 (1U << 10) /* SQEs are 128 byte */
+3 -1
include/xen/xenbus.h
··· 80 80 const char *devicetype; 81 81 const char *nodename; 82 82 const char *otherend; 83 + bool vanished; 83 84 int otherend_id; 84 85 struct xenbus_watch otherend_watch; 85 86 struct device dev; ··· 229 228 int xenbus_alloc_evtchn(struct xenbus_device *dev, evtchn_port_t *port); 230 229 int xenbus_free_evtchn(struct xenbus_device *dev, evtchn_port_t port); 231 230 232 - enum xenbus_state xenbus_read_driver_state(const char *path); 231 + enum xenbus_state xenbus_read_driver_state(const struct xenbus_device *dev, 232 + const char *path); 233 233 234 234 __printf(3, 4) 235 235 void xenbus_dev_error(struct xenbus_device *dev, int err, const char *fmt, ...);
+1 -1
init/Kconfig
··· 1902 1902 default n 1903 1903 depends on IO_URING 1904 1904 help 1905 - Enable mock files for io_uring subststem testing. The ABI might 1905 + Enable mock files for io_uring subsystem testing. The ABI might 1906 1906 still change, so it's still experimental and should only be enabled 1907 1907 for specific test purposes. 1908 1908
+2
io_uring/net.c
··· 375 375 kmsg->msg.msg_namelen = addr_len; 376 376 } 377 377 if (sr->flags & IORING_RECVSEND_FIXED_BUF) { 378 + if (sr->flags & IORING_SEND_VECTORIZED) 379 + return -EINVAL; 378 380 req->flags |= REQ_F_IMPORT_BUFFER; 379 381 return 0; 380 382 }
+5 -3
io_uring/zcrx.c
··· 837 837 if (ret) 838 838 goto netdev_put_unlock; 839 839 840 - mp_param.rx_page_size = 1U << ifq->niov_shift; 840 + if (reg.rx_buf_len) 841 + mp_param.rx_page_size = 1U << ifq->niov_shift; 841 842 mp_param.mp_ops = &io_uring_pp_zc_ops; 842 843 mp_param.mp_priv = ifq; 843 844 ret = __net_mp_open_rxq(ifq->netdev, reg.if_rxq, &mp_param, NULL); ··· 927 926 struct io_zcrx_ifq *ifq, 928 927 struct net_iov **ret_niov) 929 928 { 929 + __u64 off = READ_ONCE(rqe->off); 930 930 unsigned niov_idx, area_idx; 931 931 struct io_zcrx_area *area; 932 932 933 - area_idx = rqe->off >> IORING_ZCRX_AREA_SHIFT; 934 - niov_idx = (rqe->off & ~IORING_ZCRX_AREA_MASK) >> ifq->niov_shift; 933 + area_idx = off >> IORING_ZCRX_AREA_SHIFT; 934 + niov_idx = (off & ~IORING_ZCRX_AREA_MASK) >> ifq->niov_shift; 935 935 936 936 if (unlikely(rqe->__pad || area_idx)) 937 937 return false;
+1 -3
kernel/bpf/trampoline.c
··· 1002 1002 mutex_lock(&tr->mutex); 1003 1003 1004 1004 shim_link = cgroup_shim_find(tr, bpf_func); 1005 - if (shim_link) { 1005 + if (shim_link && !IS_ERR(bpf_link_inc_not_zero(&shim_link->link.link))) { 1006 1006 /* Reusing existing shim attached by the other program. */ 1007 - bpf_link_inc(&shim_link->link.link); 1008 - 1009 1007 mutex_unlock(&tr->mutex); 1010 1008 bpf_trampoline_put(tr); /* bpf_trampoline_get above */ 1011 1009 return 0;
+34 -4
kernel/bpf/verifier.c
··· 2511 2511 if ((u32)reg->s32_min_value <= (u32)reg->s32_max_value) { 2512 2512 reg->u32_min_value = max_t(u32, reg->s32_min_value, reg->u32_min_value); 2513 2513 reg->u32_max_value = min_t(u32, reg->s32_max_value, reg->u32_max_value); 2514 + } else { 2515 + if (reg->u32_max_value < (u32)reg->s32_min_value) { 2516 + /* See __reg64_deduce_bounds() for detailed explanation. 2517 + * Refine ranges in the following situation: 2518 + * 2519 + * 0 U32_MAX 2520 + * | [xxxxxxxxxxxxxx u32 range xxxxxxxxxxxxxx] | 2521 + * |----------------------------|----------------------------| 2522 + * |xxxxx s32 range xxxxxxxxx] [xxxxxxx| 2523 + * 0 S32_MAX S32_MIN -1 2524 + */ 2525 + reg->s32_min_value = (s32)reg->u32_min_value; 2526 + reg->u32_max_value = min_t(u32, reg->u32_max_value, reg->s32_max_value); 2527 + } else if ((u32)reg->s32_max_value < reg->u32_min_value) { 2528 + /* 2529 + * 0 U32_MAX 2530 + * | [xxxxxxxxxxxxxx u32 range xxxxxxxxxxxxxx] | 2531 + * |----------------------------|----------------------------| 2532 + * |xxxxxxxxx] [xxxxxxxxxxxx s32 range | 2533 + * 0 S32_MAX S32_MIN -1 2534 + */ 2535 + reg->s32_max_value = (s32)reg->u32_max_value; 2536 + reg->u32_min_value = max_t(u32, reg->u32_min_value, reg->s32_min_value); 2537 + } 2514 2538 } 2515 2539 } 2516 2540 ··· 17359 17335 * in verifier state, save R in linked_regs if R->id == id. 17360 17336 * If there are too many Rs sharing same id, reset id for leftover Rs. 17361 17337 */ 17362 - static void collect_linked_regs(struct bpf_verifier_state *vstate, u32 id, 17338 + static void collect_linked_regs(struct bpf_verifier_env *env, 17339 + struct bpf_verifier_state *vstate, 17340 + u32 id, 17363 17341 struct linked_regs *linked_regs) 17364 17342 { 17343 + struct bpf_insn_aux_data *aux = env->insn_aux_data; 17365 17344 struct bpf_func_state *func; 17366 17345 struct bpf_reg_state *reg; 17346 + u16 live_regs; 17367 17347 int i, j; 17368 17348 17369 17349 id = id & ~BPF_ADD_CONST; 17370 17350 for (i = vstate->curframe; i >= 0; i--) { 17351 + live_regs = aux[frame_insn_idx(vstate, i)].live_regs_before; 17371 17352 func = vstate->frame[i]; 17372 17353 for (j = 0; j < BPF_REG_FP; j++) { 17354 + if (!(live_regs & BIT(j))) 17355 + continue; 17373 17356 reg = &func->regs[j]; 17374 17357 __collect_linked_regs(linked_regs, reg, id, i, j, true); 17375 17358 } ··· 17591 17560 * if parent state is created. 17592 17561 */ 17593 17562 if (BPF_SRC(insn->code) == BPF_X && src_reg->type == SCALAR_VALUE && src_reg->id) 17594 - collect_linked_regs(this_branch, src_reg->id, &linked_regs); 17563 + collect_linked_regs(env, this_branch, src_reg->id, &linked_regs); 17595 17564 if (dst_reg->type == SCALAR_VALUE && dst_reg->id) 17596 - collect_linked_regs(this_branch, dst_reg->id, &linked_regs); 17565 + collect_linked_regs(env, this_branch, dst_reg->id, &linked_regs); 17597 17566 if (linked_regs.cnt > 1) { 17598 17567 err = push_jmp_history(env, this_branch, 0, linked_regs_pack(&linked_regs)); 17599 17568 if (err) ··· 25292 25261 BTF_ID(func, do_exit) 25293 25262 BTF_ID(func, do_group_exit) 25294 25263 BTF_ID(func, kthread_complete_and_exit) 25295 - BTF_ID(func, kthread_exit) 25296 25264 BTF_ID(func, make_task_dead) 25297 25265 BTF_SET_END(noreturn_deny) 25298 25266
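A worked instance of the first new branch: with u32 in [10, 200] and s32 in [-16, 100], the s32 range wraps when viewed as u32 ((u32)-16 = 0xfffffff0 > 100), and u32_max (200) < (u32)s32_min (0xfffffff0), so the negative part of the s32 range is unreachable. The deduction sets s32_min = (s32)u32_min = 10 and u32_max = min(200, 100) = 100, tightening both ranges to [10, 100], exactly the intersection of [10, 200] with [0, 100] plus the unreachable [0xfffffff0, 0xffffffff].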
+1
kernel/cgroup/cgroup.c
··· 2608 2608 2609 2609 mgctx->tset.nr_tasks++; 2610 2610 2611 + css_set_skip_task_iters(cset, task); 2611 2612 list_move_tail(&task->cg_list, &cset->mg_tasks); 2612 2613 if (list_empty(&cset->mg_node)) 2613 2614 list_add_tail(&cset->mg_node,
+149 -73
kernel/cgroup/cpuset.c
··· 62 62 }; 63 63 64 64 /* 65 + * CPUSET Locking Convention 66 + * ------------------------- 67 + * 68 + * Below are the four global/local locks guarding cpuset structures in lock 69 + * acquisition order: 70 + * - cpuset_top_mutex 71 + * - cpu_hotplug_lock (cpus_read_lock/cpus_write_lock) 72 + * - cpuset_mutex 73 + * - callback_lock (raw spinlock) 74 + * 75 + * As cpuset will now indirectly flush a number of different workqueues in 76 + * housekeeping_update() to update housekeeping cpumasks when the set of 77 + * isolated CPUs is going to be changed, it may be vulnerable to deadlock 78 + * if we hold cpus_read_lock while calling into housekeeping_update(). 79 + * 80 + * The first cpuset_top_mutex will be held except when calling into 81 + * cpuset_handle_hotplug() from the CPU hotplug code where cpus_write_lock 82 + * and cpuset_mutex will be held instead. The main purpose of this mutex 83 + * is to prevent regular cpuset control file write actions from interfering 84 + * with the call to housekeeping_update(), though CPU hotplug operation can 85 + * still happen in parallel. This mutex also provides protection for some 86 + * internal variables. 87 + * 88 + * A task must hold all the remaining three locks to modify externally visible 89 + * or used fields of cpusets, though some of the internally used cpuset fields 90 + * and internal variables can be modified without holding callback_lock. If only 91 + * reliable read access of the externally used fields is needed, a task can 92 + * hold either cpuset_mutex or callback_lock which are exposed to other 93 + * external subsystems. 94 + * 95 + * If a task holds cpu_hotplug_lock and cpuset_mutex, it blocks others, 96 + * ensuring that it is the only task able to also acquire callback_lock and 97 + * be able to modify cpusets. It can perform various checks on the cpuset 98 + * structure first, knowing nothing will change. It can also allocate memory 99 + * without holding callback_lock. While it is performing these checks, various 100 + * callback routines can briefly acquire callback_lock to query cpusets. Once 101 + * it is ready to make the changes, it takes callback_lock, blocking everyone 102 + * else. 103 + * 104 + * Calls to the kernel memory allocator cannot be made while holding 105 + * callback_lock which is a spinlock, as the memory allocator may sleep or 106 + * call back into cpuset code and acquire callback_lock. 107 + * 108 + * Now, the task_struct fields mems_allowed and mempolicy may be changed 109 + * by another task; we use alloc_lock in the task_struct fields to protect 110 + * them. 111 + * 112 + * The cpuset_common_seq_show() handlers only hold callback_lock across 113 + * small pieces of code, such as when reading out possibly multi-word 114 + * cpumasks and nodemasks. 115 + */ 116 + 117 + static DEFINE_MUTEX(cpuset_top_mutex); 118 + static DEFINE_MUTEX(cpuset_mutex); 119 + 120 + /* 121 + * File level internal variables below follow one of the following exclusion 122 + * rules. 123 + * 124 + * RWCS: Read/write-able by holding either cpus_write_lock (and optionally 125 + * cpuset_mutex) or both cpus_read_lock and cpuset_mutex. 126 + * 127 + * CSCB: Readable by holding either cpuset_mutex or callback_lock. Writable 128 + * by holding both cpuset_mutex and callback_lock. 129 + * 130 + * T: Read/write-able by holding the cpuset_top_mutex. 131 + */ 132 + 133 + /* 65 134 * For local partitions, update to subpartitions_cpus & isolated_cpus is done 66 135 * in update_parent_effective_cpumask().
For remote partitions, it is done in 67 136 * the remote_partition_*() and remote_cpus_update() helpers. ··· 139 70 * Exclusive CPUs distributed out to local or remote sub-partitions of 140 71 * top_cpuset 141 72 */ 142 - static cpumask_var_t subpartitions_cpus; 73 + static cpumask_var_t subpartitions_cpus; /* RWCS */ 143 74 144 75 /* 145 - * Exclusive CPUs in isolated partitions 76 + * Exclusive CPUs in isolated partitions (shown in cpuset.cpus.isolated) 146 77 */ 147 - static cpumask_var_t isolated_cpus; 78 + static cpumask_var_t isolated_cpus; /* CSCB */ 148 79 149 80 /* 150 - * isolated_cpus updating flag (protected by cpuset_mutex) 151 - * Set if isolated_cpus is going to be updated in the current 152 - * cpuset_mutex crtical section. 81 + * Set if housekeeping cpumasks are to be updated. 153 82 */ 154 - static bool isolated_cpus_updating; 83 + static bool update_housekeeping; /* RWCS */ 84 + 85 + /* 86 + * Copy of isolated_cpus to be passed to housekeeping_update() 87 + */ 88 + static cpumask_var_t isolated_hk_cpus; /* T */ 155 89 156 90 /* 157 91 * A flag to force sched domain rebuild at the end of an operation. ··· 170 98 * Note that update_relax_domain_level() in cpuset-v1.c can still call 171 99 * rebuild_sched_domains_locked() directly without using this flag. 172 100 */ 173 - static bool force_sd_rebuild; 101 + static bool force_sd_rebuild; /* RWCS */ 174 102 175 103 /* 176 104 * Partition root states: ··· 290 218 .partition_root_state = PRS_ROOT, 291 219 }; 292 220 293 - /* 294 - * There are two global locks guarding cpuset structures - cpuset_mutex and 295 - * callback_lock. The cpuset code uses only cpuset_mutex. Other kernel 296 - * subsystems can use cpuset_lock()/cpuset_unlock() to prevent change to cpuset 297 - * structures. Note that cpuset_mutex needs to be a mutex as it is used in 298 - * paths that rely on priority inheritance (e.g. scheduler - on RT) for 299 - * correctness. 300 - * 301 - * A task must hold both locks to modify cpusets. If a task holds 302 - * cpuset_mutex, it blocks others, ensuring that it is the only task able to 303 - * also acquire callback_lock and be able to modify cpusets. It can perform 304 - * various checks on the cpuset structure first, knowing nothing will change. 305 - * It can also allocate memory while just holding cpuset_mutex. While it is 306 - * performing these checks, various callback routines can briefly acquire 307 - * callback_lock to query cpusets. Once it is ready to make the changes, it 308 - * takes callback_lock, blocking everyone else. 309 - * 310 - * Calls to the kernel memory allocator can not be made while holding 311 - * callback_lock, as that would risk double tripping on callback_lock 312 - * from one of the callbacks into the cpuset code from within 313 - * __alloc_pages(). 314 - * 315 - * If a task is only holding callback_lock, then it has read-only 316 - * access to cpusets. 317 - * 318 - * Now, the task_struct fields mems_allowed and mempolicy may be changed 319 - * by other task, we use alloc_lock in the task_struct fields to protect 320 - * them. 321 - * 322 - * The cpuset_common_seq_show() handlers only hold callback_lock across 323 - * small pieces of code, such as when reading out possibly multi-word 324 - * cpumasks and nodemasks.
325 - */ 326 - 327 - static DEFINE_MUTEX(cpuset_mutex); 328 - 329 221 /** 330 222 * cpuset_lock - Acquire the global cpuset mutex ··· 319 283 */ 320 284 void cpuset_full_lock(void) 321 285 { 286 + mutex_lock(&cpuset_top_mutex); 322 287 cpus_read_lock(); 323 288 mutex_lock(&cpuset_mutex); 324 289 } ··· 328 291 { 329 292 mutex_unlock(&cpuset_mutex); 330 293 cpus_read_unlock(); 294 + mutex_unlock(&cpuset_top_mutex); 331 295 } 332 296 333 297 #ifdef CONFIG_LOCKDEP 334 298 bool lockdep_is_cpuset_held(void) 335 299 { 336 - return lockdep_is_held(&cpuset_mutex); 300 + return lockdep_is_held(&cpuset_mutex) || 301 + lockdep_is_held(&cpuset_top_mutex); 337 302 } 338 303 #endif 339 304 ··· 1000 961 * offline CPUs, a warning is emitted and we return directly to 1001 962 * prevent the panic. 1002 963 */ 1003 - for (i = 0; i < ndoms; ++i) { 964 + for (i = 0; doms && i < ndoms; i++) { 1004 965 if (WARN_ON_ONCE(!cpumask_subset(doms[i], cpu_active_mask))) 1005 966 return; 1006 967 } ··· 1200 1161 static void isolated_cpus_update(int old_prs, int new_prs, struct cpumask *xcpus) 1201 1162 { 1202 1163 WARN_ON_ONCE(old_prs == new_prs); 1203 - if (new_prs == PRS_ISOLATED) 1164 + lockdep_assert_held(&callback_lock); 1165 + lockdep_assert_held(&cpuset_mutex); 1166 + if (new_prs == PRS_ISOLATED) { 1167 + if (cpumask_subset(xcpus, isolated_cpus)) 1168 + return; 1204 1169 cpumask_or(isolated_cpus, isolated_cpus, xcpus); 1205 - else 1170 + } else { 1171 + if (!cpumask_intersects(xcpus, isolated_cpus)) 1172 + return; 1206 1173 cpumask_andnot(isolated_cpus, isolated_cpus, xcpus); 1207 - 1208 - isolated_cpus_updating = true; 1174 + } 1175 + update_housekeeping = true; 1209 1176 } 1210 1177 1211 1178 /* ··· 1264 1219 isolated_cpus_update(old_prs, parent->partition_root_state, 1265 1220 xcpus); 1266 1221 1267 - cpumask_and(xcpus, xcpus, cpu_active_mask); 1268 1222 cpumask_or(parent->effective_cpus, parent->effective_cpus, xcpus); 1223 + cpumask_and(parent->effective_cpus, parent->effective_cpus, cpu_active_mask); 1269 1224 } 1270 1225 1271 1226 /* ··· 1329 1284 } 1330 1285 1331 1286 /* 1332 - * update_isolation_cpumasks - Update external isolation related CPU masks 1287 + * update_hk_sched_domains - Update HK cpumasks & rebuild sched domains 1333 1288 * 1334 - * The following external CPU masks will be updated if necessary: 1335 - * - workqueue unbound cpumask 1289 + * Update housekeeping cpumasks and rebuild sched domains if necessary. 1290 + * This should be called at the end of cpuset or hotplug actions. 1336 1291 */ 1337 - static void update_isolation_cpumasks(void) 1292 + static void update_hk_sched_domains(void) 1338 1293 { 1339 - int ret; 1294 + if (update_housekeeping) { 1295 + /* Updating HK cpumasks implies rebuild sched domains */ 1296 + update_housekeeping = false; 1297 + force_sd_rebuild = true; 1298 + cpumask_copy(isolated_hk_cpus, isolated_cpus); 1340 1299 1341 - if (!isolated_cpus_updating) 1342 - return; 1300 + /* 1301 + * housekeeping_update() is now called without holding 1302 + * cpus_read_lock and cpuset_mutex. Only cpuset_top_mutex 1303 + * is still being held for mutual exclusion.
1304 + */ 1305 + mutex_unlock(&cpuset_mutex); 1306 + cpus_read_unlock(); 1307 + WARN_ON_ONCE(housekeeping_update(isolated_hk_cpus)); 1308 + cpus_read_lock(); 1309 + mutex_lock(&cpuset_mutex); 1310 + } 1311 + /* force_sd_rebuild will be cleared in rebuild_sched_domains_locked() */ 1312 + if (force_sd_rebuild) 1313 + rebuild_sched_domains_locked(); 1314 + } 1343 1315 1344 - ret = housekeeping_update(isolated_cpus); 1345 - WARN_ON_ONCE(ret < 0); 1346 - 1347 - isolated_cpus_updating = false; 1316 + /* 1317 + * Work function to invoke update_hk_sched_domains() 1318 + */ 1319 + static void hk_sd_workfn(struct work_struct *work) 1320 + { 1321 + cpuset_full_lock(); 1322 + update_hk_sched_domains(); 1323 + cpuset_full_unlock(); 1348 1324 } 1349 1325 1350 1326 /** ··· 1516 1450 cs->remote_partition = true; 1517 1451 cpumask_copy(cs->effective_xcpus, tmp->new_cpus); 1518 1452 spin_unlock_irq(&callback_lock); 1519 - update_isolation_cpumasks(); 1520 1453 cpuset_force_rebuild(); 1521 1454 cs->prs_err = 0; 1522 1455 ··· 1560 1495 compute_excpus(cs, cs->effective_xcpus); 1561 1496 reset_partition_data(cs); 1562 1497 spin_unlock_irq(&callback_lock); 1563 - update_isolation_cpumasks(); 1564 1498 cpuset_force_rebuild(); 1565 1499 1566 1500 /* ··· 1630 1566 if (xcpus) 1631 1567 cpumask_copy(cs->exclusive_cpus, xcpus); 1632 1568 spin_unlock_irq(&callback_lock); 1633 - update_isolation_cpumasks(); 1634 1569 if (adding || deleting) 1635 1570 cpuset_force_rebuild(); 1636 1571 ··· 1973 1910 partition_xcpus_add(new_prs, parent, tmp->delmask); 1974 1911 1975 1912 spin_unlock_irq(&callback_lock); 1976 - update_isolation_cpumasks(); 1977 1913 1978 1914 if ((old_prs != new_prs) && (cmd == partcmd_update)) 1979 1915 update_partition_exclusive_flag(cs, new_prs); ··· 2217 2155 WARN_ON(!is_in_v2_mode() && 2218 2156 !cpumask_equal(cp->cpus_allowed, cp->effective_cpus)); 2219 2157 2220 - cpuset_update_tasks_cpumask(cp, cp->effective_cpus); 2158 + cpuset_update_tasks_cpumask(cp, tmp->new_cpus); 2221 2159 2222 2160 /* 2223 2161 * On default hierarchy, inherit the CS_SCHED_LOAD_BALANCE ··· 2940 2878 else if (isolcpus_updated) 2941 2879 isolated_cpus_update(old_prs, new_prs, cs->effective_xcpus); 2942 2880 spin_unlock_irq(&callback_lock); 2943 - update_isolation_cpumasks(); 2944 2881 2945 2882 /* Force update if switching back to member & update effective_xcpus */ 2946 2883 update_cpumasks_hier(cs, &tmpmask, !new_prs); ··· 3229 3168 } 3230 3169 3231 3170 free_cpuset(trialcs); 3232 - if (force_sd_rebuild) 3233 - rebuild_sched_domains_locked(); 3234 3171 out_unlock: 3172 + update_hk_sched_domains(); 3235 3173 cpuset_full_unlock(); 3236 3174 if (of_cft(of)->private == FILE_MEMLIST) 3237 3175 schedule_flush_migrate_mm(); ··· 3338 3278 cpuset_full_lock(); 3339 3279 if (is_cpuset_online(cs)) 3340 3280 retval = update_prstate(cs, val); 3281 + update_hk_sched_domains(); 3341 3282 cpuset_full_unlock(); 3342 3283 return retval ?: nbytes; 3343 3284 } ··· 3513 3452 /* Reset valid partition back to member */ 3514 3453 if (is_partition_valid(cs)) 3515 3454 update_prstate(cs, PRS_MEMBER); 3455 + update_hk_sched_domains(); 3516 3456 cpuset_full_unlock(); 3517 3457 } ··· 3669 3607 BUG_ON(!alloc_cpumask_var(&top_cpuset.exclusive_cpus, GFP_KERNEL)); 3670 3608 BUG_ON(!zalloc_cpumask_var(&subpartitions_cpus, GFP_KERNEL)); 3671 3609 BUG_ON(!zalloc_cpumask_var(&isolated_cpus, GFP_KERNEL)); 3610 + BUG_ON(!zalloc_cpumask_var(&isolated_hk_cpus, GFP_KERNEL)); 3672 3611 3673 3612 cpumask_setall(top_cpuset.cpus_allowed); 3674 3613
nodes_setall(top_cpuset.mems_allowed); ··· 3841 3778 */ 3842 3779 static void cpuset_handle_hotplug(void) 3843 3780 { 3781 + static DECLARE_WORK(hk_sd_work, hk_sd_workfn); 3844 3782 static cpumask_t new_cpus; 3845 3783 static nodemask_t new_mems; 3846 3784 bool cpus_updated, mems_updated; ··· 3923 3859 rcu_read_unlock(); 3924 3860 } 3925 3861 3926 - /* rebuild sched domains if necessary */ 3927 - if (force_sd_rebuild) 3928 - rebuild_sched_domains_cpuslocked(); 3862 + 3863 + /* 3864 + * Queue a work item to call housekeeping_update() & rebuild_sched_domains(). 3865 + * There will be a slight delay before the HK_TYPE_DOMAIN housekeeping 3866 + * cpumask can correctly reflect what is in isolated_cpus. 3867 + * 3868 + * We rely on WORK_STRUCT_PENDING_BIT to not requeue a work item that 3869 + * is still pending. Before the pending bit is cleared, the work data 3870 + * is copied out and the work item dequeued. So it is possible to queue 3871 + * the work again before hk_sd_workfn() is invoked to process the 3872 + * previously queued work. Since hk_sd_workfn() doesn't use the work 3873 + * item at all, this is not a problem. 3874 + */ 3875 + if (update_housekeeping || force_sd_rebuild) 3876 + queue_work(system_unbound_wq, &hk_sd_work); 3929 3877 3930 3878 free_tmpmasks(ptmp); 3931 3879 }
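The comment in the hunk above leans on a general workqueue property: queue_work() is a no-op while the work item is still pending, so a static, payload-free work item whose handler re-reads current state can safely be kicked from any number of events. A minimal sketch of that pattern; the names (deferred_workfn, state_lock, update_needed) are illustrative, not from the patch:

#include <linux/mutex.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(state_lock);
static bool update_needed;              /* state the handler acts on */

static void deferred_workfn(struct work_struct *work)
{
        mutex_lock(&state_lock);
        /*
         * Act on the latest state. The work item carries no payload, so
         * running once for two requests is harmless: a single handler
         * invocation observes both updates.
         */
        update_needed = false;
        mutex_unlock(&state_lock);
}

static DECLARE_WORK(deferred_work, deferred_workfn);

static void request_deferred_update(void)
{
        mutex_lock(&state_lock);
        update_needed = true;
        mutex_unlock(&state_lock);
        queue_work(system_unbound_wq, &deferred_work); /* no-op while pending */
}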
+6
kernel/exit.c
··· 896 896 void __noreturn do_exit(long code) 897 897 { 898 898 struct task_struct *tsk = current; 899 + struct kthread *kthread; 899 900 int group_dead; 900 901 901 902 WARN_ON(irqs_disabled()); 902 903 WARN_ON(tsk->plug); 904 + 905 + kthread = tsk_is_kthread(tsk); 906 + if (unlikely(kthread)) 907 + kthread_do_exit(kthread, code); 903 908 904 909 kcov_task_exit(tsk); 905 910 kmsan_task_exit(tsk); ··· 1018 1013 lockdep_free_task(tsk); 1019 1014 do_task_dead(); 1020 1015 } 1016 + EXPORT_SYMBOL(do_exit); 1021 1017 1022 1018 void __noreturn make_task_dead(int signr) 1023 1019 {
+5 -36
kernel/kthread.c
··· 85 85 return k->worker_private; 86 86 } 87 87 88 - /* 89 - * Variant of to_kthread() that doesn't assume @p is a kthread. 90 - * 91 - * When "(p->flags & PF_KTHREAD)" is set the task is a kthread and will 92 - * always remain a kthread. For kthreads p->worker_private always 93 - * points to a struct kthread. For tasks that are not kthreads 94 - * p->worker_private is used to point to other things. 95 - * 96 - * Return NULL for any task that is not a kthread. 97 - */ 98 - static inline struct kthread *__to_kthread(struct task_struct *p) 99 - { 100 - void *kthread = p->worker_private; 101 - if (kthread && !(p->flags & PF_KTHREAD)) 102 - kthread = NULL; 103 - return kthread; 104 - } 105 - 106 88 void get_kthread_comm(char *buf, size_t buf_size, struct task_struct *tsk) 107 89 { 108 90 struct kthread *kthread = to_kthread(tsk); ··· 175 193 176 194 bool kthread_should_stop_or_park(void) 177 195 { 178 - struct kthread *kthread = __to_kthread(current); 196 + struct kthread *kthread = tsk_is_kthread(current); 179 197 180 198 if (!kthread) 181 199 return false; ··· 216 234 */ 217 235 void *kthread_func(struct task_struct *task) 218 236 { 219 - struct kthread *kthread = __to_kthread(task); 237 + struct kthread *kthread = tsk_is_kthread(task); 220 238 if (kthread) 221 239 return kthread->threadfn; 222 240 return NULL; ··· 248 266 */ 249 267 void *kthread_probe_data(struct task_struct *task) 250 268 { 251 - struct kthread *kthread = __to_kthread(task); 269 + struct kthread *kthread = tsk_is_kthread(task); 252 270 void *data = NULL; 253 271 254 272 if (kthread) ··· 291 309 } 292 310 EXPORT_SYMBOL_GPL(kthread_parkme); 293 311 294 - /** 295 - * kthread_exit - Cause the current kthread return @result to kthread_stop(). 296 - * @result: The integer value to return to kthread_stop(). 297 - * 298 - * While kthread_exit can be called directly, it exists so that 299 - * functions which do some additional work in non-modular code such as 300 - * module_put_and_kthread_exit can be implemented. 301 - * 302 - * Does not return. 303 - */ 304 - void __noreturn kthread_exit(long result) 312 + void kthread_do_exit(struct kthread *kthread, long result) 305 313 { 306 - struct kthread *kthread = to_kthread(current); 307 314 kthread->result = result; 308 315 if (!list_empty(&kthread->affinity_node)) { 309 316 mutex_lock(&kthread_affinity_lock); ··· 304 333 kthread->preferred_affinity = NULL; 305 334 } 306 335 } 307 - do_exit(0); 308 336 } 309 - EXPORT_SYMBOL(kthread_exit); 310 337 311 338 /** 312 339 * kthread_complete_and_exit - Exit the current kthread. ··· 652 683 653 684 bool kthread_is_per_cpu(struct task_struct *p) 654 685 { 655 - struct kthread *kthread = __to_kthread(p); 686 + struct kthread *kthread = tsk_is_kthread(p); 656 687 if (!kthread) 657 688 return false; 658 689
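The diff removes __to_kthread() and switches its callers to tsk_is_kthread(), whose definition lives outside this diff. A plausible reconstruction, inferred purely from the deleted helper (treat it as an inference, not the actual code):

#include <linux/sched.h>

static inline struct kthread *tsk_is_kthread(struct task_struct *p)
{
        /*
         * For kthreads (PF_KTHREAD set) p->worker_private always points to
         * a struct kthread; regular tasks reuse worker_private for other
         * purposes, so report NULL unless the flag is set.
         */
        return (p->flags & PF_KTHREAD) ? p->worker_private : NULL;
}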
+13 -10
kernel/module/Kconfig
··· 169 169 make them incompatible with the kernel you are running. If 170 170 unsure, say N. 171 171 172 + if MODVERSIONS 173 + 172 174 choice 173 175 prompt "Module versioning implementation" 174 - depends on MODVERSIONS 175 176 help 176 177 Select the tool used to calculate symbol versions for modules. 177 178 ··· 207 206 208 207 config ASM_MODVERSIONS 209 208 bool 210 - default HAVE_ASM_MODVERSIONS && MODVERSIONS 209 + default HAVE_ASM_MODVERSIONS 211 210 help 212 211 This enables module versioning for exported symbols also from 213 212 assembly. This can be enabled only when the target architecture ··· 215 214 216 215 config EXTENDED_MODVERSIONS 217 216 bool "Extended Module Versioning Support" 218 - depends on MODVERSIONS 219 217 help 220 218 This enables extended MODVERSIONs support, allowing long symbol 221 219 names to be versioned. ··· 224 224 225 225 config BASIC_MODVERSIONS 226 226 bool "Basic Module Versioning Support" 227 - depends on MODVERSIONS 228 227 default y 229 228 help 230 229 This enables basic MODVERSIONS support, allowing older tools or ··· 235 236 236 237 This is enabled by default when MODVERSIONS are enabled. 237 238 If unsure, say Y. 239 + 240 + endif # MODVERSIONS 238 241 239 242 config MODULE_SRCVERSION_ALL 240 243 bool "Source checksum for all modules" ··· 278 277 Reject unsigned modules or signed modules for which we don't have a 279 278 key. Without this, such modules will simply taint the kernel. 280 279 280 + if MODULE_SIG || IMA_APPRAISE_MODSIG 281 + 281 282 config MODULE_SIG_ALL 282 283 bool "Automatically sign all modules" 283 284 default y 284 - depends on MODULE_SIG || IMA_APPRAISE_MODSIG 285 285 help 286 286 Sign all modules during make modules_install. Without this option, 287 287 modules must be signed manually, using the scripts/sign-file tool. ··· 292 290 293 291 choice 294 292 prompt "Hash algorithm to sign modules" 295 - depends on MODULE_SIG || IMA_APPRAISE_MODSIG 296 293 default MODULE_SIG_SHA512 297 294 help 298 295 This determines which sort of hashing algorithm will be used during ··· 328 327 329 328 config MODULE_SIG_HASH 330 329 string 331 - depends on MODULE_SIG || IMA_APPRAISE_MODSIG 332 330 default "sha256" if MODULE_SIG_SHA256 333 331 default "sha384" if MODULE_SIG_SHA384 334 332 default "sha512" if MODULE_SIG_SHA512 335 333 default "sha3-256" if MODULE_SIG_SHA3_256 336 334 default "sha3-384" if MODULE_SIG_SHA3_384 337 335 default "sha3-512" if MODULE_SIG_SHA3_512 336 + 337 + endif # MODULE_SIG || IMA_APPRAISE_MODSIG 338 338 339 339 config MODULE_COMPRESS 340 340 bool "Module compression" ··· 352 350 353 351 If unsure, say N. 354 352 353 + if MODULE_COMPRESS 354 + 355 355 choice 356 356 prompt "Module compression type" 357 - depends on MODULE_COMPRESS 358 357 help 359 358 Choose the supported algorithm for module compression. 360 359 ··· 382 379 config MODULE_COMPRESS_ALL 383 380 bool "Automatically compress all modules" 384 381 default y 385 - depends on MODULE_COMPRESS 386 382 help 387 383 Compress all modules during 'make modules_install'. 388 384 ··· 391 389 392 390 config MODULE_DECOMPRESS 393 391 bool "Support in-kernel module decompression" 394 - depends on MODULE_COMPRESS 395 392 select ZLIB_INFLATE if MODULE_COMPRESS_GZIP 396 393 select XZ_DEC if MODULE_COMPRESS_XZ 397 394 select ZSTD_DECOMPRESS if MODULE_COMPRESS_ZSTD ··· 400 399 load pinning security policy is enabled. 401 400 402 401 If unsure, say N. 
402 + 403 + endif # MODULE_COMPRESS 403 404 404 405 config MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS 405 406 bool "Allow loading of modules with missing namespace imports"
+7 -6
kernel/module/main.c
··· 1568 1568 break; 1569 1569 1570 1570 default: 1571 + if (sym[i].st_shndx >= info->hdr->e_shnum) { 1572 + pr_err("%s: Symbol %s has an invalid section index %u (max %u)\n", 1573 + mod->name, name, sym[i].st_shndx, info->hdr->e_shnum - 1); 1574 + ret = -ENOEXEC; 1575 + break; 1576 + } 1577 + 1571 1578 /* Divert to percpu allocation if a percpu var. */ 1572 1579 if (sym[i].st_shndx == info->index.pcpu) 1573 1580 secbase = (unsigned long)mod_percpu(mod); ··· 3551 3544 mutex_unlock(&module_mutex); 3552 3545 free_module: 3553 3546 mod_stat_bump_invalid(info, flags); 3554 - /* Free lock-classes; relies on the preceding sync_rcu() */ 3555 - for_class_mod_mem_type(type, core_data) { 3556 - lockdep_free_key_range(mod->mem[type].base, 3557 - mod->mem[type].size); 3558 - } 3559 - 3560 3547 module_memory_restore_rox(mod); 3561 3548 module_deallocate(mod, info); 3562 3549 free_copy:
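The new st_shndx check guards the default case, which treats the index as a position in the section header table. A standalone sketch of why an unvalidated index from a corrupt module image is dangerous; resolve_symbol_base() and its signature are illustrative, not the loader's API:

#include <linux/elf.h>
#include <linux/errno.h>

/* Assumes reserved indices (SHN_UNDEF, SHN_ABS, ...) were already handled
 * by earlier switch cases, as in the hunk above. */
static int resolve_symbol_base(const Elf64_Ehdr *hdr,
                               const Elf64_Shdr *sechdrs,
                               const Elf64_Sym *sym,
                               unsigned long *base)
{
        if (sym->st_shndx >= hdr->e_shnum)      /* the check the patch adds */
                return -ENOEXEC;
        *base = sechdrs[sym->st_shndx].sh_addr; /* else: out-of-bounds read */
        return 0;
}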
+6
kernel/nscommon.c
··· 309 309 return; 310 310 } 311 311 } 312 + 313 + bool may_see_all_namespaces(void) 314 + { 315 + return (task_active_pid_ns(current) == &init_pid_ns) && 316 + ns_capable_noaudit(init_pid_ns.user_ns, CAP_SYS_ADMIN); 317 + }
+4 -25
kernel/nstree.c
··· 515 515 static inline bool __must_check may_list_ns(const struct klistns *kls, 516 516 struct ns_common *ns) 517 517 { 518 - if (kls->user_ns) { 519 - if (kls->userns_capable) 520 - return true; 521 - } else { 522 - struct ns_common *owner; 523 - struct user_namespace *user_ns; 524 - 525 - owner = ns_owner(ns); 526 - if (owner) 527 - user_ns = to_user_ns(owner); 528 - else 529 - user_ns = &init_user_ns; 530 - if (ns_capable_noaudit(user_ns, CAP_SYS_ADMIN)) 531 - return true; 532 - } 533 - 518 + if (kls->user_ns && kls->userns_capable) 519 + return true; 534 520 if (is_current_namespace(ns)) 535 521 return true; 536 - 537 - if (ns->ns_type != CLONE_NEWUSER) 538 - return false; 539 - 540 - if (ns_capable_noaudit(to_user_ns(ns), CAP_SYS_ADMIN)) 541 - return true; 542 - 543 - return false; 522 + return may_see_all_namespaces(); 544 523 } 545 524 546 525 static inline void ns_put(struct ns_common *ns) ··· 579 600 580 601 ret = 0; 581 602 head = &to_ns_common(kls->user_ns)->ns_owner_root.ns_list_head; 582 - kls->userns_capable = ns_capable_noaudit(kls->user_ns, CAP_SYS_ADMIN); 603 + kls->userns_capable = may_see_all_namespaces(); 583 604 584 605 rcu_read_lock(); 585 606
+83 -18
kernel/sched/ext.c
··· 976 976 977 977 static void dsq_mod_nr(struct scx_dispatch_q *dsq, s32 delta) 978 978 { 979 - /* scx_bpf_dsq_nr_queued() reads ->nr without locking, use WRITE_ONCE() */ 980 - WRITE_ONCE(dsq->nr, dsq->nr + delta); 979 + /* 980 + * scx_bpf_dsq_nr_queued() reads ->nr without locking. Use READ_ONCE() 981 + * on the read side and WRITE_ONCE() on the write side to properly 982 + * annotate the concurrent lockless access and avoid KCSAN warnings. 983 + */ 984 + WRITE_ONCE(dsq->nr, READ_ONCE(dsq->nr) + delta); 981 985 } 982 986 983 987 static void refill_task_slice_dfl(struct scx_sched *sch, struct task_struct *p) ··· 2739 2735 unsigned long last_runnable = p->scx.runnable_at; 2740 2736 2741 2737 if (unlikely(time_after(jiffies, 2742 - last_runnable + scx_watchdog_timeout))) { 2738 + last_runnable + READ_ONCE(scx_watchdog_timeout)))) { 2743 2739 u32 dur_ms = jiffies_to_msecs(jiffies - last_runnable); 2744 2740 2745 2741 scx_exit(sch, SCX_EXIT_ERROR_STALL, 0, ··· 2767 2763 cond_resched(); 2768 2764 } 2769 2765 queue_delayed_work(system_unbound_wq, to_delayed_work(work), 2770 - scx_watchdog_timeout / 2); 2766 + READ_ONCE(scx_watchdog_timeout) / 2); 2771 2767 } 2772 2768 2773 2769 void scx_tick(struct rq *rq) ··· 3589 3585 ret = SCX_CALL_OP_RET(sch, SCX_KF_UNLOCKED, cgroup_init, NULL, 3590 3586 css->cgroup, &args); 3591 3587 if (ret) { 3592 - css_put(css); 3593 3588 scx_error(sch, "ops.cgroup_init() failed (%d)", ret); 3594 3589 return ret; 3595 3590 } ··· 3711 3708 static ssize_t scx_attr_ops_show(struct kobject *kobj, 3712 3709 struct kobj_attribute *ka, char *buf) 3713 3710 { 3714 - return sysfs_emit(buf, "%s\n", scx_root->ops.name); 3711 + struct scx_sched *sch = container_of(kobj, struct scx_sched, kobj); 3712 + 3713 + return sysfs_emit(buf, "%s\n", sch->ops.name); 3715 3714 } 3716 3715 SCX_ATTR(ops); 3717 3716 ··· 3757 3752 3758 3753 static int scx_uevent(const struct kobject *kobj, struct kobj_uevent_env *env) 3759 3754 { 3760 - return add_uevent_var(env, "SCXOPS=%s", scx_root->ops.name); 3755 + const struct scx_sched *sch = container_of(kobj, struct scx_sched, kobj); 3756 + 3757 + return add_uevent_var(env, "SCXOPS=%s", sch->ops.name); 3761 3758 } 3762 3759 3763 3760 static const struct kset_uevent_ops scx_uevent_ops = { ··· 4430 4423 scx_bypass(false); 4431 4424 } 4432 4425 4426 + /* 4427 + * Claim the exit on @sch. The caller must ensure that the helper kthread work 4428 + * is kicked before the current task can be preempted. Once exit_kind is 4429 + * claimed, scx_error() can no longer trigger, so if the current task gets 4430 + * preempted and the BPF scheduler fails to schedule it back, the helper work 4431 + * will never be kicked and the whole system can wedge. 
4432 + */ 4433 4433 static bool scx_claim_exit(struct scx_sched *sch, enum scx_exit_kind kind) 4434 4434 { 4435 4435 int none = SCX_EXIT_NONE; 4436 + 4437 + lockdep_assert_preemption_disabled(); 4436 4438 4437 4439 if (!atomic_try_cmpxchg(&sch->exit_kind, &none, kind)) 4438 4440 return false; ··· 4465 4449 rcu_read_lock(); 4466 4450 sch = rcu_dereference(scx_root); 4467 4451 if (sch) { 4452 + guard(preempt)(); 4468 4453 scx_claim_exit(sch, kind); 4469 4454 kthread_queue_work(sch->helper, &sch->disable_work); 4470 4455 } ··· 4788 4771 { 4789 4772 struct scx_exit_info *ei = sch->exit_info; 4790 4773 4774 + guard(preempt)(); 4775 + 4791 4776 if (!scx_claim_exit(sch, kind)) 4792 4777 return false; 4793 4778 ··· 4974 4955 return 0; 4975 4956 } 4976 4957 4977 - static int scx_enable(struct sched_ext_ops *ops, struct bpf_link *link) 4958 + /* 4959 + * scx_enable() is offloaded to a dedicated system-wide RT kthread to avoid 4960 + * starvation. During the READY -> ENABLED task switching loop, the calling 4961 + * thread's sched_class gets switched from fair to ext. As fair has higher 4962 + * priority than ext, the calling thread can be indefinitely starved under 4963 + * fair-class saturation, leading to a system hang. 4964 + */ 4965 + struct scx_enable_cmd { 4966 + struct kthread_work work; 4967 + struct sched_ext_ops *ops; 4968 + int ret; 4969 + }; 4970 + 4971 + static void scx_enable_workfn(struct kthread_work *work) 4978 4972 { 4973 + struct scx_enable_cmd *cmd = 4974 + container_of(work, struct scx_enable_cmd, work); 4975 + struct sched_ext_ops *ops = cmd->ops; 4979 4976 struct scx_sched *sch; 4980 4977 struct scx_task_iter sti; 4981 4978 struct task_struct *p; 4982 4979 unsigned long timeout; 4983 4980 int i, cpu, ret; 4984 - 4985 - if (!cpumask_equal(housekeeping_cpumask(HK_TYPE_DOMAIN), 4986 - cpu_possible_mask)) { 4987 - pr_err("sched_ext: Not compatible with \"isolcpus=\" domain isolation\n"); 4988 - return -EINVAL; 4989 - } 4990 4981 4991 4982 mutex_lock(&scx_enable_mutex); 4992 4983 ··· 5089 5060 WRITE_ONCE(scx_watchdog_timeout, timeout); 5090 5061 WRITE_ONCE(scx_watchdog_timestamp, jiffies); 5091 5062 queue_delayed_work(system_unbound_wq, &scx_watchdog_work, 5092 - scx_watchdog_timeout / 2); 5063 + READ_ONCE(scx_watchdog_timeout) / 2); 5093 5064 5094 5065 /* 5095 5066 * Once __scx_enabled is set, %current can be switched to SCX anytime. 
··· 5214 5185 5215 5186 atomic_long_inc(&scx_enable_seq); 5216 5187 5217 - return 0; 5188 + cmd->ret = 0; 5189 + return; 5218 5190 5219 5191 err_free_ksyncs: 5220 5192 free_kick_syncs(); 5221 5193 err_unlock: 5222 5194 mutex_unlock(&scx_enable_mutex); 5223 - return ret; 5195 + cmd->ret = ret; 5196 + return; 5224 5197 5225 5198 err_disable_unlock_all: 5226 5199 scx_cgroup_unlock(); ··· 5241 5210 */ 5242 5211 scx_error(sch, "scx_enable() failed (%d)", ret); 5243 5212 kthread_flush_work(&sch->disable_work); 5244 - return 0; 5213 + cmd->ret = 0; 5214 + } 5215 + 5216 + static int scx_enable(struct sched_ext_ops *ops, struct bpf_link *link) 5217 + { 5218 + static struct kthread_worker *helper; 5219 + static DEFINE_MUTEX(helper_mutex); 5220 + struct scx_enable_cmd cmd; 5221 + 5222 + if (!cpumask_equal(housekeeping_cpumask(HK_TYPE_DOMAIN), 5223 + cpu_possible_mask)) { 5224 + pr_err("sched_ext: Not compatible with \"isolcpus=\" domain isolation\n"); 5225 + return -EINVAL; 5226 + } 5227 + 5228 + if (!READ_ONCE(helper)) { 5229 + mutex_lock(&helper_mutex); 5230 + if (!helper) { 5231 + helper = kthread_run_worker(0, "scx_enable_helper"); 5232 + if (IS_ERR_OR_NULL(helper)) { 5233 + helper = NULL; 5234 + mutex_unlock(&helper_mutex); 5235 + return -ENOMEM; 5236 + } 5237 + sched_set_fifo(helper->task); 5238 + } 5239 + mutex_unlock(&helper_mutex); 5240 + } 5241 + 5242 + kthread_init_work(&cmd.work, scx_enable_workfn); 5243 + cmd.ops = ops; 5244 + 5245 + kthread_queue_work(READ_ONCE(helper), &cmd.work); 5246 + kthread_flush_work(&cmd.work); 5247 + return cmd.ret; 5245 5248 } 5246 5249 5247 5250
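Beyond the scheduler specifics, the new scx_enable() shows a reusable shape: run a function on a dedicated kthread_worker and wait for its result through an on-stack work item. Distilled, with illustrative names (my_cmd, run_on_worker):

#include <linux/kthread.h>

struct my_cmd {
        struct kthread_work work;
        int arg;
        int ret;
};

static void my_workfn(struct kthread_work *work)
{
        struct my_cmd *cmd = container_of(work, struct my_cmd, work);

        cmd->ret = cmd->arg * 2;        /* stand-in for the real operation */
}

static int run_on_worker(struct kthread_worker *worker, int arg)
{
        struct my_cmd cmd = { .arg = arg };

        kthread_init_work(&cmd.work, my_workfn);
        kthread_queue_work(worker, &cmd.work);
        kthread_flush_work(&cmd.work);  /* wait before the stack frame dies */
        return cmd.ret;
}

The flush is what makes the on-stack command safe: the worker cannot touch cmd after kthread_flush_work() returns, so the caller's frame may go away.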
+2 -3
kernel/sched/ext_idle.c
··· 663 663 BUG_ON(!alloc_cpumask_var(&scx_idle_global_masks.cpu, GFP_KERNEL)); 664 664 BUG_ON(!alloc_cpumask_var(&scx_idle_global_masks.smt, GFP_KERNEL)); 665 665 666 - /* Allocate per-node idle cpumasks */ 667 - scx_idle_node_masks = kzalloc_objs(*scx_idle_node_masks, 668 - num_possible_nodes()); 666 + /* Allocate per-node idle cpumasks (use nr_node_ids for non-contiguous NUMA nodes) */ 667 + scx_idle_node_masks = kzalloc_objs(*scx_idle_node_masks, nr_node_ids); 669 668 BUG_ON(!scx_idle_node_masks); 670 669 671 670 for_each_node(i) {
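The fix matters on machines with sparse NUMA node numbering: num_possible_nodes() counts possible nodes, while for_each_node() can yield IDs up to nr_node_ids - 1. A sketch under an assumed topology (nodes {0, 2} possible, node 1 absent; kcalloc() stands in for kzalloc_objs()):

#include <linux/cpumask.h>
#include <linux/nodemask.h>
#include <linux/slab.h>

struct node_masks {
        cpumask_var_t cpu;
};

static struct node_masks *masks;

static int __init alloc_node_masks(void)
{
        int nid;

        /* num_possible_nodes() == 2 here, but nid reaches 2 below, so the
         * array needs nr_node_ids (== 3) entries, as the fix above does. */
        masks = kcalloc(nr_node_ids, sizeof(*masks), GFP_KERNEL);
        if (!masks)
                return -ENOMEM;

        for_each_node(nid)              /* visits nid == 0 and nid == 2 */
                if (!zalloc_cpumask_var(&masks[nid].cpu, GFP_KERNEL))
                        return -ENOMEM;
        return 0;
}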
+1 -1
kernel/sched/ext_internal.h
··· 74 74 * info communication. The following flag indicates whether ops.init() 75 75 * finished successfully. 76 76 */ 77 - SCX_EFLAG_INITIALIZED, 77 + SCX_EFLAG_INITIALIZED = 1LLU << 0, 78 78 }; 79 79 80 80 /*
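The one-character-looking change is a real bug fix: without an initializer the first enumerator is 0, which can never be tested as a bit flag, so a mask test against SCX_EFLAG_INITIALIZED was always false. In general:

enum my_flags {
        MY_FLAG_BROKEN,                 /* == 0: (flags & MY_FLAG_BROKEN) is always 0 */
        MY_FLAG_FIRST  = 1LLU << 0,     /* == 1: a usable single-bit mask */
        MY_FLAG_SECOND = 1LLU << 1,
};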
+1 -3
kernel/sched/isolation.c
··· 123 123 struct cpumask *trial, *old = NULL; 124 124 int err; 125 125 126 - lockdep_assert_cpus_held(); 127 - 128 126 trial = kmalloc(cpumask_size(), GFP_KERNEL); 129 127 if (!trial) 130 128 return -ENOMEM; ··· 134 136 } 135 137 136 138 if (!housekeeping.flags) 137 - static_branch_enable_cpuslocked(&housekeeping_overridden); 139 + static_branch_enable(&housekeeping_overridden); 138 140 139 141 if (housekeeping.flags & HK_FLAG_DOMAIN) 140 142 old = housekeeping_cpumask_dereference(HK_TYPE_DOMAIN);
+30
kernel/sched/syscalls.c
··· 284 284 uid_eq(cred->euid, pcred->uid)); 285 285 } 286 286 287 + #ifdef CONFIG_RT_MUTEXES 288 + static inline void __setscheduler_dl_pi(int newprio, int policy, 289 + struct task_struct *p, 290 + struct sched_change_ctx *scope) 291 + { 292 + /* 293 + * In case a DEADLINE task (either proper or boosted) gets 294 + * setscheduled to a lower priority class, check if it needs to 295 + * inherit parameters from a potential pi_task. In that case make 296 + * sure replenishment happens with the next enqueue. 297 + */ 298 + 299 + if (dl_prio(newprio) && !dl_policy(policy)) { 300 + struct task_struct *pi_task = rt_mutex_get_top_task(p); 301 + 302 + if (pi_task) { 303 + p->dl.pi_se = pi_task->dl.pi_se; 304 + scope->flags |= ENQUEUE_REPLENISH; 305 + } 306 + } 307 + } 308 + #else /* !CONFIG_RT_MUTEXES */ 309 + static inline void __setscheduler_dl_pi(int newprio, int policy, 310 + struct task_struct *p, 311 + struct sched_change_ctx *scope) 312 + { 313 + } 314 + #endif /* !CONFIG_RT_MUTEXES */ 315 + 287 316 #ifdef CONFIG_UCLAMP_TASK 288 317 289 318 static int uclamp_validate(struct task_struct *p, ··· 684 655 __setscheduler_params(p, attr); 685 656 p->sched_class = next_class; 686 657 p->prio = newprio; 658 + __setscheduler_dl_pi(newprio, policy, p, scope); 687 659 } 688 660 __setscheduler_uclamp(p, attr); 689 661
-2
kernel/time/jiffies.c
··· 256 256 int proc_dointvec_userhz_jiffies(const struct ctl_table *table, int dir, 257 257 void *buffer, size_t *lenp, loff_t *ppos) 258 258 { 259 - if (SYSCTL_USER_TO_KERN(dir) && USER_HZ < HZ) 260 - return -EINVAL; 261 259 return proc_dointvec_conv(table, dir, buffer, lenp, ppos, 262 260 do_proc_int_conv_userhz_jiffies); 263 261 }
+4 -2
kernel/time/timekeeping.c
··· 2653 2653 2654 2654 if (aux_clock) { 2655 2655 /* Auxiliary clocks are similar to TAI and do not have leap seconds */ 2656 - if (txc->status & (STA_INS | STA_DEL)) 2656 + if (txc->modes & ADJ_STATUS && 2657 + txc->status & (STA_INS | STA_DEL)) 2657 2658 return -EINVAL; 2658 2659 2659 2660 /* No TAI offset setting */ ··· 2662 2661 return -EINVAL; 2663 2662 2664 2663 /* No PPS support either */ 2665 - if (txc->status & (STA_PPSFREQ | STA_PPSTIME)) 2664 + if (txc->modes & ADJ_STATUS && 2665 + txc->status & (STA_PPSFREQ | STA_PPSTIME)) 2666 2666 return -EINVAL; 2667 2667 } 2668 2668
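In the adjtimex ABI, txc->status carries meaning only when the caller sets ADJ_STATUS in txc->modes; before this fix, leftover status bits in a request that never asked to change status could spuriously fail on auxiliary clocks. A caller-side sketch (aux_clkid is a placeholder for a real auxiliary clock ID):

#include <sys/timex.h>
#include <time.h>

static int adjust_aux_clock(clockid_t aux_clkid)
{
        struct timex txc = {
                .modes  = ADJ_STATUS,
                .status = STA_INS,  /* leap-second insert: no aux-clock support */
        };

        /* Expected to fail with EINVAL on an auxiliary clock. A request
         * with .modes == 0 and the same stale status bits is now accepted,
         * because status is ignored unless ADJ_STATUS is set. */
        return clock_adjtime(aux_clkid, &txc);
}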
+1 -3
kernel/time/timer_migration.c
··· 1559 1559 cpumask_var_t cpumask __free(free_cpumask_var) = CPUMASK_VAR_NULL; 1560 1560 int cpu; 1561 1561 1562 - lockdep_assert_cpus_held(); 1563 - 1564 1562 if (!works) 1565 1563 return -ENOMEM; 1566 1564 if (!alloc_cpumask_var(&cpumask, GFP_KERNEL)) ··· 1568 1570 * First set previously isolated CPUs as available (unisolate). 1569 1571 * This cpumask contains only CPUs that switched to available now. 1570 1572 */ 1573 + guard(cpus_read_lock)(); 1571 1574 cpumask_andnot(cpumask, cpu_online_mask, exclude_cpumask); 1572 1575 cpumask_andnot(cpumask, cpumask, tmigr_available_cpumask); 1573 1576 ··· 1625 1626 cpumask_andnot(cpumask, cpu_possible_mask, housekeeping_cpumask(HK_TYPE_DOMAIN)); 1626 1627 1627 1628 /* Protect against RCU torture hotplug testing */ 1628 - guard(cpus_read_lock)(); 1629 1629 return tmigr_isolated_exclude_cpumask(cpumask); 1630 1630 } 1631 1631 late_initcall(tmigr_init_isolation);
+1 -2
kernel/trace/blktrace.c
··· 383 383 cpu = raw_smp_processor_id(); 384 384 385 385 if (blk_tracer) { 386 - tracing_record_cmdline(current); 387 - 388 386 buffer = blk_tr->array_buffer.buffer; 389 387 trace_ctx = tracing_gen_ctx_flags(0); 390 388 switch (bt->version) { ··· 417 419 if (!event) 418 420 return; 419 421 422 + tracing_record_cmdline(current); 420 423 switch (bt->version) { 421 424 case 1: 422 425 record_blktrace_event(ring_buffer_event_data(event),
+4
kernel/trace/ftrace.c
··· 6404 6404 new_filter_hash = old_filter_hash; 6405 6405 } 6406 6406 } else { 6407 + guard(mutex)(&ftrace_lock); 6407 6408 err = ftrace_update_ops(ops, new_filter_hash, EMPTY_HASH); 6408 6409 /* 6409 6410 * new_filter_hash is dup-ed, so we need to release it anyway, ··· 6531 6530 ops->func_hash->filter_hash = NULL; 6532 6531 } 6533 6532 } else { 6533 + guard(mutex)(&ftrace_lock); 6534 6534 err = ftrace_update_ops(ops, new_filter_hash, EMPTY_HASH); 6535 6535 /* 6536 6536 * new_filter_hash is dup-ed, so we need to release it anyway, ··· 8613 8611 struct trace_pid_list *pid_list; 8614 8612 struct trace_array *tr = data; 8615 8613 8614 + guard(preempt)(); 8616 8615 pid_list = rcu_dereference_sched(tr->function_pids); 8617 8616 trace_filter_add_remove_task(pid_list, self, task); 8618 8617 ··· 8627 8624 struct trace_pid_list *pid_list; 8628 8625 struct trace_array *tr = data; 8629 8626 8627 + guard(preempt)(); 8630 8628 pid_list = rcu_dereference_sched(tr->function_pids); 8631 8629 trace_filter_add_remove_task(pid_list, NULL, task); 8632 8630
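Both changes use the scope-based cleanup helpers from <linux/cleanup.h>: guard(mutex)(&m) takes the mutex and releases it when the enclosing scope exits, and guard(preempt)() brackets the scope with preempt_disable()/preempt_enable(), which is what legitimizes the rcu_dereference_sched() calls above. A minimal illustration:

#include <linux/cleanup.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(my_lock);
static int counter;

static int bump(void)
{
        guard(mutex)(&my_lock); /* mutex_lock() here ... */

        if (counter < 0)
                return -EINVAL; /* ... mutex_unlock() runs on this path too */
        return ++counter;
}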
+21
kernel/trace/ring_buffer.c
··· 7310 7310 return err; 7311 7311 } 7312 7312 7313 + /* 7314 + * This is called when a VMA is duplicated (e.g., on fork()) to increment 7315 + * the user_mapped counter without remapping pages. 7316 + */ 7317 + void ring_buffer_map_dup(struct trace_buffer *buffer, int cpu) 7318 + { 7319 + struct ring_buffer_per_cpu *cpu_buffer; 7320 + 7321 + if (WARN_ON(!cpumask_test_cpu(cpu, buffer->cpumask))) 7322 + return; 7323 + 7324 + cpu_buffer = buffer->buffers[cpu]; 7325 + 7326 + guard(mutex)(&cpu_buffer->mapping_lock); 7327 + 7328 + if (cpu_buffer->user_mapped) 7329 + __rb_inc_dec_mapped(cpu_buffer, true); 7330 + else 7331 + WARN(1, "Unexpected buffer state, it should be mapped"); 7332 + } 7333 + 7313 7334 int ring_buffer_unmap(struct trace_buffer *buffer, int cpu) 7314 7335 { 7315 7336 struct ring_buffer_per_cpu *cpu_buffer;
+16 -3
kernel/trace/trace.c
··· 8213 8213 static inline void put_snapshot_map(struct trace_array *tr) { } 8214 8214 #endif 8215 8215 8216 + /* 8217 + * This is called when a VMA is duplicated (e.g., on fork()) to increment 8218 + * the user_mapped counter without remapping pages. 8219 + */ 8220 + static void tracing_buffers_mmap_open(struct vm_area_struct *vma) 8221 + { 8222 + struct ftrace_buffer_info *info = vma->vm_file->private_data; 8223 + struct trace_iterator *iter = &info->iter; 8224 + 8225 + ring_buffer_map_dup(iter->array_buffer->buffer, iter->cpu_file); 8226 + } 8227 + 8216 8228 static void tracing_buffers_mmap_close(struct vm_area_struct *vma) 8217 8229 { 8218 8230 struct ftrace_buffer_info *info = vma->vm_file->private_data; ··· 8244 8232 } 8245 8233 8246 8234 static const struct vm_operations_struct tracing_buffers_vmops = { 8235 + .open = tracing_buffers_mmap_open, 8247 8236 .close = tracing_buffers_mmap_close, 8248 8237 .may_split = tracing_buffers_may_split, 8249 8238 }; ··· 9350 9337 } 9351 9338 9352 9339 static int 9353 - allocate_trace_buffer(struct trace_array *tr, struct array_buffer *buf, int size) 9340 + allocate_trace_buffer(struct trace_array *tr, struct array_buffer *buf, unsigned long size) 9354 9341 { 9355 9342 enum ring_buffer_flags rb_flags; 9356 9343 struct trace_scratch *tscratch; ··· 9405 9392 } 9406 9393 } 9407 9394 9408 - static int allocate_trace_buffers(struct trace_array *tr, int size) 9395 + static int allocate_trace_buffers(struct trace_array *tr, unsigned long size) 9409 9396 { 9410 9397 int ret; 9411 9398 ··· 10769 10756 10770 10757 __init static int tracer_alloc_buffers(void) 10771 10758 { 10772 - int ring_buf_size; 10759 + unsigned long ring_buf_size; 10773 10760 int ret = -ENOMEM; 10774 10761 10775 10762
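The new .open callback plugs a fork() hole: a child's copy of the mapping never goes through mmap(), so without .open the mapping count would be decremented once per copy at .close time but incremented only once. Generic shape of the pairing, with a hypothetical refcounted buffer (struct my_buf, my_buf_free()):

#include <linux/mm.h>
#include <linux/refcount.h>
#include <linux/slab.h>

struct my_buf {
        refcount_t users;
};

static void my_buf_free(struct my_buf *buf)
{
        kfree(buf);
}

static void my_vma_open(struct vm_area_struct *vma)
{
        struct my_buf *buf = vma->vm_private_data;

        refcount_inc(&buf->users);      /* a new copy of the VMA, e.g. fork() */
}

static void my_vma_close(struct vm_area_struct *vma)
{
        struct my_buf *buf = vma->vm_private_data;

        if (refcount_dec_and_test(&buf->users))
                my_buf_free(buf);
}

static const struct vm_operations_struct my_vmops = {
        .open  = my_vma_open,
        .close = my_vma_close,
};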
+44 -16
kernel/trace/trace_events.c
··· 1039 1039 struct trace_pid_list *pid_list; 1040 1040 struct trace_array *tr = data; 1041 1041 1042 + guard(preempt)(); 1042 1043 pid_list = rcu_dereference_raw(tr->filtered_pids); 1043 1044 trace_filter_add_remove_task(pid_list, NULL, task); 1044 1045 ··· 1055 1054 struct trace_pid_list *pid_list; 1056 1055 struct trace_array *tr = data; 1057 1056 1057 + guard(preempt)(); 1058 1058 pid_list = rcu_dereference_sched(tr->filtered_pids); 1059 1059 trace_filter_add_remove_task(pid_list, self, task); 1060 1060 ··· 4493 4491 4494 4492 static __init int setup_trace_event(char *str) 4495 4493 { 4496 - strscpy(bootup_event_buf, str, COMMAND_LINE_SIZE); 4494 + if (bootup_event_buf[0] != '\0') 4495 + strlcat(bootup_event_buf, ",", COMMAND_LINE_SIZE); 4496 + 4497 + strlcat(bootup_event_buf, str, COMMAND_LINE_SIZE); 4498 + 4497 4499 trace_set_ring_buffer_expanded(NULL); 4498 4500 disable_tracing_selftest("running event tracing"); 4499 4501 ··· 4674 4668 return 0; 4675 4669 } 4676 4670 4677 - __init void 4678 - early_enable_events(struct trace_array *tr, char *buf, bool disable_first) 4671 + /* 4672 + * Helper function to enable or disable a comma-separated list of events 4673 + * from the bootup buffer. 4674 + */ 4675 + static __init void __early_set_events(struct trace_array *tr, char *buf, bool enable) 4679 4676 { 4680 4677 char *token; 4681 - int ret; 4682 4678 4683 - while (true) { 4684 - token = strsep(&buf, ","); 4685 - 4686 - if (!token) 4687 - break; 4688 - 4679 + while ((token = strsep(&buf, ","))) { 4689 4680 if (*token) { 4690 - /* Restarting syscalls requires that we stop them first */ 4691 - if (disable_first) 4681 + if (enable) { 4682 + if (ftrace_set_clr_event(tr, token, 1)) 4683 + pr_warn("Failed to enable trace event: %s\n", token); 4684 + } else { 4692 4685 ftrace_set_clr_event(tr, token, 0); 4693 - 4694 - ret = ftrace_set_clr_event(tr, token, 1); 4695 - if (ret) 4696 - pr_warn("Failed to enable trace event: %s\n", token); 4686 + } 4697 4687 } 4698 4688 4699 4689 /* Put back the comma to allow this to be called again */ 4700 4690 if (buf) 4701 4691 *(buf - 1) = ','; 4702 4692 } 4693 + } 4694 + 4695 + /** 4696 + * early_enable_events - enable events from the bootup buffer 4697 + * @tr: The trace array to enable the events in 4698 + * @buf: The buffer containing the comma separated list of events 4699 + * @disable_first: If true, disable all events in @buf before enabling them 4700 + * 4701 + * This function enables events from the bootup buffer. If @disable_first 4702 + * is true, it will first disable all events in the buffer before enabling 4703 + * them. 4704 + * 4705 + * For syscall events, which rely on a global refcount to register the 4706 + * SYSCALL_WORK_SYSCALL_TRACEPOINT flag (especially for pid 1), we must 4707 + * ensure the refcount hits zero before re-enabling them. A simple 4708 + * "disable then enable" per-event is not enough if multiple syscalls are 4709 + * used, as the refcount will stay above zero. Thus, we need a two-phase 4710 + * approach: disable all, then enable all. 4711 + */ 4712 + __init void 4713 + early_enable_events(struct trace_array *tr, char *buf, bool disable_first) 4714 + { 4715 + if (disable_first) 4716 + __early_set_events(tr, buf, false); 4717 + 4718 + __early_set_events(tr, buf, true); 4703 4719 } 4704 4720 4705 4721 static __init int event_trace_enable(void)
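The two-phase requirement in the new kernel-doc is easiest to see with concrete numbers. Suppose two syscall events are already enabled, so the tracepoint registration refcount is 2; a worked trace of both orderings:

/*
 * First enable registers SYSCALL_WORK_SYSCALL_TRACEPOINT, last disable
 * unregisters it. Events A and B start enabled: refcount == 2.
 *
 *   per-event disable/enable         two-phase (disable all, enable all)
 *   ------------------------         -----------------------------------
 *   disable A -> 1                   disable A -> 1
 *   enable  A -> 2                   disable B -> 0   unregistered
 *   disable B -> 1                   enable  A -> 1   re-registered,
 *   enable  B -> 2                   enable  B -> 2   picked up by pid 1
 *
 * On the left the refcount never reaches 0, so registration is never
 * redone for already-running tasks; on the right it is.
 */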
+3
kernel/trace/trace_events_trigger.c
··· 50 50 51 51 void trigger_data_free(struct event_trigger_data *data) 52 52 { 53 + if (!data) 54 + return; 55 + 53 56 if (data->cmd_ops->set_filter) 54 57 data->cmd_ops->set_filter(NULL, data, NULL); 55 58
+13 -6
kernel/trace/trace_functions_graph.c
··· 400 400 struct fgraph_ops *gops, 401 401 struct ftrace_regs *fregs) 402 402 { 403 + unsigned long *task_var = fgraph_get_task_var(gops); 403 404 struct fgraph_times *ftimes; 404 405 struct trace_array *tr; 406 + unsigned int trace_ctx; 407 + u64 calltime, rettime; 405 408 int size; 409 + 410 + rettime = trace_clock_local(); 406 411 407 412 ftrace_graph_addr_finish(gops, trace); 408 413 409 - if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) { 410 - trace_recursion_clear(TRACE_GRAPH_NOTRACE_BIT); 414 + if (*task_var & TRACE_GRAPH_NOTRACE) { 415 + *task_var &= ~TRACE_GRAPH_NOTRACE; 411 416 return; 412 417 } 413 418 ··· 423 418 tr = gops->private; 424 419 handle_nosleeptime(tr, trace, ftimes, size); 425 420 426 - if (tracing_thresh && 427 - (trace_clock_local() - ftimes->calltime < tracing_thresh)) 421 + calltime = ftimes->calltime; 422 + 423 + if (tracing_thresh && (rettime - calltime < tracing_thresh)) 428 424 return; 429 - else 430 - trace_graph_return(trace, gops, fregs); 425 + 426 + trace_ctx = tracing_gen_ctx(); 427 + __trace_graph_return(tr, trace, trace_ctx, calltime, rettime); 431 428 } 432 429 433 430 static struct fgraph_ops funcgraph_ops = {
+34
lib/crypto/.kunitconfig
··· 1 + CONFIG_KUNIT=y 2 + 3 + # These kconfig options select all the CONFIG_CRYPTO_LIB_* symbols that have a 4 + # corresponding KUnit test. Those symbols cannot be directly enabled here, 5 + # since they are hidden symbols. 6 + CONFIG_CRYPTO=y 7 + CONFIG_CRYPTO_ADIANTUM=y 8 + CONFIG_CRYPTO_BLAKE2B=y 9 + CONFIG_CRYPTO_CHACHA20POLY1305=y 10 + CONFIG_CRYPTO_HCTR2=y 11 + CONFIG_CRYPTO_MD5=y 12 + CONFIG_CRYPTO_MLDSA=y 13 + CONFIG_CRYPTO_SHA1=y 14 + CONFIG_CRYPTO_SHA256=y 15 + CONFIG_CRYPTO_SHA512=y 16 + CONFIG_CRYPTO_SHA3=y 17 + CONFIG_INET=y 18 + CONFIG_IPV6=y 19 + CONFIG_NET=y 20 + CONFIG_NETDEVICES=y 21 + CONFIG_WIREGUARD=y 22 + 23 + CONFIG_CRYPTO_LIB_BLAKE2B_KUNIT_TEST=y 24 + CONFIG_CRYPTO_LIB_BLAKE2S_KUNIT_TEST=y 25 + CONFIG_CRYPTO_LIB_CURVE25519_KUNIT_TEST=y 26 + CONFIG_CRYPTO_LIB_MD5_KUNIT_TEST=y 27 + CONFIG_CRYPTO_LIB_MLDSA_KUNIT_TEST=y 28 + CONFIG_CRYPTO_LIB_NH_KUNIT_TEST=y 29 + CONFIG_CRYPTO_LIB_POLY1305_KUNIT_TEST=y 30 + CONFIG_CRYPTO_LIB_POLYVAL_KUNIT_TEST=y 31 + CONFIG_CRYPTO_LIB_SHA1_KUNIT_TEST=y 32 + CONFIG_CRYPTO_LIB_SHA256_KUNIT_TEST=y 33 + CONFIG_CRYPTO_LIB_SHA512_KUNIT_TEST=y 34 + CONFIG_CRYPTO_LIB_SHA3_KUNIT_TEST=y
+12 -23
lib/crypto/tests/Kconfig
··· 2 2 3 3 config CRYPTO_LIB_BLAKE2B_KUNIT_TEST 4 4 tristate "KUnit tests for BLAKE2b" if !KUNIT_ALL_TESTS 5 - depends on KUNIT 5 + depends on KUNIT && CRYPTO_LIB_BLAKE2B 6 6 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 7 7 select CRYPTO_LIB_BENCHMARK_VISIBLE 8 - select CRYPTO_LIB_BLAKE2B 9 8 help 10 9 KUnit tests for the BLAKE2b cryptographic hash function. 11 10 ··· 13 14 depends on KUNIT 14 15 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 15 16 select CRYPTO_LIB_BENCHMARK_VISIBLE 16 - # No need to select CRYPTO_LIB_BLAKE2S here, as that option doesn't 17 + # No need to depend on CRYPTO_LIB_BLAKE2S here, as that option doesn't 17 18 # exist; the BLAKE2s code is always built-in for the /dev/random driver. 18 19 help 19 20 KUnit tests for the BLAKE2s cryptographic hash function. 20 21 21 22 config CRYPTO_LIB_CURVE25519_KUNIT_TEST 22 23 tristate "KUnit tests for Curve25519" if !KUNIT_ALL_TESTS 23 - depends on KUNIT 24 + depends on KUNIT && CRYPTO_LIB_CURVE25519 24 25 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 25 26 select CRYPTO_LIB_BENCHMARK_VISIBLE 26 - select CRYPTO_LIB_CURVE25519 27 27 help 28 28 KUnit tests for the Curve25519 Diffie-Hellman function. 29 29 30 30 config CRYPTO_LIB_MD5_KUNIT_TEST 31 31 tristate "KUnit tests for MD5" if !KUNIT_ALL_TESTS 32 - depends on KUNIT 32 + depends on KUNIT && CRYPTO_LIB_MD5 33 33 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 34 34 select CRYPTO_LIB_BENCHMARK_VISIBLE 35 - select CRYPTO_LIB_MD5 36 35 help 37 36 KUnit tests for the MD5 cryptographic hash function and its 38 37 corresponding HMAC. 39 38 40 39 config CRYPTO_LIB_MLDSA_KUNIT_TEST 41 40 tristate "KUnit tests for ML-DSA" if !KUNIT_ALL_TESTS 42 - depends on KUNIT 41 + depends on KUNIT && CRYPTO_LIB_MLDSA 43 42 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 44 43 select CRYPTO_LIB_BENCHMARK_VISIBLE 45 - select CRYPTO_LIB_MLDSA 46 44 help 47 45 KUnit tests for the ML-DSA digital signature algorithm. 48 46 49 47 config CRYPTO_LIB_NH_KUNIT_TEST 50 48 tristate "KUnit tests for NH" if !KUNIT_ALL_TESTS 51 - depends on KUNIT 49 + depends on KUNIT && CRYPTO_LIB_NH 52 50 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 53 - select CRYPTO_LIB_NH 54 51 help 55 52 KUnit tests for the NH almost-universal hash function. 56 53 57 54 config CRYPTO_LIB_POLY1305_KUNIT_TEST 58 55 tristate "KUnit tests for Poly1305" if !KUNIT_ALL_TESTS 59 - depends on KUNIT 56 + depends on KUNIT && CRYPTO_LIB_POLY1305 60 57 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 61 58 select CRYPTO_LIB_BENCHMARK_VISIBLE 62 - select CRYPTO_LIB_POLY1305 63 59 help 64 60 KUnit tests for the Poly1305 library functions. 65 61 66 62 config CRYPTO_LIB_POLYVAL_KUNIT_TEST 67 63 tristate "KUnit tests for POLYVAL" if !KUNIT_ALL_TESTS 68 - depends on KUNIT 64 + depends on KUNIT && CRYPTO_LIB_POLYVAL 69 65 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 70 66 select CRYPTO_LIB_BENCHMARK_VISIBLE 71 - select CRYPTO_LIB_POLYVAL 72 67 help 73 68 KUnit tests for the POLYVAL library functions. 74 69 75 70 config CRYPTO_LIB_SHA1_KUNIT_TEST 76 71 tristate "KUnit tests for SHA-1" if !KUNIT_ALL_TESTS 77 - depends on KUNIT 72 + depends on KUNIT && CRYPTO_LIB_SHA1 78 73 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 79 74 select CRYPTO_LIB_BENCHMARK_VISIBLE 80 - select CRYPTO_LIB_SHA1 81 75 help 82 76 KUnit tests for the SHA-1 cryptographic hash function and its 83 77 corresponding HMAC. ··· 79 87 # included, for consistency with the naming used elsewhere (e.g. CRYPTO_SHA256). 
80 88 config CRYPTO_LIB_SHA256_KUNIT_TEST 81 89 tristate "KUnit tests for SHA-224 and SHA-256" if !KUNIT_ALL_TESTS 82 - depends on KUNIT 90 + depends on KUNIT && CRYPTO_LIB_SHA256 83 91 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 84 92 select CRYPTO_LIB_BENCHMARK_VISIBLE 85 - select CRYPTO_LIB_SHA256 86 93 help 87 94 KUnit tests for the SHA-224 and SHA-256 cryptographic hash functions 88 95 and their corresponding HMACs. ··· 90 99 # included, for consistency with the naming used elsewhere (e.g. CRYPTO_SHA512). 91 100 config CRYPTO_LIB_SHA512_KUNIT_TEST 92 101 tristate "KUnit tests for SHA-384 and SHA-512" if !KUNIT_ALL_TESTS 93 - depends on KUNIT 102 + depends on KUNIT && CRYPTO_LIB_SHA512 94 103 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 95 104 select CRYPTO_LIB_BENCHMARK_VISIBLE 96 - select CRYPTO_LIB_SHA512 97 105 help 98 106 KUnit tests for the SHA-384 and SHA-512 cryptographic hash functions 99 107 and their corresponding HMACs. 100 108 101 109 config CRYPTO_LIB_SHA3_KUNIT_TEST 102 110 tristate "KUnit tests for SHA-3" if !KUNIT_ALL_TESTS 103 - depends on KUNIT 111 + depends on KUNIT && CRYPTO_LIB_SHA3 104 112 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 105 113 select CRYPTO_LIB_BENCHMARK_VISIBLE 106 - select CRYPTO_LIB_SHA3 107 114 help 108 115 KUnit tests for the SHA3 cryptographic hash and XOF functions, 109 116 including SHA3-224, SHA3-256, SHA3-384, SHA3-512, SHAKE128 and
+125 -106
lib/kunit/test.c
··· 94 94 unsigned long total; 95 95 }; 96 96 97 - static bool kunit_should_print_stats(struct kunit_result_stats stats) 97 + static bool kunit_should_print_stats(struct kunit_result_stats *stats) 98 98 { 99 99 if (kunit_stats_enabled == 0) 100 100 return false; ··· 102 102 if (kunit_stats_enabled == 2) 103 103 return true; 104 104 105 - return (stats.total > 1); 105 + return (stats->total > 1); 106 106 } 107 107 108 108 static void kunit_print_test_stats(struct kunit *test, 109 - struct kunit_result_stats stats) 109 + struct kunit_result_stats *stats) 110 110 { 111 111 if (!kunit_should_print_stats(stats)) 112 112 return; ··· 115 115 KUNIT_SUBTEST_INDENT 116 116 "# %s: pass:%lu fail:%lu skip:%lu total:%lu", 117 117 test->name, 118 - stats.passed, 119 - stats.failed, 120 - stats.skipped, 121 - stats.total); 118 + stats->passed, 119 + stats->failed, 120 + stats->skipped, 121 + stats->total); 122 122 } 123 123 124 124 /* Append formatted message to log. */ ··· 600 600 } 601 601 602 602 static void kunit_print_suite_stats(struct kunit_suite *suite, 603 - struct kunit_result_stats suite_stats, 604 - struct kunit_result_stats param_stats) 603 + struct kunit_result_stats *suite_stats, 604 + struct kunit_result_stats *param_stats) 605 605 { 606 606 if (kunit_should_print_stats(suite_stats)) { 607 607 kunit_log(KERN_INFO, suite, 608 608 "# %s: pass:%lu fail:%lu skip:%lu total:%lu", 609 609 suite->name, 610 - suite_stats.passed, 611 - suite_stats.failed, 612 - suite_stats.skipped, 613 - suite_stats.total); 610 + suite_stats->passed, 611 + suite_stats->failed, 612 + suite_stats->skipped, 613 + suite_stats->total); 614 614 } 615 615 616 616 if (kunit_should_print_stats(param_stats)) { 617 617 kunit_log(KERN_INFO, suite, 618 618 "# Totals: pass:%lu fail:%lu skip:%lu total:%lu", 619 - param_stats.passed, 620 - param_stats.failed, 621 - param_stats.skipped, 622 - param_stats.total); 619 + param_stats->passed, 620 + param_stats->failed, 621 + param_stats->skipped, 622 + param_stats->total); 623 623 } 624 624 } 625 625 ··· 681 681 } 682 682 } 683 683 684 - int kunit_run_tests(struct kunit_suite *suite) 684 + static noinline_for_stack void 685 + kunit_run_param_test(struct kunit_suite *suite, struct kunit_case *test_case, 686 + struct kunit *test, 687 + struct kunit_result_stats *suite_stats, 688 + struct kunit_result_stats *total_stats, 689 + struct kunit_result_stats *param_stats) 685 690 { 686 691 char param_desc[KUNIT_PARAM_DESC_SIZE]; 692 + const void *curr_param; 693 + 694 + kunit_init_parent_param_test(test_case, test); 695 + if (test_case->status == KUNIT_FAILURE) { 696 + kunit_update_stats(param_stats, test->status); 697 + return; 698 + } 699 + /* Get initial param. 
*/ 700 + param_desc[0] = '\0'; 701 + /* TODO: Make generate_params try-catch */ 702 + curr_param = test_case->generate_params(test, NULL, param_desc); 703 + test_case->status = KUNIT_SKIPPED; 704 + kunit_log(KERN_INFO, test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT 705 + "KTAP version 1\n"); 706 + kunit_log(KERN_INFO, test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT 707 + "# Subtest: %s", test_case->name); 708 + if (test->params_array.params && 709 + test_case->generate_params == kunit_array_gen_params) { 710 + kunit_log(KERN_INFO, test, KUNIT_SUBTEST_INDENT 711 + KUNIT_SUBTEST_INDENT "1..%zd\n", 712 + test->params_array.num_params); 713 + } 714 + 715 + while (curr_param) { 716 + struct kunit param_test = { 717 + .param_value = curr_param, 718 + .param_index = ++test->param_index, 719 + .parent = test, 720 + }; 721 + kunit_init_test(&param_test, test_case->name, NULL); 722 + param_test.log = test_case->log; 723 + kunit_run_case_catch_errors(suite, test_case, &param_test); 724 + 725 + if (param_desc[0] == '\0') { 726 + snprintf(param_desc, sizeof(param_desc), 727 + "param-%d", param_test.param_index); 728 + } 729 + 730 + kunit_print_ok_not_ok(&param_test, KUNIT_LEVEL_CASE_PARAM, 731 + param_test.status, 732 + param_test.param_index, 733 + param_desc, 734 + param_test.status_comment); 735 + 736 + kunit_update_stats(param_stats, param_test.status); 737 + 738 + /* Get next param. */ 739 + param_desc[0] = '\0'; 740 + curr_param = test_case->generate_params(test, curr_param, 741 + param_desc); 742 + } 743 + /* 744 + * TODO: Put into a try catch. Since we don't need suite->exit 745 + * for it we can't reuse kunit_try_run_cleanup for this yet. 746 + */ 747 + if (test_case->param_exit) 748 + test_case->param_exit(test); 749 + /* TODO: Put this kunit_cleanup into a try-catch. */ 750 + kunit_cleanup(test); 751 + } 752 + 753 + static noinline_for_stack void 754 + kunit_run_one_test(struct kunit_suite *suite, struct kunit_case *test_case, 755 + struct kunit_result_stats *suite_stats, 756 + struct kunit_result_stats *total_stats) 757 + { 758 + struct kunit test = { .param_value = NULL, .param_index = 0 }; 759 + struct kunit_result_stats param_stats = { 0 }; 760 + 761 + kunit_init_test(&test, test_case->name, test_case->log); 762 + if (test_case->status == KUNIT_SKIPPED) { 763 + /* Test marked as skip */ 764 + test.status = KUNIT_SKIPPED; 765 + kunit_update_stats(&param_stats, test.status); 766 + } else if (!test_case->generate_params) { 767 + /* Non-parameterised test. */ 768 + test_case->status = KUNIT_SKIPPED; 769 + kunit_run_case_catch_errors(suite, test_case, &test); 770 + kunit_update_stats(&param_stats, test.status); 771 + } else { 772 + kunit_run_param_test(suite, test_case, &test, suite_stats, 773 + total_stats, &param_stats); 774 + } 775 + kunit_print_attr((void *)test_case, true, KUNIT_LEVEL_CASE); 776 + 777 + kunit_print_test_stats(&test, &param_stats); 778 + 779 + kunit_print_ok_not_ok(&test, KUNIT_LEVEL_CASE, test_case->status, 780 + kunit_test_case_num(suite, test_case), 781 + test_case->name, 782 + test.status_comment); 783 + 784 + kunit_update_stats(suite_stats, test_case->status); 785 + kunit_accumulate_stats(total_stats, param_stats); 786 + } 787 + 788 + 789 + int kunit_run_tests(struct kunit_suite *suite) 790 + { 687 791 struct kunit_case *test_case; 688 792 struct kunit_result_stats suite_stats = { 0 }; 689 793 struct kunit_result_stats total_stats = { 0 }; 690 - const void *curr_param; 691 794 692 795 /* Taint the kernel so we know we've run tests. 
*/ 693 796 add_taint(TAINT_TEST, LOCKDEP_STILL_OK); ··· 806 703 807 704 kunit_print_suite_start(suite); 808 705 809 - kunit_suite_for_each_test_case(suite, test_case) { 810 - struct kunit test = { .param_value = NULL, .param_index = 0 }; 811 - struct kunit_result_stats param_stats = { 0 }; 812 - 813 - kunit_init_test(&test, test_case->name, test_case->log); 814 - if (test_case->status == KUNIT_SKIPPED) { 815 - /* Test marked as skip */ 816 - test.status = KUNIT_SKIPPED; 817 - kunit_update_stats(&param_stats, test.status); 818 - } else if (!test_case->generate_params) { 819 - /* Non-parameterised test. */ 820 - test_case->status = KUNIT_SKIPPED; 821 - kunit_run_case_catch_errors(suite, test_case, &test); 822 - kunit_update_stats(&param_stats, test.status); 823 - } else { 824 - kunit_init_parent_param_test(test_case, &test); 825 - if (test_case->status == KUNIT_FAILURE) { 826 - kunit_update_stats(&param_stats, test.status); 827 - goto test_case_end; 828 - } 829 - /* Get initial param. */ 830 - param_desc[0] = '\0'; 831 - /* TODO: Make generate_params try-catch */ 832 - curr_param = test_case->generate_params(&test, NULL, param_desc); 833 - test_case->status = KUNIT_SKIPPED; 834 - kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT 835 - "KTAP version 1\n"); 836 - kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT 837 - "# Subtest: %s", test_case->name); 838 - if (test.params_array.params && 839 - test_case->generate_params == kunit_array_gen_params) { 840 - kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT 841 - KUNIT_SUBTEST_INDENT "1..%zd\n", 842 - test.params_array.num_params); 843 - } 844 - 845 - while (curr_param) { 846 - struct kunit param_test = { 847 - .param_value = curr_param, 848 - .param_index = ++test.param_index, 849 - .parent = &test, 850 - }; 851 - kunit_init_test(&param_test, test_case->name, NULL); 852 - param_test.log = test_case->log; 853 - kunit_run_case_catch_errors(suite, test_case, &param_test); 854 - 855 - if (param_desc[0] == '\0') { 856 - snprintf(param_desc, sizeof(param_desc), 857 - "param-%d", param_test.param_index); 858 - } 859 - 860 - kunit_print_ok_not_ok(&param_test, KUNIT_LEVEL_CASE_PARAM, 861 - param_test.status, 862 - param_test.param_index, 863 - param_desc, 864 - param_test.status_comment); 865 - 866 - kunit_update_stats(&param_stats, param_test.status); 867 - 868 - /* Get next param. */ 869 - param_desc[0] = '\0'; 870 - curr_param = test_case->generate_params(&test, curr_param, 871 - param_desc); 872 - } 873 - /* 874 - * TODO: Put into a try catch. Since we don't need suite->exit 875 - * for it we can't reuse kunit_try_run_cleanup for this yet. 876 - */ 877 - if (test_case->param_exit) 878 - test_case->param_exit(&test); 879 - /* TODO: Put this kunit_cleanup into a try-catch. 
*/ 880 - kunit_cleanup(&test); 881 - } 882 - test_case_end: 883 - kunit_print_attr((void *)test_case, true, KUNIT_LEVEL_CASE); 884 - 885 - kunit_print_test_stats(&test, param_stats); 886 - 887 - kunit_print_ok_not_ok(&test, KUNIT_LEVEL_CASE, test_case->status, 888 - kunit_test_case_num(suite, test_case), 889 - test_case->name, 890 - test.status_comment); 891 - 892 - kunit_update_stats(&suite_stats, test_case->status); 893 - kunit_accumulate_stats(&total_stats, param_stats); 894 - } 706 + kunit_suite_for_each_test_case(suite, test_case) 707 + kunit_run_one_test(suite, test_case, &suite_stats, &total_stats); 895 708 896 709 if (suite->suite_exit) 897 710 suite->suite_exit(suite); 898 711 899 - kunit_print_suite_stats(suite, suite_stats, total_stats); 712 + kunit_print_suite_stats(suite, &suite_stats, &total_stats); 900 713 suite_end: 901 714 kunit_print_suite_end(suite); 902 715
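The kunit_run_tests() split is a stack-size refactor as much as a readability one: noinline_for_stack keeps each helper's large locals (a struct kunit per test and per parameter) in a frame that is live only while the helper runs, instead of being inlined into the caller's frame for the whole loop. The general shape:

#include <linux/compiler.h>
#include <linux/kernel.h>

struct big_ctx {
        char buf[512];          /* stand-in for a large on-stack object */
};

static noinline_for_stack void run_one(int i)
{
        struct big_ctx ctx = {};

        snprintf(ctx.buf, sizeof(ctx.buf), "case %d", i);
}

static void run_all(int n)
{
        int i;

        /* run_all()'s own frame stays small; without noinline_for_stack
         * the compiler could inline run_one() and hoist ctx up here. */
        for (i = 0; i < n; i++)
                run_one(i);
}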
+1 -1
mm/madvise.c
··· 1389 1389 new_flags |= VM_DONTCOPY; 1390 1390 break; 1391 1391 case MADV_DOFORK: 1392 - if (new_flags & VM_IO) 1392 + if (new_flags & VM_SPECIAL) 1393 1393 return -EINVAL; 1394 1394 new_flags &= ~VM_DONTCOPY; 1395 1395 break;
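For reference, VM_SPECIAL is a wider net than VM_IO; in include/linux/mm.h (recent kernels) it covers the mappings that cannot be merged or generically remapped, so MADV_DOFORK is now also refused for PFN-mapped and mixed-map VMAs, not just I/O mappings:

/* include/linux/mm.h */
#define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP)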
+5 -5
mm/slab.h
··· 59 59 * to save memory. In case ->stride field is not available, 60 60 * such optimizations are disabled. 61 61 */ 62 - unsigned short stride; 62 + unsigned int stride; 63 63 #endif 64 64 }; 65 65 }; ··· 559 559 } 560 560 561 561 #ifdef CONFIG_64BIT 562 - static inline void slab_set_stride(struct slab *slab, unsigned short stride) 562 + static inline void slab_set_stride(struct slab *slab, unsigned int stride) 563 563 { 564 564 slab->stride = stride; 565 565 } 566 - static inline unsigned short slab_get_stride(struct slab *slab) 566 + static inline unsigned int slab_get_stride(struct slab *slab) 567 567 { 568 568 return slab->stride; 569 569 } 570 570 #else 571 - static inline void slab_set_stride(struct slab *slab, unsigned short stride) 571 + static inline void slab_set_stride(struct slab *slab, unsigned int stride) 572 572 { 573 573 VM_WARN_ON_ONCE(stride != sizeof(struct slabobj_ext)); 574 574 } 575 - static inline unsigned short slab_get_stride(struct slab *slab) 575 + static inline unsigned int slab_get_stride(struct slab *slab) 576 576 { 577 577 return sizeof(struct slabobj_ext); 578 578 }
+47 -22
mm/slub.c
··· 2858 2858 * object pointers are moved to a on-stack array under the lock. To bound the 2859 2859 * stack usage, limit each batch to PCS_BATCH_MAX. 2860 2860 * 2861 - * returns true if at least partially flushed 2861 + * Must be called with s->cpu_sheaves->lock locked, returns with the lock 2862 + * unlocked. 2863 + * 2864 + * Returns how many objects are remaining to be flushed 2862 2865 */ 2863 - static bool sheaf_flush_main(struct kmem_cache *s) 2866 + static unsigned int __sheaf_flush_main_batch(struct kmem_cache *s) 2864 2867 { 2865 2868 struct slub_percpu_sheaves *pcs; 2866 2869 unsigned int batch, remaining; 2867 2870 void *objects[PCS_BATCH_MAX]; 2868 2871 struct slab_sheaf *sheaf; 2869 - bool ret = false; 2870 2872 2871 - next_batch: 2872 - if (!local_trylock(&s->cpu_sheaves->lock)) 2873 - return ret; 2873 + lockdep_assert_held(this_cpu_ptr(&s->cpu_sheaves->lock)); 2874 2874 2875 2875 pcs = this_cpu_ptr(s->cpu_sheaves); 2876 2876 sheaf = pcs->main; ··· 2888 2888 2889 2889 stat_add(s, SHEAF_FLUSH, batch); 2890 2890 2891 - ret = true; 2891 + return remaining; 2892 + } 2892 2893 2893 - if (remaining) 2894 - goto next_batch; 2894 + static void sheaf_flush_main(struct kmem_cache *s) 2895 + { 2896 + unsigned int remaining; 2897 + 2898 + do { 2899 + local_lock(&s->cpu_sheaves->lock); 2900 + 2901 + remaining = __sheaf_flush_main_batch(s); 2902 + 2903 + } while (remaining); 2904 + } 2905 + 2906 + /* 2907 + * Returns true if the main sheaf was at least partially flushed. 2908 + */ 2909 + static bool sheaf_try_flush_main(struct kmem_cache *s) 2910 + { 2911 + unsigned int remaining; 2912 + bool ret = false; 2913 + 2914 + do { 2915 + if (!local_trylock(&s->cpu_sheaves->lock)) 2916 + return ret; 2917 + 2918 + ret = true; 2919 + remaining = __sheaf_flush_main_batch(s); 2920 + 2921 + } while (remaining); 2895 2922 2896 2923 return ret; 2897 2924 } ··· 4567 4540 struct slab_sheaf *empty = NULL; 4568 4541 struct slab_sheaf *full; 4569 4542 struct node_barn *barn; 4570 - bool can_alloc; 4543 + bool allow_spin; 4571 4544 4572 4545 lockdep_assert_held(this_cpu_ptr(&s->cpu_sheaves->lock)); 4573 4546 ··· 4588 4561 return NULL; 4589 4562 } 4590 4563 4591 - full = barn_replace_empty_sheaf(barn, pcs->main, 4592 - gfpflags_allow_spinning(gfp)); 4564 + allow_spin = gfpflags_allow_spinning(gfp); 4565 + 4566 + full = barn_replace_empty_sheaf(barn, pcs->main, allow_spin); 4593 4567 4594 4568 if (full) { 4595 4569 stat(s, BARN_GET); ··· 4600 4572 4601 4573 stat(s, BARN_GET_FAIL); 4602 4574 4603 - can_alloc = gfpflags_allow_blocking(gfp); 4604 - 4605 - if (can_alloc) { 4575 + if (allow_spin) { 4606 4576 if (pcs->spare) { 4607 4577 empty = pcs->spare; 4608 4578 pcs->spare = NULL; ··· 4610 4584 } 4611 4585 4612 4586 local_unlock(&s->cpu_sheaves->lock); 4587 + pcs = NULL; 4613 4588 4614 - if (!can_alloc) 4589 + if (!allow_spin) 4615 4590 return NULL; 4616 4591 4617 4592 if (empty) { ··· 4632 4605 if (!full) 4633 4606 return NULL; 4634 4607 4635 - /* 4636 - * we can reach here only when gfpflags_allow_blocking 4637 - * so this must not be an irq 4638 - */ 4639 - local_lock(&s->cpu_sheaves->lock); 4608 + if (!local_trylock(&s->cpu_sheaves->lock)) 4609 + goto barn_put; 4640 4610 pcs = this_cpu_ptr(s->cpu_sheaves); 4641 4611 4642 4612 /* ··· 4664 4640 return pcs; 4665 4641 } 4666 4642 4643 + barn_put: 4667 4644 barn_put_full_sheaf(barn, full); 4668 4645 stat(s, BARN_PUT); 4669 4646 ··· 5729 5704 if (put_fail) 5730 5705 stat(s, BARN_PUT_FAIL); 5731 5706 5732 - if (!sheaf_flush_main(s)) 5707 + if 
(!sheaf_try_flush_main(s)) 5733 5708 return NULL; 5734 5709 5735 5710 if (!local_trylock(&s->cpu_sheaves->lock))
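The slub change factors the old sheaf_flush_main() into one batch primitive plus two drivers: an unconditional flush and a trylock-based opportunistic one. The shape generalizes to any lock-dropping batch drain; a sketch with illustrative names (my_pool, __drain_batch):

#include <linux/minmax.h>
#include <linux/spinlock.h>

struct my_pool {
        spinlock_t lock;
        unsigned int nr_items;
};

/* One bounded batch; called locked, returns unlocked with the number of
 * items still left, mirroring __sheaf_flush_main_batch() above. */
static unsigned int __drain_batch(struct my_pool *pool)
{
        unsigned int batch = min(pool->nr_items, 32u);
        unsigned int remaining;

        pool->nr_items -= batch;        /* stand-in for real freeing work */
        remaining = pool->nr_items;
        spin_unlock(&pool->lock);
        return remaining;
}

static void drain_all(struct my_pool *pool)             /* must complete */
{
        unsigned int remaining;

        do {
                spin_lock(&pool->lock);
                remaining = __drain_batch(pool);
        } while (remaining);
}

static bool try_drain_all(struct my_pool *pool)         /* opportunistic */
{
        unsigned int remaining;
        bool progress = false;

        do {
                if (!spin_trylock(&pool->lock))
                        return progress;
                progress = true;
                remaining = __drain_batch(pool);
        } while (remaining);

        return progress;
}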
+15 -11
net/atm/lec.c
··· 1260 1260 struct lec_vcc_priv *vpriv = LEC_VCC_PRIV(vcc); 1261 1261 struct net_device *dev = (struct net_device *)vcc->proto_data; 1262 1262 1263 - vcc->pop = vpriv->old_pop; 1264 - if (vpriv->xoff) 1265 - netif_wake_queue(dev); 1266 - kfree(vpriv); 1267 - vcc->user_back = NULL; 1268 - vcc->push = entry->old_push; 1269 - vcc_release_async(vcc, -EPIPE); 1263 + if (vpriv) { 1264 + vcc->pop = vpriv->old_pop; 1265 + if (vpriv->xoff) 1266 + netif_wake_queue(dev); 1267 + kfree(vpriv); 1268 + vcc->user_back = NULL; 1269 + vcc->push = entry->old_push; 1270 + vcc_release_async(vcc, -EPIPE); 1271 + } 1270 1272 entry->vcc = NULL; 1271 1273 } 1272 1274 if (entry->recv_vcc) { 1273 1275 struct atm_vcc *vcc = entry->recv_vcc; 1274 1276 struct lec_vcc_priv *vpriv = LEC_VCC_PRIV(vcc); 1275 1277 1276 - kfree(vpriv); 1277 - vcc->user_back = NULL; 1278 + if (vpriv) { 1279 + kfree(vpriv); 1280 + vcc->user_back = NULL; 1278 1281 1279 - entry->recv_vcc->push = entry->old_recv_push; 1280 - vcc_release_async(entry->recv_vcc, -EPIPE); 1282 + entry->recv_vcc->push = entry->old_recv_push; 1283 + vcc_release_async(entry->recv_vcc, -EPIPE); 1284 + } 1281 1285 entry->recv_vcc = NULL; 1282 1286 } 1283 1287 }
+9 -1
net/batman-adv/bat_v_elp.c
··· 111 111 /* unsupported WiFi driver version */ 112 112 goto default_throughput; 113 113 114 - real_netdev = batadv_get_real_netdev(hard_iface->net_dev); 114 + /* Only use rtnl_trylock because the ELP worker is cancelled while 115 + * the rtnl_lock is held. cancel_delayed_work_sync() would otherwise 116 + * wait forever if the ELP work item had already started and was 117 + * itself trying to take the rtnl_lock. 118 + */ 119 + if (!rtnl_trylock()) 120 + return false; 121 + real_netdev = __batadv_get_real_netdev(hard_iface->net_dev); 122 + rtnl_unlock(); 115 123 if (!real_netdev) 116 124 goto default_throughput; 117 125
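The rtnl_trylock() here breaks a classic shutdown deadlock: the teardown path holds the lock while synchronously cancelling the worker, so a worker that blocks on the same lock can never finish. A hedged pthread sketch of the hazard and the escape, with illustrative names:

/* Illustrative sketch only (pthreads, not kernel code):
 *
 *   teardown thread                 worker thread
 *   ---------------                 -------------
 *   lock(cfg_lock)                  wants lock(cfg_lock) -> blocks
 *   wait for worker to finish       ...never finishes
 *   => deadlock
 *
 * An opportunistic trylock lets the worker bail out instead of blocking,
 * so the teardown path can complete.
 */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t cfg_lock = PTHREAD_MUTEX_INITIALIZER;

static bool worker_step(void)
{
	if (pthread_mutex_trylock(&cfg_lock))
		return false;	/* lock busy: teardown may be waiting on us */
	/* ... read shared configuration ... */
	pthread_mutex_unlock(&cfg_lock);
	return true;
}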
+4 -4
net/batman-adv/hard-interface.c
··· 204 204 } 205 205 206 206 /** 207 - * batadv_get_real_netdevice() - check if the given netdev struct is a virtual 207 + * __batadv_get_real_netdev() - check if the given netdev struct is a virtual 208 208 * interface on top of another 'real' interface 209 209 * @netdev: the device to check 210 210 * ··· 214 214 * Return: the 'real' net device or the original net device and NULL in case 215 215 * of an error. 216 216 */ 217 - static struct net_device *batadv_get_real_netdevice(struct net_device *netdev) 217 + struct net_device *__batadv_get_real_netdev(struct net_device *netdev) 218 218 { 219 219 struct batadv_hard_iface *hard_iface = NULL; 220 220 struct net_device *real_netdev = NULL; ··· 267 267 struct net_device *real_netdev; 268 268 269 269 rtnl_lock(); 270 - real_netdev = batadv_get_real_netdevice(net_device); 270 + real_netdev = __batadv_get_real_netdev(net_device); 271 271 rtnl_unlock(); 272 272 273 273 return real_netdev; ··· 336 336 if (batadv_is_cfg80211_netdev(net_device)) 337 337 wifi_flags |= BATADV_HARDIF_WIFI_CFG80211_DIRECT; 338 338 339 - real_netdev = batadv_get_real_netdevice(net_device); 339 + real_netdev = __batadv_get_real_netdev(net_device); 340 340 if (!real_netdev) 341 341 return wifi_flags; 342 342
+1
net/batman-adv/hard-interface.h
··· 67 67 68 68 extern struct notifier_block batadv_hard_if_notifier; 69 69 70 + struct net_device *__batadv_get_real_netdev(struct net_device *net_device); 70 71 struct net_device *batadv_get_real_netdev(struct net_device *net_device); 71 72 bool batadv_is_cfg80211_hardif(struct batadv_hard_iface *hard_iface); 72 73 bool batadv_is_wifi_hardif(struct batadv_hard_iface *hard_iface);
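The rename in the two batman-adv hunks follows the common kernel convention that a double-underscore-prefixed helper is the lock-free inner variant whose caller must already hold the relevant lock (here the RTNL), while the plain-named function is the self-locking wrapper. A small sketch of that convention; names are hypothetical:

/* Sketch of the __inner/outer locking convention (names hypothetical). */
#include <pthread.h>

static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
static int g_value;

static int __get_value(void)	/* caller must hold g_lock */
{
	return g_value;
}

static int get_value(void)	/* self-locking wrapper */
{
	int v;

	pthread_mutex_lock(&g_lock);
	v = __get_value();
	pthread_mutex_unlock(&g_lock);
	return v;
}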
+1 -1
net/bridge/br_device.c
··· 74 74 eth_hdr(skb)->h_proto == htons(ETH_P_RARP)) && 75 75 br_opt_get(br, BROPT_NEIGH_SUPPRESS_ENABLED)) { 76 76 br_do_proxy_suppress_arp(skb, br, vid, NULL); 77 - } else if (IS_ENABLED(CONFIG_IPV6) && 77 + } else if (ipv6_mod_enabled() && 78 78 skb->protocol == htons(ETH_P_IPV6) && 79 79 br_opt_get(br, BROPT_NEIGH_SUPPRESS_ENABLED) && 80 80 pskb_may_pull(skb, sizeof(struct ipv6hdr) +
+1 -1
net/bridge/br_input.c
··· 170 170 (skb->protocol == htons(ETH_P_ARP) || 171 171 skb->protocol == htons(ETH_P_RARP))) { 172 172 br_do_proxy_suppress_arp(skb, br, vid, p); 173 - } else if (IS_ENABLED(CONFIG_IPV6) && 173 + } else if (ipv6_mod_enabled() && 174 174 skb->protocol == htons(ETH_P_IPV6) && 175 175 br_opt_get(br, BROPT_NEIGH_SUPPRESS_ENABLED) && 176 176 pskb_may_pull(skb, sizeof(struct ipv6hdr) +
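Both bridge hunks swap IS_ENABLED(CONFIG_IPV6), a constant fixed at build time, for ipv6_mod_enabled(), a runtime test: when IPv6 is built as a module that has not (yet) loaded, the former is still true while the latter correctly says no. A tiny sketch of the distinction, with hypothetical names:

/* Compile-time vs runtime availability (all names hypothetical). */
#include <stdbool.h>

#define FEATURE_BUILT 1			/* fixed when the binary is built */

static bool feature_registered;		/* set when the module initializes */

static bool feature_usable(void)
{
	/* Being built is necessary but not sufficient: the module must
	 * also have registered itself at runtime.
	 */
	return FEATURE_BUILT && feature_registered;
}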
+10
net/bridge/br_private.h
··· 1345 1345 } 1346 1346 1347 1347 static inline bool 1348 + br_multicast_port_ctx_options_equal(const struct net_bridge_mcast_port *pmctx1, 1349 + const struct net_bridge_mcast_port *pmctx2) 1350 + { 1351 + return br_multicast_ngroups_get(pmctx1) == 1352 + br_multicast_ngroups_get(pmctx2) && 1353 + br_multicast_ngroups_get_max(pmctx1) == 1354 + br_multicast_ngroups_get_max(pmctx2); 1355 + } 1356 + 1357 + static inline bool 1348 1358 br_multicast_ctx_matches_vlan_snooping(const struct net_bridge_mcast *brmctx) 1349 1359 { 1350 1360 bool vlan_snooping_enabled;
+23 -3
net/bridge/br_vlan_options.c
··· 43 43 u8 range_mc_rtr = br_vlan_multicast_router(range_end); 44 44 u8 curr_mc_rtr = br_vlan_multicast_router(v_curr); 45 45 46 - return v_curr->state == range_end->state && 47 - __vlan_tun_can_enter_range(v_curr, range_end) && 48 - curr_mc_rtr == range_mc_rtr; 46 + if (v_curr->state != range_end->state) 47 + return false; 48 + 49 + if (!__vlan_tun_can_enter_range(v_curr, range_end)) 50 + return false; 51 + 52 + if (curr_mc_rtr != range_mc_rtr) 53 + return false; 54 + 55 + /* Check user-visible priv_flags that affect output */ 56 + if ((v_curr->priv_flags ^ range_end->priv_flags) & 57 + (BR_VLFLAG_NEIGH_SUPPRESS_ENABLED | BR_VLFLAG_MCAST_ENABLED)) 58 + return false; 59 + 60 + #ifdef CONFIG_BRIDGE_IGMP_SNOOPING 61 + if (!br_vlan_is_master(v_curr) && 62 + !br_multicast_port_ctx_vlan_disabled(&v_curr->port_mcast_ctx) && 63 + !br_multicast_port_ctx_options_equal(&v_curr->port_mcast_ctx, 64 + &range_end->port_mcast_ctx)) 65 + return false; 66 + #endif 67 + 68 + return true; 49 69 } 50 70 51 71 bool br_vlan_opts_fill(struct sk_buff *skb, const struct net_bridge_vlan *v,
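The priv_flags test above uses the (a ^ b) & mask idiom: XOR yields a 1 bit exactly where the two words differ, and the mask discards differences in bits the caller does not care about. A compact standalone demonstration:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define FLAG_A 0x1u
#define FLAG_B 0x2u
#define FLAG_C 0x4u

/* True when x and y agree on FLAG_A and FLAG_B; FLAG_C is ignored. */
static bool flags_equal_masked(uint32_t x, uint32_t y)
{
	return ((x ^ y) & (FLAG_A | FLAG_B)) == 0;
}

int main(void)
{
	assert(flags_equal_masked(FLAG_A | FLAG_C, FLAG_A));	/* C ignored */
	assert(!flags_equal_masked(FLAG_A, FLAG_B));		/* A/B differ */
	return 0;
}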
+1
net/can/bcm.c
··· 1176 1176 if (!op) 1177 1177 return -ENOMEM; 1178 1178 1179 + spin_lock_init(&op->bcm_tx_lock); 1179 1180 op->can_id = msg_head->can_id; 1180 1181 op->nframes = msg_head->nframes; 1181 1182 op->cfsiz = CFSIZ(msg_head->flags);
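The one-line bcm fix initializes the spinlock while op is still private to the creating thread; a lock is only safe to take if it was initialized before the object becomes reachable by any path that might contend on it. The same rule in a hedged userspace sketch:

#include <pthread.h>
#include <stdlib.h>

struct op {
	pthread_mutex_t lock;
	int id;
};

/* Fully initialize before publishing; taking an uninitialized lock is
 * undefined behaviour and only "works" by accident.
 */
static struct op *op_create(int id)
{
	struct op *op = calloc(1, sizeof(*op));

	if (!op)
		return NULL;
	pthread_mutex_init(&op->lock, NULL);	/* before anyone can lock it */
	op->id = id;
	return op;
}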
+13 -11
net/core/dev.c
··· 3987 3987 if (shinfo->nr_frags > 0) { 3988 3988 niov = netmem_to_net_iov(skb_frag_netmem(&shinfo->frags[0])); 3989 3989 if (net_is_devmem_iov(niov) && 3990 - net_devmem_iov_binding(niov)->dev != dev) 3990 + READ_ONCE(net_devmem_iov_binding(niov)->dev) != dev) 3991 3991 goto out_free; 3992 3992 } 3993 3993 ··· 4818 4818 if (dev->flags & IFF_UP) { 4819 4819 int cpu = smp_processor_id(); /* ok because BHs are off */ 4820 4820 4821 - /* Other cpus might concurrently change txq->xmit_lock_owner 4822 - * to -1 or to their cpu id, but not to our id. 4823 - */ 4824 - if (READ_ONCE(txq->xmit_lock_owner) != cpu) { 4821 + if (!netif_tx_owned(txq, cpu)) { 4825 4822 bool is_list = false; 4826 4823 4827 4824 if (dev_xmit_recursion()) ··· 7791 7794 return -1; 7792 7795 } 7793 7796 7794 - static void napi_threaded_poll_loop(struct napi_struct *napi, bool busy_poll) 7797 + static void napi_threaded_poll_loop(struct napi_struct *napi, 7798 + unsigned long *busy_poll_last_qs) 7795 7799 { 7800 + unsigned long last_qs = busy_poll_last_qs ? *busy_poll_last_qs : jiffies; 7796 7801 struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx; 7797 7802 struct softnet_data *sd; 7798 - unsigned long last_qs = jiffies; 7799 7803 7800 7804 for (;;) { 7801 7805 bool repoll = false; ··· 7825 7827 /* When busy poll is enabled, the old packets are not flushed in 7826 7828 * napi_complete_done. So flush them here. 7827 7829 */ 7828 - if (busy_poll) 7830 + if (busy_poll_last_qs) 7829 7831 gro_flush_normal(&napi->gro, HZ >= 1000); 7830 7832 local_bh_enable(); 7831 7833 7832 7834 /* Call cond_resched here to avoid watchdog warnings. */ 7833 - if (repoll || busy_poll) { 7835 + if (repoll || busy_poll_last_qs) { 7834 7836 rcu_softirq_qs_periodic(last_qs); 7835 7837 cond_resched(); 7836 7838 } ··· 7838 7840 if (!repoll) 7839 7841 break; 7840 7842 } 7843 + 7844 + if (busy_poll_last_qs) 7845 + *busy_poll_last_qs = last_qs; 7841 7846 } 7842 7847 7843 7848 static int napi_threaded_poll(void *data) 7844 7849 { 7845 7850 struct napi_struct *napi = data; 7851 + unsigned long last_qs = jiffies; 7846 7852 bool want_busy_poll; 7847 7853 bool in_busy_poll; 7848 7854 unsigned long val; ··· 7864 7862 assign_bit(NAPI_STATE_IN_BUSY_POLL, &napi->state, 7865 7863 want_busy_poll); 7866 7864 7867 - napi_threaded_poll_loop(napi, want_busy_poll); 7865 + napi_threaded_poll_loop(napi, want_busy_poll ? &last_qs : NULL); 7868 7866 } 7869 7867 7870 7868 return 0; ··· 13177 13175 { 13178 13176 struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu); 13179 13177 13180 - napi_threaded_poll_loop(&sd->backlog, false); 13178 + napi_threaded_poll_loop(&sd->backlog, NULL); 13181 13179 } 13182 13180 13183 13181 static void backlog_napi_setup(unsigned int cpu)
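napi_threaded_poll_loop() now takes unsigned long *busy_poll_last_qs, where NULL doubles as "busy polling off" and a non-NULL pointer both enables it and carries the last quiescent-state timestamp across successive invocations. A hedged sketch of this NULL-or-in/out-parameter idiom, names invented:

#include <stdio.h>
#include <time.h>

/* state == NULL: feature off.  state != NULL: feature on, and *state
 * carries a timestamp across calls (an in/out parameter).
 */
static void poll_once(unsigned long *state)
{
	unsigned long now = (unsigned long)time(NULL);

	if (state) {
		if (now - *state > 1)
			printf("periodic work after %lus\n", now - *state);
		*state = now;	/* persist for the next call */
	}
	/* ... work common to both modes ... */
}

int main(void)
{
	unsigned long last = (unsigned long)time(NULL);

	poll_once(NULL);	/* plain mode */
	poll_once(&last);	/* stateful busy-poll-like mode */
	return 0;
}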
+4 -2
net/core/devmem.c
··· 396 396 * net_device. 397 397 */ 398 398 dst_dev = dst_dev_rcu(dst); 399 - if (unlikely(!dst_dev) || unlikely(dst_dev != binding->dev)) { 399 + if (unlikely(!dst_dev) || 400 + unlikely(dst_dev != READ_ONCE(binding->dev))) { 400 401 err = -ENODEV; 401 402 goto out_unlock; 402 403 } ··· 514 513 xa_erase(&binding->bound_rxqs, xa_idx); 515 514 if (xa_empty(&binding->bound_rxqs)) { 516 515 mutex_lock(&binding->lock); 517 - binding->dev = NULL; 516 + ASSERT_EXCLUSIVE_WRITER(binding->dev); 517 + WRITE_ONCE(binding->dev, NULL); 518 518 mutex_unlock(&binding->lock); 519 519 } 520 520 break;
+4 -2
net/core/filter.c
··· 4150 4150 struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); 4151 4151 skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags - 1]; 4152 4152 struct xdp_rxq_info *rxq = xdp->rxq; 4153 - unsigned int tailroom; 4153 + int tailroom; 4154 4154 4155 4155 if (!rxq->frag_size || rxq->frag_size > xdp->frame_sz) 4156 4156 return -EOPNOTSUPP; 4157 4157 4158 - tailroom = rxq->frag_size - skb_frag_size(frag) - skb_frag_off(frag); 4158 + tailroom = rxq->frag_size - skb_frag_size(frag) - 4159 + skb_frag_off(frag) % rxq->frag_size; 4160 + WARN_ON_ONCE(tailroom < 0); 4159 4161 if (unlikely(offset > tailroom)) 4160 4162 return -EINVAL; 4161 4163
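The tailroom switch from unsigned int to int matters because the subtraction can now legitimately go negative; with an unsigned type a negative intermediate wraps to a huge value, and a bounds check like offset > tailroom silently passes. A self-contained demonstration:

#include <stdio.h>

int main(void)
{
	unsigned int frag_size = 2048, used = 2000, off = 100;

	unsigned int utail = frag_size - used - off;	   /* wraps: huge */
	int stail = (int)frag_size - (int)used - (int)off; /* -52 */

	printf("unsigned tailroom: %u\n", utail);	/* 4294967244 */
	printf("signed   tailroom: %d\n", stail);
	/* A check like "offset > utail" never triggers after the wrap,
	 * which is why the patch uses a signed type plus a warning.
	 */
	return 0;
}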
+1 -1
net/core/netpoll.c
··· 132 132 for (i = 0; i < dev->num_tx_queues; i++) { 133 133 struct netdev_queue *txq = netdev_get_tx_queue(dev, i); 134 134 135 - if (READ_ONCE(txq->xmit_lock_owner) == smp_processor_id()) 135 + if (netif_tx_owned(txq, smp_processor_id())) 136 136 return 1; 137 137 } 138 138
+29 -51
net/core/secure_seq.c
··· 20 20 #include <net/tcp.h> 21 21 22 22 static siphash_aligned_key_t net_secret; 23 - static siphash_aligned_key_t ts_secret; 24 23 25 24 #define EPHEMERAL_PORT_SHUFFLE_PERIOD (10 * HZ) 26 25 27 26 static __always_inline void net_secret_init(void) 28 27 { 29 28 net_get_random_once(&net_secret, sizeof(net_secret)); 30 - } 31 - 32 - static __always_inline void ts_secret_init(void) 33 - { 34 - net_get_random_once(&ts_secret, sizeof(ts_secret)); 35 29 } 36 30 #endif 37 31 ··· 47 53 #endif 48 54 49 55 #if IS_ENABLED(CONFIG_IPV6) 50 - u32 secure_tcpv6_ts_off(const struct net *net, 51 - const __be32 *saddr, const __be32 *daddr) 52 - { 53 - const struct { 54 - struct in6_addr saddr; 55 - struct in6_addr daddr; 56 - } __aligned(SIPHASH_ALIGNMENT) combined = { 57 - .saddr = *(struct in6_addr *)saddr, 58 - .daddr = *(struct in6_addr *)daddr, 59 - }; 60 - 61 - if (READ_ONCE(net->ipv4.sysctl_tcp_timestamps) != 1) 62 - return 0; 63 - 64 - ts_secret_init(); 65 - return siphash(&combined, offsetofend(typeof(combined), daddr), 66 - &ts_secret); 67 - } 68 - EXPORT_IPV6_MOD(secure_tcpv6_ts_off); 69 - 70 - u32 secure_tcpv6_seq(const __be32 *saddr, const __be32 *daddr, 71 - __be16 sport, __be16 dport) 56 + union tcp_seq_and_ts_off 57 + secure_tcpv6_seq_and_ts_off(const struct net *net, const __be32 *saddr, 58 + const __be32 *daddr, __be16 sport, __be16 dport) 72 59 { 73 60 const struct { 74 61 struct in6_addr saddr; ··· 62 87 .sport = sport, 63 88 .dport = dport 64 89 }; 65 - u32 hash; 90 + union tcp_seq_and_ts_off st; 66 91 67 92 net_secret_init(); 68 - hash = siphash(&combined, offsetofend(typeof(combined), dport), 69 - &net_secret); 70 - return seq_scale(hash); 93 + 94 + st.hash64 = siphash(&combined, offsetofend(typeof(combined), dport), 95 + &net_secret); 96 + 97 + if (READ_ONCE(net->ipv4.sysctl_tcp_timestamps) != 1) 98 + st.ts_off = 0; 99 + 100 + st.seq = seq_scale(st.seq); 101 + return st; 71 102 } 72 - EXPORT_SYMBOL(secure_tcpv6_seq); 103 + EXPORT_SYMBOL(secure_tcpv6_seq_and_ts_off); 73 104 74 105 u64 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr, 75 106 __be16 dport) ··· 99 118 #endif 100 119 101 120 #ifdef CONFIG_INET 102 - u32 secure_tcp_ts_off(const struct net *net, __be32 saddr, __be32 daddr) 103 - { 104 - if (READ_ONCE(net->ipv4.sysctl_tcp_timestamps) != 1) 105 - return 0; 106 - 107 - ts_secret_init(); 108 - return siphash_2u32((__force u32)saddr, (__force u32)daddr, 109 - &ts_secret); 110 - } 111 - 112 121 /* secure_tcp_seq_and_tsoff(a, b, 0, d) == secure_ipv4_port_ephemeral(a, b, d), 113 122 * but fortunately, `sport' cannot be 0 in any circumstances. If this changes, 114 123 * it would be easy enough to have the former function use siphash_4u32, passing 115 124 * the arguments as separate u32. 
116 125 */ 117 - u32 secure_tcp_seq(__be32 saddr, __be32 daddr, 118 - __be16 sport, __be16 dport) 126 + union tcp_seq_and_ts_off 127 + secure_tcp_seq_and_ts_off(const struct net *net, __be32 saddr, __be32 daddr, 128 + __be16 sport, __be16 dport) 119 129 { 120 - u32 hash; 130 + u32 ports = (__force u32)sport << 16 | (__force u32)dport; 131 + union tcp_seq_and_ts_off st; 121 132 122 133 net_secret_init(); 123 - hash = siphash_3u32((__force u32)saddr, (__force u32)daddr, 124 - (__force u32)sport << 16 | (__force u32)dport, 125 - &net_secret); 126 - return seq_scale(hash); 134 + 135 + st.hash64 = siphash_3u32((__force u32)saddr, (__force u32)daddr, 136 + ports, &net_secret); 137 + 138 + if (READ_ONCE(net->ipv4.sysctl_tcp_timestamps) != 1) 139 + st.ts_off = 0; 140 + 141 + st.seq = seq_scale(st.seq); 142 + return st; 127 143 } 128 - EXPORT_SYMBOL_GPL(secure_tcp_seq); 144 + EXPORT_SYMBOL_GPL(secure_tcp_seq_and_ts_off); 129 145 130 146 u64 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport) 131 147 {
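Merging the sequence-number and timestamp-offset derivation into one function halves the siphash work: a single 64-bit hash is aliased through a union so one half seeds the sequence number and the other the timestamp offset, zeroed when timestamps are off. A plausible shape for such a union, inferred from the call sites above rather than copied from the kernel header:

#include <stdint.h>
#include <stdio.h>

/* Plausible shape inferred from the call sites: one 64-bit hash aliased
 * with two independently usable 32-bit halves. Which half is 'seq'
 * depends on endianness, so real code must pick deliberately.
 */
union seq_and_ts_off {
	uint64_t hash64;
	struct {
		uint32_t seq;
		uint32_t ts_off;
	};
};

int main(void)
{
	union seq_and_ts_off st;

	st.hash64 = 0x1122334455667788ULL;	/* stand-in for siphash() */
	printf("seq=%08x ts_off=%08x\n", (unsigned)st.seq, (unsigned)st.ts_off);

	st.ts_off = 0;				/* e.g. timestamps disabled */
	printf("seq=%08x ts_off=%08x\n", (unsigned)st.seq, (unsigned)st.ts_off);
	return 0;
}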
+7 -7
net/core/skmsg.c
··· 1205 1205 return; 1206 1206 1207 1207 psock->saved_data_ready = sk->sk_data_ready; 1208 - sk->sk_data_ready = sk_psock_strp_data_ready; 1209 - sk->sk_write_space = sk_psock_write_space; 1208 + WRITE_ONCE(sk->sk_data_ready, sk_psock_strp_data_ready); 1209 + WRITE_ONCE(sk->sk_write_space, sk_psock_write_space); 1210 1210 } 1211 1211 1212 1212 void sk_psock_stop_strp(struct sock *sk, struct sk_psock *psock) ··· 1216 1216 if (!psock->saved_data_ready) 1217 1217 return; 1218 1218 1219 - sk->sk_data_ready = psock->saved_data_ready; 1220 - psock->saved_data_ready = NULL; 1219 + WRITE_ONCE(sk->sk_data_ready, psock->saved_data_ready); 1220 + WRITE_ONCE(psock->saved_data_ready, NULL); 1221 1221 strp_stop(&psock->strp); 1222 1222 } 1223 1223 ··· 1296 1296 return; 1297 1297 1298 1298 psock->saved_data_ready = sk->sk_data_ready; 1299 - sk->sk_data_ready = sk_psock_verdict_data_ready; 1300 - sk->sk_write_space = sk_psock_write_space; 1299 + WRITE_ONCE(sk->sk_data_ready, sk_psock_verdict_data_ready); 1300 + WRITE_ONCE(sk->sk_write_space, sk_psock_write_space); 1301 1301 } 1302 1302 1303 1303 void sk_psock_stop_verdict(struct sock *sk, struct sk_psock *psock) ··· 1308 1308 if (!psock->saved_data_ready) 1309 1309 return; 1310 1310 1311 - sk->sk_data_ready = psock->saved_data_ready; 1311 + WRITE_ONCE(sk->sk_data_ready, psock->saved_data_ready); 1312 1312 psock->saved_data_ready = NULL; 1313 1313 }
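These WRITE_ONCE() stores pair with the READ_ONCE(sk->sk_data_ready)(sk)-style call sites elsewhere in this series: the callback slot is read locklessly, so both sides must be annotated to rule out torn or repeated accesses. The portable-C analogue is a relaxed atomic function pointer:

#include <stdatomic.h>
#include <stdio.h>

static void ready_v1(void) { puts("v1"); }
static void ready_v2(void) { puts("v2"); }

/* A single pointer-sized slot: readers never see a torn value. */
static _Atomic(void (*)(void)) data_ready = ready_v1;

static void swap_callback(void (*cb)(void))
{
	/* analogue of WRITE_ONCE(sk->sk_data_ready, cb) */
	atomic_store_explicit(&data_ready, cb, memory_order_relaxed);
}

static void notify(void)
{
	/* analogue of READ_ONCE(sk->sk_data_ready)(sk) */
	void (*cb)(void) =
		atomic_load_explicit(&data_ready, memory_order_relaxed);
	cb();
}

int main(void)
{
	notify();			/* v1 */
	swap_callback(ready_v2);
	notify();			/* v2 */
	return 0;
}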
+2
net/ipv4/Kconfig
··· 748 748 config TCP_AO 749 749 bool "TCP: Authentication Option (RFC5925)" 750 750 select CRYPTO 751 + select CRYPTO_LIB_UTILS 751 752 select TCP_SIGPOOL 752 753 depends on 64BIT && IPV6 != m # seq-number extension needs WRITE_ONCE(u64) 753 754 help ··· 762 761 config TCP_MD5SIG 763 762 bool "TCP: MD5 Signature Option support (RFC2385)" 764 763 select CRYPTO_LIB_MD5 764 + select CRYPTO_LIB_UTILS 765 765 help 766 766 RFC2385 specifies a method of giving MD5 protection to TCP sessions. 767 767 Its main (only?) use is to protect BGP sessions between core routers
+4 -4
net/ipv4/inet_hashtables.c
··· 200 200 void inet_bind_hash(struct sock *sk, struct inet_bind_bucket *tb, 201 201 struct inet_bind2_bucket *tb2, unsigned short port) 202 202 { 203 - inet_sk(sk)->inet_num = port; 203 + WRITE_ONCE(inet_sk(sk)->inet_num, port); 204 204 inet_csk(sk)->icsk_bind_hash = tb; 205 205 inet_csk(sk)->icsk_bind2_hash = tb2; 206 206 sk_add_bind_node(sk, &tb2->owners); ··· 224 224 spin_lock(&head->lock); 225 225 tb = inet_csk(sk)->icsk_bind_hash; 226 226 inet_csk(sk)->icsk_bind_hash = NULL; 227 - inet_sk(sk)->inet_num = 0; 227 + WRITE_ONCE(inet_sk(sk)->inet_num, 0); 228 228 sk->sk_userlocks &= ~SOCK_CONNECT_BIND; 229 229 230 230 spin_lock(&head2->lock); ··· 352 352 { 353 353 int score = -1; 354 354 355 - if (net_eq(sock_net(sk), net) && sk->sk_num == hnum && 355 + if (net_eq(sock_net(sk), net) && READ_ONCE(sk->sk_num) == hnum && 356 356 !ipv6_only_sock(sk)) { 357 357 if (sk->sk_rcv_saddr != daddr) 358 358 return -1; ··· 1206 1206 1207 1207 sk->sk_hash = 0; 1208 1208 inet_sk(sk)->inet_sport = 0; 1209 - inet_sk(sk)->inet_num = 0; 1209 + WRITE_ONCE(inet_sk(sk)->inet_num, 0); 1210 1210 1211 1211 if (tw) 1212 1212 inet_twsk_bind_unhash(tw, hinfo);
+8 -3
net/ipv4/syncookies.c
··· 378 378 tcp_parse_options(net, skb, &tcp_opt, 0, NULL); 379 379 380 380 if (tcp_opt.saw_tstamp && tcp_opt.rcv_tsecr) { 381 - tsoff = secure_tcp_ts_off(net, 382 - ip_hdr(skb)->daddr, 383 - ip_hdr(skb)->saddr); 381 + union tcp_seq_and_ts_off st; 382 + 383 + st = secure_tcp_seq_and_ts_off(net, 384 + ip_hdr(skb)->daddr, 385 + ip_hdr(skb)->saddr, 386 + tcp_hdr(skb)->dest, 387 + tcp_hdr(skb)->source); 388 + tsoff = st.ts_off; 384 389 tcp_opt.rcv_tsecr -= tsoff; 385 390 } 386 391
+3 -2
net/ipv4/sysctl_net_ipv4.c
··· 486 486 proc_fib_multipath_hash_rand_seed), 487 487 }; 488 488 489 - WRITE_ONCE(net->ipv4.sysctl_fib_multipath_hash_seed, new); 489 + WRITE_ONCE(net->ipv4.sysctl_fib_multipath_hash_seed.user_seed, new.user_seed); 490 + WRITE_ONCE(net->ipv4.sysctl_fib_multipath_hash_seed.mp_seed, new.mp_seed); 490 491 } 491 492 492 493 static int proc_fib_multipath_hash_seed(const struct ctl_table *table, int write, ··· 501 500 int ret; 502 501 503 502 mphs = &net->ipv4.sysctl_fib_multipath_hash_seed; 504 - user_seed = mphs->user_seed; 503 + user_seed = READ_ONCE(mphs->user_seed); 505 504 506 505 tmp = *table; 507 506 tmp.data = &user_seed;
+4 -3
net/ipv4/tcp.c
··· 244 244 #define pr_fmt(fmt) "TCP: " fmt 245 245 246 246 #include <crypto/md5.h> 247 + #include <crypto/utils.h> 247 248 #include <linux/kernel.h> 248 249 #include <linux/module.h> 249 250 #include <linux/types.h> ··· 1447 1446 err = sk_stream_error(sk, flags, err); 1448 1447 /* make sure we wake any epoll edge trigger waiter */ 1449 1448 if (unlikely(tcp_rtx_and_write_queues_empty(sk) && err == -EAGAIN)) { 1450 - sk->sk_write_space(sk); 1449 + READ_ONCE(sk->sk_write_space)(sk); 1451 1450 tcp_chrono_stop(sk, TCP_CHRONO_SNDBUF_LIMITED); 1452 1451 } 1453 1452 if (binding) ··· 4182 4181 break; 4183 4182 case TCP_NOTSENT_LOWAT: 4184 4183 WRITE_ONCE(tp->notsent_lowat, val); 4185 - sk->sk_write_space(sk); 4184 + READ_ONCE(sk->sk_write_space)(sk); 4186 4185 break; 4187 4186 case TCP_INQ: 4188 4187 if (val > 1 || val < 0) ··· 4971 4970 tcp_v4_md5_hash_skb(newhash, key, NULL, skb); 4972 4971 else 4973 4972 tp->af_specific->calc_md5_hash(newhash, key, NULL, skb); 4974 - if (memcmp(hash_location, newhash, 16) != 0) { 4973 + if (crypto_memneq(hash_location, newhash, 16)) { 4975 4974 NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMD5FAILURE); 4976 4975 trace_tcp_hash_md5_mismatch(sk, skb); 4977 4976 return SKB_DROP_REASON_TCP_MD5FAILURE;
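memcmp() returns at the first differing byte, so its latency reveals how long a correct prefix an attacker's forged MAC had; crypto_memneq() (declared in <crypto/utils.h>, included above) instead accumulates a XOR difference over all bytes and tests it once. A minimal standalone comparison of the same shape:

#include <stddef.h>
#include <stdio.h>

/* Minimal constant-time inequality check in the spirit of crypto_memneq():
 * it always touches all n bytes, independent of where a mismatch occurs.
 */
static int ct_memneq(const void *a, const void *b, size_t n)
{
	const unsigned char *pa = a, *pb = b;
	unsigned char diff = 0;

	for (size_t i = 0; i < n; i++)
		diff |= pa[i] ^ pb[i];
	return diff != 0;	/* nonzero iff the buffers differ */
}

int main(void)
{
	unsigned char mac[16] = { 1, 2, 3 };
	unsigned char bad[16] = { 1, 2, 4 };

	printf("%d %d\n", ct_memneq(mac, mac, 16), ct_memneq(mac, bad, 16));
	return 0;	/* prints: 0 1 */
}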
+2 -1
net/ipv4/tcp_ao.c
··· 10 10 #define pr_fmt(fmt) "TCP: " fmt 11 11 12 12 #include <crypto/hash.h> 13 + #include <crypto/utils.h> 13 14 #include <linux/inetdevice.h> 14 15 #include <linux/tcp.h> 15 16 ··· 923 922 /* XXX: make it per-AF callback? */ 924 923 tcp_ao_hash_skb(family, hash_buf, key, sk, skb, traffic_key, 925 924 (phash - (u8 *)th), sne); 926 - if (memcmp(phash, hash_buf, maclen)) { 925 + if (crypto_memneq(phash, hash_buf, maclen)) { 927 926 NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPAOBAD); 928 927 atomic64_inc(&info->counters.pkt_bad); 929 928 atomic64_inc(&key->pkt_bad);
+1 -1
net/ipv4/tcp_bpf.c
··· 725 725 WRITE_ONCE(sk->sk_prot->unhash, psock->saved_unhash); 726 726 tcp_update_ulp(sk, psock->sk_proto, psock->saved_write_space); 727 727 } else { 728 - sk->sk_write_space = psock->saved_write_space; 728 + WRITE_ONCE(sk->sk_write_space, psock->saved_write_space); 729 729 /* Pairs with lockless read in sk_clone_lock() */ 730 730 sock_replace_proto(sk, psock->sk_proto); 731 731 }
+1 -1
net/ipv4/tcp_diag.c
··· 509 509 if (r->sdiag_family != AF_UNSPEC && 510 510 sk->sk_family != r->sdiag_family) 511 511 goto next_normal; 512 - if (r->id.idiag_sport != htons(sk->sk_num) && 512 + if (r->id.idiag_sport != htons(READ_ONCE(sk->sk_num)) && 513 513 r->id.idiag_sport) 514 514 goto next_normal; 515 515 if (r->id.idiag_dport != sk->sk_dport &&
+15 -23
net/ipv4/tcp_input.c
··· 5374 5374 static bool tcp_prune_ofo_queue(struct sock *sk, const struct sk_buff *in_skb); 5375 5375 static int tcp_prune_queue(struct sock *sk, const struct sk_buff *in_skb); 5376 5376 5377 - /* Check if this incoming skb can be added to socket receive queues 5378 - * while satisfying sk->sk_rcvbuf limit. 5379 - * 5380 - * In theory we should use skb->truesize, but this can cause problems 5381 - * when applications use too small SO_RCVBUF values. 5382 - * When LRO / hw gro is used, the socket might have a high tp->scaling_ratio, 5383 - * allowing RWIN to be close to available space. 5384 - * Whenever the receive queue gets full, we can receive a small packet 5385 - * filling RWIN, but with a high skb->truesize, because most NIC use 4K page 5386 - * plus sk_buff metadata even when receiving less than 1500 bytes of payload. 5387 - * 5388 - * Note that we use skb->len to decide to accept or drop this packet, 5389 - * but sk->sk_rmem_alloc is the sum of all skb->truesize. 5390 - */ 5391 5377 static bool tcp_can_ingest(const struct sock *sk, const struct sk_buff *skb) 5392 5378 { 5393 5379 unsigned int rmem = atomic_read(&sk->sk_rmem_alloc); 5394 5380 5395 - return rmem + skb->len <= sk->sk_rcvbuf; 5381 + return rmem <= sk->sk_rcvbuf; 5396 5382 } 5397 5383 5398 5384 static int tcp_try_rmem_schedule(struct sock *sk, const struct sk_buff *skb, ··· 5411 5425 5412 5426 if (unlikely(tcp_try_rmem_schedule(sk, skb, skb->truesize))) { 5413 5427 NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPOFODROP); 5414 - sk->sk_data_ready(sk); 5428 + READ_ONCE(sk->sk_data_ready)(sk); 5415 5429 tcp_drop_reason(sk, skb, SKB_DROP_REASON_PROTO_MEM); 5416 5430 return; 5417 5431 } ··· 5621 5635 void tcp_data_ready(struct sock *sk) 5622 5636 { 5623 5637 if (tcp_epollin_ready(sk, sk->sk_rcvlowat) || sock_flag(sk, SOCK_DONE)) 5624 - sk->sk_data_ready(sk); 5638 + READ_ONCE(sk->sk_data_ready)(sk); 5625 5639 } 5626 5640 5627 5641 static void tcp_data_queue(struct sock *sk, struct sk_buff *skb) ··· 5677 5691 inet_csk(sk)->icsk_ack.pending |= 5678 5692 (ICSK_ACK_NOMEM | ICSK_ACK_NOW); 5679 5693 inet_csk_schedule_ack(sk); 5680 - sk->sk_data_ready(sk); 5694 + READ_ONCE(sk->sk_data_ready)(sk); 5681 5695 5682 5696 if (skb_queue_len(&sk->sk_receive_queue) && skb->len) { 5683 5697 reason = SKB_DROP_REASON_PROTO_MEM; ··· 6100 6114 tp->snd_cwnd_stamp = tcp_jiffies32; 6101 6115 } 6102 6116 6103 - INDIRECT_CALL_1(sk->sk_write_space, sk_stream_write_space, sk); 6117 + INDIRECT_CALL_1(READ_ONCE(sk->sk_write_space), 6118 + sk_stream_write_space, 6119 + sk); 6104 6120 } 6105 6121 6106 6122 /* Caller made space either from: ··· 6313 6325 BUG(); 6314 6326 WRITE_ONCE(tp->urg_data, TCP_URG_VALID | tmp); 6315 6327 if (!sock_flag(sk, SOCK_DEAD)) 6316 - sk->sk_data_ready(sk); 6328 + READ_ONCE(sk->sk_data_ready)(sk); 6317 6329 } 6318 6330 } 6319 6331 } ··· 7646 7658 const struct tcp_sock *tp = tcp_sk(sk); 7647 7659 struct net *net = sock_net(sk); 7648 7660 struct sock *fastopen_sk = NULL; 7661 + union tcp_seq_and_ts_off st; 7649 7662 struct request_sock *req; 7650 7663 bool want_cookie = false; 7651 7664 struct dst_entry *dst; ··· 7716 7727 if (!dst) 7717 7728 goto drop_and_free; 7718 7729 7730 + if (tmp_opt.tstamp_ok || (!want_cookie && !isn)) 7731 + st = af_ops->init_seq_and_ts_off(net, skb); 7732 + 7719 7733 if (tmp_opt.tstamp_ok) { 7720 7734 tcp_rsk(req)->req_usec_ts = dst_tcp_usec_ts(dst); 7721 - tcp_rsk(req)->ts_off = af_ops->init_ts_off(net, skb); 7735 + tcp_rsk(req)->ts_off = st.ts_off; 7722 7736 } 7723 7737 if (!want_cookie && !isn) { 7724 7738 
int max_syn_backlog = READ_ONCE(net->ipv4.sysctl_max_syn_backlog); ··· 7743 7751 goto drop_and_release; 7744 7752 } 7745 7753 7746 - isn = af_ops->init_seq(skb); 7754 + isn = st.seq; 7747 7755 } 7748 7756 7749 7757 tcp_ecn_create_request(req, skb, sk, dst); ··· 7784 7792 sock_put(fastopen_sk); 7785 7793 goto drop_and_free; 7786 7794 } 7787 - sk->sk_data_ready(sk); 7795 + READ_ONCE(sk->sk_data_ready)(sk); 7788 7796 bh_unlock_sock(fastopen_sk); 7789 7797 sock_put(fastopen_sk); 7790 7798 } else {
+19 -21
net/ipv4/tcp_ipv4.c
··· 88 88 #include <linux/skbuff_ref.h> 89 89 90 90 #include <crypto/md5.h> 91 + #include <crypto/utils.h> 91 92 92 93 #include <trace/events/tcp.h> 93 94 ··· 105 104 106 105 static DEFINE_MUTEX(tcp_exit_batch_mutex); 107 106 108 - static u32 tcp_v4_init_seq(const struct sk_buff *skb) 107 + static union tcp_seq_and_ts_off 108 + tcp_v4_init_seq_and_ts_off(const struct net *net, const struct sk_buff *skb) 109 109 { 110 - return secure_tcp_seq(ip_hdr(skb)->daddr, 111 - ip_hdr(skb)->saddr, 112 - tcp_hdr(skb)->dest, 113 - tcp_hdr(skb)->source); 114 - } 115 - 116 - static u32 tcp_v4_init_ts_off(const struct net *net, const struct sk_buff *skb) 117 - { 118 - return secure_tcp_ts_off(net, ip_hdr(skb)->daddr, ip_hdr(skb)->saddr); 110 + return secure_tcp_seq_and_ts_off(net, 111 + ip_hdr(skb)->daddr, 112 + ip_hdr(skb)->saddr, 113 + tcp_hdr(skb)->dest, 114 + tcp_hdr(skb)->source); 119 115 } 120 116 121 117 int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp) ··· 324 326 rt = NULL; 325 327 326 328 if (likely(!tp->repair)) { 329 + union tcp_seq_and_ts_off st; 330 + 331 + st = secure_tcp_seq_and_ts_off(net, 332 + inet->inet_saddr, 333 + inet->inet_daddr, 334 + inet->inet_sport, 335 + usin->sin_port); 327 336 if (!tp->write_seq) 328 - WRITE_ONCE(tp->write_seq, 329 - secure_tcp_seq(inet->inet_saddr, 330 - inet->inet_daddr, 331 - inet->inet_sport, 332 - usin->sin_port)); 333 - WRITE_ONCE(tp->tsoffset, 334 - secure_tcp_ts_off(net, inet->inet_saddr, 335 - inet->inet_daddr)); 337 + WRITE_ONCE(tp->write_seq, st.seq); 338 + WRITE_ONCE(tp->tsoffset, st.ts_off); 336 339 } 337 340 338 341 atomic_set(&inet->inet_id, get_random_u16()); ··· 838 839 goto out; 839 840 840 841 tcp_v4_md5_hash_skb(newhash, key, NULL, skb); 841 - if (memcmp(md5_hash_location, newhash, 16) != 0) 842 + if (crypto_memneq(md5_hash_location, newhash, 16)) 842 843 goto out; 843 844 } 844 845 ··· 1675 1676 .cookie_init_seq = cookie_v4_init_sequence, 1676 1677 #endif 1677 1678 .route_req = tcp_v4_route_req, 1678 - .init_seq = tcp_v4_init_seq, 1679 - .init_ts_off = tcp_v4_init_ts_off, 1679 + .init_seq_and_ts_off = tcp_v4_init_seq_and_ts_off, 1680 1680 .send_synack = tcp_v4_send_synack, 1681 1681 }; 1682 1682
+1 -1
net/ipv4/tcp_minisocks.c
··· 1004 1004 reason = tcp_rcv_state_process(child, skb); 1005 1005 /* Wakeup parent, send SIGIO */ 1006 1006 if (state == TCP_SYN_RECV && child->sk_state != state) 1007 - parent->sk_data_ready(parent); 1007 + READ_ONCE(parent->sk_data_ready)(parent); 1008 1008 } else { 1009 1009 /* Alas, it is possible again, because we do lookup 1010 1010 * in main socket hash table and lock on listening
+15 -10
net/ipv4/udp.c
··· 1787 1787 * using prepare_to_wait_exclusive(). 1788 1788 */ 1789 1789 while (nb) { 1790 - INDIRECT_CALL_1(sk->sk_data_ready, 1790 + INDIRECT_CALL_1(READ_ONCE(sk->sk_data_ready), 1791 1791 sock_def_readable, sk); 1792 1792 nb--; 1793 1793 } ··· 2287 2287 udp_sk(sk)->udp_port_hash); 2288 2288 hslot2 = udp_hashslot2(udptable, udp_sk(sk)->udp_portaddr_hash); 2289 2289 nhslot2 = udp_hashslot2(udptable, newhash); 2290 - udp_sk(sk)->udp_portaddr_hash = newhash; 2291 2290 2292 2291 if (hslot2 != nhslot2 || 2293 2292 rcu_access_pointer(sk->sk_reuseport_cb)) { ··· 2320 2321 if (udp_hashed4(sk)) { 2321 2322 spin_lock_bh(&hslot->lock); 2322 2323 2323 - udp_rehash4(udptable, sk, newhash4); 2324 - if (hslot2 != nhslot2) { 2325 - spin_lock(&hslot2->lock); 2326 - udp_hash4_dec(hslot2); 2327 - spin_unlock(&hslot2->lock); 2324 + if (inet_rcv_saddr_any(sk)) { 2325 + udp_unhash4(udptable, sk); 2326 + } else { 2327 + udp_rehash4(udptable, sk, newhash4); 2328 + if (hslot2 != nhslot2) { 2329 + spin_lock(&hslot2->lock); 2330 + udp_hash4_dec(hslot2); 2331 + spin_unlock(&hslot2->lock); 2328 2332 2329 - spin_lock(&nhslot2->lock); 2330 - udp_hash4_inc(nhslot2); 2331 - spin_unlock(&nhslot2->lock); 2333 + spin_lock(&nhslot2->lock); 2334 + udp_hash4_inc(nhslot2); 2335 + spin_unlock(&nhslot2->lock); 2336 + } 2332 2337 } 2333 2338 2334 2339 spin_unlock_bh(&hslot->lock); 2335 2340 } 2341 + 2342 + udp_sk(sk)->udp_portaddr_hash = newhash; 2336 2343 } 2337 2344 } 2338 2345 EXPORT_IPV6_MOD(udp_lib_rehash);
+1 -1
net/ipv4/udp_bpf.c
··· 158 158 int family = sk->sk_family == AF_INET ? UDP_BPF_IPV4 : UDP_BPF_IPV6; 159 159 160 160 if (restore) { 161 - sk->sk_write_space = psock->saved_write_space; 161 + WRITE_ONCE(sk->sk_write_space, psock->saved_write_space); 162 162 sock_replace_proto(sk, psock->sk_proto); 163 163 return 0; 164 164 }
+2 -1
net/ipv6/inet6_hashtables.c
··· 95 95 { 96 96 int score = -1; 97 97 98 - if (net_eq(sock_net(sk), net) && inet_sk(sk)->inet_num == hnum && 98 + if (net_eq(sock_net(sk), net) && 99 + READ_ONCE(inet_sk(sk)->inet_num) == hnum && 99 100 sk->sk_family == PF_INET6) { 100 101 if (!ipv6_addr_equal(&sk->sk_v6_rcv_saddr, daddr)) 101 102 return -1;
+5 -6
net/ipv6/route.c
··· 1063 1063 */ 1064 1064 if (netif_is_l3_slave(dev) && 1065 1065 !rt6_need_strict(&res->f6i->fib6_dst.addr)) 1066 - dev = l3mdev_master_dev_rcu(dev); 1066 + dev = l3mdev_master_dev_rcu(dev) ? : 1067 + dev_net(dev)->loopback_dev; 1067 1068 else if (!netif_is_l3_master(dev)) 1068 1069 dev = dev_net(dev)->loopback_dev; 1069 1070 /* last case is netif_is_l3_master(dev) is true in which ··· 3583 3582 netdevice_tracker *dev_tracker = &fib6_nh->fib_nh_dev_tracker; 3584 3583 struct net_device *dev = NULL; 3585 3584 struct inet6_dev *idev = NULL; 3586 - int addr_type; 3587 3585 int err; 3588 3586 3589 3587 fib6_nh->fib_nh_family = AF_INET6; ··· 3624 3624 3625 3625 fib6_nh->fib_nh_weight = 1; 3626 3626 3627 - /* We cannot add true routes via loopback here, 3628 - * they would result in kernel looping; promote them to reject routes 3627 + /* Reset the nexthop device to the loopback device in case of reject 3628 + * routes. 3629 3629 */ 3630 - addr_type = ipv6_addr_type(&cfg->fc_dst); 3631 - if (fib6_is_reject(cfg->fc_flags, dev, addr_type)) { 3630 + if (cfg->fc_flags & RTF_REJECT) { 3632 3631 /* hold loopback dev/idev if we haven't done so. */ 3633 3632 if (dev != net->loopback_dev) { 3634 3633 if (dev) {
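The l3mdev hunk relies on GNU C's a ?: b extension (widely used in the kernel): it evaluates a once and yields it when non-zero, otherwise b, which keeps the "master device, or else loopback" fallback to a single expression. A trivial standalone illustration (compiles with gcc/clang, as it is an extension rather than ISO C):

#include <stdio.h>

static const char *pick(const char *preferred, const char *fallback)
{
	return preferred ?: fallback;	/* preferred if non-NULL, else fallback */
}

int main(void)
{
	printf("%s\n", pick("master0", "lo"));	/* master0 */
	printf("%s\n", pick(NULL, "lo"));	/* lo */
	return 0;
}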
+8 -3
net/ipv6/syncookies.c
··· 151 151 tcp_parse_options(net, skb, &tcp_opt, 0, NULL); 152 152 153 153 if (tcp_opt.saw_tstamp && tcp_opt.rcv_tsecr) { 154 - tsoff = secure_tcpv6_ts_off(net, 155 - ipv6_hdr(skb)->daddr.s6_addr32, 156 - ipv6_hdr(skb)->saddr.s6_addr32); 154 + union tcp_seq_and_ts_off st; 155 + 156 + st = secure_tcpv6_seq_and_ts_off(net, 157 + ipv6_hdr(skb)->daddr.s6_addr32, 158 + ipv6_hdr(skb)->saddr.s6_addr32, 159 + tcp_hdr(skb)->dest, 160 + tcp_hdr(skb)->source); 161 + tsoff = st.ts_off; 157 162 tcp_opt.rcv_tsecr -= tsoff; 158 163 } 159 164
+19 -21
net/ipv6/tcp_ipv6.c
··· 68 68 #include <linux/seq_file.h> 69 69 70 70 #include <crypto/md5.h> 71 + #include <crypto/utils.h> 71 72 72 73 #include <trace/events/tcp.h> 73 74 ··· 105 104 } 106 105 } 107 106 108 - static u32 tcp_v6_init_seq(const struct sk_buff *skb) 107 + static union tcp_seq_and_ts_off 108 + tcp_v6_init_seq_and_ts_off(const struct net *net, const struct sk_buff *skb) 109 109 { 110 - return secure_tcpv6_seq(ipv6_hdr(skb)->daddr.s6_addr32, 111 - ipv6_hdr(skb)->saddr.s6_addr32, 112 - tcp_hdr(skb)->dest, 113 - tcp_hdr(skb)->source); 114 - } 115 - 116 - static u32 tcp_v6_init_ts_off(const struct net *net, const struct sk_buff *skb) 117 - { 118 - return secure_tcpv6_ts_off(net, ipv6_hdr(skb)->daddr.s6_addr32, 119 - ipv6_hdr(skb)->saddr.s6_addr32); 110 + return secure_tcpv6_seq_and_ts_off(net, 111 + ipv6_hdr(skb)->daddr.s6_addr32, 112 + ipv6_hdr(skb)->saddr.s6_addr32, 113 + tcp_hdr(skb)->dest, 114 + tcp_hdr(skb)->source); 120 115 } 121 116 122 117 static int tcp_v6_pre_connect(struct sock *sk, struct sockaddr_unsized *uaddr, ··· 316 319 sk_set_txhash(sk); 317 320 318 321 if (likely(!tp->repair)) { 322 + union tcp_seq_and_ts_off st; 323 + 324 + st = secure_tcpv6_seq_and_ts_off(net, 325 + np->saddr.s6_addr32, 326 + sk->sk_v6_daddr.s6_addr32, 327 + inet->inet_sport, 328 + inet->inet_dport); 319 329 if (!tp->write_seq) 320 - WRITE_ONCE(tp->write_seq, 321 - secure_tcpv6_seq(np->saddr.s6_addr32, 322 - sk->sk_v6_daddr.s6_addr32, 323 - inet->inet_sport, 324 - inet->inet_dport)); 325 - tp->tsoffset = secure_tcpv6_ts_off(net, np->saddr.s6_addr32, 326 - sk->sk_v6_daddr.s6_addr32); 330 + WRITE_ONCE(tp->write_seq, st.seq); 331 + tp->tsoffset = st.ts_off; 327 332 } 328 333 329 334 if (tcp_fastopen_defer_connect(sk, &err)) ··· 815 816 .cookie_init_seq = cookie_v6_init_sequence, 816 817 #endif 817 818 .route_req = tcp_v6_route_req, 818 - .init_seq = tcp_v6_init_seq, 819 - .init_ts_off = tcp_v6_init_ts_off, 819 + .init_seq_and_ts_off = tcp_v6_init_seq_and_ts_off, 820 820 .send_synack = tcp_v6_send_synack, 821 821 }; 822 822 ··· 1046 1048 key.type = TCP_KEY_MD5; 1047 1049 1048 1050 tcp_v6_md5_hash_skb(newhash, key.md5_key, NULL, skb); 1049 - if (memcmp(md5_hash_location, newhash, 16) != 0) 1051 + if (crypto_memneq(md5_hash_location, newhash, 16)) 1050 1052 goto out; 1051 1053 } 1052 1054 #endif
+1
net/mac80211/eht.c
··· 154 154 u8 *ptr = mgmt->u.action.u.eml_omn.variable; 155 155 struct ieee80211_eml_params eml_params = { 156 156 .link_id = status->link_id, 157 + .control = control, 157 158 }; 158 159 struct sta_info *sta; 159 160 int opt_len = 0;
+43 -12
net/mptcp/pm.c
··· 212 212 spin_lock_bh(&msk->pm.lock); 213 213 } 214 214 215 - void mptcp_pm_addr_send_ack(struct mptcp_sock *msk) 215 + static bool subflow_in_rm_list(const struct mptcp_subflow_context *subflow, 216 + const struct mptcp_rm_list *rm_list) 216 217 { 217 - struct mptcp_subflow_context *subflow, *alt = NULL; 218 + u8 i, id = subflow_get_local_id(subflow); 219 + 220 + for (i = 0; i < rm_list->nr; i++) { 221 + if (rm_list->ids[i] == id) 222 + return true; 223 + } 224 + 225 + return false; 226 + } 227 + 228 + static void 229 + mptcp_pm_addr_send_ack_avoid_list(struct mptcp_sock *msk, 230 + const struct mptcp_rm_list *rm_list) 231 + { 232 + struct mptcp_subflow_context *subflow, *stale = NULL, *same_id = NULL; 218 233 219 234 msk_owned_by_me(msk); 220 235 lockdep_assert_held(&msk->pm.lock); ··· 239 224 return; 240 225 241 226 mptcp_for_each_subflow(msk, subflow) { 242 - if (__mptcp_subflow_active(subflow)) { 243 - if (!subflow->stale) { 244 - mptcp_pm_send_ack(msk, subflow, false, false); 245 - return; 246 - } 227 + if (!__mptcp_subflow_active(subflow)) 228 + continue; 247 229 248 - if (!alt) 249 - alt = subflow; 230 + if (unlikely(subflow->stale)) { 231 + if (!stale) 232 + stale = subflow; 233 + } else if (unlikely(rm_list && 234 + subflow_in_rm_list(subflow, rm_list))) { 235 + if (!same_id) 236 + same_id = subflow; 237 + } else { 238 + goto send_ack; 250 239 } 251 240 } 252 241 253 - if (alt) 254 - mptcp_pm_send_ack(msk, alt, false, false); 242 + if (same_id) 243 + subflow = same_id; 244 + else if (stale) 245 + subflow = stale; 246 + else 247 + return; 248 + 249 + send_ack: 250 + mptcp_pm_send_ack(msk, subflow, false, false); 251 + } 252 + 253 + void mptcp_pm_addr_send_ack(struct mptcp_sock *msk) 254 + { 255 + mptcp_pm_addr_send_ack_avoid_list(msk, NULL); 255 256 } 256 257 257 258 int mptcp_pm_mp_prio_send_ack(struct mptcp_sock *msk, ··· 501 470 msk->pm.rm_list_tx = *rm_list; 502 471 rm_addr |= BIT(MPTCP_RM_ADDR_SIGNAL); 503 472 WRITE_ONCE(msk->pm.addr_signal, rm_addr); 504 - mptcp_pm_addr_send_ack(msk); 473 + mptcp_pm_addr_send_ack_avoid_list(msk, rm_list); 505 474 return 0; 506 475 } 507 476
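The rewritten selection walks the subflow list once, sending immediately on the first fully usable subflow and otherwise remembering one fallback per tier (a to-be-removed id is preferred over a stale subflow). A generic single-pass, tiered-fallback sketch with invented names:

#include <stdio.h>

struct item { const char *name; int good, ok; };

/* Single pass: return the first fully good item; otherwise remember the
 * first fallback seen in each tier and use the best one at the end.
 */
static const struct item *pick(const struct item *v, int n)
{
	const struct item *ok_fb = NULL, *bad_fb = NULL;

	for (int i = 0; i < n; i++) {
		if (v[i].good)
			return &v[i];
		if (v[i].ok) {
			if (!ok_fb)
				ok_fb = &v[i];
		} else if (!bad_fb) {
			bad_fb = &v[i];
		}
	}
	return ok_fb ? ok_fb : bad_fb;	/* may be NULL: nothing usable */
}

int main(void)
{
	struct item v[] = { { "stale", 0, 0 }, { "same-id", 0, 1 } };

	printf("%s\n", pick(v, 2)->name);	/* same-id */
	return 0;
}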
+9
net/mptcp/pm_kernel.c
··· 418 418 } 419 419 420 420 exit: 421 + /* If an endpoint has both the signal and subflow flags, but it is not 422 + * possible to create subflows -- the 'while' loop body above never 423 + * executed -- then still mark the endpoint as used, which is effectively 424 + * the case. This avoids issues later when removing the endpoint and 425 + * calling __mark_subflow_endp_available(), which expects this increment. 426 + */ 427 + if (signal_and_subflow && local.addr.id != msk->mpc_endpoint_id) 428 + msk->pm.local_addr_used++; 429 + 421 430 mptcp_pm_nl_check_work_pending(msk); 422 431 }
+25 -20
net/netfilter/nf_tables_api.c
··· 833 833 } 834 834 } 835 835 836 + /* Use NFT_ITER_UPDATE iterator even if this may be called from the preparation 837 + * phase, the set clone might already exist from a previous command, or it might 838 + * be a set that is going away and does not require a clone. The netns and 839 + * netlink release paths also need to work on the live set. 840 + */ 836 841 static void nft_map_deactivate(const struct nft_ctx *ctx, struct nft_set *set) 837 842 { 838 843 struct nft_set_iter iter = { ··· 7175 7170 struct nft_data_desc desc; 7176 7171 enum nft_registers dreg; 7177 7172 struct nft_trans *trans; 7173 + bool set_full = false; 7178 7174 u64 expiration; 7179 7175 u64 timeout; 7180 7176 int err, i; ··· 7467 7461 if (err < 0) 7468 7462 goto err_elem_free; 7469 7463 7464 + if (!(flags & NFT_SET_ELEM_CATCHALL)) { 7465 + unsigned int max = nft_set_maxsize(set), nelems; 7466 + 7467 + nelems = atomic_inc_return(&set->nelems); 7468 + if (nelems > max) 7469 + set_full = true; 7470 + } 7471 + 7470 7472 trans = nft_trans_elem_alloc(ctx, NFT_MSG_NEWSETELEM, set); 7471 7473 if (trans == NULL) { 7472 7474 err = -ENOMEM; 7473 - goto err_elem_free; 7475 + goto err_set_size; 7474 7476 } 7475 7477 7476 7478 ext->genmask = nft_genmask_cur(ctx->net); ··· 7530 7516 7531 7517 ue->priv = elem_priv; 7532 7518 nft_trans_commit_list_add_elem(ctx->net, trans); 7533 - goto err_elem_free; 7519 + goto err_set_size; 7534 7520 } 7535 7521 } 7536 7522 } ··· 7548 7534 goto err_element_clash; 7549 7535 } 7550 7536 7551 - if (!(flags & NFT_SET_ELEM_CATCHALL)) { 7552 - unsigned int max = nft_set_maxsize(set); 7553 - 7554 - if (!atomic_add_unless(&set->nelems, 1, max)) { 7555 - err = -ENFILE; 7556 - goto err_set_full; 7557 - } 7558 - } 7559 - 7560 7537 nft_trans_container_elem(trans)->elems[0].priv = elem.priv; 7561 7538 nft_trans_commit_list_add_elem(ctx->net, trans); 7562 - return 0; 7563 7539 7564 - err_set_full: 7565 - nft_setelem_remove(ctx->net, set, elem.priv); 7540 + return set_full ? -ENFILE : 0; 7541 + 7566 7542 err_element_clash: 7567 7543 kfree(trans); 7544 + err_set_size: 7545 + if (!(flags & NFT_SET_ELEM_CATCHALL)) 7546 + atomic_dec(&set->nelems); 7568 7547 err_elem_free: 7569 7548 nf_tables_set_elem_destroy(ctx, set, elem.priv); 7570 7549 err_parse_data: ··· 7908 7901 7909 7902 static int nft_set_flush(struct nft_ctx *ctx, struct nft_set *set, u8 genmask) 7910 7903 { 7904 + /* The set backend might need to clone the set, do it now from the 7905 + * preparation phase, use NFT_ITER_UPDATE_CLONE iterator type. 7906 + */ 7911 7907 struct nft_set_iter iter = { 7912 7908 .genmask = genmask, 7913 - .type = NFT_ITER_UPDATE, 7909 + .type = NFT_ITER_UPDATE_CLONE, 7914 7910 .fn = nft_setelem_flush, 7915 7911 }; 7916 7912 ··· 10491 10481 spin_unlock(&nf_tables_gc_list_lock); 10492 10482 10493 10483 schedule_work(&trans_gc_work); 10494 - } 10495 - 10496 - static int nft_trans_gc_space(struct nft_trans_gc *trans) 10497 - { 10498 - return NFT_TRANS_GC_BATCHCOUNT - trans->count; 10499 10484 } 10500 10485 10501 10486 struct nft_trans_gc *nft_trans_gc_queue_async(struct nft_trans_gc *gc,
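The accounting change converts a check-at-the-end (atomic_add_unless(), which on failure forced removing the already-inserted element) into an early reservation with atomic_inc_return(): any later error just decrements, and an over-limit insert is completed but reported as -ENFILE for the transaction machinery to unwind. A hedged sketch of the reservation pattern:

#include <errno.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_ELEMS 2u

static atomic_uint nelems;

/* Reserve up front; a mid-path failure rolls back with one decrement
 * instead of undoing already-completed insertion work. As in the hunk
 * above, the over-limit case keeps its reservation here and relies on
 * a separate abort path to release it.
 */
static int add_element(bool fail_later)
{
	bool full = atomic_fetch_add(&nelems, 1) + 1 > MAX_ELEMS;

	if (fail_later) {			/* any mid-path error */
		atomic_fetch_sub(&nelems, 1);	/* cheap rollback */
		return -ENOMEM;
	}
	return full ? -ENFILE : 0;
}

int main(void)
{
	int r1 = add_element(false);
	int r2 = add_element(false);
	int r3 = add_element(false);

	printf("%d %d %d\n", r1, r2, r3);	/* 0 0 -23 */
	return 0;
}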
+1
net/netfilter/nft_set_hash.c
··· 374 374 { 375 375 switch (iter->type) { 376 376 case NFT_ITER_UPDATE: 377 + case NFT_ITER_UPDATE_CLONE: 377 378 /* only relevant for netlink dumps which use READ type */ 378 379 WARN_ON_ONCE(iter->skip != 0); 379 380
+52 -10
net/netfilter/nft_set_pipapo.c
··· 1680 1680 } 1681 1681 1682 1682 /** 1683 - * pipapo_gc() - Drop expired entries from set, destroy start and end elements 1683 + * pipapo_gc_scan() - Drop expired entries from set and link them to gc list 1684 1684 * @set: nftables API set representation 1685 1685 * @m: Matching data 1686 1686 */ 1687 - static void pipapo_gc(struct nft_set *set, struct nft_pipapo_match *m) 1687 + static void pipapo_gc_scan(struct nft_set *set, struct nft_pipapo_match *m) 1688 1688 { 1689 1689 struct nft_pipapo *priv = nft_set_priv(set); 1690 1690 struct net *net = read_pnet(&set->net); ··· 1696 1696 gc = nft_trans_gc_alloc(set, 0, GFP_KERNEL); 1697 1697 if (!gc) 1698 1698 return; 1699 + 1700 + list_add(&gc->list, &priv->gc_head); 1699 1701 1700 1702 while ((rules_f0 = pipapo_rules_same_key(m->f, first_rule))) { 1701 1703 union nft_pipapo_map_bucket rulemap[NFT_PIPAPO_MAX_FIELDS]; ··· 1726 1724 * NFT_SET_ELEM_DEAD_BIT. 1727 1725 */ 1728 1726 if (__nft_set_elem_expired(&e->ext, tstamp)) { 1729 - gc = nft_trans_gc_queue_sync(gc, GFP_KERNEL); 1730 - if (!gc) 1731 - return; 1727 + if (!nft_trans_gc_space(gc)) { 1728 + gc = nft_trans_gc_alloc(set, 0, GFP_KERNEL); 1729 + if (!gc) 1730 + return; 1731 + 1732 + list_add(&gc->list, &priv->gc_head); 1733 + } 1732 1734 1733 1735 nft_pipapo_gc_deactivate(net, set, e); 1734 1736 pipapo_drop(m, rulemap); ··· 1746 1740 } 1747 1741 } 1748 1742 1749 - gc = nft_trans_gc_catchall_sync(gc); 1743 + priv->last_gc = jiffies; 1744 + } 1745 + 1746 + /** 1747 + * pipapo_gc_queue() - Free expired elements 1748 + * @set: nftables API set representation 1749 + */ 1750 + static void pipapo_gc_queue(struct nft_set *set) 1751 + { 1752 + struct nft_pipapo *priv = nft_set_priv(set); 1753 + struct nft_trans_gc *gc, *next; 1754 + 1755 + /* always do a catchall cycle: */ 1756 + gc = nft_trans_gc_alloc(set, 0, GFP_KERNEL); 1750 1757 if (gc) { 1758 + gc = nft_trans_gc_catchall_sync(gc); 1759 + if (gc) 1760 + nft_trans_gc_queue_sync_done(gc); 1761 + } 1762 + 1763 + /* always purge queued gc elements. */ 1764 + list_for_each_entry_safe(gc, next, &priv->gc_head, list) { 1765 + list_del(&gc->list); 1751 1766 nft_trans_gc_queue_sync_done(gc); 1752 - priv->last_gc = jiffies; 1753 1767 } 1754 1768 } 1755 1769 ··· 1823 1797 * 1824 1798 * We also need to create a new working copy for subsequent insertions and 1825 1799 * deletions. 1800 + * 1801 + * After the live copy has been replaced by the clone, we can safely queue 1802 + * expired elements that have been collected by pipapo_gc_scan() for 1803 + * memory reclaim. 
1826 1804 */ 1827 1805 static void nft_pipapo_commit(struct nft_set *set) 1828 1806 { ··· 1837 1807 return; 1838 1808 1839 1809 if (time_after_eq(jiffies, priv->last_gc + nft_set_gc_interval(set))) 1840 - pipapo_gc(set, priv->clone); 1810 + pipapo_gc_scan(set, priv->clone); 1841 1811 1842 1812 old = rcu_replace_pointer(priv->match, priv->clone, 1843 1813 nft_pipapo_transaction_mutex_held(set)); ··· 1845 1815 1846 1816 if (old) 1847 1817 call_rcu(&old->rcu, pipapo_reclaim_match); 1818 + 1819 + pipapo_gc_queue(set); 1848 1820 } 1849 1821 1850 1822 static void nft_pipapo_abort(const struct nft_set *set) ··· 2176 2144 const struct nft_pipapo_match *m; 2177 2145 2178 2146 switch (iter->type) { 2179 - case NFT_ITER_UPDATE: 2147 + case NFT_ITER_UPDATE_CLONE: 2180 2148 m = pipapo_maybe_clone(set); 2181 2149 if (!m) { 2182 2150 iter->err = -ENOMEM; 2183 2151 return; 2184 2152 } 2185 - 2153 + nft_pipapo_do_walk(ctx, set, m, iter); 2154 + break; 2155 + case NFT_ITER_UPDATE: 2156 + if (priv->clone) 2157 + m = priv->clone; 2158 + else 2159 + m = rcu_dereference_protected(priv->match, 2160 + nft_pipapo_transaction_mutex_held(set)); 2186 2161 nft_pipapo_do_walk(ctx, set, m, iter); 2187 2162 break; 2188 2163 case NFT_ITER_READ: ··· 2311 2272 f->mt = NULL; 2312 2273 } 2313 2274 2275 + INIT_LIST_HEAD(&priv->gc_head); 2314 2276 rcu_assign_pointer(priv->match, m); 2315 2277 2316 2278 return 0; ··· 2360 2320 { 2361 2321 struct nft_pipapo *priv = nft_set_priv(set); 2362 2322 struct nft_pipapo_match *m; 2323 + 2324 + WARN_ON_ONCE(!list_empty(&priv->gc_head)); 2363 2325 2364 2326 m = rcu_dereference_protected(priv->match, true); 2365 2327
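Splitting pipapo_gc() into a scan phase (unlink expired entries onto a private list) and a queue phase (release them only after the clone has been published) is a two-phase reclaim: nothing is freed while a reader could still reach it through the old copy. A generic single-threaded sketch of the two phases, names invented:

#include <stdio.h>
#include <stdlib.h>

struct node { struct node *next; int dead; };

/* Phase 1 (scan): unlink expired nodes from the live structure and stash
 * them on a private reclaim list -- no freeing yet.
 */
static struct node *scan(struct node **head)
{
	struct node *reclaim = NULL, **pp = head;

	while (*pp) {
		struct node *n = *pp;

		if (n->dead) {
			*pp = n->next;		/* unlink */
			n->next = reclaim;	/* stash */
			reclaim = n;
		} else {
			pp = &n->next;
		}
	}
	return reclaim;
}

/* Phase 2 (queue/free): only after the new version of the structure has
 * been published can the stashed nodes actually be released.
 */
static void reclaim_all(struct node *r)
{
	while (r) {
		struct node *next = r->next;

		free(r);
		r = next;
	}
}

int main(void)
{
	struct node *b = malloc(sizeof(*b));
	struct node *a = malloc(sizeof(*a));
	struct node *head;

	b->next = NULL; b->dead = 1;
	a->next = b;    a->dead = 0;
	head = a;

	reclaim_all(scan(&head));	/* frees b; a stays live */
	printf("%d\n", head->dead);	/* 0 */
	free(head);
	return 0;
}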
+2
net/netfilter/nft_set_pipapo.h
··· 156 156 * @clone: Copy where pending insertions and deletions are kept 157 157 * @width: Total bytes to be matched for one packet, including padding 158 158 * @last_gc: Timestamp of last garbage collection run, jiffies 159 + * @gc_head: list of nft_trans_gc to queue up for mem reclaim 159 160 */ 160 161 struct nft_pipapo { 161 162 struct nft_pipapo_match __rcu *match; 162 163 struct nft_pipapo_match *clone; 163 164 int width; 164 165 unsigned long last_gc; 166 + struct list_head gc_head; 165 167 }; 166 168 167 169 struct nft_pipapo_elem;
+5 -3
net/netfilter/nft_set_rbtree.c
··· 861 861 struct nft_rbtree *priv = nft_set_priv(set); 862 862 863 863 switch (iter->type) { 864 - case NFT_ITER_UPDATE: 865 - lockdep_assert_held(&nft_pernet(ctx->net)->commit_mutex); 866 - 864 + case NFT_ITER_UPDATE_CLONE: 867 865 if (nft_array_may_resize(set) < 0) { 868 866 iter->err = -ENOMEM; 869 867 break; 870 868 } 869 + fallthrough; 870 + case NFT_ITER_UPDATE: 871 + lockdep_assert_held(&nft_pernet(ctx->net)->commit_mutex); 872 + 871 873 nft_rbtree_do_walk(ctx, set, iter); 872 874 break; 873 875 case NFT_ITER_READ:
+6 -2
net/nfc/digital_core.c
··· 707 707 int rc; 708 708 709 709 data_exch = kzalloc_obj(*data_exch); 710 - if (!data_exch) 710 + if (!data_exch) { 711 + kfree_skb(skb); 711 712 return -ENOMEM; 713 + } 712 714 713 715 data_exch->cb = cb; 714 716 data_exch->cb_context = cb_context; ··· 733 731 data_exch); 734 732 735 733 exit: 736 - if (rc) 734 + if (rc) { 735 + kfree_skb(skb); 737 736 kfree(data_exch); 737 + } 738 738 739 739 return rc; 740 740 }
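Both digital_core fixes enforce a consume-on-error contract: once the skb is handed to the exchange function it is freed on every path, success or failure, so no caller can leak it or double-free it. The contract in a hedged standalone sketch:

#include <errno.h>
#include <stdlib.h>

struct msg { void *payload; };

static void msg_free(struct msg *m)
{
	if (m) {
		free(m->payload);
		free(m);
	}
}

/* Contract mirrored from the fix: send_msg() always consumes 'm', even on
 * error, so callers never have to guess whether a failed call leaked it.
 */
static int send_msg(struct msg *m, int dev_ready)
{
	if (!dev_ready) {
		msg_free(m);		/* consume on the error path too */
		return -ENODEV;
	}
	/* ... hand off; normally a completion handler frees it later ... */
	msg_free(m);			/* stands in for that handler here */
	return 0;
}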
+27 -3
net/nfc/nci/core.c
··· 567 567 flush_workqueue(ndev->cmd_wq); 568 568 timer_delete_sync(&ndev->cmd_timer); 569 569 timer_delete_sync(&ndev->data_timer); 570 + if (test_bit(NCI_DATA_EXCHANGE, &ndev->flags)) 571 + nci_data_exchange_complete(ndev, NULL, 572 + ndev->cur_conn_id, 573 + -ENODEV); 570 574 mutex_unlock(&ndev->req_lock); 571 575 return 0; 572 576 } ··· 602 598 flush_workqueue(ndev->cmd_wq); 603 599 604 600 timer_delete_sync(&ndev->cmd_timer); 601 + timer_delete_sync(&ndev->data_timer); 602 + 603 + if (test_bit(NCI_DATA_EXCHANGE, &ndev->flags)) 604 + nci_data_exchange_complete(ndev, NULL, ndev->cur_conn_id, 605 + -ENODEV); 605 606 606 607 /* Clear flags except NCI_UNREG */ 607 608 ndev->flags &= BIT(NCI_UNREG); ··· 1044 1035 struct nci_conn_info *conn_info; 1045 1036 1046 1037 conn_info = ndev->rf_conn_info; 1047 - if (!conn_info) 1038 + if (!conn_info) { 1039 + kfree_skb(skb); 1048 1040 return -EPROTO; 1041 + } 1049 1042 1050 1043 pr_debug("target_idx %d, len %d\n", target->idx, skb->len); 1051 1044 1052 1045 if (!ndev->target_active_prot) { 1053 1046 pr_err("unable to exchange data, no active target\n"); 1047 + kfree_skb(skb); 1054 1048 return -EINVAL; 1055 1049 } 1056 1050 1057 - if (test_and_set_bit(NCI_DATA_EXCHANGE, &ndev->flags)) 1051 + if (test_and_set_bit(NCI_DATA_EXCHANGE, &ndev->flags)) { 1052 + kfree_skb(skb); 1058 1053 return -EBUSY; 1054 + } 1059 1055 1060 1056 /* store cb and context to be used on receiving data */ 1061 1057 conn_info->data_exchange_cb = cb; ··· 1496 1482 unsigned int hdr_size = NCI_CTRL_HDR_SIZE; 1497 1483 1498 1484 if (skb->len < hdr_size || 1499 - !nci_plen(skb->data) || 1500 1485 skb->len < hdr_size + nci_plen(skb->data)) { 1501 1486 return false; 1502 1487 } 1488 + 1489 + if (!nci_plen(skb->data)) { 1490 + /* Allow zero length in proprietary notifications (0x20 - 0x3F). */ 1491 + if (nci_opcode_oid(nci_opcode(skb->data)) >= 0x20 && 1492 + nci_mt(skb->data) == NCI_MT_NTF_PKT) 1493 + return true; 1494 + 1495 + /* Disallow zero length otherwise. */ 1496 + return false; 1497 + } 1498 + 1503 1499 return true; 1504 1500 } 1505 1501
+8 -4
net/nfc/nci/data.c
··· 33 33 conn_info = nci_get_conn_info_by_conn_id(ndev, conn_id); 34 34 if (!conn_info) { 35 35 kfree_skb(skb); 36 - goto exit; 36 + clear_bit(NCI_DATA_EXCHANGE, &ndev->flags); 37 + return; 37 38 } 38 39 39 40 cb = conn_info->data_exchange_cb; ··· 46 45 timer_delete_sync(&ndev->data_timer); 47 46 clear_bit(NCI_DATA_EXCHANGE_TO, &ndev->flags); 48 47 48 + /* Mark the exchange as done before calling the callback. 49 + * The callback (e.g. rawsock_data_exchange_complete) may 50 + * want to immediately queue another data exchange. 51 + */ 52 + clear_bit(NCI_DATA_EXCHANGE, &ndev->flags); 53 + 49 54 if (cb) { 50 55 /* forward skb to nfc core */ 51 56 cb(cb_context, skb, err); ··· 61 54 /* no waiting callback, free skb */ 62 55 kfree_skb(skb); 63 56 } 64 - 65 - exit: 66 - clear_bit(NCI_DATA_EXCHANGE, &ndev->flags); 67 57 } 68 58 69 59 /* ----------------- NCI TX Data ----------------- */
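Clearing NCI_DATA_EXCHANGE before invoking the completion callback is deliberate ordering: the callback may immediately queue the next exchange, and with the old clear-after-callback order that re-entry would have bounced off the still-set busy flag. A minimal reproduction of the ordering rule:

#include <stdbool.h>
#include <stdio.h>

static bool busy;

static int start_exchange(void);

static void complete(void (*cb)(void))
{
	/* Clear state BEFORE the callback: it may re-enter start_exchange().
	 * Clearing after the call would reject that legitimate re-entry.
	 */
	busy = false;
	if (cb)
		cb();
}

static int start_exchange(void)
{
	if (busy)
		return -1;	/* would spuriously fire with the old order */
	busy = true;
	return 0;
}

static void resubmit(void)
{
	printf("restart: %d\n", start_exchange());
}

int main(void)
{
	start_exchange();
	complete(resubmit);	/* prints "restart: 0" */
	return 0;
}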
+11
net/nfc/rawsock.c
··· 67 67 if (sock->type == SOCK_RAW) 68 68 nfc_sock_unlink(&raw_sk_list, sk); 69 69 70 + if (sk->sk_state == TCP_ESTABLISHED) { 71 + /* Prevent rawsock_tx_work from starting new transmits and 72 + * wait for any in-progress work to finish. This must happen 73 + * before the socket is orphaned to avoid a race where 74 + * rawsock_tx_work runs after the NCI device has been freed. 75 + */ 76 + sk->sk_shutdown |= SEND_SHUTDOWN; 77 + cancel_work_sync(&nfc_rawsock(sk)->tx_work); 78 + rawsock_write_queue_purge(sk); 79 + } 80 + 70 81 sock_orphan(sk); 71 82 sock_put(sk); 72 83
+10 -4
net/rds/tcp.c
··· 490 490 struct rds_tcp_net *rtn; 491 491 492 492 tcp_sock_set_nodelay(sock->sk); 493 - lock_sock(sk); 494 493 /* TCP timer functions might access net namespace even after 495 494 * a process which created this net namespace terminated. 496 495 */ 497 496 if (!sk->sk_net_refcnt) { 498 - if (!maybe_get_net(net)) { 499 - release_sock(sk); 497 + if (!maybe_get_net(net)) 500 498 return false; 501 - } 499 + /* 500 + * sk_net_refcnt_upgrade() must be called before lock_sock() 501 + * because it does a GFP_KERNEL allocation, which can trigger 502 + * fs_reclaim and create a circular lock dependency with the 503 + * socket lock. The fields it modifies (sk_net_refcnt, 504 + * ns_tracker) are not accessed by any concurrent code path 505 + * at this point. 506 + */ 502 507 sk_net_refcnt_upgrade(sk); 503 508 put_net(net); 504 509 } 510 + lock_sock(sk); 505 511 rtn = net_generic(net, rds_tcp_netid); 506 512 if (rtn->sndbuf_size > 0) { 507 513 sk->sk_sndbuf = rtn->sndbuf_size;
+6
net/sched/act_ct.c
··· 1360 1360 return -EINVAL; 1361 1361 } 1362 1362 1363 + if (bind && !(flags & TCA_ACT_FLAGS_AT_INGRESS_OR_CLSACT)) { 1364 + NL_SET_ERR_MSG_MOD(extack, 1365 + "Attaching ct to a non ingress/clsact qdisc is unsupported"); 1366 + return -EOPNOTSUPP; 1367 + } 1368 + 1363 1369 err = nla_parse_nested(tb, TCA_CT_MAX, nla, ct_policy, extack); 1364 1370 if (err < 0) 1365 1371 return err;
+187 -80
net/sched/act_gate.c
··· 32 32 return KTIME_MAX; 33 33 } 34 34 35 - static void gate_get_start_time(struct tcf_gate *gact, ktime_t *start) 35 + static void tcf_gate_params_free_rcu(struct rcu_head *head); 36 + 37 + static void gate_get_start_time(struct tcf_gate *gact, 38 + const struct tcf_gate_params *param, 39 + ktime_t *start) 36 40 { 37 - struct tcf_gate_params *param = &gact->param; 38 41 ktime_t now, base, cycle; 39 42 u64 n; 40 43 ··· 72 69 { 73 70 struct tcf_gate *gact = container_of(timer, struct tcf_gate, 74 71 hitimer); 75 - struct tcf_gate_params *p = &gact->param; 76 72 struct tcfg_gate_entry *next; 73 + struct tcf_gate_params *p; 77 74 ktime_t close_time, now; 78 75 79 76 spin_lock(&gact->tcf_lock); 80 77 78 + p = rcu_dereference_protected(gact->param, 79 + lockdep_is_held(&gact->tcf_lock)); 81 80 next = gact->next_entry; 82 81 83 82 /* cycle start, clear pending bit, clear total octets */ ··· 230 225 } 231 226 } 232 227 228 + static int tcf_gate_copy_entries(struct tcf_gate_params *dst, 229 + const struct tcf_gate_params *src, 230 + struct netlink_ext_ack *extack) 231 + { 232 + struct tcfg_gate_entry *entry; 233 + int i = 0; 234 + 235 + list_for_each_entry(entry, &src->entries, list) { 236 + struct tcfg_gate_entry *new; 237 + 238 + new = kzalloc(sizeof(*new), GFP_ATOMIC); 239 + if (!new) { 240 + NL_SET_ERR_MSG(extack, "Not enough memory for entry"); 241 + return -ENOMEM; 242 + } 243 + 244 + new->index = entry->index; 245 + new->gate_state = entry->gate_state; 246 + new->interval = entry->interval; 247 + new->ipv = entry->ipv; 248 + new->maxoctets = entry->maxoctets; 249 + list_add_tail(&new->list, &dst->entries); 250 + i++; 251 + } 252 + 253 + dst->num_entries = i; 254 + return 0; 255 + } 256 + 233 257 static int parse_gate_list(struct nlattr *list_attr, 234 258 struct tcf_gate_params *sched, 235 259 struct netlink_ext_ack *extack) ··· 304 270 return err; 305 271 } 306 272 307 - static void gate_setup_timer(struct tcf_gate *gact, u64 basetime, 308 - enum tk_offsets tko, s32 clockid, 309 - bool do_init) 273 + static bool gate_timer_needs_cancel(u64 basetime, u64 old_basetime, 274 + enum tk_offsets tko, 275 + enum tk_offsets old_tko, 276 + s32 clockid, s32 old_clockid) 310 277 { 311 - if (!do_init) { 312 - if (basetime == gact->param.tcfg_basetime && 313 - tko == gact->tk_offset && 314 - clockid == gact->param.tcfg_clockid) 315 - return; 278 + return basetime != old_basetime || 279 + clockid != old_clockid || 280 + tko != old_tko; 281 + } 316 282 317 - spin_unlock_bh(&gact->tcf_lock); 318 - hrtimer_cancel(&gact->hitimer); 319 - spin_lock_bh(&gact->tcf_lock); 283 + static int gate_clock_resolve(s32 clockid, enum tk_offsets *tko, 284 + struct netlink_ext_ack *extack) 285 + { 286 + switch (clockid) { 287 + case CLOCK_REALTIME: 288 + *tko = TK_OFFS_REAL; 289 + return 0; 290 + case CLOCK_MONOTONIC: 291 + *tko = TK_OFFS_MAX; 292 + return 0; 293 + case CLOCK_BOOTTIME: 294 + *tko = TK_OFFS_BOOT; 295 + return 0; 296 + case CLOCK_TAI: 297 + *tko = TK_OFFS_TAI; 298 + return 0; 299 + default: 300 + NL_SET_ERR_MSG(extack, "Invalid 'clockid'"); 301 + return -EINVAL; 320 302 } 321 - gact->param.tcfg_basetime = basetime; 322 - gact->param.tcfg_clockid = clockid; 323 - gact->tk_offset = tko; 324 - hrtimer_setup(&gact->hitimer, gate_timer_func, clockid, HRTIMER_MODE_ABS_SOFT); 303 + } 304 + 305 + static void gate_setup_timer(struct tcf_gate *gact, s32 clockid, 306 + enum tk_offsets tko) 307 + { 308 + WRITE_ONCE(gact->tk_offset, tko); 309 + hrtimer_setup(&gact->hitimer, gate_timer_func, clockid, 310 + 
HRTIMER_MODE_ABS_SOFT); 325 311 } 326 312 327 313 static int tcf_gate_init(struct net *net, struct nlattr *nla, ··· 350 296 struct netlink_ext_ack *extack) 351 297 { 352 298 struct tc_action_net *tn = net_generic(net, act_gate_ops.net_id); 353 - enum tk_offsets tk_offset = TK_OFFS_TAI; 299 + u64 cycletime = 0, basetime = 0, cycletime_ext = 0; 300 + struct tcf_gate_params *p = NULL, *old_p = NULL; 301 + enum tk_offsets old_tk_offset = TK_OFFS_TAI; 302 + const struct tcf_gate_params *cur_p = NULL; 354 303 bool bind = flags & TCA_ACT_FLAGS_BIND; 355 304 struct nlattr *tb[TCA_GATE_MAX + 1]; 305 + enum tk_offsets tko = TK_OFFS_TAI; 356 306 struct tcf_chain *goto_ch = NULL; 357 - u64 cycletime = 0, basetime = 0; 358 - struct tcf_gate_params *p; 307 + s32 timer_clockid = CLOCK_TAI; 308 + bool use_old_entries = false; 309 + s32 old_clockid = CLOCK_TAI; 310 + bool need_cancel = false; 359 311 s32 clockid = CLOCK_TAI; 360 312 struct tcf_gate *gact; 361 313 struct tc_gate *parm; 314 + u64 old_basetime = 0; 362 315 int ret = 0, err; 363 316 u32 gflags = 0; 364 317 s32 prio = -1; ··· 382 321 if (!tb[TCA_GATE_PARMS]) 383 322 return -EINVAL; 384 323 385 - if (tb[TCA_GATE_CLOCKID]) { 324 + if (tb[TCA_GATE_CLOCKID]) 386 325 clockid = nla_get_s32(tb[TCA_GATE_CLOCKID]); 387 - switch (clockid) { 388 - case CLOCK_REALTIME: 389 - tk_offset = TK_OFFS_REAL; 390 - break; 391 - case CLOCK_MONOTONIC: 392 - tk_offset = TK_OFFS_MAX; 393 - break; 394 - case CLOCK_BOOTTIME: 395 - tk_offset = TK_OFFS_BOOT; 396 - break; 397 - case CLOCK_TAI: 398 - tk_offset = TK_OFFS_TAI; 399 - break; 400 - default: 401 - NL_SET_ERR_MSG(extack, "Invalid 'clockid'"); 402 - return -EINVAL; 403 - } 404 - } 405 326 406 327 parm = nla_data(tb[TCA_GATE_PARMS]); 407 328 index = parm->index; ··· 409 366 return -EEXIST; 410 367 } 411 368 369 + gact = to_gate(*a); 370 + 371 + err = tcf_action_check_ctrlact(parm->action, tp, &goto_ch, extack); 372 + if (err < 0) 373 + goto release_idr; 374 + 375 + p = kzalloc(sizeof(*p), GFP_KERNEL); 376 + if (!p) { 377 + err = -ENOMEM; 378 + goto chain_put; 379 + } 380 + INIT_LIST_HEAD(&p->entries); 381 + 382 + use_old_entries = !tb[TCA_GATE_ENTRY_LIST]; 383 + if (!use_old_entries) { 384 + err = parse_gate_list(tb[TCA_GATE_ENTRY_LIST], p, extack); 385 + if (err < 0) 386 + goto err_free; 387 + use_old_entries = !err; 388 + } 389 + 390 + if (ret == ACT_P_CREATED && use_old_entries) { 391 + NL_SET_ERR_MSG(extack, "The entry list is empty"); 392 + err = -EINVAL; 393 + goto err_free; 394 + } 395 + 396 + if (ret != ACT_P_CREATED) { 397 + rcu_read_lock(); 398 + cur_p = rcu_dereference(gact->param); 399 + 400 + old_basetime = cur_p->tcfg_basetime; 401 + old_clockid = cur_p->tcfg_clockid; 402 + old_tk_offset = READ_ONCE(gact->tk_offset); 403 + 404 + basetime = old_basetime; 405 + cycletime_ext = cur_p->tcfg_cycletime_ext; 406 + prio = cur_p->tcfg_priority; 407 + gflags = cur_p->tcfg_flags; 408 + 409 + if (!tb[TCA_GATE_CLOCKID]) 410 + clockid = old_clockid; 411 + 412 + err = 0; 413 + if (use_old_entries) { 414 + err = tcf_gate_copy_entries(p, cur_p, extack); 415 + if (!err && !tb[TCA_GATE_CYCLE_TIME]) 416 + cycletime = cur_p->tcfg_cycletime; 417 + } 418 + rcu_read_unlock(); 419 + if (err) 420 + goto err_free; 421 + } 422 + 412 423 if (tb[TCA_GATE_PRIORITY]) 413 424 prio = nla_get_s32(tb[TCA_GATE_PRIORITY]); 414 425 ··· 472 375 if (tb[TCA_GATE_FLAGS]) 473 376 gflags = nla_get_u32(tb[TCA_GATE_FLAGS]); 474 377 475 - gact = to_gate(*a); 476 - if (ret == ACT_P_CREATED) 477 - INIT_LIST_HEAD(&gact->param.entries); 478 - 479 - err = 
tcf_action_check_ctrlact(parm->action, tp, &goto_ch, extack); 480 - if (err < 0) 481 - goto release_idr; 482 - 483 - spin_lock_bh(&gact->tcf_lock); 484 - p = &gact->param; 485 - 486 378 if (tb[TCA_GATE_CYCLE_TIME]) 487 379 cycletime = nla_get_u64(tb[TCA_GATE_CYCLE_TIME]); 488 380 489 - if (tb[TCA_GATE_ENTRY_LIST]) { 490 - err = parse_gate_list(tb[TCA_GATE_ENTRY_LIST], p, extack); 491 - if (err < 0) 492 - goto chain_put; 493 - } 381 + if (tb[TCA_GATE_CYCLE_TIME_EXT]) 382 + cycletime_ext = nla_get_u64(tb[TCA_GATE_CYCLE_TIME_EXT]); 383 + 384 + err = gate_clock_resolve(clockid, &tko, extack); 385 + if (err) 386 + goto err_free; 387 + timer_clockid = clockid; 388 + 389 + need_cancel = ret != ACT_P_CREATED && 390 + gate_timer_needs_cancel(basetime, old_basetime, 391 + tko, old_tk_offset, 392 + timer_clockid, old_clockid); 393 + 394 + if (need_cancel) 395 + hrtimer_cancel(&gact->hitimer); 396 + 397 + spin_lock_bh(&gact->tcf_lock); 494 398 495 399 if (!cycletime) { 496 400 struct tcfg_gate_entry *entry; ··· 500 402 list_for_each_entry(entry, &p->entries, list) 501 403 cycle = ktime_add_ns(cycle, entry->interval); 502 404 cycletime = cycle; 503 - if (!cycletime) { 504 - err = -EINVAL; 505 - goto chain_put; 506 - } 507 405 } 508 406 p->tcfg_cycletime = cycletime; 407 + p->tcfg_cycletime_ext = cycletime_ext; 509 408 510 - if (tb[TCA_GATE_CYCLE_TIME_EXT]) 511 - p->tcfg_cycletime_ext = 512 - nla_get_u64(tb[TCA_GATE_CYCLE_TIME_EXT]); 513 - 514 - gate_setup_timer(gact, basetime, tk_offset, clockid, 515 - ret == ACT_P_CREATED); 409 + if (need_cancel || ret == ACT_P_CREATED) 410 + gate_setup_timer(gact, timer_clockid, tko); 516 411 p->tcfg_priority = prio; 517 412 p->tcfg_flags = gflags; 518 - gate_get_start_time(gact, &start); 413 + p->tcfg_basetime = basetime; 414 + p->tcfg_clockid = timer_clockid; 415 + gate_get_start_time(gact, p, &start); 416 + 417 + old_p = rcu_replace_pointer(gact->param, p, 418 + lockdep_is_held(&gact->tcf_lock)); 519 419 520 420 gact->current_close_time = start; 521 421 gact->current_gate_status = GATE_ACT_GATE_OPEN | GATE_ACT_PENDING; ··· 530 434 if (goto_ch) 531 435 tcf_chain_put_by_act(goto_ch); 532 436 437 + if (old_p) 438 + call_rcu(&old_p->rcu, tcf_gate_params_free_rcu); 439 + 533 440 return ret; 534 441 442 + err_free: 443 + release_entry_list(&p->entries); 444 + kfree(p); 535 445 chain_put: 536 - spin_unlock_bh(&gact->tcf_lock); 537 - 538 446 if (goto_ch) 539 447 tcf_chain_put_by_act(goto_ch); 540 448 release_idr: ··· 546 446 * without taking tcf_lock. 
547 447 */ 548 448 if (ret == ACT_P_CREATED) 549 - gate_setup_timer(gact, gact->param.tcfg_basetime, 550 - gact->tk_offset, gact->param.tcfg_clockid, 551 - true); 449 + gate_setup_timer(gact, timer_clockid, tko); 450 + 552 451 tcf_idr_release(*a, bind); 553 452 return err; 453 + } 454 + 455 + static void tcf_gate_params_free_rcu(struct rcu_head *head) 456 + { 457 + struct tcf_gate_params *p = container_of(head, struct tcf_gate_params, rcu); 458 + 459 + release_entry_list(&p->entries); 460 + kfree(p); 554 461 } 555 462 556 463 static void tcf_gate_cleanup(struct tc_action *a) ··· 565 458 struct tcf_gate *gact = to_gate(a); 566 459 struct tcf_gate_params *p; 567 460 568 - p = &gact->param; 569 461 hrtimer_cancel(&gact->hitimer); 570 - release_entry_list(&p->entries); 462 + p = rcu_dereference_protected(gact->param, 1); 463 + if (p) 464 + call_rcu(&p->rcu, tcf_gate_params_free_rcu); 571 465 } 572 466 573 467 static int dumping_entry(struct sk_buff *skb, ··· 617 509 struct nlattr *entry_list; 618 510 struct tcf_t t; 619 511 620 - spin_lock_bh(&gact->tcf_lock); 621 - opt.action = gact->tcf_action; 622 - 623 - p = &gact->param; 512 + rcu_read_lock(); 513 + opt.action = READ_ONCE(gact->tcf_action); 514 + p = rcu_dereference(gact->param); 624 515 625 516 if (nla_put(skb, TCA_GATE_PARMS, sizeof(opt), &opt)) 626 517 goto nla_put_failure; ··· 659 552 tcf_tm_dump(&t, &gact->tcf_tm); 660 553 if (nla_put_64bit(skb, TCA_GATE_TM, sizeof(t), &t, TCA_GATE_PAD)) 661 554 goto nla_put_failure; 662 - spin_unlock_bh(&gact->tcf_lock); 555 + rcu_read_unlock(); 663 556 664 557 return skb->len; 665 558 666 559 nla_put_failure: 667 - spin_unlock_bh(&gact->tcf_lock); 560 + rcu_read_unlock(); 668 561 nlmsg_trim(skb, b); 669 562 return -1; 670 563 }
+44 -49
net/sched/act_ife.c
··· 293 293 /* called when adding new meta information 294 294 */ 295 295 static int __add_metainfo(const struct tcf_meta_ops *ops, 296 - struct tcf_ife_info *ife, u32 metaid, void *metaval, 297 - int len, bool atomic, bool exists) 296 + struct tcf_ife_params *p, u32 metaid, void *metaval, 297 + int len, bool atomic) 298 298 { 299 299 struct tcf_meta_info *mi = NULL; 300 300 int ret = 0; ··· 313 313 } 314 314 } 315 315 316 - if (exists) 317 - spin_lock_bh(&ife->tcf_lock); 318 - list_add_tail(&mi->metalist, &ife->metalist); 319 - if (exists) 320 - spin_unlock_bh(&ife->tcf_lock); 316 + list_add_tail(&mi->metalist, &p->metalist); 321 317 322 318 return ret; 323 319 } 324 320 325 321 static int add_metainfo_and_get_ops(const struct tcf_meta_ops *ops, 326 - struct tcf_ife_info *ife, u32 metaid, 327 - bool exists) 322 + struct tcf_ife_params *p, u32 metaid) 328 323 { 329 324 int ret; 330 325 331 326 if (!try_module_get(ops->owner)) 332 327 return -ENOENT; 333 - ret = __add_metainfo(ops, ife, metaid, NULL, 0, true, exists); 328 + ret = __add_metainfo(ops, p, metaid, NULL, 0, true); 334 329 if (ret) 335 330 module_put(ops->owner); 336 331 return ret; 337 332 } 338 333 339 - static int add_metainfo(struct tcf_ife_info *ife, u32 metaid, void *metaval, 340 - int len, bool exists) 334 + static int add_metainfo(struct tcf_ife_params *p, u32 metaid, void *metaval, 335 + int len) 341 336 { 342 337 const struct tcf_meta_ops *ops = find_ife_oplist(metaid); 343 338 int ret; 344 339 345 340 if (!ops) 346 341 return -ENOENT; 347 - ret = __add_metainfo(ops, ife, metaid, metaval, len, false, exists); 342 + ret = __add_metainfo(ops, p, metaid, metaval, len, false); 348 343 if (ret) 349 344 /*put back what find_ife_oplist took */ 350 345 module_put(ops->owner); 351 346 return ret; 352 347 } 353 348 354 - static int use_all_metadata(struct tcf_ife_info *ife, bool exists) 349 + static int use_all_metadata(struct tcf_ife_params *p) 355 350 { 356 351 struct tcf_meta_ops *o; 357 352 int rc = 0; ··· 354 359 355 360 read_lock(&ife_mod_lock); 356 361 list_for_each_entry(o, &ifeoplist, list) { 357 - rc = add_metainfo_and_get_ops(o, ife, o->metaid, exists); 362 + rc = add_metainfo_and_get_ops(o, p, o->metaid); 358 363 if (rc == 0) 359 364 installed += 1; 360 365 } ··· 366 371 return -EINVAL; 367 372 } 368 373 369 - static int dump_metalist(struct sk_buff *skb, struct tcf_ife_info *ife) 374 + static int dump_metalist(struct sk_buff *skb, struct tcf_ife_params *p) 370 375 { 371 376 struct tcf_meta_info *e; 372 377 struct nlattr *nest; ··· 374 379 int total_encoded = 0; 375 380 376 381 /*can only happen on decode */ 377 - if (list_empty(&ife->metalist)) 382 + if (list_empty(&p->metalist)) 378 383 return 0; 379 384 380 385 nest = nla_nest_start_noflag(skb, TCA_IFE_METALST); 381 386 if (!nest) 382 387 goto out_nlmsg_trim; 383 388 384 - list_for_each_entry(e, &ife->metalist, metalist) { 389 + list_for_each_entry(e, &p->metalist, metalist) { 385 390 if (!e->ops->get(skb, e)) 386 391 total_encoded += 1; 387 392 } ··· 398 403 return -1; 399 404 } 400 405 401 - /* under ife->tcf_lock */ 402 - static void _tcf_ife_cleanup(struct tc_action *a) 406 + static void __tcf_ife_cleanup(struct tcf_ife_params *p) 403 407 { 404 - struct tcf_ife_info *ife = to_ife(a); 405 408 struct tcf_meta_info *e, *n; 406 409 407 - list_for_each_entry_safe(e, n, &ife->metalist, metalist) { 410 + list_for_each_entry_safe(e, n, &p->metalist, metalist) { 408 411 list_del(&e->metalist); 409 412 if (e->metaval) { 410 413 if (e->ops->release) ··· 415 422 } 416 423 
} 417 424 425 + static void tcf_ife_cleanup_params(struct rcu_head *head) 426 + { 427 + struct tcf_ife_params *p = container_of(head, struct tcf_ife_params, 428 + rcu); 429 + 430 + __tcf_ife_cleanup(p); 431 + kfree(p); 432 + } 433 + 418 434 static void tcf_ife_cleanup(struct tc_action *a) 419 435 { 420 436 struct tcf_ife_info *ife = to_ife(a); 421 437 struct tcf_ife_params *p; 422 438 423 - spin_lock_bh(&ife->tcf_lock); 424 - _tcf_ife_cleanup(a); 425 - spin_unlock_bh(&ife->tcf_lock); 426 - 427 439 p = rcu_dereference_protected(ife->params, 1); 428 440 if (p) 429 - kfree_rcu(p, rcu); 441 + call_rcu(&p->rcu, tcf_ife_cleanup_params); 430 442 } 431 443 432 444 static int load_metalist(struct nlattr **tb, bool rtnl_held) ··· 453 455 return 0; 454 456 } 455 457 456 - static int populate_metalist(struct tcf_ife_info *ife, struct nlattr **tb, 457 - bool exists, bool rtnl_held) 458 + static int populate_metalist(struct tcf_ife_params *p, struct nlattr **tb) 458 459 { 459 460 int len = 0; 460 461 int rc = 0; ··· 465 468 val = nla_data(tb[i]); 466 469 len = nla_len(tb[i]); 467 470 468 - rc = add_metainfo(ife, i, val, len, exists); 471 + rc = add_metainfo(p, i, val, len); 469 472 if (rc) 470 473 return rc; 471 474 } ··· 520 523 p = kzalloc_obj(*p); 521 524 if (!p) 522 525 return -ENOMEM; 526 + INIT_LIST_HEAD(&p->metalist); 523 527 524 528 if (tb[TCA_IFE_METALST]) { 525 529 err = nla_parse_nested_deprecated(tb2, IFE_META_MAX, ··· 565 567 } 566 568 567 569 ife = to_ife(*a); 568 - if (ret == ACT_P_CREATED) 569 - INIT_LIST_HEAD(&ife->metalist); 570 570 571 571 err = tcf_action_check_ctrlact(parm->action, tp, &goto_ch, extack); 572 572 if (err < 0) ··· 596 600 } 597 601 598 602 if (tb[TCA_IFE_METALST]) { 599 - err = populate_metalist(ife, tb2, exists, 600 - !(flags & TCA_ACT_FLAGS_NO_RTNL)); 603 + err = populate_metalist(p, tb2); 601 604 if (err) 602 605 goto metadata_parse_err; 603 606 } else { ··· 605 610 * as we can. 
You better have at least one else we are 606 611 * going to bail out 607 612 */ 608 - err = use_all_metadata(ife, exists); 613 + err = use_all_metadata(p); 609 614 if (err) 610 615 goto metadata_parse_err; 611 616 } ··· 621 626 if (goto_ch) 622 627 tcf_chain_put_by_act(goto_ch); 623 628 if (p) 624 - kfree_rcu(p, rcu); 629 + call_rcu(&p->rcu, tcf_ife_cleanup_params); 625 630 626 631 return ret; 627 632 metadata_parse_err: 628 633 if (goto_ch) 629 634 tcf_chain_put_by_act(goto_ch); 630 635 release_idr: 636 + __tcf_ife_cleanup(p); 631 637 kfree(p); 632 638 tcf_idr_release(*a, bind); 633 639 return err; ··· 675 679 if (nla_put(skb, TCA_IFE_TYPE, 2, &p->eth_type)) 676 680 goto nla_put_failure; 677 681 678 - if (dump_metalist(skb, ife)) { 682 + if (dump_metalist(skb, p)) { 679 683 /*ignore failure to dump metalist */ 680 684 pr_info("Failed to dump metalist\n"); 681 685 } ··· 689 693 return -1; 690 694 } 691 695 692 - static int find_decode_metaid(struct sk_buff *skb, struct tcf_ife_info *ife, 696 + static int find_decode_metaid(struct sk_buff *skb, struct tcf_ife_params *p, 693 697 u16 metaid, u16 mlen, void *mdata) 694 698 { 695 699 struct tcf_meta_info *e; 696 700 697 701 /* XXX: use hash to speed up */ 698 - list_for_each_entry(e, &ife->metalist, metalist) { 702 + list_for_each_entry_rcu(e, &p->metalist, metalist) { 699 703 if (metaid == e->metaid) { 700 704 if (e->ops) { 701 705 /* We check for decode presence already */ ··· 712 716 { 713 717 struct tcf_ife_info *ife = to_ife(a); 714 718 int action = ife->tcf_action; 719 + struct tcf_ife_params *p; 715 720 u8 *ifehdr_end; 716 721 u8 *tlv_data; 717 722 u16 metalen; 723 + 724 + p = rcu_dereference_bh(ife->params); 718 725 719 726 bstats_update(this_cpu_ptr(ife->common.cpu_bstats), skb); 720 727 tcf_lastuse_update(&ife->tcf_tm); ··· 744 745 return TC_ACT_SHOT; 745 746 } 746 747 747 - if (find_decode_metaid(skb, ife, mtype, dlen, curr_data)) { 748 + if (find_decode_metaid(skb, p, mtype, dlen, curr_data)) { 748 749 /* abuse overlimits to count when we receive metadata 749 750 * but dont have an ops for it 750 751 */ ··· 768 769 /*XXX: check if we can do this at install time instead of current 769 770 * send data path 770 771 **/ 771 - static int ife_get_sz(struct sk_buff *skb, struct tcf_ife_info *ife) 772 + static int ife_get_sz(struct sk_buff *skb, struct tcf_ife_params *p) 772 773 { 773 - struct tcf_meta_info *e, *n; 774 + struct tcf_meta_info *e; 774 775 int tot_run_sz = 0, run_sz = 0; 775 776 776 - list_for_each_entry_safe(e, n, &ife->metalist, metalist) { 777 + list_for_each_entry_rcu(e, &p->metalist, metalist) { 777 778 if (e->ops->check_presence) { 778 779 run_sz = e->ops->check_presence(skb, e); 779 780 tot_run_sz += run_sz; ··· 794 795 OUTERHDR:TOTMETALEN:{TLVHDR:Metadatum:TLVHDR..}:ORIGDATA 795 796 where ORIGDATA = original ethernet header ... 796 797 */ 797 - u16 metalen = ife_get_sz(skb, ife); 798 + u16 metalen = ife_get_sz(skb, p); 798 799 int hdrm = metalen + skb->dev->hard_header_len + IFE_METAHDRLEN; 799 800 unsigned int skboff = 0; 800 801 int new_len = skb->len + hdrm; ··· 832 833 if (!ife_meta) 833 834 goto drop; 834 835 835 - spin_lock(&ife->tcf_lock); 836 - 837 836 /* XXX: we dont have a clever way of telling encode to 838 837 * not repeat some of the computations that are done by 839 838 * ops->presence_check... 
840 839 */ 841 - list_for_each_entry(e, &ife->metalist, metalist) { 840 + list_for_each_entry_rcu(e, &p->metalist, metalist) { 842 841 if (e->ops->encode) { 843 842 err = e->ops->encode(skb, (void *)(ife_meta + skboff), 844 843 e); 845 844 } 846 845 if (err < 0) { 847 846 /* too corrupt to keep around if overwritten */ 848 - spin_unlock(&ife->tcf_lock); 849 847 goto drop; 850 848 } 851 849 skboff += err; 852 850 } 853 - spin_unlock(&ife->tcf_lock); 854 851 oethh = (struct ethhdr *)skb->data; 855 852 856 853 if (!is_zero_ether_addr(p->eth_src))
+7
net/sched/cls_api.c
··· 2228 2228 return (TC_H_MIN(classid) == TC_H_MIN(TC_H_MIN_INGRESS)); 2229 2229 } 2230 2230 2231 + static bool is_ingress_or_clsact(struct tcf_block *block, struct Qdisc *q) 2232 + { 2233 + return tcf_block_shared(block) || (q && !!(q->flags & TCQ_F_INGRESS)); 2234 + } 2235 + 2231 2236 static int tc_new_tfilter(struct sk_buff *skb, struct nlmsghdr *n, 2232 2237 struct netlink_ext_ack *extack) 2233 2238 { ··· 2425 2420 flags |= TCA_ACT_FLAGS_NO_RTNL; 2426 2421 if (is_qdisc_ingress(parent)) 2427 2422 flags |= TCA_ACT_FLAGS_AT_INGRESS; 2423 + if (is_ingress_or_clsact(block, q)) 2424 + flags |= TCA_ACT_FLAGS_AT_INGRESS_OR_CLSACT; 2428 2425 err = tp->ops->change(net, skb, tp, cl, t->tcm_handle, tca, &fh, 2429 2426 flags, extack); 2430 2427 if (err == 0) {
+25 -28
net/sched/sch_cake.c
··· 391 391 1239850263, 1191209601, 1147878294, 1108955788 392 392 }; 393 393 394 - static void cake_set_rate(struct cake_tin_data *b, u64 rate, u32 mtu, 395 - u64 target_ns, u64 rtt_est_ns); 394 + static void cake_configure_rates(struct Qdisc *sch, u64 rate, bool rate_adjust); 395 + 396 396 /* http://en.wikipedia.org/wiki/Methods_of_computing_square_roots 397 397 * new_invsqrt = (invsqrt / 2) * (3 - count * invsqrt^2) 398 398 * ··· 2013 2013 u64 delay; 2014 2014 u32 len; 2015 2015 2016 - if (q->config->is_shared && now - q->last_checked_active >= q->config->sync_time) { 2016 + if (q->config->is_shared && q->rate_ns && 2017 + now - q->last_checked_active >= q->config->sync_time) { 2017 2018 struct net_device *dev = qdisc_dev(sch); 2018 2019 struct cake_sched_data *other_priv; 2019 2020 u64 new_rate = q->config->rate_bps; ··· 2040 2039 if (num_active_qs > 1) 2041 2040 new_rate = div64_u64(q->config->rate_bps, num_active_qs); 2042 2041 2043 - /* mtu = 0 is used to only update the rate and not mess with cobalt params */ 2044 - cake_set_rate(b, new_rate, 0, 0, 0); 2042 + cake_configure_rates(sch, new_rate, true); 2045 2043 q->last_checked_active = now; 2046 2044 q->active_queues = num_active_qs; 2047 - q->rate_ns = b->tin_rate_ns; 2048 - q->rate_shft = b->tin_rate_shft; 2049 2045 } 2050 2046 2051 2047 begin: ··· 2359 2361 b->cparams.p_dec = 1 << 20; /* 1/4096 */ 2360 2362 } 2361 2363 2362 - static int cake_config_besteffort(struct Qdisc *sch) 2364 + static int cake_config_besteffort(struct Qdisc *sch, u64 rate, u32 mtu) 2363 2365 { 2364 2366 struct cake_sched_data *q = qdisc_priv(sch); 2365 2367 struct cake_tin_data *b = &q->tins[0]; 2366 - u32 mtu = psched_mtu(qdisc_dev(sch)); 2367 - u64 rate = q->config->rate_bps; 2368 2368 2369 2369 q->tin_cnt = 1; 2370 2370 ··· 2376 2380 return 0; 2377 2381 } 2378 2382 2379 - static int cake_config_precedence(struct Qdisc *sch) 2383 + static int cake_config_precedence(struct Qdisc *sch, u64 rate, u32 mtu) 2380 2384 { 2381 2385 /* convert high-level (user visible) parameters into internal format */ 2382 2386 struct cake_sched_data *q = qdisc_priv(sch); 2383 - u32 mtu = psched_mtu(qdisc_dev(sch)); 2384 - u64 rate = q->config->rate_bps; 2385 2387 u32 quantum = 256; 2386 2388 u32 i; 2387 2389 ··· 2450 2456 * Total 12 traffic classes. 2451 2457 */ 2452 2458 2453 - static int cake_config_diffserv8(struct Qdisc *sch) 2459 + static int cake_config_diffserv8(struct Qdisc *sch, u64 rate, u32 mtu) 2454 2460 { 2455 2461 /* Pruned list of traffic classes for typical applications: 2456 2462 * ··· 2467 2473 */ 2468 2474 2469 2475 struct cake_sched_data *q = qdisc_priv(sch); 2470 - u32 mtu = psched_mtu(qdisc_dev(sch)); 2471 - u64 rate = q->config->rate_bps; 2472 2476 u32 quantum = 256; 2473 2477 u32 i; 2474 2478 ··· 2496 2504 return 0; 2497 2505 } 2498 2506 2499 - static int cake_config_diffserv4(struct Qdisc *sch) 2507 + static int cake_config_diffserv4(struct Qdisc *sch, u64 rate, u32 mtu) 2500 2508 { 2501 2509 /* Further pruned list of traffic classes for four-class system: 2502 2510 * ··· 2509 2517 */ 2510 2518 2511 2519 struct cake_sched_data *q = qdisc_priv(sch); 2512 - u32 mtu = psched_mtu(qdisc_dev(sch)); 2513 - u64 rate = q->config->rate_bps; 2514 2520 u32 quantum = 1024; 2515 2521 2516 2522 q->tin_cnt = 4; ··· 2536 2546 return 0; 2537 2547 } 2538 2548 2539 - static int cake_config_diffserv3(struct Qdisc *sch) 2549 + static int cake_config_diffserv3(struct Qdisc *sch, u64 rate, u32 mtu) 2540 2550 { 2541 2551 /* Simplified Diffserv structure with 3 tins. 
2542 2552 * Latency Sensitive (CS7, CS6, EF, VA, TOS4) ··· 2544 2554 * Low Priority (LE, CS1) 2545 2555 */ 2546 2556 struct cake_sched_data *q = qdisc_priv(sch); 2547 - u32 mtu = psched_mtu(qdisc_dev(sch)); 2548 - u64 rate = q->config->rate_bps; 2549 2557 u32 quantum = 1024; 2550 2558 2551 2559 q->tin_cnt = 3; ··· 2568 2580 return 0; 2569 2581 } 2570 2582 2571 - static void cake_reconfigure(struct Qdisc *sch) 2583 + static void cake_configure_rates(struct Qdisc *sch, u64 rate, bool rate_adjust) 2572 2584 { 2585 + u32 mtu = likely(rate_adjust) ? 0 : psched_mtu(qdisc_dev(sch)); 2573 2586 struct cake_sched_data *qd = qdisc_priv(sch); 2574 2587 struct cake_sched_config *q = qd->config; 2575 2588 int c, ft; 2576 2589 2577 2590 switch (q->tin_mode) { 2578 2591 case CAKE_DIFFSERV_BESTEFFORT: 2579 - ft = cake_config_besteffort(sch); 2592 + ft = cake_config_besteffort(sch, rate, mtu); 2580 2593 break; 2581 2594 2582 2595 case CAKE_DIFFSERV_PRECEDENCE: 2583 - ft = cake_config_precedence(sch); 2596 + ft = cake_config_precedence(sch, rate, mtu); 2584 2597 break; 2585 2598 2586 2599 case CAKE_DIFFSERV_DIFFSERV8: 2587 - ft = cake_config_diffserv8(sch); 2600 + ft = cake_config_diffserv8(sch, rate, mtu); 2588 2601 break; 2589 2602 2590 2603 case CAKE_DIFFSERV_DIFFSERV4: 2591 - ft = cake_config_diffserv4(sch); 2604 + ft = cake_config_diffserv4(sch, rate, mtu); 2592 2605 break; 2593 2606 2594 2607 case CAKE_DIFFSERV_DIFFSERV3: 2595 2608 default: 2596 - ft = cake_config_diffserv3(sch); 2609 + ft = cake_config_diffserv3(sch, rate, mtu); 2597 2610 break; 2598 2611 } 2599 2612 ··· 2605 2616 2606 2617 qd->rate_ns = qd->tins[ft].tin_rate_ns; 2607 2618 qd->rate_shft = qd->tins[ft].tin_rate_shft; 2619 + } 2620 + 2621 + static void cake_reconfigure(struct Qdisc *sch) 2622 + { 2623 + struct cake_sched_data *qd = qdisc_priv(sch); 2624 + struct cake_sched_config *q = qd->config; 2625 + 2626 + cake_configure_rates(sch, qd->config->rate_bps, false); 2608 2627 2609 2628 if (q->buffer_config_limit) { 2610 2629 qd->buffer_limit = q->buffer_config_limit;
+8 -4
net/sched/sch_ets.c
··· 115 115 struct ets_sched *q = qdisc_priv(sch); 116 116 struct tc_ets_qopt_offload qopt; 117 117 unsigned int w_psum_prev = 0; 118 - unsigned int q_psum = 0; 119 - unsigned int q_sum = 0; 120 118 unsigned int quantum; 121 119 unsigned int w_psum; 122 120 unsigned int weight; 123 121 unsigned int i; 122 + u64 q_psum = 0; 123 + u64 q_sum = 0; 124 124 125 125 if (!tc_can_offload(dev) || !dev->netdev_ops->ndo_setup_tc) 126 126 return; ··· 138 138 139 139 for (i = 0; i < q->nbands; i++) { 140 140 quantum = q->classes[i].quantum; 141 - q_psum += quantum; 142 - w_psum = quantum ? q_psum * 100 / q_sum : 0; 141 + if (quantum) { 142 + q_psum += quantum; 143 + w_psum = div64_u64(q_psum * 100, q_sum); 144 + } else { 145 + w_psum = 0; 146 + } 143 147 weight = w_psum - w_psum_prev; 144 148 w_psum_prev = w_psum; 145 149
+1
net/sched/sch_fq.c
··· 827 827 for (idx = 0; idx < FQ_BANDS; idx++) { 828 828 q->band_flows[idx].new_flows.first = NULL; 829 829 q->band_flows[idx].old_flows.first = NULL; 830 + q->band_pkt_count[idx] = 0; 830 831 } 831 832 q->delayed = RB_ROOT; 832 833 q->flows = 0;
+4 -4
net/unix/af_unix.c
··· 1785 1785 __skb_queue_tail(&other->sk_receive_queue, skb); 1786 1786 spin_unlock(&other->sk_receive_queue.lock); 1787 1787 unix_state_unlock(other); 1788 - other->sk_data_ready(other); 1788 + READ_ONCE(other->sk_data_ready)(other); 1789 1789 sock_put(other); 1790 1790 return 0; 1791 1791 ··· 2278 2278 scm_stat_add(other, skb); 2279 2279 skb_queue_tail(&other->sk_receive_queue, skb); 2280 2280 unix_state_unlock(other); 2281 - other->sk_data_ready(other); 2281 + READ_ONCE(other->sk_data_ready)(other); 2282 2282 sock_put(other); 2283 2283 scm_destroy(&scm); 2284 2284 return len; ··· 2351 2351 2352 2352 sk_send_sigurg(other); 2353 2353 unix_state_unlock(other); 2354 - other->sk_data_ready(other); 2354 + READ_ONCE(other->sk_data_ready)(other); 2355 2355 2356 2356 return 0; 2357 2357 out_unlock: ··· 2477 2477 spin_unlock(&other->sk_receive_queue.lock); 2478 2478 2479 2479 unix_state_unlock(other); 2480 - other->sk_data_ready(other); 2480 + READ_ONCE(other->sk_data_ready)(other); 2481 2481 sent += size; 2482 2482 } 2483 2483
+17 -11
net/xdp/xsk.c
··· 167 167 struct xdp_buff_xsk *pos, *tmp; 168 168 struct list_head *xskb_list; 169 169 u32 contd = 0; 170 + u32 num_desc; 170 171 int err; 171 172 172 - if (frags) 173 - contd = XDP_PKT_CONTD; 174 - 175 - err = __xsk_rcv_zc(xs, xskb, len, contd); 176 - if (err) 177 - goto err; 178 - if (likely(!frags)) 173 + if (likely(!frags)) { 174 + err = __xsk_rcv_zc(xs, xskb, len, contd); 175 + if (err) 176 + goto err; 179 177 return 0; 178 + } 180 179 180 + contd = XDP_PKT_CONTD; 181 + num_desc = xdp_get_shared_info_from_buff(xdp)->nr_frags + 1; 182 + if (xskq_prod_nb_free(xs->rx, num_desc) < num_desc) { 183 + xs->rx_queue_full++; 184 + err = -ENOBUFS; 185 + goto err; 186 + } 187 + 188 + __xsk_rcv_zc(xs, xskb, len, contd); 181 189 xskb_list = &xskb->pool->xskb_list; 182 190 list_for_each_entry_safe(pos, tmp, xskb_list, list_node) { 183 191 if (list_is_singular(xskb_list)) 184 192 contd = 0; 185 193 len = pos->xdp.data_end - pos->xdp.data; 186 - err = __xsk_rcv_zc(xs, pos, len, contd); 187 - if (err) 188 - goto err; 189 - list_del(&pos->list_node); 194 + __xsk_rcv_zc(xs, pos, len, contd); 195 + list_del_init(&pos->list_node); 190 196 } 191 197 192 198 return 0;
+8
rust/kernel/kunit.rs
··· 14 14 /// Public but hidden since it should only be used from KUnit generated code. 15 15 #[doc(hidden)] 16 16 pub fn err(args: fmt::Arguments<'_>) { 17 + // `args` is unused if `CONFIG_PRINTK` is not set - this avoids a build-time warning. 18 + #[cfg(not(CONFIG_PRINTK))] 19 + let _ = args; 20 + 17 21 // SAFETY: The format string is null-terminated and the `%pA` specifier matches the argument we 18 22 // are passing. 19 23 #[cfg(CONFIG_PRINTK)] ··· 34 30 /// Public but hidden since it should only be used from KUnit generated code. 35 31 #[doc(hidden)] 36 32 pub fn info(args: fmt::Arguments<'_>) { 33 + // `args` is unused if `CONFIG_PRINTK` is not set - this avoids a build-time warning. 34 + #[cfg(not(CONFIG_PRINTK))] 35 + let _ = args; 36 + 37 37 // SAFETY: The format string is null-terminated and the `%pA` specifier matches the argument we 38 38 // are passing. 39 39 #[cfg(CONFIG_PRINTK)]
+2 -2
scripts/genksyms/parse.y
··· 325 325 { $$ = $4; } 326 326 | direct_declarator BRACKET_PHRASE 327 327 { $$ = $2; } 328 - | '(' declarator ')' 329 - { $$ = $3; } 328 + | '(' attribute_opt declarator ')' 329 + { $$ = $4; } 330 330 ; 331 331 332 332 /* Nested declarators differ from regular declarators in that they do
+4
scripts/package/install-extmod-build
··· 32 32 echo tools/objtool/objtool 33 33 fi 34 34 35 + if is_enabled CONFIG_DEBUG_INFO_BTF_MODULES; then 36 + echo tools/bpf/resolve_btfids/resolve_btfids 37 + fi 38 + 35 39 echo Module.symvers 36 40 echo "arch/${SRCARCH}/include/generated" 37 41 echo include/config/auto.conf
+1 -1
sound/firewire/dice/dice.c
··· 122 122 fw_csr_string(dev->config_rom + 5, CSR_VENDOR, vendor, sizeof(vendor)); 123 123 strscpy(model, "?"); 124 124 fw_csr_string(dice->unit->directory, CSR_MODEL, model, sizeof(model)); 125 - snprintf(card->longname, sizeof(card->longname), 125 + scnprintf(card->longname, sizeof(card->longname), 126 126 "%s %s (serial %u) at %s, S%d", 127 127 vendor, model, dev->config_rom[4] & 0x3fffff, 128 128 dev_name(&dice->unit->device), 100 << dev->max_speed);
+9
sound/hda/codecs/ca0132.c
··· 9816 9816 spec->dig_in = 0x09; 9817 9817 break; 9818 9818 } 9819 + 9820 + /* Enable HP/Speaker auto-detect by default if the headphone pin's config 9821 + * indicates presence detect (i.e. AC_DEFCFG_MISC_NO_PRESENCE is not set). 9822 + */ 9823 + if (spec->unsol_tag_hp && 9824 + (snd_hda_query_pin_caps(codec, spec->unsol_tag_hp) & AC_PINCAP_PRES_DETECT) && 9825 + !(get_defcfg_misc(snd_hda_codec_get_pincfg(codec, spec->unsol_tag_hp)) & 9826 + AC_DEFCFG_MISC_NO_PRESENCE)) 9827 + spec->vnode_lswitch[VNID_HP_ASEL - VNODE_START_NID] = 1; 9819 9828 } 9820 9829 9821 9830 static int ca0132_prepare_verbs(struct hda_codec *codec)
+1
sound/hda/codecs/hdmi/tegrahdmi.c
··· 299 299 HDA_CODEC_ID_MODEL(0x10de002f, "Tegra194 HDMI/DP2", MODEL_TEGRA), 300 300 HDA_CODEC_ID_MODEL(0x10de0030, "Tegra194 HDMI/DP3", MODEL_TEGRA), 301 301 HDA_CODEC_ID_MODEL(0x10de0031, "Tegra234 HDMI/DP", MODEL_TEGRA234), 302 + HDA_CODEC_ID_MODEL(0x10de0032, "Tegra238 HDMI/DP", MODEL_TEGRA234), 302 303 HDA_CODEC_ID_MODEL(0x10de0033, "SoC 33 HDMI/DP", MODEL_TEGRA234), 303 304 HDA_CODEC_ID_MODEL(0x10de0034, "Tegra264 HDMI/DP", MODEL_TEGRA234), 304 305 HDA_CODEC_ID_MODEL(0x10de0035, "SoC 35 HDMI/DP", MODEL_TEGRA234),
+1
sound/hda/codecs/realtek/alc269.c
··· 6904 6904 SND_PCI_QUIRK(0x103c, 0x8898, "HP EliteBook 845 G8 Notebook PC", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST), 6905 6905 SND_PCI_QUIRK(0x103c, 0x88b3, "HP ENVY x360 Convertible 15-es0xxx", ALC245_FIXUP_HP_ENVY_X360_MUTE_LED), 6906 6906 SND_PCI_QUIRK(0x103c, 0x88d0, "HP Pavilion 15-eh1xxx (mainboard 88D0)", ALC287_FIXUP_HP_GPIO_LED), 6907 + SND_PCI_QUIRK(0x103c, 0x88d1, "HP Pavilion 15-eh1xxx (mainboard 88D1)", ALC245_FIXUP_HP_MUTE_LED_V1_COEFBIT), 6907 6908 SND_PCI_QUIRK(0x103c, 0x88dd, "HP Pavilion 15z-ec200", ALC285_FIXUP_HP_MUTE_LED), 6908 6909 SND_PCI_QUIRK(0x103c, 0x88eb, "HP Victus 16-e0xxx", ALC245_FIXUP_HP_MUTE_LED_V2_COEFBIT), 6909 6910 SND_PCI_QUIRK(0x103c, 0x8902, "HP OMEN 16", ALC285_FIXUP_HP_MUTE_LED),
+8 -6
sound/hda/codecs/senarytech.c
··· 19 19 #include "hda_jack.h" 20 20 #include "generic.h" 21 21 22 - /* GPIO node ID */ 23 - #define SENARY_GPIO_NODE 0x01 24 - 25 22 struct senary_spec { 26 23 struct hda_gen_spec gen; 27 24 28 25 /* extra EAPD pins */ 29 26 unsigned int num_eapds; 30 27 hda_nid_t eapds[4]; 28 + bool dynamic_eapd; 31 29 hda_nid_t mute_led_eapd; 32 30 33 31 unsigned int parse_flags; /* flag for snd_hda_parse_pin_defcfg() */ ··· 121 123 unsigned int mask = spec->gpio_mute_led_mask | spec->gpio_mic_led_mask; 122 124 123 125 if (mask) { 124 - snd_hda_codec_write(codec, SENARY_GPIO_NODE, 0, AC_VERB_SET_GPIO_MASK, 126 + snd_hda_codec_write(codec, codec->core.afg, 0, AC_VERB_SET_GPIO_MASK, 125 127 mask); 126 - snd_hda_codec_write(codec, SENARY_GPIO_NODE, 0, AC_VERB_SET_GPIO_DIRECTION, 128 + snd_hda_codec_write(codec, codec->core.afg, 0, AC_VERB_SET_GPIO_DIRECTION, 127 129 mask); 128 - snd_hda_codec_write(codec, SENARY_GPIO_NODE, 0, AC_VERB_SET_GPIO_DATA, 130 + snd_hda_codec_write(codec, codec->core.afg, 0, AC_VERB_SET_GPIO_DATA, 129 131 spec->gpio_led); 130 132 } 131 133 } 132 134 133 135 static int senary_init(struct hda_codec *codec) 134 136 { 137 + struct senary_spec *spec = codec->spec; 138 + 135 139 snd_hda_gen_init(codec); 136 140 senary_init_gpio_led(codec); 141 + if (!spec->dynamic_eapd) 142 + senary_auto_turn_eapd(codec, spec->num_eapds, spec->eapds, true); 137 143 snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_INIT); 138 144 139 145 return 0;
+3 -12
sound/hda/codecs/side-codecs/tas2781_hda_i2c.c
··· 60 60 int (*save_calibration)(struct tas2781_hda *h); 61 61 62 62 int hda_chip_id; 63 - bool skip_calibration; 64 63 }; 65 64 66 65 static int tas2781_get_i2c_res(struct acpi_resource *ares, void *data) ··· 478 479 /* If calibrated data occurs error, dsp will still works with default 479 480 * calibrated data inside algo. 480 481 */ 481 - if (!hda_priv->skip_calibration) 482 - hda_priv->save_calibration(tas_hda); 482 + hda_priv->save_calibration(tas_hda); 483 483 } 484 484 485 485 static void tasdev_fw_ready(const struct firmware *fmw, void *context) ··· 533 535 void *master_data) 534 536 { 535 537 struct tas2781_hda *tas_hda = dev_get_drvdata(dev); 536 - struct tas2781_hda_i2c_priv *hda_priv = tas_hda->hda_priv; 537 538 struct hda_component_parent *parent = master_data; 538 539 struct hda_component *comp; 539 540 struct hda_codec *codec; ··· 560 563 tas_hda->catlog_id = LENOVO; 561 564 break; 562 565 } 563 - 564 - /* 565 - * Using ASUS ROG Xbox Ally X (RC73XA) UEFI calibration data 566 - * causes audio dropouts during playback, use fallback data 567 - * from DSP firmware as a workaround. 568 - */ 569 - if (codec->core.subsystem_id == 0x10431384) 570 - hda_priv->skip_calibration = true; 571 566 572 567 guard(pm_runtime_active_auto)(dev); 573 568 ··· 632 643 */ 633 644 device_name = "TIAS2781"; 634 645 hda_priv->hda_chip_id = HDA_TAS2781; 646 + tas_hda->priv->chip_id = TAS2781; 635 647 hda_priv->save_calibration = tas2781_save_calibration; 636 648 tas_hda->priv->global_addr = TAS2781_GLOBAL_ADDR; 637 649 } else if (strstarts(dev_name(&clt->dev), "i2c-TXNW2770")) { ··· 646 656 "i2c-TXNW2781:00-tas2781-hda.0")) { 647 657 device_name = "TXNW2781"; 648 658 hda_priv->hda_chip_id = HDA_TAS2781; 659 + tas_hda->priv->chip_id = TAS2781; 649 660 hda_priv->save_calibration = tas2781_save_calibration; 650 661 tas_hda->priv->global_addr = TAS2781_GLOBAL_ADDR; 651 662 } else if (strstr(dev_name(&clt->dev), "INT8866")) {
+413
sound/soc/amd/acp/amd-acp63-acpi-match.c
··· 30 30 .group_id = 1 31 31 }; 32 32 33 + static const struct snd_soc_acpi_endpoint spk_2_endpoint = { 34 + .num = 0, 35 + .aggregated = 1, 36 + .group_position = 2, 37 + .group_id = 1 38 + }; 39 + 40 + static const struct snd_soc_acpi_endpoint spk_3_endpoint = { 41 + .num = 0, 42 + .aggregated = 1, 43 + .group_position = 3, 44 + .group_id = 1 45 + }; 46 + 33 47 static const struct snd_soc_acpi_adr_device rt711_rt1316_group_adr[] = { 34 48 { 35 49 .adr = 0x000030025D071101ull, ··· 117 103 } 118 104 }; 119 105 106 + static const struct snd_soc_acpi_endpoint cs42l43_endpoints[] = { 107 + { /* Jack Playback Endpoint */ 108 + .num = 0, 109 + .aggregated = 0, 110 + .group_position = 0, 111 + .group_id = 0, 112 + }, 113 + { /* DMIC Capture Endpoint */ 114 + .num = 1, 115 + .aggregated = 0, 116 + .group_position = 0, 117 + .group_id = 0, 118 + }, 119 + { /* Jack Capture Endpoint */ 120 + .num = 2, 121 + .aggregated = 0, 122 + .group_position = 0, 123 + .group_id = 0, 124 + }, 125 + { /* Speaker Playback Endpoint */ 126 + .num = 3, 127 + .aggregated = 0, 128 + .group_position = 0, 129 + .group_id = 0, 130 + }, 131 + }; 132 + 133 + static const struct snd_soc_acpi_adr_device cs35l56x4_l1u3210_adr[] = { 134 + { 135 + .adr = 0x00013301FA355601ull, 136 + .num_endpoints = 1, 137 + .endpoints = &spk_l_endpoint, 138 + .name_prefix = "AMP1" 139 + }, 140 + { 141 + .adr = 0x00013201FA355601ull, 142 + .num_endpoints = 1, 143 + .endpoints = &spk_r_endpoint, 144 + .name_prefix = "AMP2" 145 + }, 146 + { 147 + .adr = 0x00013101FA355601ull, 148 + .num_endpoints = 1, 149 + .endpoints = &spk_2_endpoint, 150 + .name_prefix = "AMP3" 151 + }, 152 + { 153 + .adr = 0x00013001FA355601ull, 154 + .num_endpoints = 1, 155 + .endpoints = &spk_3_endpoint, 156 + .name_prefix = "AMP4" 157 + }, 158 + }; 159 + 160 + static const struct snd_soc_acpi_adr_device cs35l63x2_l0u01_adr[] = { 161 + { 162 + .adr = 0x00003001FA356301ull, 163 + .num_endpoints = 1, 164 + .endpoints = &spk_l_endpoint, 165 + .name_prefix = "AMP1" 166 + }, 167 + { 168 + .adr = 0x00003101FA356301ull, 169 + .num_endpoints = 1, 170 + .endpoints = &spk_r_endpoint, 171 + .name_prefix = "AMP2" 172 + }, 173 + }; 174 + 175 + static const struct snd_soc_acpi_adr_device cs35l63x2_l1u01_adr[] = { 176 + { 177 + .adr = 0x00013001FA356301ull, 178 + .num_endpoints = 1, 179 + .endpoints = &spk_l_endpoint, 180 + .name_prefix = "AMP1" 181 + }, 182 + { 183 + .adr = 0x00013101FA356301ull, 184 + .num_endpoints = 1, 185 + .endpoints = &spk_r_endpoint, 186 + .name_prefix = "AMP2" 187 + }, 188 + }; 189 + 190 + static const struct snd_soc_acpi_adr_device cs35l63x2_l1u13_adr[] = { 191 + { 192 + .adr = 0x00013101FA356301ull, 193 + .num_endpoints = 1, 194 + .endpoints = &spk_l_endpoint, 195 + .name_prefix = "AMP1" 196 + }, 197 + { 198 + .adr = 0x00013301FA356301ull, 199 + .num_endpoints = 1, 200 + .endpoints = &spk_r_endpoint, 201 + .name_prefix = "AMP2" 202 + }, 203 + }; 204 + 205 + static const struct snd_soc_acpi_adr_device cs35l63x4_l0u0246_adr[] = { 206 + { 207 + .adr = 0x00003001FA356301ull, 208 + .num_endpoints = 1, 209 + .endpoints = &spk_l_endpoint, 210 + .name_prefix = "AMP1" 211 + }, 212 + { 213 + .adr = 0x00003201FA356301ull, 214 + .num_endpoints = 1, 215 + .endpoints = &spk_r_endpoint, 216 + .name_prefix = "AMP2" 217 + }, 218 + { 219 + .adr = 0x00003401FA356301ull, 220 + .num_endpoints = 1, 221 + .endpoints = &spk_2_endpoint, 222 + .name_prefix = "AMP3" 223 + }, 224 + { 225 + .adr = 0x00003601FA356301ull, 226 + .num_endpoints = 1, 227 + .endpoints = &spk_3_endpoint, 228 + 
.name_prefix = "AMP4" 229 + }, 230 + }; 231 + 232 + static const struct snd_soc_acpi_adr_device cs42l43_l0u0_adr[] = { 233 + { 234 + .adr = 0x00003001FA424301ull, 235 + .num_endpoints = ARRAY_SIZE(cs42l43_endpoints), 236 + .endpoints = cs42l43_endpoints, 237 + .name_prefix = "cs42l43" 238 + } 239 + }; 240 + 241 + static const struct snd_soc_acpi_adr_device cs42l43_l0u1_adr[] = { 242 + { 243 + .adr = 0x00003101FA424301ull, 244 + .num_endpoints = ARRAY_SIZE(cs42l43_endpoints), 245 + .endpoints = cs42l43_endpoints, 246 + .name_prefix = "cs42l43" 247 + } 248 + }; 249 + 250 + static const struct snd_soc_acpi_adr_device cs42l43b_l0u1_adr[] = { 251 + { 252 + .adr = 0x00003101FA2A3B01ull, 253 + .num_endpoints = ARRAY_SIZE(cs42l43_endpoints), 254 + .endpoints = cs42l43_endpoints, 255 + .name_prefix = "cs42l43" 256 + } 257 + }; 258 + 259 + static const struct snd_soc_acpi_adr_device cs42l43_l1u0_cs35l56x4_l1u0123_adr[] = { 260 + { 261 + .adr = 0x00013001FA424301ull, 262 + .num_endpoints = ARRAY_SIZE(cs42l43_endpoints), 263 + .endpoints = cs42l43_endpoints, 264 + .name_prefix = "cs42l43" 265 + }, 266 + { 267 + .adr = 0x00013001FA355601ull, 268 + .num_endpoints = 1, 269 + .endpoints = &spk_l_endpoint, 270 + .name_prefix = "AMP1" 271 + }, 272 + { 273 + .adr = 0x00013101FA355601ull, 274 + .num_endpoints = 1, 275 + .endpoints = &spk_r_endpoint, 276 + .name_prefix = "AMP2" 277 + }, 278 + { 279 + .adr = 0x00013201FA355601ull, 280 + .num_endpoints = 1, 281 + .endpoints = &spk_2_endpoint, 282 + .name_prefix = "AMP3" 283 + }, 284 + { 285 + .adr = 0x00013301FA355601ull, 286 + .num_endpoints = 1, 287 + .endpoints = &spk_3_endpoint, 288 + .name_prefix = "AMP4" 289 + }, 290 + }; 291 + 292 + static const struct snd_soc_acpi_adr_device cs42l45_l0u0_adr[] = { 293 + { 294 + .adr = 0x00003001FA424501ull, 295 + /* Re-use endpoints, but cs42l45 has no speaker */ 296 + .num_endpoints = ARRAY_SIZE(cs42l43_endpoints) - 1, 297 + .endpoints = cs42l43_endpoints, 298 + .name_prefix = "cs42l45" 299 + } 300 + }; 301 + 302 + static const struct snd_soc_acpi_adr_device cs42l45_l1u0_adr[] = { 303 + { 304 + .adr = 0x00013001FA424501ull, 305 + /* Re-use endpoints, but cs42l45 has no speaker */ 306 + .num_endpoints = ARRAY_SIZE(cs42l43_endpoints) - 1, 307 + .endpoints = cs42l43_endpoints, 308 + .name_prefix = "cs42l45" 309 + } 310 + }; 311 + 312 + static const struct snd_soc_acpi_link_adr acp63_cs35l56x4_l1u3210[] = { 313 + { 314 + .mask = BIT(1), 315 + .num_adr = ARRAY_SIZE(cs35l56x4_l1u3210_adr), 316 + .adr_d = cs35l56x4_l1u3210_adr, 317 + }, 318 + {} 319 + }; 320 + 321 + static const struct snd_soc_acpi_link_adr acp63_cs35l63x4_l0u0246[] = { 322 + { 323 + .mask = BIT(0), 324 + .num_adr = ARRAY_SIZE(cs35l63x4_l0u0246_adr), 325 + .adr_d = cs35l63x4_l0u0246_adr, 326 + }, 327 + {} 328 + }; 329 + 330 + static const struct snd_soc_acpi_link_adr acp63_cs42l43_l0u1[] = { 331 + { 332 + .mask = BIT(0), 333 + .num_adr = ARRAY_SIZE(cs42l43_l0u1_adr), 334 + .adr_d = cs42l43_l0u1_adr, 335 + }, 336 + {} 337 + }; 338 + 339 + static const struct snd_soc_acpi_link_adr acp63_cs42l43b_l0u1[] = { 340 + { 341 + .mask = BIT(0), 342 + .num_adr = ARRAY_SIZE(cs42l43b_l0u1_adr), 343 + .adr_d = cs42l43b_l0u1_adr, 344 + }, 345 + {} 346 + }; 347 + 348 + static const struct snd_soc_acpi_link_adr acp63_cs42l43_l0u0_cs35l56x4_l1u3210[] = { 349 + { 350 + .mask = BIT(0), 351 + .num_adr = ARRAY_SIZE(cs42l43_l0u0_adr), 352 + .adr_d = cs42l43_l0u0_adr, 353 + }, 354 + { 355 + .mask = BIT(1), 356 + .num_adr = ARRAY_SIZE(cs35l56x4_l1u3210_adr), 357 + .adr_d = 
cs35l56x4_l1u3210_adr, 358 + }, 359 + {} 360 + }; 361 + 362 + static const struct snd_soc_acpi_link_adr acp63_cs42l43_l1u0_cs35l56x4_l1u0123[] = { 363 + { 364 + .mask = BIT(1), 365 + .num_adr = ARRAY_SIZE(cs42l43_l1u0_cs35l56x4_l1u0123_adr), 366 + .adr_d = cs42l43_l1u0_cs35l56x4_l1u0123_adr, 367 + }, 368 + {} 369 + }; 370 + 371 + static const struct snd_soc_acpi_link_adr acp63_cs42l45_l0u0[] = { 372 + { 373 + .mask = BIT(0), 374 + .num_adr = ARRAY_SIZE(cs42l45_l0u0_adr), 375 + .adr_d = cs42l45_l0u0_adr, 376 + }, 377 + {} 378 + }; 379 + 380 + static const struct snd_soc_acpi_link_adr acp63_cs42l45_l0u0_cs35l63x2_l1u01[] = { 381 + { 382 + .mask = BIT(0), 383 + .num_adr = ARRAY_SIZE(cs42l45_l0u0_adr), 384 + .adr_d = cs42l45_l0u0_adr, 385 + }, 386 + { 387 + .mask = BIT(1), 388 + .num_adr = ARRAY_SIZE(cs35l63x2_l1u01_adr), 389 + .adr_d = cs35l63x2_l1u01_adr, 390 + }, 391 + {} 392 + }; 393 + 394 + static const struct snd_soc_acpi_link_adr acp63_cs42l45_l0u0_cs35l63x2_l1u13[] = { 395 + { 396 + .mask = BIT(0), 397 + .num_adr = ARRAY_SIZE(cs42l45_l0u0_adr), 398 + .adr_d = cs42l45_l0u0_adr, 399 + }, 400 + { 401 + .mask = BIT(1), 402 + .num_adr = ARRAY_SIZE(cs35l63x2_l1u13_adr), 403 + .adr_d = cs35l63x2_l1u13_adr, 404 + }, 405 + {} 406 + }; 407 + 408 + static const struct snd_soc_acpi_link_adr acp63_cs42l45_l1u0[] = { 409 + { 410 + .mask = BIT(1), 411 + .num_adr = ARRAY_SIZE(cs42l45_l1u0_adr), 412 + .adr_d = cs42l45_l1u0_adr, 413 + }, 414 + {} 415 + }; 416 + 417 + static const struct snd_soc_acpi_link_adr acp63_cs42l45_l1u0_cs35l63x2_l0u01[] = { 418 + { 419 + .mask = BIT(1), 420 + .num_adr = ARRAY_SIZE(cs42l45_l1u0_adr), 421 + .adr_d = cs42l45_l1u0_adr, 422 + }, 423 + { 424 + .mask = BIT(0), 425 + .num_adr = ARRAY_SIZE(cs35l63x2_l0u01_adr), 426 + .adr_d = cs35l63x2_l0u01_adr, 427 + }, 428 + {} 429 + }; 430 + 431 + static const struct snd_soc_acpi_link_adr acp63_cs42l45_l1u0_cs35l63x4_l0u0246[] = { 432 + { 433 + .mask = BIT(1), 434 + .num_adr = ARRAY_SIZE(cs42l45_l1u0_adr), 435 + .adr_d = cs42l45_l1u0_adr, 436 + }, 437 + { 438 + .mask = BIT(0), 439 + .num_adr = ARRAY_SIZE(cs35l63x4_l0u0246_adr), 440 + .adr_d = cs35l63x4_l0u0246_adr, 441 + }, 442 + {} 443 + }; 444 + 120 445 static const struct snd_soc_acpi_link_adr acp63_rt722_only[] = { 121 446 { 122 447 .mask = BIT(0), ··· 486 133 { 487 134 .link_mask = BIT(0) | BIT(1), 488 135 .links = acp63_4_in_1_sdca, 136 + .drv_name = "amd_sdw", 137 + }, 138 + { 139 + .link_mask = BIT(0) | BIT(1), 140 + .links = acp63_cs42l43_l0u0_cs35l56x4_l1u3210, 141 + .drv_name = "amd_sdw", 142 + }, 143 + { 144 + .link_mask = BIT(0) | BIT(1), 145 + .links = acp63_cs42l45_l1u0_cs35l63x4_l0u0246, 146 + .drv_name = "amd_sdw", 147 + }, 148 + { 149 + .link_mask = BIT(0) | BIT(1), 150 + .links = acp63_cs42l45_l0u0_cs35l63x2_l1u01, 151 + .drv_name = "amd_sdw", 152 + }, 153 + { 154 + .link_mask = BIT(0) | BIT(1), 155 + .links = acp63_cs42l45_l0u0_cs35l63x2_l1u13, 156 + .drv_name = "amd_sdw", 157 + }, 158 + { 159 + .link_mask = BIT(0) | BIT(1), 160 + .links = acp63_cs42l45_l1u0_cs35l63x2_l0u01, 161 + .drv_name = "amd_sdw", 162 + }, 163 + { 164 + .link_mask = BIT(1), 165 + .links = acp63_cs42l43_l1u0_cs35l56x4_l1u0123, 166 + .drv_name = "amd_sdw", 167 + }, 168 + { 169 + .link_mask = BIT(1), 170 + .links = acp63_cs35l56x4_l1u3210, 171 + .drv_name = "amd_sdw", 172 + }, 173 + { 174 + .link_mask = BIT(0), 175 + .links = acp63_cs35l63x4_l0u0246, 176 + .drv_name = "amd_sdw", 177 + }, 178 + { 179 + .link_mask = BIT(0), 180 + .links = acp63_cs42l43_l0u1, 181 + .drv_name = "amd_sdw", 182 + }, 
183 + { 184 + .link_mask = BIT(0), 185 + .links = acp63_cs42l43b_l0u1, 186 + .drv_name = "amd_sdw", 187 + }, 188 + { 189 + .link_mask = BIT(0), 190 + .links = acp63_cs42l45_l0u0, 191 + .drv_name = "amd_sdw", 192 + }, 193 + { 194 + .link_mask = BIT(1), 195 + .links = acp63_cs42l45_l1u0, 489 196 .drv_name = "amd_sdw", 490 197 }, 491 198 {},
+7
sound/soc/amd/yc/acp6x-mach.c
··· 710 710 DMI_MATCH(DMI_PRODUCT_NAME, "ASUS EXPERTBOOK BM1503CDA"), 711 711 } 712 712 }, 713 + { 714 + .driver_data = &acp6x_card, 715 + .matches = { 716 + DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC."), 717 + DMI_MATCH(DMI_BOARD_NAME, "PM1503CDA"), 718 + } 719 + }, 713 720 {} 714 721 }; 715 722
+15 -1
sound/soc/codecs/cs35l56-shared.c
··· 26 26 27 27 #include "cs35l56.h" 28 28 29 - static const struct reg_sequence cs35l56_patch[] = { 29 + static const struct reg_sequence cs35l56_asp_patch[] = { 30 30 /* 31 31 * Firmware can change these to non-defaults to satisfy SDCA. 32 32 * Ensure that they are at known defaults. ··· 43 43 { CS35L56_ASP1TX2_INPUT, 0x00000000 }, 44 44 { CS35L56_ASP1TX3_INPUT, 0x00000000 }, 45 45 { CS35L56_ASP1TX4_INPUT, 0x00000000 }, 46 + }; 47 + 48 + int cs35l56_set_asp_patch(struct cs35l56_base *cs35l56_base) 49 + { 50 + return regmap_register_patch(cs35l56_base->regmap, cs35l56_asp_patch, 51 + ARRAY_SIZE(cs35l56_asp_patch)); 52 + } 53 + EXPORT_SYMBOL_NS_GPL(cs35l56_set_asp_patch, "SND_SOC_CS35L56_SHARED"); 54 + 55 + static const struct reg_sequence cs35l56_patch[] = { 56 + /* 57 + * Firmware can change these to non-defaults to satisfy SDCA. 58 + * Ensure that they are at known defaults. 59 + */ 46 60 { CS35L56_SWIRE_DP3_CH1_INPUT, 0x00000018 }, 47 61 { CS35L56_SWIRE_DP3_CH2_INPUT, 0x00000019 }, 48 62 { CS35L56_SWIRE_DP3_CH3_INPUT, 0x00000029 },
+10 -2
sound/soc/codecs/cs35l56.c
··· 348 348 return wm_adsp_event(w, kcontrol, event); 349 349 } 350 350 351 + static int cs35l56_asp_dai_probe(struct snd_soc_dai *codec_dai) 352 + { 353 + struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(codec_dai->component); 354 + 355 + return cs35l56_set_asp_patch(&cs35l56->base); 356 + } 357 + 351 358 static int cs35l56_asp_dai_set_fmt(struct snd_soc_dai *codec_dai, unsigned int fmt) 352 359 { 353 360 struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(codec_dai->component); ··· 559 552 } 560 553 561 554 static const struct snd_soc_dai_ops cs35l56_ops = { 555 + .probe = cs35l56_asp_dai_probe, 562 556 .set_fmt = cs35l56_asp_dai_set_fmt, 563 557 .set_tdm_slot = cs35l56_asp_dai_set_tdm_slot, 564 558 .hw_params = cs35l56_asp_dai_hw_params, ··· 1625 1617 if (num_pulls < 0) 1626 1618 return num_pulls; 1627 1619 1628 - if (num_pulls != num_gpios) { 1620 + if (num_pulls && (num_pulls != num_gpios)) { 1629 1621 dev_warn(cs35l56->base.dev, "%s count(%d) != %s count(%d)\n", 1630 - pull_name, num_pulls, gpio_name, num_gpios); 1622 + pull_name, num_pulls, gpio_name, num_gpios); 1631 1623 } 1632 1624 1633 1625 ret = cs35l56_check_and_save_onchip_spkid_gpios(&cs35l56->base,
+3 -2
sound/soc/codecs/rt1320-sdw.c
··· 2629 2629 struct sdw_port_config port_config; 2630 2630 struct sdw_port_config dmic_port_config[2]; 2631 2631 struct sdw_stream_runtime *sdw_stream; 2632 - int retval; 2632 + int retval, num_channels; 2633 2633 unsigned int sampling_rate; 2634 2634 2635 2635 dev_dbg(dai->dev, "%s %s", __func__, dai->name); ··· 2661 2661 dmic_port_config[1].num = 10; 2662 2662 break; 2663 2663 case RT1321_DEV_ID: 2664 - dmic_port_config[0].ch_mask = BIT(0) | BIT(1); 2664 + num_channels = params_channels(params); 2665 + dmic_port_config[0].ch_mask = GENMASK(num_channels - 1, 0); 2665 2666 dmic_port_config[0].num = 8; 2666 2667 break; 2667 2668 default:
+94
sound/soc/codecs/tas2781-fmwlib.c
··· 32 32 #define TAS2781_YRAM1_PAGE 42 33 33 #define TAS2781_YRAM1_START_REG 88 34 34 35 + #define TAS2781_PG_REG TASDEVICE_REG(0x00, 0x00, 0x7c) 36 + #define TAS2781_PG_1_0 0xA0 37 + #define TAS2781_PG_2_0 0xA8 38 + 35 39 #define TAS2781_YRAM2_START_PAGE 43 36 40 #define TAS2781_YRAM2_END_PAGE 49 37 41 #define TAS2781_YRAM2_START_REG 8 ··· 100 96 struct blktyp_devidx_map { 101 97 unsigned char blktyp; 102 98 unsigned char dev_idx; 99 + }; 100 + 101 + struct tas2781_cali_specific { 102 + unsigned char sin_gni[4]; 103 + int sin_gni_reg; 104 + bool is_sin_gn_flush; 103 105 }; 104 106 105 107 static const char deviceNumber[TASDEVICE_DSP_TAS_MAX_DEVICE] = { ··· 2464 2454 return ret; 2465 2455 } 2466 2456 2457 + static int tas2781_cali_preproc(struct tasdevice_priv *priv, int i) 2458 + { 2459 + struct tas2781_cali_specific *spec = priv->tasdevice[i].cali_specific; 2460 + struct calidata *cali_data = &priv->cali_data; 2461 + struct cali_reg *p = &cali_data->cali_reg_array; 2462 + unsigned char *data = cali_data->data; 2463 + int rc; 2464 + 2465 + /* 2466 + * On TAS2781, if the speaker's calibrated impedance is lower than 2467 + * the default value hard-coded inside the TAS2781, the volume will 2468 + * be lower than normal. To fix this issue, the SineGainI 2469 + * parameter needs updating. 2470 + */ 2471 + if (spec == NULL) { 2472 + int k = i * (cali_data->cali_dat_sz_per_dev + 1); 2473 + int re_org, re_cal, corrected_sin_gn, pg_id; 2474 + unsigned char r0_deflt[4]; 2475 + 2476 + spec = devm_kzalloc(priv->dev, sizeof(*spec), GFP_KERNEL); 2477 + if (spec == NULL) 2478 + return -ENOMEM; 2479 + priv->tasdevice[i].cali_specific = spec; 2480 + rc = tasdevice_dev_bulk_read(priv, i, p->r0_reg, r0_deflt, 4); 2481 + if (rc < 0) { 2482 + dev_err(priv->dev, "invalid RE from %d = %d\n", i, rc); 2483 + return rc; 2484 + } 2485 + /* 2486 + * SineGainI needs to be re-calculated; calculate the high 16 2487 + * bits. 2488 + */ 2489 + re_org = r0_deflt[0] << 8 | r0_deflt[1]; 2490 + re_cal = data[k + 1] << 8 | data[k + 2]; 2491 + if (re_org > re_cal) { 2492 + rc = tasdevice_dev_read(priv, i, TAS2781_PG_REG, 2493 + &pg_id); 2494 + if (rc < 0) { 2495 + dev_err(priv->dev, "invalid PG id %d = %d\n", 2496 + i, rc); 2497 + return rc; 2498 + } 2499 + 2500 + spec->sin_gni_reg = (pg_id == TAS2781_PG_1_0) ?
2501 + TASDEVICE_REG(0, 0x1b, 0x34) : 2502 + TASDEVICE_REG(0, 0x18, 0x1c); 2503 + 2504 + rc = tasdevice_dev_bulk_read(priv, i, 2505 + spec->sin_gni_reg, 2506 + spec->sin_gni, 4); 2507 + if (rc < 0) { 2508 + dev_err(priv->dev, "wrong sinegaini %d = %d\n", 2509 + i, rc); 2510 + return rc; 2511 + } 2512 + corrected_sin_gn = re_org * ((spec->sin_gni[0] << 8) + 2513 + spec->sin_gni[1]); 2514 + corrected_sin_gn /= re_cal; 2515 + spec->sin_gni[0] = corrected_sin_gn >> 8; 2516 + spec->sin_gni[1] = corrected_sin_gn & 0xff; 2517 + 2518 + spec->is_sin_gn_flush = true; 2519 + } 2520 + } 2521 + 2522 + if (spec->is_sin_gn_flush) { 2523 + rc = tasdevice_dev_bulk_write(priv, i, spec->sin_gni_reg, 2524 + spec->sin_gni, 4); 2525 + if (rc < 0) { 2526 + dev_err(priv->dev, "update failed %d = %d\n", 2527 + i, rc); 2528 + return rc; 2529 + } 2530 + } 2531 + 2532 + return 0; 2533 + } 2534 + 2467 2535 static void tasdev_load_calibrated_data(struct tasdevice_priv *priv, int i) 2468 2536 { 2469 2537 struct calidata *cali_data = &priv->cali_data; ··· 2556 2468 return; 2557 2469 } 2558 2470 k++; 2471 + 2472 + if (priv->chip_id == TAS2781) { 2473 + rc = tas2781_cali_preproc(priv, i); 2474 + if (rc < 0) 2475 + return; 2476 + } 2559 2477 2560 2478 rc = tasdevice_dev_bulk_write(priv, i, p->r0_reg, &(data[k]), 4); 2561 2479 if (rc < 0) {
+10 -4
sound/soc/fsl/fsl_easrc.c
··· 52 52 struct soc_mreg_control *mc = 53 53 (struct soc_mreg_control *)kcontrol->private_value; 54 54 unsigned int regval = ucontrol->value.integer.value[0]; 55 + int ret; 56 + 57 + ret = (easrc_priv->bps_iec958[mc->regbase] != regval); 55 58 56 59 easrc_priv->bps_iec958[mc->regbase] = regval; 57 60 58 - return 0; 61 + return ret; 59 62 } 60 63 61 64 static int fsl_easrc_iec958_get_bits(struct snd_kcontrol *kcontrol, ··· 96 93 struct snd_soc_component *component = snd_kcontrol_chip(kcontrol); 97 94 struct soc_mreg_control *mc = 98 95 (struct soc_mreg_control *)kcontrol->private_value; 96 + struct fsl_asrc *easrc = snd_soc_component_get_drvdata(component); 99 97 unsigned int regval = ucontrol->value.integer.value[0]; 98 + bool changed; 100 99 int ret; 101 100 102 - ret = snd_soc_component_write(component, mc->regbase, regval); 103 - if (ret < 0) 101 + ret = regmap_update_bits_check(easrc->regmap, mc->regbase, 102 + GENMASK(31, 0), regval, &changed); 103 + if (ret != 0) 104 104 return ret; 105 105 106 - return 0; 106 + return changed; 107 107 } 108 108 109 109 #define SOC_SINGLE_REG_RW(xname, xreg) \
+8
sound/soc/intel/boards/sof_sdw.c
··· 763 763 }, 764 764 .driver_data = (void *)(SOC_SDW_CODEC_SPKR), 765 765 }, 766 + { 767 + .callback = sof_sdw_quirk_cb, 768 + .matches = { 769 + DMI_MATCH(DMI_SYS_VENDOR, "Alienware"), 770 + DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0CCD") 771 + }, 772 + .driver_data = (void *)(SOC_SDW_CODEC_SPKR), 773 + }, 766 774 /* Pantherlake devices*/ 767 775 { 768 776 .callback = sof_sdw_quirk_cb,
+4 -1
sound/soc/sdca/sdca_functions.c
··· 1156 1156 if (!terminal->is_dataport) { 1157 1157 const char *type_name = sdca_find_terminal_name(terminal->type); 1158 1158 1159 - if (type_name) 1159 + if (type_name) { 1160 1160 entity->label = devm_kasprintf(dev, GFP_KERNEL, "%s %s", 1161 1161 entity->label, type_name); 1162 + if (!entity->label) 1163 + return -ENOMEM; 1164 + } 1162 1165 } 1163 1166 1164 1167 ret = fwnode_property_read_u32(entity_node,
+2
sound/usb/quirks.c
··· 2219 2219 QUIRK_FLAG_ALIGN_TRANSFER), 2220 2220 DEVICE_FLG(0x05e1, 0x0480, /* Hauppauge Woodbury */ 2221 2221 QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER), 2222 + DEVICE_FLG(0x0624, 0x3d3f, /* AB13X USB Audio */ 2223 + QUIRK_FLAG_FORCE_IFACE_RESET | QUIRK_FLAG_IFACE_DELAY), 2222 2224 DEVICE_FLG(0x0644, 0x8043, /* TEAC UD-501/UD-501V2/UD-503/NT-503 */ 2223 2225 QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY | 2224 2226 QUIRK_FLAG_IFACE_DELAY),
+2 -10
sound/usb/usx2y/us122l.c
··· 520 520 return err; 521 521 } 522 522 523 - usb_get_intf(usb_ifnum_to_if(device, 0)); 524 - usb_get_dev(device); 525 523 *cardp = card; 526 524 return 0; 527 525 } ··· 540 542 if (intf->cur_altsetting->desc.bInterfaceNumber != 1) 541 543 return 0; 542 544 543 - err = us122l_usb_probe(usb_get_intf(intf), id, &card); 544 - if (err < 0) { 545 - usb_put_intf(intf); 545 + err = us122l_usb_probe(intf, id, &card); 546 + if (err < 0) 546 547 return err; 547 - } 548 548 549 549 usb_set_intfdata(intf, card); 550 550 return 0; ··· 569 573 list_for_each(p, &us122l->midi_list) { 570 574 snd_usbmidi_disconnect(p); 571 575 } 572 - 573 - usb_put_intf(usb_ifnum_to_if(us122l->dev, 0)); 574 - usb_put_intf(usb_ifnum_to_if(us122l->dev, 1)); 575 - usb_put_dev(us122l->dev); 576 576 577 577 snd_card_free_when_closed(card); 578 578 }
+7 -2
tools/bpf/resolve_btfids/Makefile
··· 23 23 HOSTCC ?= gcc 24 24 HOSTLD ?= ld 25 25 HOSTAR ?= ar 26 + HOSTPKG_CONFIG ?= pkg-config 26 27 CROSS_COMPILE = 27 28 28 29 OUTPUT ?= $(srctree)/tools/bpf/resolve_btfids/ ··· 64 63 $(abspath $@) install_headers 65 64 66 65 LIBELF_FLAGS := $(shell $(HOSTPKG_CONFIG) libelf --cflags 2>/dev/null) 66 + 67 + ifneq ($(filter -static,$(EXTRA_LDFLAGS)),) 68 + LIBELF_LIBS := $(shell $(HOSTPKG_CONFIG) libelf --libs --static 2>/dev/null || echo -lelf -lzstd) 69 + else 67 70 LIBELF_LIBS := $(shell $(HOSTPKG_CONFIG) libelf --libs 2>/dev/null || echo -lelf) 71 + endif 68 72 69 73 ZLIB_LIBS := $(shell $(HOSTPKG_CONFIG) zlib --libs 2>/dev/null || echo -lz) 70 - ZSTD_LIBS := $(shell $(HOSTPKG_CONFIG) libzstd --libs 2>/dev/null || echo -lzstd) 71 74 72 75 HOSTCFLAGS_resolve_btfids += -g \ 73 76 -I$(srctree)/tools/include \ ··· 81 76 $(LIBELF_FLAGS) \ 82 77 -Wall -Werror 83 78 84 - LIBS = $(LIBELF_LIBS) $(ZLIB_LIBS) $(ZSTD_LIBS) 79 + LIBS = $(LIBELF_LIBS) $(ZLIB_LIBS) 85 80 86 81 export srctree OUTPUT HOSTCFLAGS_resolve_btfids Q HOSTCC HOSTLD HOSTAR 87 82 include $(srctree)/tools/build/Makefile.include
+5 -3
tools/objtool/Makefile
··· 142 142 $(Q)$(RM) -r -- $(LIBSUBCMD_OUTPUT) 143 143 144 144 clean: $(LIBSUBCMD)-clean 145 - $(call QUIET_CLEAN, objtool) $(RM) $(OBJTOOL) 146 - $(Q)find $(OUTPUT) -name '*.o' -delete -o -name '\.*.cmd' -delete -o -name '\.*.d' -delete 145 + $(Q)find $(OUTPUT) \( -name '*.o' -o -name '\.*.cmd' -o -name '\.*.d' \) -type f -print | xargs $(RM) 147 146 $(Q)$(RM) $(OUTPUT)arch/x86/lib/cpu-feature-names.c $(OUTPUT)fixdep 148 147 $(Q)$(RM) $(OUTPUT)arch/x86/lib/inat-tables.c $(OUTPUT)fixdep 149 148 $(Q)$(RM) -- $(OUTPUT)FEATURE-DUMP.objtool 150 149 $(Q)$(RM) -r -- $(OUTPUT)feature 151 150 151 + mrproper: clean 152 + $(call QUIET_CLEAN, objtool) $(RM) $(OBJTOOL) 153 + 152 154 FORCE: 153 155 154 - .PHONY: clean FORCE 156 + .PHONY: clean mrproper FORCE
+61
tools/sched_ext/Kconfig
··· 1 + # sched-ext mandatory options 2 + # 3 + CONFIG_BPF=y 4 + CONFIG_BPF_SYSCALL=y 5 + CONFIG_BPF_JIT=y 6 + CONFIG_DEBUG_INFO_BTF=y 7 + CONFIG_BPF_JIT_ALWAYS_ON=y 8 + CONFIG_BPF_JIT_DEFAULT_ON=y 9 + CONFIG_SCHED_CLASS_EXT=y 10 + 11 + # Required by some Rust schedulers (e.g. scx_p2dq) 12 + # 13 + CONFIG_KALLSYMS_ALL=y 14 + 15 + # Required on arm64 16 + # 17 + # CONFIG_DEBUG_INFO_REDUCED is not set 18 + 19 + # LAVD tracks futexes to give an additional time slice to the futex holder 20 + # (i.e., avoiding lock-holder preemption) for better system-wide progress. 21 + # LAVD first tries to use ftrace to trace futex function calls. 22 + # If that is not available, it tries to use a tracepoint. 23 + CONFIG_FUNCTION_TRACER=y 24 + 25 + # Enable scheduling debugging 26 + # 27 + CONFIG_SCHED_DEBUG=y 28 + 29 + # Enable extra scheduling features (for better code coverage while testing 30 + # the schedulers) 31 + # 32 + CONFIG_SCHED_AUTOGROUP=y 33 + CONFIG_SCHED_CORE=y 34 + CONFIG_SCHED_MC=y 35 + 36 + # Enable a fully preemptible kernel for better test coverage of the schedulers 37 + # 38 + # CONFIG_PREEMPT_NONE is not set 39 + # CONFIG_PREEMPT_VOLUNTARY is not set 40 + CONFIG_PREEMPT=y 41 + CONFIG_PREEMPT_DYNAMIC=y 42 + 43 + # Additional debugging information (useful to catch potential locking issues) 44 + CONFIG_DEBUG_LOCKDEP=y 45 + CONFIG_DEBUG_ATOMIC_SLEEP=y 46 + CONFIG_PROVE_LOCKING=y 47 + 48 + # Bpftrace headers (for additional debug info) 49 + CONFIG_BPF_EVENTS=y 50 + CONFIG_FTRACE_SYSCALLS=y 51 + CONFIG_DYNAMIC_FTRACE=y 52 + CONFIG_KPROBES=y 53 + CONFIG_KPROBE_EVENTS=y 54 + CONFIG_UPROBES=y 55 + CONFIG_UPROBE_EVENTS=y 56 + CONFIG_DEBUG_FS=y 57 + 58 + # Enable access to kernel configuration and headers at runtime 59 + CONFIG_IKHEADERS=y 60 + CONFIG_IKCONFIG_PROC=y 61 + CONFIG_IKCONFIG=y
+2
tools/sched_ext/Makefile
··· 122 122 -I../../include \ 123 123 $(call get_sys_includes,$(CLANG)) \ 124 124 -Wall -Wno-compare-distinct-pointer-types \ 125 + -Wno-microsoft-anon-tag \ 126 + -fms-extensions \ 125 127 -O2 -mcpu=v3 126 128 127 129 # sort removes libbpf duplicates when not cross-building
-6
tools/sched_ext/README.md
··· 58 58 CONFIG_BPF_SYSCALL=y 59 59 CONFIG_BPF_JIT=y 60 60 CONFIG_DEBUG_INFO_BTF=y 61 - ``` 62 - 63 - It's also recommended that you also include the following Kconfig options: 64 - 65 - ``` 66 61 CONFIG_BPF_JIT_ALWAYS_ON=y 67 62 CONFIG_BPF_JIT_DEFAULT_ON=y 68 - CONFIG_PAHOLE_HAS_BTF_TAG=y 69 63 ``` 70 64 71 65 There is a `Kconfig` file in this directory whose contents you can append to
+5 -2
tools/sched_ext/include/scx/compat.h
··· 125 125 { 126 126 int fd; 127 127 char buf[32]; 128 + char *endptr; 128 129 ssize_t len; 129 130 long val; 130 131 ··· 138 137 buf[len] = 0; 139 138 close(fd); 140 139 141 - val = strtoul(buf, NULL, 10); 142 - SCX_BUG_ON(val < 0, "invalid num hotplug events: %lu", val); 140 + errno = 0; 141 + val = strtoul(buf, &endptr, 10); 142 + SCX_BUG_ON(errno == ERANGE || endptr == buf || 143 + (*endptr != '\n' && *endptr != '\0'), "invalid num hotplug events: %ld", val); 143 144 144 145 return val; 145 146 }
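Editor's note: the replacement parsing above follows the usual hardened strtoul() pattern: clear errno first, keep the end pointer, then reject overflow, empty input, and trailing garbage in one check (allowing the newline that sysfs-style files append). A standalone sketch of the same pattern, using a hypothetical parse_count() helper rather than the selftest's code:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static int parse_count(const char *buf, long *out)
{
	char *endptr;

	errno = 0;
	*out = strtoul(buf, &endptr, 10);
	if (errno == ERANGE)			/* value overflowed */
		return -1;
	if (endptr == buf)			/* no digits consumed */
		return -1;
	if (*endptr != '\n' && *endptr != '\0')	/* trailing garbage */
		return -1;
	return 0;
}

int main(void)
{
	long val;

	if (parse_count("42\n", &val) == 0)
		printf("parsed %ld\n", val);	/* prints "parsed 42" */
	if (parse_count("42x", &val) != 0)
		printf("rejected trailing garbage\n");
	return 0;
}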
+1 -1
tools/sched_ext/scx_central.c
··· 66 66 assert(skel->rodata->nr_cpu_ids > 0); 67 67 assert(skel->rodata->nr_cpu_ids <= INT32_MAX); 68 68 69 - while ((opt = getopt(argc, argv, "s:c:pvh")) != -1) { 69 + while ((opt = getopt(argc, argv, "s:c:vh")) != -1) { 70 70 switch (opt) { 71 71 case 's': 72 72 skel->rodata->slice_ns = strtoull(optarg, NULL, 0) * 1000;
+1 -1
tools/sched_ext/scx_sdt.c
··· 54 54 optind = 1; 55 55 skel = SCX_OPS_OPEN(sdt_ops, scx_sdt); 56 56 57 - while ((opt = getopt(argc, argv, "fvh")) != -1) { 57 + while ((opt = getopt(argc, argv, "vh")) != -1) { 58 58 switch (opt) { 59 59 case 'v': 60 60 verbose = true;
+4 -2
tools/testing/kunit/kunit_kernel.py
··· 346 346 return self.validate_config(build_dir) 347 347 348 348 def run_kernel(self, args: Optional[List[str]]=None, build_dir: str='', filter_glob: str='', filter: str='', filter_action: Optional[str]=None, timeout: Optional[int]=None) -> Iterator[str]: 349 - if not args: 350 - args = [] 349 + # Copy to avoid mutating the caller-supplied list. exec_tests() reuses 350 + # the same args across repeated run_kernel() calls (e.g. --run_isolated), 351 + # so appending to the original would accumulate stale flags on each call. 352 + args = list(args) if args else [] 351 353 if filter_glob: 352 354 args.append('kunit.filter_glob=' + filter_glob) 353 355 if filter:
+26
tools/testing/kunit/kunit_tool_test.py
··· 503 503 with open(kunit_kernel.get_outfile_path(build_dir), 'rt') as outfile: 504 504 self.assertEqual(outfile.read(), 'hi\nbye\n', msg='Missing some output') 505 505 506 + def test_run_kernel_args_not_mutated(self): 507 + """Verify run_kernel() copies args so callers can reuse them.""" 508 + start_calls = [] 509 + 510 + def fake_start(start_args, unused_build_dir): 511 + start_calls.append(list(start_args)) 512 + return subprocess.Popen(['printf', 'KTAP version 1\n'], 513 + text=True, stdout=subprocess.PIPE) 514 + 515 + with tempfile.TemporaryDirectory('') as build_dir: 516 + tree = kunit_kernel.LinuxSourceTree(build_dir, 517 + kunitconfig_paths=[os.devnull]) 518 + with mock.patch.object(tree._ops, 'start', side_effect=fake_start), \ 519 + mock.patch.object(kunit_kernel.subprocess, 'call'): 520 + kernel_args = ['mem=1G'] 521 + for _ in tree.run_kernel(args=kernel_args, build_dir=build_dir, 522 + filter_glob='suite.test1'): 523 + pass 524 + for _ in tree.run_kernel(args=kernel_args, build_dir=build_dir, 525 + filter_glob='suite.test2'): 526 + pass 527 + self.assertEqual(kernel_args, ['mem=1G'], 528 + 'run_kernel() should not modify caller args') 529 + self.assertIn('kunit.filter_glob=suite.test1', start_calls[0]) 530 + self.assertIn('kunit.filter_glob=suite.test2', start_calls[1]) 531 + 506 532 def test_build_reconfig_no_config(self): 507 533 with tempfile.TemporaryDirectory('') as build_dir: 508 534 with open(kunit_kernel.get_kunitconfig_path(build_dir), 'w') as f:
+2 -2
tools/testing/selftests/arm64/abi/hwcap.c
··· 475 475 476 476 static void sve2p1_sigill(void) 477 477 { 478 - /* BFADD Z0.H, Z0.H, Z0.H */ 479 - asm volatile(".inst 0x65000000" : : : "z0"); 478 + /* LD1Q {Z0.Q}, P0/Z, [Z0.D, X0] */ 479 + asm volatile(".inst 0xC400A000" : : : "z0"); 480 480 } 481 481 482 482 static void sve2p2_sigill(void)
+1
tools/testing/selftests/bpf/Makefile
··· 409 409 CC="$(HOSTCC)" LD="$(HOSTLD)" AR="$(HOSTAR)" \ 410 410 LIBBPF_INCLUDE=$(HOST_INCLUDE_DIR) \ 411 411 EXTRA_LDFLAGS='$(SAN_LDFLAGS) $(EXTRA_LDFLAGS)' \ 412 + HOSTPKG_CONFIG=$(PKG_CONFIG) \ 412 413 OUTPUT=$(HOST_BUILD_DIR)/resolve_btfids/ BPFOBJ=$(HOST_BPFOBJ) 413 414 414 415 # Get Clang's default includes on this system, as opposed to those seen by
+58 -18
tools/testing/selftests/bpf/prog_tests/reg_bounds.c
··· 422 422 } 423 423 } 424 424 425 - static struct range range_improve(enum num_t t, struct range old, struct range new) 425 + static struct range range_intersection(enum num_t t, struct range old, struct range new) 426 426 { 427 427 return range(t, max_t(t, old.a, new.a), min_t(t, old.b, new.b)); 428 + } 429 + 430 + /* 431 + * Result is precise when 'x' and 'y' overlap or form a continuous range, 432 + * result is an over-approximation if 'x' and 'y' do not overlap. 433 + */ 434 + static struct range range_union(enum num_t t, struct range x, struct range y) 435 + { 436 + if (!is_valid_range(t, x)) 437 + return y; 438 + if (!is_valid_range(t, y)) 439 + return x; 440 + return range(t, min_t(t, x.a, y.a), max_t(t, x.b, y.b)); 441 + } 442 + 443 + /* 444 + * This function attempts to improve x range by intersecting it with y. 445 + * range_cast(... to_t ...) loses precision for ranges that pass to_t 446 + * min/max boundaries. To avoid such precision losses this function 447 + * splits both x and y into halves corresponding to non-overflowing 448 + * sub-ranges: [0, smax] and [smin, -1]. 449 + * Final result is computed as follows: 450 + * 451 + * ((x ∩ [0, smax]) ∩ (y ∩ [0, smax])) ∪ 452 + * ((x ∩ [smin,-1]) ∩ (y ∩ [smin,-1])) 453 + * 454 + * Precision might still be lost if final union is not a continuous range. 455 + */ 456 + static struct range range_refine_in_halves(enum num_t x_t, struct range x, 457 + enum num_t y_t, struct range y) 458 + { 459 + struct range x_pos, x_neg, y_pos, y_neg, r_pos, r_neg; 460 + u64 smax, smin, neg_one; 461 + 462 + if (t_is_32(x_t)) { 463 + smax = (u64)(u32)S32_MAX; 464 + smin = (u64)(u32)S32_MIN; 465 + neg_one = (u64)(u32)(s32)(-1); 466 + } else { 467 + smax = (u64)S64_MAX; 468 + smin = (u64)S64_MIN; 469 + neg_one = U64_MAX; 470 + } 471 + x_pos = range_intersection(x_t, x, range(x_t, 0, smax)); 472 + x_neg = range_intersection(x_t, x, range(x_t, smin, neg_one)); 473 + y_pos = range_intersection(y_t, y, range(x_t, 0, smax)); 474 + y_neg = range_intersection(y_t, y, range(y_t, smin, neg_one)); 475 + r_pos = range_intersection(x_t, x_pos, range_cast(y_t, x_t, y_pos)); 476 + r_neg = range_intersection(x_t, x_neg, range_cast(y_t, x_t, y_neg)); 477 + return range_union(x_t, r_pos, r_neg); 478 + 428 479 } 429 480 430 481 static struct range range_refine(enum num_t x_t, struct range x, enum num_t y_t, struct range y) 431 482 { 432 483 struct range y_cast; 484 + 485 + if (t_is_32(x_t) == t_is_32(y_t)) 486 + x = range_refine_in_halves(x_t, x, y_t, y); 433 487 434 488 y_cast = range_cast(y_t, x_t, y); 435 489 ··· 498 444 */ 499 445 if (x_t == S64 && y_t == S32 && y_cast.a <= S32_MAX && y_cast.b <= S32_MAX && 500 446 (s64)x.a >= S32_MIN && (s64)x.b <= S32_MAX) 501 - return range_improve(x_t, x, y_cast); 447 + return range_intersection(x_t, x, y_cast); 502 448 503 449 /* the case when new range knowledge, *y*, is a 32-bit subregister 504 450 * range, while previous range knowledge, *x*, is a full register ··· 516 462 x_swap = range(x_t, swap_low32(x.a, y_cast.a), swap_low32(x.b, y_cast.b)); 517 463 if (!is_valid_range(x_t, x_swap)) 518 464 return x; 519 - return range_improve(x_t, x, x_swap); 520 - } 521 - 522 - if (!t_is_32(x_t) && !t_is_32(y_t) && x_t != y_t) { 523 - if (x_t == S64 && x.a > x.b) { 524 - if (x.b < y.a && x.a <= y.b) 525 - return range(x_t, x.a, y.b); 526 - if (x.a > y.b && x.b >= y.a) 527 - return range(x_t, y.a, x.b); 528 - } else if (x_t == U64 && y.a > y.b) { 529 - if (y.b < x.a && y.a <= x.b) 530 - return range(x_t, y.a, x.b); 531 - if (y.a > x.b && y.b >= x.a) 532 - return range(x_t, x.a, y.b); 533 - } 465 + return range_intersection(x_t, x, x_swap); 534 466 } 535 467 536 468 /* otherwise, plain range cast and intersection works */ 537 - return range_improve(x_t, x, y_cast); 469 + return range_intersection(x_t, x, y_cast); 538 470 539 471 /* =======================
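Editor's note: the union-of-halves formula in range_refine_in_halves() can be checked by hand on the 32-bit case the new verifier_bounds.c tests below exercise: a u32 range [3, U32_MAX] meeting an s32 range [S32_MIN, 1]. The signed range wraps around in unsigned space, so a single unsigned intersection cannot express it, but intersecting each sign half separately recovers the exact answer, s32 [S32_MIN, -1]. A self-contained sketch with simplified types (not the selftest's struct range):

#include <stdint.h>
#include <stdio.h>

struct r32 { uint32_t a, b; };		/* [a, b], valid iff a <= b */

static int valid(struct r32 r) { return r.a <= r.b; }

static struct r32 isect(struct r32 x, struct r32 y)
{
	/* plain intersection; may come back invalid (empty) */
	struct r32 r = { x.a > y.a ? x.a : y.a, x.b < y.b ? x.b : y.b };
	return r;
}

int main(void)
{
	struct r32 pos = { 0, INT32_MAX };		/* [0, 0x7fffffff] */
	struct r32 neg = { 0x80000000u, UINT32_MAX };	/* sign-bit half */
	struct r32 x = { 3, UINT32_MAX };		/* u32 view: w0 >= 3 */
	/* s32 [S32_MIN, 1] wraps in u32 space, so treat it as two halves */
	struct r32 y_pos = { 0, 1 }, y_neg = neg;
	struct r32 r_pos = isect(isect(x, pos), y_pos);
	struct r32 r_neg = isect(isect(x, neg), y_neg);

	if (!valid(r_pos))	/* positive half is empty: [3,1] */
		printf("result: [%#x, %#x] == s32 [S32_MIN, -1]\n",
		       (unsigned)r_neg.a, (unsigned)r_neg.b);
	return 0;
}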
+58
tools/testing/selftests/bpf/prog_tests/xdp_bonding.c
··· 610 610 system("ip link del bond"); 611 611 } 612 612 613 + /* 614 + * Test that changing xmit_hash_policy to vlan+srcmac is rejected when a 615 + * native XDP program is loaded on a bond in 802.3ad or balance-xor mode. 616 + * These modes support XDP only when xmit_hash_policy != vlan+srcmac; freely 617 + * changing the policy creates an inconsistency that triggers a WARNING in 618 + * dev_xdp_uninstall() during device teardown. 619 + */ 620 + static void test_xdp_bonding_xmit_policy_compat(struct skeletons *skeletons) 621 + { 622 + struct nstoken *nstoken = NULL; 623 + int bond_ifindex = -1; 624 + int xdp_fd, err; 625 + 626 + SYS(out, "ip netns add ns_xmit_policy"); 627 + nstoken = open_netns("ns_xmit_policy"); 628 + if (!ASSERT_OK_PTR(nstoken, "open ns_xmit_policy")) 629 + goto out; 630 + 631 + /* 802.3ad with layer2+3 policy: native XDP is supported */ 632 + SYS(out, "ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer2+3"); 633 + SYS(out, "ip link add veth0 type veth peer name veth0p"); 634 + SYS(out, "ip link set veth0 master bond0"); 635 + SYS(out, "ip link set bond0 up"); 636 + 637 + bond_ifindex = if_nametoindex("bond0"); 638 + if (!ASSERT_GT(bond_ifindex, 0, "bond0 ifindex")) 639 + goto out; 640 + 641 + xdp_fd = bpf_program__fd(skeletons->xdp_dummy->progs.xdp_dummy_prog); 642 + if (!ASSERT_GE(xdp_fd, 0, "xdp_dummy fd")) 643 + goto out; 644 + 645 + err = bpf_xdp_attach(bond_ifindex, xdp_fd, XDP_FLAGS_DRV_MODE, NULL); 646 + if (!ASSERT_OK(err, "attach XDP to bond0")) 647 + goto out; 648 + 649 + /* With XDP loaded, switching to vlan+srcmac must be rejected */ 650 + err = system("ip link set bond0 type bond xmit_hash_policy vlan+srcmac 2>/dev/null"); 651 + ASSERT_NEQ(err, 0, "vlan+srcmac change with XDP loaded should fail"); 652 + 653 + /* Detach XDP first, then the same change must succeed */ 654 + ASSERT_OK(bpf_xdp_detach(bond_ifindex, XDP_FLAGS_DRV_MODE, NULL), 655 + "detach XDP from bond0"); 656 + 657 + bond_ifindex = -1; 658 + err = system("ip link set bond0 type bond xmit_hash_policy vlan+srcmac 2>/dev/null"); 659 + ASSERT_OK(err, "vlan+srcmac change without XDP should succeed"); 660 + 661 + out: 662 + if (bond_ifindex > 0) 663 + bpf_xdp_detach(bond_ifindex, XDP_FLAGS_DRV_MODE, NULL); 664 + close_netns(nstoken); 665 + SYS_NOFAIL("ip netns del ns_xmit_policy"); 666 + } 667 + 613 668 static int libbpf_debug_print(enum libbpf_print_level level, 614 669 const char *format, va_list args) 615 670 { ··· 731 676 test_case->mode, 732 677 test_case->xmit_policy); 733 678 } 679 + 680 + if (test__start_subtest("xdp_bonding_xmit_policy_compat")) 681 + test_xdp_bonding_xmit_policy_compat(&skeletons); 734 682 735 683 if (test__start_subtest("xdp_bonding_redirect_multi")) 736 684 test_xdp_bonding_redirect_multi(&skeletons);
+17 -17
tools/testing/selftests/bpf/progs/exceptions_assert.c
··· 18 18 return *(u64 *)num; \ 19 19 } 20 20 21 - __msg(": R0=0xffffffff80000000") 21 + __msg("R{{.}}=0xffffffff80000000") 22 22 check_assert(s64, ==, eq_int_min, INT_MIN); 23 - __msg(": R0=0x7fffffff") 23 + __msg("R{{.}}=0x7fffffff") 24 24 check_assert(s64, ==, eq_int_max, INT_MAX); 25 - __msg(": R0=0") 25 + __msg("R{{.}}=0") 26 26 check_assert(s64, ==, eq_zero, 0); 27 - __msg(": R0=0x8000000000000000 R1=0x8000000000000000") 27 + __msg("R{{.}}=0x8000000000000000") 28 28 check_assert(s64, ==, eq_llong_min, LLONG_MIN); 29 - __msg(": R0=0x7fffffffffffffff R1=0x7fffffffffffffff") 29 + __msg("R{{.}}=0x7fffffffffffffff") 30 30 check_assert(s64, ==, eq_llong_max, LLONG_MAX); 31 31 32 - __msg(": R0=scalar(id=1,smax=0x7ffffffe)") 32 + __msg("R{{.}}=scalar(id=1,smax=0x7ffffffe)") 33 33 check_assert(s64, <, lt_pos, INT_MAX); 34 - __msg(": R0=scalar(id=1,smax=-1,umin=0x8000000000000000,var_off=(0x8000000000000000; 0x7fffffffffffffff))") 34 + __msg("R{{.}}=scalar(id=1,smax=-1,umin=0x8000000000000000,var_off=(0x8000000000000000; 0x7fffffffffffffff))") 35 35 check_assert(s64, <, lt_zero, 0); 36 - __msg(": R0=scalar(id=1,smax=0xffffffff7fffffff") 36 + __msg("R{{.}}=scalar(id=1,smax=0xffffffff7fffffff") 37 37 check_assert(s64, <, lt_neg, INT_MIN); 38 38 39 - __msg(": R0=scalar(id=1,smax=0x7fffffff)") 39 + __msg("R{{.}}=scalar(id=1,smax=0x7fffffff)") 40 40 check_assert(s64, <=, le_pos, INT_MAX); 41 - __msg(": R0=scalar(id=1,smax=0)") 41 + __msg("R{{.}}=scalar(id=1,smax=0)") 42 42 check_assert(s64, <=, le_zero, 0); 43 - __msg(": R0=scalar(id=1,smax=0xffffffff80000000") 43 + __msg("R{{.}}=scalar(id=1,smax=0xffffffff80000000") 44 44 check_assert(s64, <=, le_neg, INT_MIN); 45 45 46 - __msg(": R0=scalar(id=1,smin=umin=0x80000000,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 46 + __msg("R{{.}}=scalar(id=1,smin=umin=0x80000000,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 47 47 check_assert(s64, >, gt_pos, INT_MAX); 48 - __msg(": R0=scalar(id=1,smin=umin=1,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 48 + __msg("R{{.}}=scalar(id=1,smin=umin=1,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 49 49 check_assert(s64, >, gt_zero, 0); 50 - __msg(": R0=scalar(id=1,smin=0xffffffff80000001") 50 + __msg("R{{.}}=scalar(id=1,smin=0xffffffff80000001") 51 51 check_assert(s64, >, gt_neg, INT_MIN); 52 52 53 - __msg(": R0=scalar(id=1,smin=umin=0x7fffffff,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 53 + __msg("R{{.}}=scalar(id=1,smin=umin=0x7fffffff,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 54 54 check_assert(s64, >=, ge_pos, INT_MAX); 55 - __msg(": R0=scalar(id=1,smin=0,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 55 + __msg("R{{.}}=scalar(id=1,smin=0,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 56 56 check_assert(s64, >=, ge_zero, 0); 57 - __msg(": R0=scalar(id=1,smin=0xffffffff80000000") 57 + __msg("R{{.}}=scalar(id=1,smin=0xffffffff80000000") 58 58 check_assert(s64, >=, ge_neg, INT_MIN); 59 59 60 60 SEC("?tc")
+38 -1
tools/testing/selftests/bpf/progs/verifier_bounds.c
··· 1148 1148 SEC("xdp") 1149 1149 __description("bound check with JMP32_JSLT for crossing 32-bit signed boundary") 1150 1150 __success __retval(0) 1151 - __flag(!BPF_F_TEST_REG_INVARIANTS) /* known invariants violation */ 1151 + __flag(BPF_F_TEST_REG_INVARIANTS) 1152 1152 __naked void crossing_32_bit_signed_boundary_2(void) 1153 1153 { 1154 1154 asm volatile (" \ ··· 1995 1995 if r0 == 0x10 goto +1; \ 1996 1996 r10 = 0; \ 1997 1997 exit; \ 1998 + " : 1999 + : __imm(bpf_get_prandom_u32) 2000 + : __clobber_all); 2001 + } 2002 + 2003 + SEC("socket") 2004 + __success 2005 + __flag(BPF_F_TEST_REG_INVARIANTS) 2006 + __naked void signed_unsigned_intersection32_case1(void *ctx) 2007 + { 2008 + asm volatile(" \ 2009 + call %[bpf_get_prandom_u32]; \ 2010 + w0 &= 0xffffffff; \ 2011 + if w0 < 0x3 goto 1f; /* on fall-through u32 range [3..U32_MAX] */ \ 2012 + if w0 s> 0x1 goto 1f; /* on fall-through s32 range [S32_MIN..1] */ \ 2013 + if w0 s< 0x0 goto 1f; /* range can be narrowed to [S32_MIN..-1] */ \ 2014 + r10 = 0; /* thus predicting the jump. */ \ 2015 + 1: exit; \ 2016 + " : 2017 + : __imm(bpf_get_prandom_u32) 2018 + : __clobber_all); 2019 + } 2020 + 2021 + SEC("socket") 2022 + __success 2023 + __flag(BPF_F_TEST_REG_INVARIANTS) 2024 + __naked void signed_unsigned_intersection32_case2(void *ctx) 2025 + { 2026 + asm volatile(" \ 2027 + call %[bpf_get_prandom_u32]; \ 2028 + w0 &= 0xffffffff; \ 2029 + if w0 > 0x80000003 goto 1f; /* on fall-through u32 range [0..S32_MIN+3] */ \ 2030 + if w0 s< -3 goto 1f; /* on fall-through s32 range [-3..S32_MAX] */ \ 2031 + if w0 s> 5 goto 1f; /* on fall-through s32 range [-3..5] */ \ 2032 + if w0 <= 5 goto 1f; /* range can be narrowed to [0..5] */ \ 2033 + r10 = 0; /* thus predicting the jump */ \ 2034 + 1: exit; \ 1998 2035 " : 1999 2036 : __imm(bpf_get_prandom_u32) 2000 2037 : __clobber_all);
+64
tools/testing/selftests/bpf/progs/verifier_linked_scalars.c
··· 363 363 __sink(path[0]); 364 364 } 365 365 366 + void dummy_calls(void) 367 + { 368 + bpf_iter_num_new(0, 0, 0); 369 + bpf_iter_num_next(0); 370 + bpf_iter_num_destroy(0); 371 + } 372 + 373 + SEC("socket") 374 + __success 375 + __flag(BPF_F_TEST_STATE_FREQ) 376 + int spurious_precision_marks(void *ctx) 377 + { 378 + struct bpf_iter_num iter; 379 + 380 + asm volatile( 381 + "r1 = %[iter];" 382 + "r2 = 0;" 383 + "r3 = 10;" 384 + "call %[bpf_iter_num_new];" 385 + "1:" 386 + "r1 = %[iter];" 387 + "call %[bpf_iter_num_next];" 388 + "if r0 == 0 goto 4f;" 389 + "r7 = *(u32 *)(r0 + 0);" 390 + "r8 = *(u32 *)(r0 + 0);" 391 + /* This jump can't be predicted and does not change r7 or r8 state. */ 392 + "if r7 > r8 goto 2f;" 393 + /* Branch explored first ties r2 and r7 as having the same id. */ 394 + "r2 = r7;" 395 + "goto 3f;" 396 + "2:" 397 + /* Branch explored second does not tie r2 and r7 but has a function call. */ 398 + "call %[bpf_get_prandom_u32];" 399 + "3:" 400 + /* 401 + * A checkpoint. 402 + * When first branch is explored, this would inject linked registers 403 + * r2 and r7 into the jump history. 404 + * When second branch is explored, this would be a cache hit point, 405 + * triggering propagate_precision(). 406 + */ 407 + "if r7 <= 42 goto +0;" 408 + /* 409 + * Mark r7 as precise using an if condition that is always true. 410 + * When reached via the second branch, this triggered a bug in the backtrack_insn() 411 + * because r2 (tied to r7) was propagated as precise to a call. 412 + */ 413 + "if r7 <= 0xffffFFFF goto +0;" 414 + "goto 1b;" 415 + "4:" 416 + "r1 = %[iter];" 417 + "call %[bpf_iter_num_destroy];" 418 + : 419 + : __imm_ptr(iter), 420 + __imm(bpf_iter_num_new), 421 + __imm(bpf_iter_num_next), 422 + __imm(bpf_iter_num_destroy), 423 + __imm(bpf_get_prandom_u32) 424 + : __clobber_common, "r7", "r8" 425 + ); 426 + 427 + return 0; 428 + } 429 + 366 430 char _license[] SEC("license") = "GPL";
+42 -14
tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
··· 40 40 */ 41 41 "r3 = r10;" 42 42 "r3 += r0;" 43 + /* Mark r1 and r2 as alive. */ 44 + "r1 = r1;" 45 + "r2 = r2;" 43 46 "r0 = 0;" 44 47 "exit;" 45 48 : ··· 76 73 */ 77 74 "r4 = r10;" 78 75 "r4 += r0;" 76 + /* Mark r1 and r2 as alive. */ 77 + "r1 = r1;" 78 + "r2 = r2;" 79 79 "r0 = 0;" 80 80 "exit;" 81 81 : ··· 112 106 */ 113 107 "r4 = r10;" 114 108 "r4 += r3;" 109 + /* Mark r1 and r2 as alive. */ 110 + "r0 = r0;" 111 + "r1 = r1;" 112 + "r2 = r2;" 115 113 "r0 = 0;" 116 114 "exit;" 117 115 : ··· 153 143 */ 154 144 "r3 = r10;" 155 145 "r3 += r0;" 146 + /* Mark r1 and r2 as alive. */ 147 + "r1 = r1;" 148 + "r2 = r2;" 156 149 "r0 = 0;" 157 150 "exit;" 158 151 : ··· 169 156 */ 170 157 SEC("socket") 171 158 __success __log_level(2) 172 - __msg("12: (0f) r2 += r1") 159 + __msg("17: (0f) r2 += r1") 173 160 /* Current state */ 174 - __msg("frame2: last_idx 12 first_idx 11 subseq_idx -1 ") 175 - __msg("frame2: regs=r1 stack= before 11: (bf) r2 = r10") 161 + __msg("frame2: last_idx 17 first_idx 14 subseq_idx -1 ") 162 + __msg("frame2: regs=r1 stack= before 16: (bf) r2 = r10") 176 163 __msg("frame2: parent state regs=r1 stack=") 177 164 __msg("frame1: parent state regs= stack=") 178 165 __msg("frame0: parent state regs= stack=") 179 166 /* Parent state */ 180 - __msg("frame2: last_idx 10 first_idx 10 subseq_idx 11 ") 181 - __msg("frame2: regs=r1 stack= before 10: (25) if r1 > 0x7 goto pc+0") 167 + __msg("frame2: last_idx 13 first_idx 13 subseq_idx 14 ") 168 + __msg("frame2: regs=r1 stack= before 13: (25) if r1 > 0x7 goto pc+0") 182 169 __msg("frame2: parent state regs=r1 stack=") 183 170 /* frame1.r{6,7} are marked because mark_precise_scalar_ids() 184 171 * looks for all registers with frame2.r1.id in the current state ··· 186 173 __msg("frame1: parent state regs=r6,r7 stack=") 187 174 __msg("frame0: parent state regs=r6 stack=") 188 175 /* Parent state */ 189 - __msg("frame2: last_idx 8 first_idx 8 subseq_idx 10") 190 - __msg("frame2: regs=r1 stack= before 8: (85) call pc+1") 176 + __msg("frame2: last_idx 9 first_idx 9 subseq_idx 13") 177 + __msg("frame2: regs=r1 stack= before 9: (85) call pc+3") 191 178 /* frame1.r1 is marked because of backtracking of call instruction */ 192 179 __msg("frame1: parent state regs=r1,r6,r7 stack=") 193 180 __msg("frame0: parent state regs=r6 stack=") 194 181 /* Parent state */ 195 - __msg("frame1: last_idx 7 first_idx 6 subseq_idx 8") 196 - __msg("frame1: regs=r1,r6,r7 stack= before 7: (bf) r7 = r1") 197 - __msg("frame1: regs=r1,r6 stack= before 6: (bf) r6 = r1") 182 + __msg("frame1: last_idx 8 first_idx 7 subseq_idx 9") 183 + __msg("frame1: regs=r1,r6,r7 stack= before 8: (bf) r7 = r1") 184 + __msg("frame1: regs=r1,r6 stack= before 7: (bf) r6 = r1") 198 185 __msg("frame1: parent state regs=r1 stack=") 199 186 __msg("frame0: parent state regs=r6 stack=") 200 187 /* Parent state */ 201 - __msg("frame1: last_idx 4 first_idx 4 subseq_idx 6") 202 - __msg("frame1: regs=r1 stack= before 4: (85) call pc+1") 188 + __msg("frame1: last_idx 4 first_idx 4 subseq_idx 7") 189 + __msg("frame1: regs=r1 stack= before 4: (85) call pc+2") 203 190 __msg("frame0: parent state regs=r1,r6 stack=") 204 191 /* Parent state */ 205 192 __msg("frame0: last_idx 3 first_idx 1 subseq_idx 4") ··· 217 204 "r1 = r0;" 218 205 "r6 = r0;" 219 206 "call precision_many_frames__foo;" 207 + "r6 = r6;" /* mark r6 as live */ 220 208 "exit;" 221 209 : 222 210 : __imm(bpf_ktime_get_ns) ··· 234 220 "r6 = r1;" 235 221 "r7 = r1;" 236 222 "call precision_many_frames__bar;" 223 + "r6 = r6;" /* mark r6 as live */ 224 
+ "r7 = r7;" /* mark r7 as live */ 237 225 "exit" 238 226 ::: __clobber_all); 239 227 } ··· 245 229 { 246 230 asm volatile ( 247 231 "if r1 > 7 goto +0;" 232 + "r6 = 0;" /* mark r6 as live */ 233 + "r7 = 0;" /* mark r7 as live */ 248 234 /* force r1 to be precise, this eventually marks: 249 235 * - bar frame r1 250 236 * - foo frame r{1,6,7} ··· 358 340 "r3 += r7;" 359 341 /* force r9 to be precise, this also marks r8 */ 360 342 "r3 += r9;" 343 + "r6 = r6;" /* mark r6 as live */ 344 + "r8 = r8;" /* mark r8 as live */ 361 345 "exit;" 362 346 : 363 347 : __imm(bpf_ktime_get_ns) ··· 373 353 * collect_linked_regs() can't tie more than 6 registers for a single insn. 374 354 */ 375 355 __msg("8: (25) if r0 > 0x7 goto pc+0 ; R0=scalar(id=1") 376 - __msg("9: (bf) r6 = r6 ; R6=scalar(id=2") 356 + __msg("14: (bf) r6 = r6 ; R6=scalar(id=2") 377 357 /* check that r{0-5} are marked precise after 'if' */ 378 358 __msg("frame0: regs=r0 stack= before 8: (25) if r0 > 0x7 goto pc+0") 379 359 __msg("frame0: parent state regs=r0,r1,r2,r3,r4,r5 stack=:") ··· 392 372 "r6 = r0;" 393 373 /* propagate range for r{0-6} */ 394 374 "if r0 > 7 goto +0;" 375 + /* keep r{1-5} live */ 376 + "r1 = r1;" 377 + "r2 = r2;" 378 + "r3 = r3;" 379 + "r4 = r4;" 380 + "r5 = r5;" 395 381 /* make r6 appear in the log */ 396 382 "r6 = r6;" 397 383 /* force r0 to be precise, ··· 543 517 "*(u64*)(r10 - 8) = r1;" 544 518 /* r9 = pointer to stack */ 545 519 "r9 = r10;" 546 - "r9 += -8;" 520 + "r9 += -16;" 547 521 /* r8 = ktime_get_ns() */ 548 522 "call %[bpf_ktime_get_ns];" 549 523 "r8 = r0;" ··· 564 538 "if r7 > 4 goto l2_%=;" 565 539 /* Access memory at r9[r6] */ 566 540 "r9 += r6;" 541 + "r9 += r7;" 542 + "r9 += r8;" 567 543 "r0 = *(u8*)(r9 + 0);" 568 544 "l2_%=:" 569 545 "r0 = 0;"
+4 -4
tools/testing/selftests/bpf/verifier/precise.c
··· 44 44 mark_precise: frame0: regs=r2 stack= before 23\ 45 45 mark_precise: frame0: regs=r2 stack= before 22\ 46 46 mark_precise: frame0: regs=r2 stack= before 20\ 47 - mark_precise: frame0: parent state regs=r2,r9 stack=:\ 47 + mark_precise: frame0: parent state regs=r2 stack=:\ 48 48 mark_precise: frame0: last_idx 19 first_idx 10\ 49 - mark_precise: frame0: regs=r2,r9 stack= before 19\ 49 + mark_precise: frame0: regs=r2 stack= before 19\ 50 50 mark_precise: frame0: regs=r9 stack= before 18\ 51 51 mark_precise: frame0: regs=r8,r9 stack= before 17\ 52 52 mark_precise: frame0: regs=r0,r9 stack= before 15\ ··· 107 107 mark_precise: frame0: parent state regs=r2 stack=:\ 108 108 mark_precise: frame0: last_idx 20 first_idx 20\ 109 109 mark_precise: frame0: regs=r2 stack= before 20\ 110 - mark_precise: frame0: parent state regs=r2,r9 stack=:\ 110 + mark_precise: frame0: parent state regs=r2 stack=:\ 111 111 mark_precise: frame0: last_idx 19 first_idx 17\ 112 - mark_precise: frame0: regs=r2,r9 stack= before 19\ 112 + mark_precise: frame0: regs=r2 stack= before 19\ 113 113 mark_precise: frame0: regs=r9 stack= before 18\ 114 114 mark_precise: frame0: regs=r8,r9 stack= before 17\ 115 115 mark_precise: frame0: parent state regs= stack=:",
+114 -110
tools/testing/selftests/cgroup/test_cpuset_prs.sh
··· 196 196 # P<v> = set cpus.partition (0:member, 1:root, 2:isolated) 197 197 # C<l> = add cpu-list to cpuset.cpus 198 198 # X<l> = add cpu-list to cpuset.cpus.exclusive 199 - # S<p> = use prefix in subtree_control 200 199 # T = put a task into cgroup 201 200 # CX<l> = add cpu-list to both cpuset.cpus and cpuset.cpus.exclusive 202 201 # O<c>=<v> = Write <v> to CPU online file of <c> ··· 208 209 # sched-debug matching which includes offline CPUs and single-CPU partitions 209 210 # while the second one is for matching cpuset.cpus.isolated. 210 211 # 211 - SETUP_A123_PARTITIONS="C1-3:P1:S+ C2-3:P1:S+ C3:P1" 212 + SETUP_A123_PARTITIONS="C1-3:P1 C2-3:P1 C3:P1" 212 213 TEST_MATRIX=( 213 214 # old-A1 old-A2 old-A3 old-B1 new-A1 new-A2 new-A3 new-B1 fail ECPUs Pstate ISOLCPUS 214 215 # ------ ------ ------ ------ ------ ------ ------ ------ ---- ----- ------ -------- 215 - " C0-1 . . C2-3 S+ C4-5 . . 0 A2:0-1" 216 + " C0-1 . . C2-3 . C4-5 . . 0 A2:0-1" 216 217 " C0-1 . . C2-3 P1 . . . 0 " 217 - " C0-1 . . C2-3 P1:S+ C0-1:P1 . . 0 " 218 - " C0-1 . . C2-3 P1:S+ C1:P1 . . 0 " 219 - " C0-1:S+ . . C2-3 . . . P1 0 " 220 - " C0-1:P1 . . C2-3 S+ C1 . . 0 " 221 - " C0-1:P1 . . C2-3 S+ C1:P1 . . 0 " 222 - " C0-1:P1 . . C2-3 S+ C1:P1 . P1 0 " 218 + " C0-1 . . C2-3 P1 C0-1:P1 . . 0 " 219 + " C0-1 . . C2-3 P1 C1:P1 . . 0 " 220 + " C0-1 . . C2-3 . . . P1 0 " 221 + " C0-1:P1 . . C2-3 . C1 . . 0 " 222 + " C0-1:P1 . . C2-3 . C1:P1 . . 0 " 223 + " C0-1:P1 . . C2-3 . C1:P1 . P1 0 " 223 224 " C0-1:P1 . . C2-3 C4-5 . . . 0 A1:4-5" 224 - " C0-1:P1 . . C2-3 S+:C4-5 . . . 0 A1:4-5" 225 225 " C0-1 . . C2-3:P1 . . . C2 0 " 226 226 " C0-1 . . C2-3:P1 . . . C4-5 0 B1:4-5" 227 - "C0-3:P1:S+ C2-3:P1 . . . . . . 0 A1:0-1|A2:2-3|XA2:2-3" 228 - "C0-3:P1:S+ C2-3:P1 . . C1-3 . . . 0 A1:1|A2:2-3|XA2:2-3" 229 - "C2-3:P1:S+ C3:P1 . . C3 . . . 0 A1:|A2:3|XA2:3 A1:P1|A2:P1" 230 - "C2-3:P1:S+ C3:P1 . . C3 P0 . . 0 A1:3|A2:3 A1:P1|A2:P0" 231 - "C2-3:P1:S+ C2:P1 . . C2-4 . . . 0 A1:3-4|A2:2" 232 - "C2-3:P1:S+ C3:P1 . . C3 . . C0-2 0 A1:|B1:0-2 A1:P1|A2:P1" 227 + " C0-3:P1 C2-3:P1 . . . . . . 0 A1:0-1|A2:2-3|XA2:2-3" 228 + " C0-3:P1 C2-3:P1 . . C1-3 . . . 0 A1:1|A2:2-3|XA2:2-3" 229 + " C2-3:P1 C3:P1 . . C3 . . . 0 A1:|A2:3|XA2:3 A1:P1|A2:P1" 230 + " C2-3:P1 C3:P1 . . C3 P0 . . 0 A1:3|A2:3 A1:P1|A2:P0" 231 + " C2-3:P1 C2:P1 . . C2-4 . . . 0 A1:3-4|A2:2" 232 + " C2-3:P1 C3:P1 . . C3 . . C0-2 0 A1:|B1:0-2 A1:P1|A2:P1" 233 233 "$SETUP_A123_PARTITIONS . C2-3 . . . 0 A1:|A2:2|A3:3 A1:P1|A2:P1|A3:P1" 234 234 235 235 # CPU offlining cases: 236 - " C0-1 . . C2-3 S+ C4-5 . O2=0 0 A1:0-1|B1:3" 237 - "C0-3:P1:S+ C2-3:P1 . . O2=0 . . . 0 A1:0-1|A2:3" 238 - "C0-3:P1:S+ C2-3:P1 . . O2=0 O2=1 . . 0 A1:0-1|A2:2-3" 239 - "C0-3:P1:S+ C2-3:P1 . . O1=0 . . . 0 A1:0|A2:2-3" 240 - "C0-3:P1:S+ C2-3:P1 . . O1=0 O1=1 . . 0 A1:0-1|A2:2-3" 241 - "C2-3:P1:S+ C3:P1 . . O3=0 O3=1 . . 0 A1:2|A2:3 A1:P1|A2:P1" 242 - "C2-3:P1:S+ C3:P2 . . O3=0 O3=1 . . 0 A1:2|A2:3 A1:P1|A2:P2" 243 - "C2-3:P1:S+ C3:P1 . . O2=0 O2=1 . . 0 A1:2|A2:3 A1:P1|A2:P1" 244 - "C2-3:P1:S+ C3:P2 . . O2=0 O2=1 . . 0 A1:2|A2:3 A1:P1|A2:P2" 245 - "C2-3:P1:S+ C3:P1 . . O2=0 . . . 0 A1:|A2:3 A1:P1|A2:P1" 246 - "C2-3:P1:S+ C3:P1 . . O3=0 . . . 0 A1:2|A2: A1:P1|A2:P1" 247 - "C2-3:P1:S+ C3:P1 . . T:O2=0 . . . 0 A1:3|A2:3 A1:P1|A2:P-1" 248 - "C2-3:P1:S+ C3:P1 . . . T:O3=0 . . 0 A1:2|A2:2 A1:P1|A2:P-1" 236 + " C0-1 . . C2-3 . C4-5 . O2=0 0 A1:0-1|B1:3" 237 + " C0-3:P1 C2-3:P1 . . O2=0 . . . 0 A1:0-1|A2:3" 238 + " C0-3:P1 C2-3:P1 . . O2=0 O2=1 . . 0 A1:0-1|A2:2-3" 239 + " C0-3:P1 C2-3:P1 . . O1=0 . . . 
0 A1:0|A2:2-3" 240 + " C0-3:P1 C2-3:P1 . . O1=0 O1=1 . . 0 A1:0-1|A2:2-3" 241 + " C2-3:P1 C3:P1 . . O3=0 O3=1 . . 0 A1:2|A2:3 A1:P1|A2:P1" 242 + " C2-3:P1 C3:P2 . . O3=0 O3=1 . . 0 A1:2|A2:3 A1:P1|A2:P2" 243 + " C2-3:P1 C3:P1 . . O2=0 O2=1 . . 0 A1:2|A2:3 A1:P1|A2:P1" 244 + " C2-3:P1 C3:P2 . . O2=0 O2=1 . . 0 A1:2|A2:3 A1:P1|A2:P2" 245 + " C2-3:P1 C3:P1 . . O2=0 . . . 0 A1:|A2:3 A1:P1|A2:P1" 246 + " C2-3:P1 C3:P1 . . O3=0 . . . 0 A1:2|A2: A1:P1|A2:P1" 247 + " C2-3:P1 C3:P1 . . T:O2=0 . . . 0 A1:3|A2:3 A1:P1|A2:P-1" 248 + " C2-3:P1 C3:P1 . . . T:O3=0 . . 0 A1:2|A2:2 A1:P1|A2:P-1" 249 + " C2-3:P1 C3:P2 . . T:O2=0 . . . 0 A1:3|A2:3 A1:P1|A2:P-2" 250 + " C1-3:P1 C3:P2 . . . T:O3=0 . . 0 A1:1-2|A2:1-2 A1:P1|A2:P-2 3|" 251 + " C1-3:P1 C3:P2 . . . T:O3=0 O3=1 . 0 A1:1-2|A2:3 A1:P1|A2:P2 3" 249 252 "$SETUP_A123_PARTITIONS . O1=0 . . . 0 A1:|A2:2|A3:3 A1:P1|A2:P1|A3:P1" 250 253 "$SETUP_A123_PARTITIONS . O2=0 . . . 0 A1:1|A2:|A3:3 A1:P1|A2:P1|A3:P1" 251 254 "$SETUP_A123_PARTITIONS . O3=0 . . . 0 A1:1|A2:2|A3: A1:P1|A2:P1|A3:P1" ··· 265 264 # 266 265 # Remote partition and cpuset.cpus.exclusive tests 267 266 # 268 - " C0-3:S+ C1-3:S+ C2-3 . X2-3 . . . 0 A1:0-3|A2:1-3|A3:2-3|XA1:2-3" 269 - " C0-3:S+ C1-3:S+ C2-3 . X2-3 X2-3:P2 . . 0 A1:0-1|A2:2-3|A3:2-3 A1:P0|A2:P2 2-3" 270 - " C0-3:S+ C1-3:S+ C2-3 . X2-3 X3:P2 . . 0 A1:0-2|A2:3|A3:3 A1:P0|A2:P2 3" 271 - " C0-3:S+ C1-3:S+ C2-3 . X2-3 X2-3 X2-3:P2 . 0 A1:0-1|A2:1|A3:2-3 A1:P0|A3:P2 2-3" 272 - " C0-3:S+ C1-3:S+ C2-3 . X2-3 X2-3 X2-3:P2:C3 . 0 A1:0-1|A2:1|A3:2-3 A1:P0|A3:P2 2-3" 273 - " C0-3:S+ C1-3:S+ C2-3 C2-3 . . . P2 0 A1:0-1|A2:1|A3:1|B1:2-3 A1:P0|A3:P0|B1:P2" 274 - " C0-3:S+ C1-3:S+ C2-3 C4-5 . . . P2 0 B1:4-5 B1:P2 4-5" 275 - " C0-3:S+ C1-3:S+ C2-3 C4 X2-3 X2-3 X2-3:P2 P2 0 A3:2-3|B1:4 A3:P2|B1:P2 2-4" 276 - " C0-3:S+ C1-3:S+ C2-3 C4 X2-3 X2-3 X2-3:P2:C1-3 P2 0 A3:2-3|B1:4 A3:P2|B1:P2 2-4" 277 - " C0-3:S+ C1-3:S+ C2-3 C4 X1-3 X1-3:P2 P2 . 0 A2:1|A3:2-3 A2:P2|A3:P2 1-3" 278 - " C0-3:S+ C1-3:S+ C2-3 C4 X2-3 X2-3 X2-3:P2 P2:C4-5 0 A3:2-3|B1:4-5 A3:P2|B1:P2 2-5" 279 - " C4:X0-3:S+ X1-3:S+ X2-3 . . P2 . . 0 A1:4|A2:1-3|A3:1-3 A2:P2 1-3" 280 - " C4:X0-3:S+ X1-3:S+ X2-3 . . . P2 . 0 A1:4|A2:4|A3:2-3 A3:P2 2-3" 267 + " C0-3 C1-3 C2-3 . X2-3 . . . 0 A1:0-3|A2:1-3|A3:2-3|XA1:2-3" 268 + " C0-3 C1-3 C2-3 . X2-3 X2-3:P2 . . 0 A1:0-1|A2:2-3|A3:2-3 A1:P0|A2:P2 2-3" 269 + " C0-3 C1-3 C2-3 . X2-3 X3:P2 . . 0 A1:0-2|A2:3|A3:3 A1:P0|A2:P2 3" 270 + " C0-3 C1-3 C2-3 . X2-3 X2-3 X2-3:P2 . 0 A1:0-1|A2:1|A3:2-3 A1:P0|A3:P2 2-3" 271 + " C0-3 C1-3 C2-3 . X2-3 X2-3 X2-3:P2:C3 . 0 A1:0-1|A2:1|A3:2-3 A1:P0|A3:P2 2-3" 272 + " C0-3 C1-3 C2-3 C2-3 . . . P2 0 A1:0-1|A2:1|A3:1|B1:2-3 A1:P0|A3:P0|B1:P2" 273 + " C0-3 C1-3 C2-3 C4-5 . . . P2 0 B1:4-5 B1:P2 4-5" 274 + " C0-3 C1-3 C2-3 C4 X2-3 X2-3 X2-3:P2 P2 0 A3:2-3|B1:4 A3:P2|B1:P2 2-4" 275 + " C0-3 C1-3 C2-3 C4 X2-3 X2-3 X2-3:P2:C1-3 P2 0 A3:2-3|B1:4 A3:P2|B1:P2 2-4" 276 + " C0-3 C1-3 C2-3 C4 X1-3 X1-3:P2 P2 . 0 A2:1|A3:2-3 A2:P2|A3:P2 1-3" 277 + " C0-3 C1-3 C2-3 C4 X2-3 X2-3 X2-3:P2 P2:C4-5 0 A3:2-3|B1:4-5 A3:P2|B1:P2 2-5" 278 + " C4:X0-3 X1-3 X2-3 . . P2 . . 0 A1:4|A2:1-3|A3:1-3 A2:P2 1-3" 279 + " C4:X0-3 X1-3 X2-3 . . . P2 . 
0 A1:4|A2:4|A3:2-3 A3:P2 2-3" 281 280 282 281 # Nested remote/local partition tests 283 - " C0-3:S+ C1-3:S+ C2-3 C4-5 X2-3 X2-3:P1 P2 P1 0 A1:0-1|A2:|A3:2-3|B1:4-5 \ 282 + " C0-3 C1-3 C2-3 C4-5 X2-3 X2-3:P1 P2 P1 0 A1:0-1|A2:|A3:2-3|B1:4-5 \ 284 283 A1:P0|A2:P1|A3:P2|B1:P1 2-3" 285 - " C0-3:S+ C1-3:S+ C2-3 C4 X2-3 X2-3:P1 P2 P1 0 A1:0-1|A2:|A3:2-3|B1:4 \ 284 + " C0-3 C1-3 C2-3 C4 X2-3 X2-3:P1 P2 P1 0 A1:0-1|A2:|A3:2-3|B1:4 \ 286 285 A1:P0|A2:P1|A3:P2|B1:P1 2-4|2-3" 287 - " C0-3:S+ C1-3:S+ C2-3 C4 X2-3 X2-3:P1 . P1 0 A1:0-1|A2:2-3|A3:2-3|B1:4 \ 286 + " C0-3 C1-3 C2-3 C4 X2-3 X2-3:P1 . P1 0 A1:0-1|A2:2-3|A3:2-3|B1:4 \ 288 287 A1:P0|A2:P1|A3:P0|B1:P1" 289 - " C0-3:S+ C1-3:S+ C3 C4 X2-3 X2-3:P1 P2 P1 0 A1:0-1|A2:2|A3:3|B1:4 \ 288 + " C0-3 C1-3 C3 C4 X2-3 X2-3:P1 P2 P1 0 A1:0-1|A2:2|A3:3|B1:4 \ 290 289 A1:P0|A2:P1|A3:P2|B1:P1 2-4|3" 291 - " C0-4:S+ C1-4:S+ C2-4 . X2-4 X2-4:P2 X4:P1 . 0 A1:0-1|A2:2-3|A3:4 \ 290 + " C0-4 C1-4 C2-4 . X2-4 X2-4:P2 X4:P1 . 0 A1:0-1|A2:2-3|A3:4 \ 292 291 A1:P0|A2:P2|A3:P1 2-4|2-3" 293 - " C0-4:S+ C1-4:S+ C2-4 . X2-4 X2-4:P2 X3-4:P1 . 0 A1:0-1|A2:2|A3:3-4 \ 292 + " C0-4 C1-4 C2-4 . X2-4 X2-4:P2 X3-4:P1 . 0 A1:0-1|A2:2|A3:3-4 \ 294 293 A1:P0|A2:P2|A3:P1 2" 295 - " C0-4:X2-4:S+ C1-4:X2-4:S+:P2 C2-4:X4:P1 \ 294 + " C0-4:X2-4 C1-4:X2-4:P2 C2-4:X4:P1 \ 296 295 . . X5 . . 0 A1:0-4|A2:1-4|A3:2-4 \ 297 296 A1:P0|A2:P-2|A3:P-1 ." 298 - " C0-4:X2-4:S+ C1-4:X2-4:S+:P2 C2-4:X4:P1 \ 297 + " C0-4:X2-4 C1-4:X2-4:P2 C2-4:X4:P1 \ 299 298 . . . X1 . 0 A1:0-1|A2:2-4|A3:2-4 \ 300 299 A1:P0|A2:P2|A3:P-1 2-4" 301 300 302 301 # Remote partition offline tests 303 - " C0-3:S+ C1-3:S+ C2-3 . X2-3 X2-3 X2-3:P2:O2=0 . 0 A1:0-1|A2:1|A3:3 A1:P0|A3:P2 2-3" 304 - " C0-3:S+ C1-3:S+ C2-3 . X2-3 X2-3 X2-3:P2:O2=0 O2=1 0 A1:0-1|A2:1|A3:2-3 A1:P0|A3:P2 2-3" 305 - " C0-3:S+ C1-3:S+ C3 . X2-3 X2-3 P2:O3=0 . 0 A1:0-2|A2:1-2|A3: A1:P0|A3:P2 3" 306 - " C0-3:S+ C1-3:S+ C3 . X2-3 X2-3 T:P2:O3=0 . 0 A1:0-2|A2:1-2|A3:1-2 A1:P0|A3:P-2 3|" 302 + " C0-3 C1-3 C2-3 . X2-3 X2-3 X2-3:P2:O2=0 . 0 A1:0-1|A2:1|A3:3 A1:P0|A3:P2 2-3" 303 + " C0-3 C1-3 C2-3 . X2-3 X2-3 X2-3:P2:O2=0 O2=1 0 A1:0-1|A2:1|A3:2-3 A1:P0|A3:P2 2-3" 304 + " C0-3 C1-3 C3 . X2-3 X2-3 P2:O3=0 . 0 A1:0-2|A2:1-2|A3: A1:P0|A3:P2 3" 305 + " C0-3 C1-3 C3 . X2-3 X2-3 T:P2:O3=0 . 0 A1:0-2|A2:1-2|A3:1-2 A1:P0|A3:P-2 3|" 307 306 308 307 # An invalidated remote partition cannot self-recover from hotplug 309 - " C0-3:S+ C1-3:S+ C2 . X2-3 X2-3 T:P2:O2=0 O2=1 0 A1:0-3|A2:1-3|A3:2 A1:P0|A3:P-2 ." 308 + " C0-3 C1-3 C2 . X2-3 X2-3 T:P2:O2=0 O2=1 0 A1:0-3|A2:1-3|A3:2 A1:P0|A3:P-2 ." 310 309 311 310 # cpus.exclusive.effective clearing test 312 - " C0-3:S+ C1-3:S+ C2 . X2-3:X . . . 0 A1:0-3|A2:1-3|A3:2|XA1:" 311 + " C0-3 C1-3 C2 . X2-3:X . . . 0 A1:0-3|A2:1-3|A3:2|XA1:" 313 312 314 313 # Invalid to valid remote partition transition test 315 - " C0-3:S+ C1-3 . . . X3:P2 . . 0 A1:0-3|A2:1-3|XA2: A2:P-2 ." 316 - " C0-3:S+ C1-3:X3:P2 317 - . . X2-3 P2 . . 0 A1:0-2|A2:3|XA2:3 A2:P2 3" 314 + " C0-3 C1-3 . . . X3:P2 . . 0 A1:0-3|A2:1-3|XA2: A2:P-2 ." 315 + " C0-3 C1-3:X3:P2 . . X2-3 P2 . . 0 A1:0-2|A2:3|XA2:3 A2:P2 3" 318 316 319 317 # Invalid to valid local partition direct transition tests 320 - " C1-3:S+:P2 X4:P2 . . . . . . 0 A1:1-3|XA1:1-3|A2:1-3:XA2: A1:P2|A2:P-2 1-3" 321 - " C1-3:S+:P2 X4:P2 . . . X3:P2 . . 0 A1:1-2|XA1:1-3|A2:3:XA2:3 A1:P2|A2:P2 1-3" 322 - " C0-3:P2 . . C4-6 C0-4 . . . 0 A1:0-4|B1:5-6 A1:P2|B1:P0" 323 - " C0-3:P2 . . C4-6 C0-4:C0-3 . . . 0 A1:0-3|B1:4-6 A1:P2|B1:P0 0-3" 318 + " C1-3:P2 X4:P2 . . . . . . 
0 A1:1-3|XA1:1-3|A2:1-3:XA2: A1:P2|A2:P-2 1-3" 319 + " C1-3:P2 X4:P2 . . . X3:P2 . . 0 A1:1-2|XA1:1-3|A2:3:XA2:3 A1:P2|A2:P2 1-3" 320 + " C0-3:P2 . . C4-6 C0-4 . . . 0 A1:0-4|B1:5-6 A1:P2|B1:P0" 321 + " C0-3:P2 . . C4-6 C0-4:C0-3 . . . 0 A1:0-3|B1:4-6 A1:P2|B1:P0 0-3" 324 322 325 323 # Local partition invalidation tests 326 - " C0-3:X1-3:S+:P2 C1-3:X2-3:S+:P2 C2-3:X3:P2 \ 324 + " C0-3:X1-3:P2 C1-3:X2-3:P2 C2-3:X3:P2 \ 327 325 . . . . . 0 A1:1|A2:2|A3:3 A1:P2|A2:P2|A3:P2 1-3" 328 - " C0-3:X1-3:S+:P2 C1-3:X2-3:S+:P2 C2-3:X3:P2 \ 326 + " C0-3:X1-3:P2 C1-3:X2-3:P2 C2-3:X3:P2 \ 329 327 . . X4 . . 0 A1:1-3|A2:1-3|A3:2-3|XA2:|XA3: A1:P2|A2:P-2|A3:P-2 1-3" 330 - " C0-3:X1-3:S+:P2 C1-3:X2-3:S+:P2 C2-3:X3:P2 \ 328 + " C0-3:X1-3:P2 C1-3:X2-3:P2 C2-3:X3:P2 \ 331 329 . . C4:X . . 0 A1:1-3|A2:1-3|A3:2-3|XA2:|XA3: A1:P2|A2:P-2|A3:P-2 1-3" 332 330 # Local partition CPU change tests 333 - " C0-5:S+:P2 C4-5:S+:P1 . . . C3-5 . . 0 A1:0-2|A2:3-5 A1:P2|A2:P1 0-2" 334 - " C0-5:S+:P2 C4-5:S+:P1 . . C1-5 . . . 0 A1:1-3|A2:4-5 A1:P2|A2:P1 1-3" 331 + " C0-5:P2 C4-5:P1 . . . C3-5 . . 0 A1:0-2|A2:3-5 A1:P2|A2:P1 0-2" 332 + " C0-5:P2 C4-5:P1 . . C1-5 . . . 0 A1:1-3|A2:4-5 A1:P2|A2:P1 1-3" 335 333 336 334 # cpus_allowed/exclusive_cpus update tests 337 - " C0-3:X2-3:S+ C1-3:X2-3:S+ C2-3:X2-3 \ 335 + " C0-3:X2-3 C1-3:X2-3 C2-3:X2-3 \ 338 336 . X:C4 . P2 . 0 A1:4|A2:4|XA2:|XA3:|A3:4 \ 339 337 A1:P0|A3:P-2 ." 340 - " C0-3:X2-3:S+ C1-3:X2-3:S+ C2-3:X2-3 \ 338 + " C0-3:X2-3 C1-3:X2-3 C2-3:X2-3 \ 341 339 . X1 . P2 . 0 A1:0-3|A2:1-3|XA1:1|XA2:|XA3:|A3:2-3 \ 342 340 A1:P0|A3:P-2 ." 343 - " C0-3:X2-3:S+ C1-3:X2-3:S+ C2-3:X2-3 \ 341 + " C0-3:X2-3 C1-3:X2-3 C2-3:X2-3 \ 344 342 . . X3 P2 . 0 A1:0-2|A2:1-2|XA2:3|XA3:3|A3:3 \ 345 343 A1:P0|A3:P2 3" 346 - " C0-3:X2-3:S+ C1-3:X2-3:S+ C2-3:X2-3:P2 \ 344 + " C0-3:X2-3 C1-3:X2-3 C2-3:X2-3:P2 \ 347 345 . . X3 . . 0 A1:0-2|A2:1-2|XA2:3|XA3:3|A3:3|XA3:3 \ 348 346 A1:P0|A3:P2 3" 349 - " C0-3:X2-3:S+ C1-3:X2-3:S+ C2-3:X2-3:P2 \ 347 + " C0-3:X2-3 C1-3:X2-3 C2-3:X2-3:P2 \ 350 348 . X4 . . . 0 A1:0-3|A2:1-3|A3:2-3|XA1:4|XA2:|XA3 \ 351 349 A1:P0|A3:P-2" 352 350 ··· 356 356 # 357 357 # Adding CPUs to partition root that are not in parent's 358 358 # cpuset.cpus is allowed, but those extra CPUs are ignored. 359 - "C2-3:P1:S+ C3:P1 . . . C2-4 . . 0 A1:|A2:2-3 A1:P1|A2:P1" 359 + " C2-3:P1 C3:P1 . . . C2-4 . . 0 A1:|A2:2-3 A1:P1|A2:P1" 360 360 361 361 # Taking away all CPUs from parent or itself if there are tasks 362 362 # will make the partition invalid. 363 - "C2-3:P1:S+ C3:P1 . . T C2-3 . . 0 A1:2-3|A2:2-3 A1:P1|A2:P-1" 364 - " C3:P1:S+ C3 . . T P1 . . 0 A1:3|A2:3 A1:P1|A2:P-1" 363 + " C2-3:P1 C3:P1 . . T C2-3 . . 0 A1:2-3|A2:2-3 A1:P1|A2:P-1" 364 + " C3:P1 C3 . . T P1 . . 0 A1:3|A2:3 A1:P1|A2:P-1" 365 365 "$SETUP_A123_PARTITIONS . T:C2-3 . . . 0 A1:2-3|A2:2-3|A3:3 A1:P1|A2:P-1|A3:P-1" 366 366 "$SETUP_A123_PARTITIONS . T:C2-3:C1-3 . . . 0 A1:1|A2:2|A3:3 A1:P1|A2:P1|A3:P1" 367 367 368 368 # Changing a partition root to member makes child partitions invalid 369 - "C2-3:P1:S+ C3:P1 . . P0 . . . 0 A1:2-3|A2:3 A1:P0|A2:P-1" 369 + " C2-3:P1 C3:P1 . . P0 . . . 0 A1:2-3|A2:3 A1:P0|A2:P-1" 370 370 "$SETUP_A123_PARTITIONS . C2-3 P0 . . 0 A1:2-3|A2:2-3|A3:3 A1:P1|A2:P0|A3:P-1" 371 371 372 372 # cpuset.cpus can contains cpus not in parent's cpuset.cpus as long 373 373 # as they overlap. 374 - "C2-3:P1:S+ . . . . C3-4:P1 . . 0 A1:2|A2:3 A1:P1|A2:P1" 374 + " C2-3:P1 . . . . C3-4:P1 . . 0 A1:2|A2:3 A1:P1|A2:P1" 375 375 376 376 # Deletion of CPUs distributed to child cgroup is allowed. 377 - "C0-1:P1:S+ C1 . 
C2-3 C4-5 . . . 0 A1:4-5|A2:4-5" 377 + " C0-1:P1 C1 . C2-3 C4-5 . . . 0 A1:4-5|A2:4-5" 378 378 379 379 # To become a valid partition root, cpuset.cpus must overlap parent's 380 380 # cpuset.cpus. 381 - " C0-1:P1 . . C2-3 S+ C4-5:P1 . . 0 A1:0-1|A2:0-1 A1:P1|A2:P-1" 381 + " C0-1:P1 . . C2-3 . C4-5:P1 . . 0 A1:0-1|A2:0-1 A1:P1|A2:P-1" 382 382 383 383 # Enabling partition with child cpusets is allowed 384 - " C0-1:S+ C1 . C2-3 P1 . . . 0 A1:0-1|A2:1 A1:P1" 384 + " C0-1 C1 . C2-3 P1 . . . 0 A1:0-1|A2:1 A1:P1" 385 385 386 386 # A partition root with non-partition root parent is invalid| but it 387 387 # can be made valid if its parent becomes a partition root too. 388 - " C0-1:S+ C1 . C2-3 . P2 . . 0 A1:0-1|A2:1 A1:P0|A2:P-2" 389 - " C0-1:S+ C1:P2 . C2-3 P1 . . . 0 A1:0|A2:1 A1:P1|A2:P2 0-1|1" 388 + " C0-1 C1 . C2-3 . P2 . . 0 A1:0-1|A2:1 A1:P0|A2:P-2" 389 + " C0-1 C1:P2 . C2-3 P1 . . . 0 A1:0|A2:1 A1:P1|A2:P2 0-1|1" 390 390 391 391 # A non-exclusive cpuset.cpus change will not invalidate its siblings partition. 392 392 " C0-1:P1 . . C2-3 C0-2 . . . 0 A1:0-2|B1:3 A1:P1|B1:P0" ··· 398 398 399 399 # Child partition root that try to take all CPUs from parent partition 400 400 # with tasks will remain invalid. 401 - " C1-4:P1:S+ P1 . . . . . . 0 A1:1-4|A2:1-4 A1:P1|A2:P-1" 402 - " C1-4:P1:S+ P1 . . . C1-4 . . 0 A1|A2:1-4 A1:P1|A2:P1" 403 - " C1-4:P1:S+ P1 . . T C1-4 . . 0 A1:1-4|A2:1-4 A1:P1|A2:P-1" 401 + " C1-4:P1 P1 . . . . . . 0 A1:1-4|A2:1-4 A1:P1|A2:P-1" 402 + " C1-4:P1 P1 . . . C1-4 . . 0 A1|A2:1-4 A1:P1|A2:P1" 403 + " C1-4:P1 P1 . . T C1-4 . . 0 A1:1-4|A2:1-4 A1:P1|A2:P-1" 404 404 405 405 # Clearing of cpuset.cpus with a preset cpuset.cpus.exclusive shouldn't 406 406 # affect cpuset.cpus.exclusive.effective. 407 - " C1-4:X3:S+ C1:X3 . . . C . . 0 A2:1-4|XA2:3" 407 + " C1-4:X3 C1:X3 . . . C . . 0 A2:1-4|XA2:3" 408 408 409 409 # cpuset.cpus can contain CPUs that overlap a sibling cpuset with cpus.exclusive 410 410 # but creating a local partition out of it is not allowed. Similarly and change 411 411 # in cpuset.cpus of a local partition that overlaps sibling exclusive CPUs will 412 412 # invalidate it. 413 - " CX1-4:S+ CX2-4:P2 . C5-6 . . . P1 0 A1:1|A2:2-4|B1:5-6|XB1:5-6 \ 413 + " CX1-4 CX2-4:P2 . C5-6 . . . P1 0 A1:1|A2:2-4|B1:5-6|XB1:5-6 \ 414 414 A1:P0|A2:P2:B1:P1 2-4" 415 - " CX1-4:S+ CX2-4:P2 . C3-6 . . . P1 0 A1:1|A2:2-4|B1:5-6 \ 415 + " CX1-4 CX2-4:P2 . C3-6 . . . P1 0 A1:1|A2:2-4|B1:5-6 \ 416 416 A1:P0|A2:P2:B1:P-1 2-4" 417 - " CX1-4:S+ CX2-4:P2 . C5-6 . . . P1:C3-6 0 A1:1|A2:2-4|B1:5-6 \ 417 + " CX1-4 CX2-4:P2 . C5-6 . . . P1:C3-6 0 A1:1|A2:2-4|B1:5-6 \ 418 418 A1:P0|A2:P2:B1:P-1 2-4" 419 419 420 420 # When multiple partitions with conflicting cpuset.cpus are created, the ··· 426 426 " C1-3:X1-3 . . C4-5 . . . C1-2 0 A1:1-3|B1:1-2" 427 427 428 428 # cpuset.cpus can become empty with task in it as it inherits parent's effective CPUs 429 - " C1-3:S+ C2 . . . T:C . . 0 A1:1-3|A2:1-3" 429 + " C1-3 C2 . . . T:C . . 0 A1:1-3|A2:1-3" 430 430 431 431 # old-A1 old-A2 old-A3 old-B1 new-A1 new-A2 new-A3 new-B1 fail ECPUs Pstate ISOLCPUS 432 432 # ------ ------ ------ ------ ------ ------ ------ ------ ---- ----- ------ -------- 433 433 # Failure cases: 434 434 435 435 # A task cannot be added to a partition with no cpu 436 - "C2-3:P1:S+ C3:P1 . . O2=0:T . . . 1 A1:|A2:3 A1:P1|A2:P1" 436 + " C2-3:P1 C3:P1 . . O2=0:T . . . 1 A1:|A2:3 A1:P1|A2:P1" 437 437 438 438 # Changes to cpuset.cpus.exclusive that violate exclusivity rule is rejected 439 439 " C0-3 . . C4-5 X0-3 . . 
X3-5 1 A1:0-3|B1:4-5" ··· 465 465 # old-p1 old-p2 old-c11 old-c12 old-c21 old-c22 466 466 # new-p1 new-p2 new-c11 new-c12 new-c21 new-c22 ECPUs Pstate ISOLCPUS 467 467 # ------ ------ ------- ------- ------- ------- ----- ------ -------- 468 - " X1-3:S+ X4-6:S+ X1-2 X3 X4-5 X6 \ 468 + " X1-3 X4-6 X1-2 X3 X4-5 X6 \ 469 469 . . P2 P2 P2 P2 c11:1-2|c12:3|c21:4-5|c22:6 \ 470 470 c11:P2|c12:P2|c21:P2|c22:P2 1-6" 471 - " CX1-4:S+ . X1-2:P2 C3 . . \ 471 + " CX1-4 . X1-2:P2 C3 . . \ 472 472 . . . C3-4 . . p1:3-4|c11:1-2|c12:3-4 \ 473 473 p1:P0|c11:P2|c12:P0 1-2" 474 - " CX1-4:S+ . X1-2:P2 . . . \ 474 + " CX1-4 . X1-2:P2 . . . \ 475 475 X2-4 . . . . . p1:1,3-4|c11:2 \ 476 476 p1:P0|c11:P2 2" 477 - " CX1-5:S+ . X1-2:P2 X3-5:P1 . . \ 477 + " CX1-5 . X1-2:P2 X3-5:P1 . . \ 478 478 X2-4 . . . . . p1:1,5|c11:2|c12:3-4 \ 479 479 p1:P0|c11:P2|c12:P1 2" 480 - " CX1-4:S+ . X1-2:P2 X3-4:P1 . . \ 480 + " CX1-4 . X1-2:P2 X3-4:P1 . . \ 481 481 . . X2 . . . p1:1|c11:2|c12:3-4 \ 482 482 p1:P0|c11:P2|c12:P1 2" 483 483 # p1 as member, will get its effective CPUs from its parent rtest 484 - " CX1-4:S+ . X1-2:P2 X3-4:P1 . . \ 484 + " CX1-4 . X1-2:P2 X3-4:P1 . . \ 485 485 . . X1 CX2-4 . . p1:5-7|c11:1|c12:2-4 \ 486 486 p1:P0|c11:P2|c12:P1 1" 487 - " CX1-4:S+ X5-6:P1:S+ . . . . \ 488 - . . X1-2:P2 X4-5:P1 . X1-7:P2 p1:3|c11:1-2|c12:4:c22:5-6 \ 487 + " CX1-4 X5-6:P1 . . . . \ 488 + . . X1-2:P2 X4-5:P1 . X1-7:P2 p1:3|c11:1-2|c12:4:c22:5-6 \ 489 489 p1:P0|p2:P1|c11:P2|c12:P1|c22:P2 \ 490 490 1-2,4-6|1-2,5-6" 491 491 # c12 whose cpuset.cpus CPUs are all granted to c11 will become invalid partition 492 - " C1-5:P1:S+ . C1-4:P1 C2-3 . . \ 492 + " C1-5:P1 . C1-4:P1 C2-3 . . \ 493 493 . . . P1 . . p1:5|c11:1-4|c12:5 \ 494 494 p1:P1|c11:P1|c12:P-1" 495 495 ) ··· 530 530 CGRP=$1 531 531 STATE=$2 532 532 SHOWERR=${3} 533 - CTRL=${CTRL:=$CONTROLLER} 534 533 HASERR=0 535 534 REDIRECT="2> $TMPMSG" 536 535 [[ -z "$STATE" || "$STATE" = '.' ]] && return 0 ··· 539 540 for CMD in $(echo $STATE | sed -e "s/:/ /g") 540 541 do 541 542 TFILE=$CGRP/cgroup.procs 542 - SFILE=$CGRP/cgroup.subtree_control 543 543 PFILE=$CGRP/cpuset.cpus.partition 544 544 CFILE=$CGRP/cpuset.cpus 545 545 XFILE=$CGRP/cpuset.cpus.exclusive 546 - case $CMD in 547 - S*) PREFIX=${CMD#?} 548 - COMM="echo ${PREFIX}${CTRL} > $SFILE" 546 + 547 + # Enable cpuset controller if not enabled yet 548 + [[ -f $CFILE ]] || { 549 + COMM="echo +cpuset > $CGRP/../cgroup.subtree_control" 549 550 eval $COMM $REDIRECT 550 - ;; 551 + } 552 + case $CMD in 551 553 X*) 552 554 CPUS=${CMD#?} 553 555 COMM="echo $CPUS > $XFILE" ··· 764 764 # only CPUs in isolated partitions as well as those that are isolated at 765 765 # boot time. 766 766 # 767 - # $1 - expected isolated cpu list(s) <isolcpus1>{,<isolcpus2>} 767 + # $1 - expected isolated cpu list(s) <isolcpus1>{|<isolcpus2>} 768 768 # <isolcpus1> - expected sched/domains value 769 769 # <isolcpus2> - cpuset.cpus.isolated value = <isolcpus1> if not defined 770 770 # ··· 773 773 EXPECTED_ISOLCPUS=$1 774 774 ISCPUS=${CGROUP2}/cpuset.cpus.isolated 775 775 ISOLCPUS=$(cat $ISCPUS) 776 + HKICPUS=$(cat /sys/devices/system/cpu/isolated) 776 777 LASTISOLCPU= 777 778 SCHED_DOMAINS=/sys/kernel/debug/sched/domains 778 779 if [[ $EXPECTED_ISOLCPUS = . 
]] ··· 810 809 [[ "$EXPECTED_ISOLCPUS" != "$ISOLCPUS" ]] && return 1 811 810 ISOLCPUS= 812 811 EXPECTED_ISOLCPUS=$EXPECTED_SDOMAIN 812 + 813 + # 814 + # The inverse of HK_TYPE_DOMAIN cpumask in $HKICPUS should match $ISOLCPUS 815 + # 816 + [[ "$ISOLCPUS" != "$HKICPUS" ]] && return 1 813 817 814 818 # 815 819 # Use the sched domain in debugfs to check isolated CPUs, if available ··· 953 947 run_state_test() 954 948 { 955 949 TEST=$1 956 - CONTROLLER=cpuset 957 950 CGROUP_LIST=". A1 A1/A2 A1/A2/A3 B1" 958 951 RESET_LIST="A1/A2/A3 A1/A2 A1 B1" 959 952 I=0 ··· 1008 1003 run_remote_state_test() 1009 1004 { 1010 1005 TEST=$1 1011 - CONTROLLER=cpuset 1012 1006 [[ -d rtest ]] || mkdir rtest 1013 1007 cd rtest 1014 1008 echo +cpuset > cgroup.subtree_control
+15 -10
tools/testing/selftests/filesystems/nsfs/iterate_mntns.c
··· 37 37 __u64 mnt_ns_id[MNT_NS_COUNT]; 38 38 }; 39 39 40 + static inline bool mntns_in_list(__u64 *mnt_ns_id, struct mnt_ns_info *info) 41 + { 42 + for (int i = 0; i < MNT_NS_COUNT; i++) { 43 + if (mnt_ns_id[i] == info->mnt_ns_id) 44 + return true; 45 + } 46 + return false; 47 + } 48 + 40 49 FIXTURE_SETUP(iterate_mount_namespaces) 41 50 { 42 51 for (int i = 0; i < MNT_NS_COUNT; i++) 43 52 self->fd_mnt_ns[i] = -EBADF; 44 - 45 - /* 46 - * Creating a new user namespace let's us guarantee that we only see 47 - * mount namespaces that we did actually create. 48 - */ 49 - ASSERT_EQ(unshare(CLONE_NEWUSER), 0); 50 53 51 54 for (int i = 0; i < MNT_NS_COUNT; i++) { 52 55 struct mnt_ns_info info = {}; ··· 78 75 fd_mnt_ns_cur = fcntl(self->fd_mnt_ns[0], F_DUPFD_CLOEXEC); 79 76 ASSERT_GE(fd_mnt_ns_cur, 0); 80 77 81 - for (;; count++) { 78 + for (;;) { 82 79 struct mnt_ns_info info = {}; 83 80 int fd_mnt_ns_next; 84 81 85 82 fd_mnt_ns_next = ioctl(fd_mnt_ns_cur, NS_MNT_GET_NEXT, &info); 86 83 if (fd_mnt_ns_next < 0 && errno == ENOENT) 87 84 break; 85 + if (mntns_in_list(self->mnt_ns_id, &info)) 86 + count++; 88 87 ASSERT_GE(fd_mnt_ns_next, 0); 89 88 ASSERT_EQ(close(fd_mnt_ns_cur), 0); 90 89 fd_mnt_ns_cur = fd_mnt_ns_next; ··· 101 96 fd_mnt_ns_cur = fcntl(self->fd_mnt_ns[MNT_NS_LAST_INDEX], F_DUPFD_CLOEXEC); 102 97 ASSERT_GE(fd_mnt_ns_cur, 0); 103 98 104 - for (;; count++) { 99 + for (;;) { 105 100 struct mnt_ns_info info = {}; 106 101 int fd_mnt_ns_prev; 107 102 108 103 fd_mnt_ns_prev = ioctl(fd_mnt_ns_cur, NS_MNT_GET_PREV, &info); 109 104 if (fd_mnt_ns_prev < 0 && errno == ENOENT) 110 105 break; 106 + if (mntns_in_list(self->mnt_ns_id, &info)) 107 + count++; 111 108 ASSERT_GE(fd_mnt_ns_prev, 0); 112 109 ASSERT_EQ(close(fd_mnt_ns_cur), 0); 113 110 fd_mnt_ns_cur = fd_mnt_ns_prev; ··· 132 125 ASSERT_GE(fd_mnt_ns_next, 0); 133 126 ASSERT_EQ(close(fd_mnt_ns_cur), 0); 134 127 fd_mnt_ns_cur = fd_mnt_ns_next; 135 - ASSERT_EQ(info.mnt_ns_id, self->mnt_ns_id[i]); 136 128 } 137 129 } 138 130 ··· 150 144 ASSERT_GE(fd_mnt_ns_prev, 0); 151 145 ASSERT_EQ(close(fd_mnt_ns_cur), 0); 152 146 fd_mnt_ns_cur = fd_mnt_ns_prev; 153 - ASSERT_EQ(info.mnt_ns_id, self->mnt_ns_id[i]); 154 147 } 155 148 } 156 149
+21 -13
tools/testing/selftests/hid/tests/test_wacom_generic.py
··· 598 598 if unit_set: 599 599 assert required[usage].contains(field) 600 600 601 - def test_prop_direct(self): 602 - """ 603 - Todo: Verify that INPUT_PROP_DIRECT is set on display devices. 604 - """ 605 - pass 606 - 607 - def test_prop_pointer(self): 608 - """ 609 - Todo: Verify that INPUT_PROP_POINTER is set on opaque devices. 610 - """ 611 - pass 612 - 613 601 614 602 class PenTabletTest(BaseTest.TestTablet): 615 603 def assertName(self, uhdev): ··· 664 676 self.sync_and_assert_events( 665 677 uhdev.event(130, 240, pressure=0), [], auto_syn=False, strict=True 666 678 ) 679 + 680 + def test_prop_pointer(self): 681 + """ 682 + Verify that INPUT_PROP_POINTER is set and INPUT_PROP_DIRECT 683 + is not set on opaque devices. 684 + """ 685 + evdev = self.uhdev.get_evdev() 686 + assert libevdev.INPUT_PROP_POINTER in evdev.properties 687 + assert libevdev.INPUT_PROP_DIRECT not in evdev.properties 667 688 668 689 669 690 class TestOpaqueCTLTablet(TestOpaqueTablet): ··· 859 862 ) 860 863 861 864 862 - class TestDTH2452Tablet(test_multitouch.BaseTest.TestMultitouch, TouchTabletTest): 865 + class DirectTabletTest(): 866 + def test_prop_direct(self): 867 + """ 868 + Verify that INPUT_PROP_DIRECT is set and INPUT_PROP_POINTER 869 + is not set on display devices. 870 + """ 871 + evdev = self.uhdev.get_evdev() 872 + assert libevdev.INPUT_PROP_DIRECT in evdev.properties 873 + assert libevdev.INPUT_PROP_POINTER not in evdev.properties 874 + 875 + 876 + class TestDTH2452Tablet(test_multitouch.BaseTest.TestMultitouch, TouchTabletTest, DirectTabletTest): 863 877 ContactIds = namedtuple("ContactIds", "contact_id, tracking_id, slot_num") 864 878 865 879 def create_device(self):
+5 -2
tools/testing/selftests/kselftest_harness.h
··· 76 76 memset(s, c, n); 77 77 } 78 78 79 + #define KSELFTEST_PRIO_TEST_F 20000 80 + #define KSELFTEST_PRIO_XFAIL 20001 81 + 79 82 #define TEST_TIMEOUT_DEFAULT 30 80 83 81 84 /* Utilities exposed to the test definitions */ ··· 468 465 fixture_name##_teardown(_metadata, self, variant); \ 469 466 } \ 470 467 static struct __test_metadata *_##fixture_name##_##test_name##_object; \ 471 - static void __attribute__((constructor)) \ 468 + static void __attribute__((constructor(KSELFTEST_PRIO_TEST_F))) \ 472 469 _register_##fixture_name##_##test_name(void) \ 473 470 { \ 474 471 struct __test_metadata *object = mmap(NULL, sizeof(*object), \ ··· 883 880 .fixture = &_##fixture_name##_fixture_object, \ 884 881 .variant = &_##fixture_name##_##variant_name##_object, \ 885 882 }; \ 886 - static void __attribute__((constructor)) \ 883 + static void __attribute__((constructor(KSELFTEST_PRIO_XFAIL))) \ 887 884 _register_##fixture_name##_##variant_name##_##test_name##_xfail(void) \ 888 885 { \ 889 886 _##fixture_name##_##variant_name##_##test_name##_xfail.test = \
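Editor's note: pinning the constructor priorities above encodes an ordering requirement. The C runtime runs lower-numbered constructors first, so every test-registration constructor (priority 20000) is guaranteed to finish before the xfail-registration constructors (priority 20001) look up the test objects they attach to. A standalone illustration of that guarantee, not harness code:

#include <stdio.h>

/* Among constructors, a lower priority number runs earlier. */
static void __attribute__((constructor(20000))) register_test(void)
{
	puts("1: test registered");
}

static void __attribute__((constructor(20001))) attach_xfail(void)
{
	puts("2: xfail attached (test object already exists)");
}

int main(void)
{
	puts("3: main runs last");
	return 0;
}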
+1
tools/testing/selftests/net/Makefile
··· 15 15 big_tcp.sh \ 16 16 bind_bhash.sh \ 17 17 bpf_offload.py \ 18 + bridge_vlan_dump.sh \ 18 19 broadcast_ether_dst.sh \ 19 20 broadcast_pmtu.sh \ 20 21 busy_poll_test.sh \
+204
tools/testing/selftests/net/bridge_vlan_dump.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # Test bridge VLAN range grouping. VLANs are collapsed into a range entry in 5 + # the dump if they have the same per-VLAN options. These tests verify that 6 + # VLANs with different per-VLAN option values are not grouped together. 7 + 8 + # shellcheck disable=SC1091,SC2034,SC2154,SC2317 9 + source lib.sh 10 + 11 + ALL_TESTS=" 12 + vlan_range_neigh_suppress 13 + vlan_range_mcast_max_groups 14 + vlan_range_mcast_n_groups 15 + vlan_range_mcast_enabled 16 + " 17 + 18 + setup_prepare() 19 + { 20 + setup_ns NS 21 + defer cleanup_all_ns 22 + 23 + ip -n "$NS" link add name br0 type bridge vlan_filtering 1 \ 24 + vlan_default_pvid 0 mcast_snooping 1 mcast_vlan_snooping 1 25 + ip -n "$NS" link set dev br0 up 26 + 27 + ip -n "$NS" link add name dummy0 type dummy 28 + ip -n "$NS" link set dev dummy0 master br0 29 + ip -n "$NS" link set dev dummy0 up 30 + } 31 + 32 + vlan_range_neigh_suppress() 33 + { 34 + RET=0 35 + 36 + # Add two new consecutive VLANs for range grouping test 37 + bridge -n "$NS" vlan add vid 10 dev dummy0 38 + defer bridge -n "$NS" vlan del vid 10 dev dummy0 39 + 40 + bridge -n "$NS" vlan add vid 11 dev dummy0 41 + defer bridge -n "$NS" vlan del vid 11 dev dummy0 42 + 43 + # Configure different neigh_suppress values and verify no range grouping 44 + bridge -n "$NS" vlan set vid 10 dev dummy0 neigh_suppress on 45 + check_err $? "Failed to set neigh_suppress for VLAN 10" 46 + 47 + bridge -n "$NS" vlan set vid 11 dev dummy0 neigh_suppress off 48 + check_err $? "Failed to set neigh_suppress for VLAN 11" 49 + 50 + # Verify VLANs are not shown as a range, but individual entries exist 51 + bridge -n "$NS" -d vlan show dev dummy0 | grep -q "10-11" 52 + check_fail $? "VLANs with different neigh_suppress incorrectly grouped" 53 + 54 + bridge -n "$NS" -d vlan show dev dummy0 | grep -Eq "^\S+\s+10$|^\s+10$" 55 + check_err $? "VLAN 10 individual entry not found" 56 + 57 + bridge -n "$NS" -d vlan show dev dummy0 | grep -Eq "^\S+\s+11$|^\s+11$" 58 + check_err $? "VLAN 11 individual entry not found" 59 + 60 + # Configure same neigh_suppress value and verify range grouping 61 + bridge -n "$NS" vlan set vid 11 dev dummy0 neigh_suppress on 62 + check_err $? "Failed to set neigh_suppress for VLAN 11" 63 + 64 + bridge -n "$NS" -d vlan show dev dummy0 | grep -q "10-11" 65 + check_err $? "VLANs with same neigh_suppress not grouped" 66 + 67 + log_test "VLAN range grouping with neigh_suppress" 68 + } 69 + 70 + vlan_range_mcast_max_groups() 71 + { 72 + RET=0 73 + 74 + # Add two new consecutive VLANs for range grouping test 75 + bridge -n "$NS" vlan add vid 10 dev dummy0 76 + defer bridge -n "$NS" vlan del vid 10 dev dummy0 77 + 78 + bridge -n "$NS" vlan add vid 11 dev dummy0 79 + defer bridge -n "$NS" vlan del vid 11 dev dummy0 80 + 81 + # Configure different mcast_max_groups values and verify no range grouping 82 + bridge -n "$NS" vlan set vid 10 dev dummy0 mcast_max_groups 100 83 + check_err $? "Failed to set mcast_max_groups for VLAN 10" 84 + 85 + bridge -n "$NS" vlan set vid 11 dev dummy0 mcast_max_groups 200 86 + check_err $? "Failed to set mcast_max_groups for VLAN 11" 87 + 88 + # Verify VLANs are not shown as a range, but individual entries exist 89 + bridge -n "$NS" -d vlan show dev dummy0 | grep -q "10-11" 90 + check_fail $? "VLANs with different mcast_max_groups incorrectly grouped" 91 + 92 + bridge -n "$NS" -d vlan show dev dummy0 | grep -Eq "^\S+\s+10$|^\s+10$" 93 + check_err $? "VLAN 10 individual entry not found"
94 + 95 + bridge -n "$NS" -d vlan show dev dummy0 | grep -Eq "^\S+\s+11$|^\s+11$" 96 + check_err $? "VLAN 11 individual entry not found" 97 + 98 + # Configure same mcast_max_groups value and verify range grouping 99 + bridge -n "$NS" vlan set vid 11 dev dummy0 mcast_max_groups 100 100 + check_err $? "Failed to set mcast_max_groups for VLAN 11" 101 + 102 + bridge -n "$NS" -d vlan show dev dummy0 | grep -q "10-11" 103 + check_err $? "VLANs with same mcast_max_groups not grouped" 104 + 105 + log_test "VLAN range grouping with mcast_max_groups" 106 + } 107 + 108 + vlan_range_mcast_n_groups() 109 + { 110 + RET=0 111 + 112 + # Add two new consecutive VLANs for range grouping test 113 + bridge -n "$NS" vlan add vid 10 dev dummy0 114 + defer bridge -n "$NS" vlan del vid 10 dev dummy0 115 + 116 + bridge -n "$NS" vlan add vid 11 dev dummy0 117 + defer bridge -n "$NS" vlan del vid 11 dev dummy0 118 + 119 + # Add different numbers of multicast groups to each VLAN 120 + bridge -n "$NS" mdb add dev br0 port dummy0 grp 239.1.1.1 vid 10 121 + check_err $? "Failed to add mdb entry to VLAN 10" 122 + defer bridge -n "$NS" mdb del dev br0 port dummy0 grp 239.1.1.1 vid 10 123 + 124 + bridge -n "$NS" mdb add dev br0 port dummy0 grp 239.1.1.2 vid 10 125 + check_err $? "Failed to add second mdb entry to VLAN 10" 126 + defer bridge -n "$NS" mdb del dev br0 port dummy0 grp 239.1.1.2 vid 10 127 + 128 + bridge -n "$NS" mdb add dev br0 port dummy0 grp 239.1.1.1 vid 11 129 + check_err $? "Failed to add mdb entry to VLAN 11" 130 + defer bridge -n "$NS" mdb del dev br0 port dummy0 grp 239.1.1.1 vid 11 131 + 132 + # Verify VLANs are not shown as a range due to different mcast_n_groups 133 + bridge -n "$NS" -d vlan show dev dummy0 | grep -q "10-11" 134 + check_fail $? "VLANs with different mcast_n_groups incorrectly grouped" 135 + 136 + bridge -n "$NS" -d vlan show dev dummy0 | grep -Eq "^\S+\s+10$|^\s+10$" 137 + check_err $? "VLAN 10 individual entry not found" 138 + 139 + bridge -n "$NS" -d vlan show dev dummy0 | grep -Eq "^\S+\s+11$|^\s+11$" 140 + check_err $? "VLAN 11 individual entry not found" 141 + 142 + # Add another group to VLAN 11 to match VLAN 10's count 143 + bridge -n "$NS" mdb add dev br0 port dummy0 grp 239.1.1.2 vid 11 144 + check_err $? "Failed to add second mdb entry to VLAN 11" 145 + defer bridge -n "$NS" mdb del dev br0 port dummy0 grp 239.1.1.2 vid 11 146 + 147 + bridge -n "$NS" -d vlan show dev dummy0 | grep -q "10-11" 148 + check_err $? "VLANs with same mcast_n_groups not grouped"
149 + 150 + log_test "VLAN range grouping with mcast_n_groups" 151 + } 152 + 153 + vlan_range_mcast_enabled() 154 + { 155 + RET=0 156 + 157 + # Add two new consecutive VLANs for range grouping test 158 + bridge -n "$NS" vlan add vid 10 dev br0 self 159 + defer bridge -n "$NS" vlan del vid 10 dev br0 self 160 + 161 + bridge -n "$NS" vlan add vid 11 dev br0 self 162 + defer bridge -n "$NS" vlan del vid 11 dev br0 self 163 + 164 + bridge -n "$NS" vlan add vid 10 dev dummy0 165 + defer bridge -n "$NS" vlan del vid 10 dev dummy0 166 + 167 + bridge -n "$NS" vlan add vid 11 dev dummy0 168 + defer bridge -n "$NS" vlan del vid 11 dev dummy0 169 + 170 + # Configure different mcast_snooping for bridge VLANs 171 + # Port VLANs inherit BR_VLFLAG_MCAST_ENABLED from bridge VLANs 172 + bridge -n "$NS" vlan global set dev br0 vid 10 mcast_snooping 1 173 + bridge -n "$NS" vlan global set dev br0 vid 11 mcast_snooping 0 174 + 175 + # Verify port VLANs are not grouped due to different mcast_enabled 176 + bridge -n "$NS" -d vlan show dev dummy0 | grep -q "10-11" 177 + check_fail $? "VLANs with different mcast_enabled incorrectly grouped" 178 + 179 + bridge -n "$NS" -d vlan show dev dummy0 | grep -Eq "^\S+\s+10$|^\s+10$" 180 + check_err $? "VLAN 10 individual entry not found" 181 + 182 + bridge -n "$NS" -d vlan show dev dummy0 | grep -Eq "^\S+\s+11$|^\s+11$" 183 + check_err $? "VLAN 11 individual entry not found" 184 + 185 + # Configure same mcast_snooping and verify range grouping 186 + bridge -n "$NS" vlan global set dev br0 vid 11 mcast_snooping 1 187 + 188 + bridge -n "$NS" -d vlan show dev dummy0 | grep -q "10-11" 189 + check_err $? "VLANs with same mcast_enabled not grouped" 190 + 191 + log_test "VLAN range grouping with mcast_enabled" 192 + } 193 + 194 + # Verify the newest tested option is supported 195 + if ! bridge vlan help 2>&1 | grep -q "neigh_suppress"; then 196 + echo "SKIP: iproute2 too old, missing per-VLAN neighbor suppression support" 197 + exit "$ksft_skip" 198 + fi 199 + 200 + trap defer_scopes_cleanup EXIT 201 + setup_prepare 202 + tests_run 203 + 204 + exit "$EXIT_STATUS"
+11
tools/testing/selftests/net/fib_nexthops.sh
··· 1672 1672 1673 1673 run_cmd "$IP ro replace 172.16.101.1/32 via inet6 2001:db8:50::1 dev veth1" 1674 1674 log_test $? 2 "IPv4 route with invalid IPv6 gateway" 1675 + 1676 + # Test IPv4 route with loopback IPv6 nexthop 1677 + # Regression test: loopback IPv6 nexthop was misclassified as reject 1678 + # route, skipping nhc_pcpu_rth_output allocation, causing panic when 1679 + # an IPv4 route references it and triggers __mkroute_output(). 1680 + run_cmd "$IP -6 nexthop add id 20 dev lo" 1681 + run_cmd "$IP ro add 172.20.20.0/24 nhid 20" 1682 + run_cmd "ip netns exec $me ping -c1 -W1 172.20.20.1" 1683 + log_test $? 1 "IPv4 route with loopback IPv6 nexthop (no crash)" 1684 + run_cmd "$IP ro del 172.20.20.0/24" 1685 + run_cmd "$IP nexthop del id 20" 1675 1686 } 1676 1687 1677 1688 ipv4_fcnal_runtime()
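As a standalone sketch of the regression this guards against (nexthop id and prefix copied from the test above; run in a throwaway namespace):

ip netns add nh-demo
ip -n nh-demo link set dev lo up
ip -n nh-demo -6 nexthop add id 20 dev lo
ip -n nh-demo route add 172.20.20.0/24 nhid 20
# on a fixed kernel this fails cleanly instead of panicking in __mkroute_output()
ip netns exec nh-demo ping -c1 -W1 172.20.20.1
ip netns del nh-demo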
+49
tools/testing/selftests/net/mptcp/mptcp_join.sh
··· 104 104 6 0 0 65535, 105 105 6 0 0 0" 106 106 107 + # IPv4: TCP hdr of 48B, a first suboption of 12B (DACK8), the RM_ADDR suboption 108 + # generated using "nfbpf_compile '(ip[32] & 0xf0) == 0xc0 && ip[53] == 0x0c && 109 + # (ip[66] & 0xf0) == 0x40'" 110 + CBPF_MPTCP_SUBOPTION_RM_ADDR="13, 111 + 48 0 0 0, 112 + 84 0 0 240, 113 + 21 0 9 64, 114 + 48 0 0 32, 115 + 84 0 0 240, 116 + 21 0 6 192, 117 + 48 0 0 53, 118 + 21 0 4 12, 119 + 48 0 0 66, 120 + 84 0 0 240, 121 + 21 0 1 64, 122 + 6 0 0 65535, 123 + 6 0 0 0" 124 + 107 125 init_partial() 108 126 { 109 127 capout=$(mktemp) ··· 2626 2608 chk_rst_nr 0 0 2627 2609 fi 2628 2610 2611 + # signal+subflow with limits, remove 2612 + if reset "remove signal+subflow with limits"; then 2613 + pm_nl_set_limits $ns1 0 0 2614 + pm_nl_add_endpoint $ns1 10.0.2.1 flags signal,subflow 2615 + pm_nl_set_limits $ns2 0 0 2616 + addr_nr_ns1=-1 speed=slow \ 2617 + run_tests $ns1 $ns2 10.0.1.1 2618 + chk_join_nr 0 0 0 2619 + chk_add_nr 1 1 2620 + chk_rm_nr 1 0 invert 2621 + chk_rst_nr 0 0 2622 + fi 2623 + 2629 2624 # addresses remove 2630 2625 if reset "remove addresses"; then 2631 2626 pm_nl_set_limits $ns1 3 3 ··· 4248 4217 chk_subflow_nr "after no reject" 3 4249 4218 chk_mptcp_info subflows 2 subflows 2 4250 4219 4220 + # To make sure RM_ADDR are sent over a different subflow, but 4221 + # allow the rest to quickly and cleanly close the subflow 4222 + local ipt=1 4223 + ip netns exec "${ns2}" ${iptables} -I OUTPUT -s "10.0.1.2" \ 4224 + -p tcp -m tcp --tcp-option 30 \ 4225 + -m bpf --bytecode \ 4226 + "$CBPF_MPTCP_SUBOPTION_RM_ADDR" \ 4227 + -j DROP || ipt=0 4251 4228 local i 4252 4229 for i in $(seq 3); do 4253 4230 pm_nl_del_endpoint $ns2 1 10.0.1.2 ··· 4268 4229 chk_subflow_nr "after re-add id 0 ($i)" 3 4269 4230 chk_mptcp_info subflows 3 subflows 3 4270 4231 done 4232 + [ ${ipt} = 1 ] && ip netns exec "${ns2}" ${iptables} -D OUTPUT 1 4271 4233 4272 4234 mptcp_lib_kill_group_wait $tests_pid 4273 4235 ··· 4328 4288 chk_mptcp_info subflows 2 subflows 2 4329 4289 chk_mptcp_info add_addr_signal 2 add_addr_accepted 2 4330 4290 4291 + # To make sure RM_ADDR are sent over a different subflow, but 4292 + # allow the rest to quickly and cleanly close the subflow 4293 + local ipt=1 4294 + ip netns exec "${ns1}" ${iptables} -I OUTPUT -s "10.0.1.1" \ 4295 + -p tcp -m tcp --tcp-option 30 \ 4296 + -m bpf --bytecode \ 4297 + "$CBPF_MPTCP_SUBOPTION_RM_ADDR" \ 4298 + -j DROP || ipt=0 4331 4299 pm_nl_del_endpoint $ns1 42 10.0.1.1 4332 4300 sleep 0.5 4333 4301 chk_subflow_nr "after delete ID 0" 2 4334 4302 chk_mptcp_info subflows 2 subflows 2 4335 4303 chk_mptcp_info add_addr_signal 2 add_addr_accepted 2 4304 + [ ${ipt} = 1 ] && ip netns exec "${ns1}" ${iptables} -D OUTPUT 1 4336 4305 4337 4306 pm_nl_add_endpoint $ns1 10.0.1.1 id 99 flags signal 4338 4307 wait_mpj 4
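The cBPF blob above is opaque by design; per the in-file comment it was produced with nfbpf_compile (shipped with iptables utils), roughly as below. Note that some builds expect a pcap link-layer type argument (e.g. RAW) before the expression:

nfbpf_compile '(ip[32] & 0xf0) == 0xc0 && ip[53] == 0x0c && (ip[66] & 0xf0) == 0x40'

The output is the decimal instruction list consumed by "-m bpf --bytecode".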
+7 -4
tools/testing/selftests/net/mptcp/simult_flows.sh
··· 237 237 for dev in ns2eth1 ns2eth2; do 238 238 tc -n $ns2 qdisc del dev $dev root >/dev/null 2>&1 239 239 done 240 - tc -n $ns1 qdisc add dev ns1eth1 root netem rate ${rate1}mbit $delay1 241 - tc -n $ns1 qdisc add dev ns1eth2 root netem rate ${rate2}mbit $delay2 242 - tc -n $ns2 qdisc add dev ns2eth1 root netem rate ${rate1}mbit $delay1 243 - tc -n $ns2 qdisc add dev ns2eth2 root netem rate ${rate2}mbit $delay2 240 + 241 + # keep the queued pkts number low, or the RTT estimator will see 242 + # increasing latency over time. 243 + tc -n $ns1 qdisc add dev ns1eth1 root netem rate ${rate1}mbit $delay1 limit 50 244 + tc -n $ns1 qdisc add dev ns1eth2 root netem rate ${rate2}mbit $delay2 limit 50 245 + tc -n $ns2 qdisc add dev ns2eth1 root netem rate ${rate1}mbit $delay1 limit 50 246 + tc -n $ns2 qdisc add dev ns2eth2 root netem rate ${rate2}mbit $delay2 limit 50 244 247 245 248 # time is measured in ms, account for transfer size, aggregated link speed 246 249 # and header overhead (10%)
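netem's "limit" is its internal queue length in packets; beyond it, packets are dropped rather than queued, which is what keeps the measured RTT flat here. A quick way to see the effect on a scratch device (device name illustrative):

tc qdisc add dev veth0 root netem rate 10mbit delay 5ms limit 50
tc -s qdisc show dev veth0    # the "dropped" counter grows once 50 packets are queued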
+8 -2
tools/testing/selftests/net/netfilter/nf_queue.c
··· 18 18 struct options { 19 19 bool count_packets; 20 20 bool gso_enabled; 21 + bool failopen; 21 22 int verbose; 22 23 unsigned int queue_num; 23 24 unsigned int timeout; ··· 31 30 32 31 static void help(const char *p) 33 32 { 34 - printf("Usage: %s [-c|-v [-vv] ] [-t timeout] [-q queue_num] [-Qdst_queue ] [ -d ms_delay ] [-G]\n", p); 33 + printf("Usage: %s [-c|-v [-vv] ] [-o] [-t timeout] [-q queue_num] [-Qdst_queue ] [ -d ms_delay ] [-G]\n", p); 35 34 } 36 35 37 36 static int parse_attr_cb(const struct nlattr *attr, void *data) ··· 237 236 238 237 flags = opts.gso_enabled ? NFQA_CFG_F_GSO : 0; 239 238 flags |= NFQA_CFG_F_UID_GID; 239 + if (opts.failopen) 240 + flags |= NFQA_CFG_F_FAIL_OPEN; 240 241 mnl_attr_put_u32(nlh, NFQA_CFG_FLAGS, htonl(flags)); 241 242 mnl_attr_put_u32(nlh, NFQA_CFG_MASK, htonl(flags)); 242 243 ··· 332 329 { 333 330 int c; 334 331 335 - while ((c = getopt(argc, argv, "chvt:q:Q:d:G")) != -1) { 332 + while ((c = getopt(argc, argv, "chvot:q:Q:d:G")) != -1) { 336 333 switch (c) { 337 334 case 'c': 338 335 opts.count_packets = true; ··· 368 365 break; 369 366 case 'G': 370 367 opts.gso_enabled = false; 368 + break; 369 + case 'o': 370 + opts.failopen = true; 371 371 break; 372 372 case 'v': 373 373 opts.verbose++;
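With the new option wired up, a hand-run of the helper in fail-open mode could look like this (queue number and timeout are just examples):

./nf_queue -c -q 1 -o -t 5

NFQA_CFG_F_FAIL_OPEN asks the kernel to accept packets instead of dropping them when the queue overflows, so slow userspace no longer stalls the test traffic.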
+9 -4
tools/testing/selftests/net/netfilter/nft_queue.sh
··· 591 591 test_udp_gro_ct() 592 592 { 593 593 local errprefix="FAIL: test_udp_gro_ct:" 594 + local timeout=5 594 595 595 596 ip netns exec "$nsrouter" conntrack -F 2>/dev/null 596 597 ··· 631 630 } 632 631 } 633 632 EOF 634 - timeout 10 ip netns exec "$ns2" socat UDP-LISTEN:12346,fork,pf=ipv4 OPEN:"$TMPFILE1",trunc & 633 + timeout "$timeout" ip netns exec "$ns2" socat UDP-LISTEN:12346,fork,pf=ipv4 OPEN:"$TMPFILE1",trunc & 635 634 local rpid=$! 636 635 637 - ip netns exec "$nsrouter" ./nf_queue -G -c -q 1 -t 2 > "$TMPFILE2" & 636 + ip netns exec "$nsrouter" nice -n -19 ./nf_queue -G -c -q 1 -o -t 2 > "$TMPFILE2" & 638 637 local nfqpid=$! 639 638 640 639 ip netns exec "$nsrouter" ethtool -K "veth0" rx-udp-gro-forwarding on rx-gro-list on generic-receive-offload on ··· 644 643 645 644 local bs=512 646 645 local count=$(((32 * 1024 * 1024) / bs)) 647 - dd if=/dev/zero bs="$bs" count="$count" 2>/dev/null | for i in $(seq 1 16); do 648 - timeout 5 ip netns exec "$ns1" \ 646 + 647 + local nprocs=$(nproc) 648 + [ $nprocs -gt 1 ] && nprocs=$((nprocs - 1)) 649 + 650 + dd if=/dev/zero bs="$bs" count="$count" 2>/dev/null | for i in $(seq 1 $nprocs); do 651 + timeout "$timeout" nice -n 19 ip netns exec "$ns1" \ 649 652 socat -u -b 512 STDIN UDP-DATAGRAM:10.0.2.99:12346,reuseport,bind=0.0.0.0:55221 & 650 653 done 651 654
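For reference, a minimal nft ruleset that steers traffic into the queue the helper drains might be (table/chain names illustrative; the selftest builds its own ruleset inline):

nft add table ip demo
nft add chain ip demo fwd '{ type filter hook forward priority 0; }'
nft add rule ip demo fwd udp dport 12346 queue num 1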
+1 -1
tools/testing/selftests/net/packetdrill/tcp_rcv_big_endseq.pkt
··· 38 38 39 39 // If queue is empty, accept a packet even if its end_seq is above wup + rcv_wnd 40 40 +0 < P. 4001:54001(50000) ack 1 win 257 41 - +0 > . 1:1(0) ack 54001 win 0 41 + * > . 1:1(0) ack 54001 win 0 42 42 43 43 // Check LINUX_MIB_BEYOND_WINDOW has been incremented 3 times. 44 44 +0 `nstat | grep TcpExtBeyondWindow | grep -q " 3 "`
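The "*" in the expected-packet line is packetdrill's wildcard timestamp: unlike "+0", it accepts the outgoing ACK whenever it arrives instead of pinning it to the previous event's time. A sketch of the three timing forms (illustrative, not from this script):

0.100 < . 1:1(0) ack 1 win 257    // at t=100ms exactly
+0 > . 1:1(0) ack 1 win 257       // immediately after the previous event
* > . 1:1(0) ack 1 win 257        // at any time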
-33
tools/testing/selftests/net/packetdrill/tcp_rcv_toobig.pkt
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - 3 - --mss=1000 4 - 5 - `./defaults.sh` 6 - 7 - 0 `nstat -n` 8 - 9 - // Establish a connection. 10 - +0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3 11 - +0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0 12 - +0 setsockopt(3, SOL_SOCKET, SO_RCVBUF, [20000], 4) = 0 13 - +0 bind(3, ..., ...) = 0 14 - +0 listen(3, 1) = 0 15 - 16 - +0 < S 0:0(0) win 32792 <mss 1000,nop,wscale 7> 17 - +0 > S. 0:0(0) ack 1 win 18980 <mss 1460,nop,wscale 0> 18 - +.1 < . 1:1(0) ack 1 win 257 19 - 20 - +0 accept(3, ..., ...) = 4 21 - 22 - +0 < P. 1:20001(20000) ack 1 win 257 23 - +.04 > . 1:1(0) ack 20001 win 18000 24 - 25 - +0 setsockopt(4, SOL_SOCKET, SO_RCVBUF, [12000], 4) = 0 26 - +0 < P. 20001:80001(60000) ack 1 win 257 27 - +0 > . 1:1(0) ack 20001 win 18000 28 - 29 - +0 read(4, ..., 20000) = 20000 30 - // A too big packet is accepted if the receive queue is empty 31 - +0 < P. 20001:80001(60000) ack 1 win 257 32 - +0 > . 1:1(0) ack 80001 win 0 33 -
+6 -6
tools/testing/selftests/net/tun.c
··· 944 944 ASSERT_EQ(ret, off); 945 945 946 946 ret = receive_gso_packet_from_tunnel(self, variant, &r_num_mss); 947 - ASSERT_EQ(ret, variant->data_size); 948 - ASSERT_EQ(r_num_mss, variant->r_num_mss); 947 + EXPECT_EQ(ret, variant->data_size); 948 + EXPECT_EQ(r_num_mss, variant->r_num_mss); 949 949 } 950 950 951 951 TEST_F(tun_vnet_udptnl, recv_gso_packet) ··· 955 955 int ret, gso_type = VIRTIO_NET_HDR_GSO_UDP_L4; 956 956 957 957 ret = send_gso_packet_into_tunnel(self, variant); 958 - ASSERT_EQ(ret, variant->data_size); 958 + EXPECT_EQ(ret, variant->data_size); 959 959 960 960 memset(&vnet_hdr, 0, sizeof(vnet_hdr)); 961 961 ret = receive_gso_packet_from_tun(self, variant, &vnet_hdr); 962 - ASSERT_EQ(ret, variant->data_size); 962 + EXPECT_EQ(ret, variant->data_size); 963 963 964 964 if (!variant->no_gso) { 965 - ASSERT_EQ(vh->gso_size, variant->gso_size); 965 + EXPECT_EQ(vh->gso_size, variant->gso_size); 966 966 gso_type |= (variant->tunnel_type & UDP_TUNNEL_OUTER_IPV4) ? 967 967 (VIRTIO_NET_HDR_GSO_UDP_TUNNEL_IPV4) : 968 968 (VIRTIO_NET_HDR_GSO_UDP_TUNNEL_IPV6); 969 - ASSERT_EQ(vh->gso_type, gso_type); 969 + EXPECT_EQ(vh->gso_type, gso_type); 970 970 } 971 971 } 972 972
+3 -1
tools/testing/selftests/rcutorture/configs/rcu/SRCU-N
··· 2 2 CONFIG_SMP=y 3 3 CONFIG_NR_CPUS=4 4 4 CONFIG_HOTPLUG_CPU=y 5 - CONFIG_PREEMPT_NONE=y 5 + CONFIG_PREEMPT_DYNAMIC=n 6 + CONFIG_PREEMPT_LAZY=y 7 + CONFIG_PREEMPT_NONE=n 6 8 CONFIG_PREEMPT_VOLUNTARY=n 7 9 CONFIG_PREEMPT=n 8 10 #CHECK#CONFIG_RCU_EXPERT=n
+2 -1
tools/testing/selftests/rcutorture/configs/rcu/SRCU-T
··· 1 1 CONFIG_SMP=n 2 - CONFIG_PREEMPT_NONE=y 2 + CONFIG_PREEMPT_LAZY=y 3 + CONFIG_PREEMPT_NONE=n 3 4 CONFIG_PREEMPT_VOLUNTARY=n 4 5 CONFIG_PREEMPT=n 5 6 CONFIG_PREEMPT_DYNAMIC=n
+2 -2
tools/testing/selftests/rcutorture/configs/rcu/SRCU-U
··· 1 1 CONFIG_SMP=n 2 - CONFIG_PREEMPT_NONE=y 2 + CONFIG_PREEMPT_LAZY=y 3 + CONFIG_PREEMPT_NONE=n 3 4 CONFIG_PREEMPT_VOLUNTARY=n 4 5 CONFIG_PREEMPT=n 5 6 CONFIG_PREEMPT_DYNAMIC=n ··· 8 7 CONFIG_RCU_TRACE=n 9 8 CONFIG_DEBUG_LOCK_ALLOC=n 10 9 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n 11 - CONFIG_PREEMPT_COUNT=n
+2 -1
tools/testing/selftests/rcutorture/configs/rcu/TASKS02
··· 1 1 CONFIG_SMP=n 2 - CONFIG_PREEMPT_NONE=y 2 + CONFIG_PREEMPT_LAZY=y 3 + CONFIG_PREEMPT_NONE=n 3 4 CONFIG_PREEMPT_VOLUNTARY=n 4 5 CONFIG_PREEMPT=n 5 6 CONFIG_PREEMPT_DYNAMIC=n
+2 -2
tools/testing/selftests/rcutorture/configs/rcu/TINY01
··· 1 1 CONFIG_SMP=n 2 - CONFIG_PREEMPT_NONE=y 2 + CONFIG_PREEMPT_LAZY=y 3 + CONFIG_PREEMPT_NONE=n 3 4 CONFIG_PREEMPT_VOLUNTARY=n 4 5 CONFIG_PREEMPT=n 5 6 CONFIG_PREEMPT_DYNAMIC=n ··· 12 11 #CHECK#CONFIG_RCU_STALL_COMMON=n 13 12 CONFIG_DEBUG_LOCK_ALLOC=n 14 13 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n 15 - CONFIG_PREEMPT_COUNT=n
+2 -1
tools/testing/selftests/rcutorture/configs/rcu/TINY02
··· 1 1 CONFIG_SMP=n 2 - CONFIG_PREEMPT_NONE=y 2 + CONFIG_PREEMPT_LAZY=y 3 + CONFIG_PREEMPT_NONE=n 3 4 CONFIG_PREEMPT_VOLUNTARY=n 4 5 CONFIG_PREEMPT=n 5 6 CONFIG_PREEMPT_DYNAMIC=n
+2 -1
tools/testing/selftests/rcutorture/configs/rcu/TRACE01
··· 1 1 CONFIG_SMP=y 2 2 CONFIG_NR_CPUS=5 3 3 CONFIG_HOTPLUG_CPU=y 4 - CONFIG_PREEMPT_NONE=y 4 + CONFIG_PREEMPT_LAZY=y 5 + CONFIG_PREEMPT_NONE=n 5 6 CONFIG_PREEMPT_VOLUNTARY=n 6 7 CONFIG_PREEMPT=n 7 8 CONFIG_PREEMPT_DYNAMIC=n
+3 -1
tools/testing/selftests/rcutorture/configs/rcu/TREE04
··· 1 1 CONFIG_SMP=y 2 2 CONFIG_NR_CPUS=8 3 + CONFIG_PREEMPT_LAZY=y 3 4 CONFIG_PREEMPT_NONE=n 4 - CONFIG_PREEMPT_VOLUNTARY=y 5 + CONFIG_PREEMPT_VOLUNTARY=n 5 6 CONFIG_PREEMPT=n 6 7 CONFIG_PREEMPT_DYNAMIC=n 7 8 #CHECK#CONFIG_TREE_RCU=y 9 + #CHECK#CONFIG_PREEMPT_RCU=n 8 10 CONFIG_HZ_PERIODIC=n 9 11 CONFIG_NO_HZ_IDLE=n 10 12 CONFIG_NO_HZ_FULL=y
+3 -1
tools/testing/selftests/rcutorture/configs/rcu/TREE05
··· 1 1 CONFIG_SMP=y 2 2 CONFIG_NR_CPUS=8 3 - CONFIG_PREEMPT_NONE=y 3 + CONFIG_PREEMPT_DYNAMIC=n 4 + CONFIG_PREEMPT_LAZY=y 5 + CONFIG_PREEMPT_NONE=n 4 6 CONFIG_PREEMPT_VOLUNTARY=n 5 7 CONFIG_PREEMPT=n 6 8 #CHECK#CONFIG_TREE_RCU=y
+4 -1
tools/testing/selftests/rcutorture/configs/rcu/TREE06
··· 1 1 CONFIG_SMP=y 2 2 CONFIG_NR_CPUS=8 3 - CONFIG_PREEMPT_NONE=y 3 + CONFIG_PREEMPT_DYNAMIC=n 4 + CONFIG_PREEMPT_LAZY=y 5 + CONFIG_PREEMPT_NONE=n 4 6 CONFIG_PREEMPT_VOLUNTARY=n 5 7 CONFIG_PREEMPT=n 6 8 #CHECK#CONFIG_TREE_RCU=y 9 + #CHECK#CONFIG_PREEMPT_RCU=n 7 10 CONFIG_HZ_PERIODIC=n 8 11 CONFIG_NO_HZ_IDLE=y 9 12 CONFIG_NO_HZ_FULL=n
+1
tools/testing/selftests/rcutorture/configs/rcu/TREE10
··· 6 6 CONFIG_PREEMPT=n 7 7 CONFIG_PREEMPT_DYNAMIC=n 8 8 #CHECK#CONFIG_TREE_RCU=y 9 + CONFIG_PREEMPT_RCU=n 9 10 CONFIG_HZ_PERIODIC=n 10 11 CONFIG_NO_HZ_IDLE=y 11 12 CONFIG_NO_HZ_FULL=n
+3 -1
tools/testing/selftests/rcutorture/configs/rcu/TRIVIAL
··· 1 1 CONFIG_SMP=y 2 2 CONFIG_NR_CPUS=8 3 - CONFIG_PREEMPT_NONE=y 3 + CONFIG_PREEMPT_DYNAMIC=n 4 + CONFIG_PREEMPT_LAZY=y 5 + CONFIG_PREEMPT_NONE=n 4 6 CONFIG_PREEMPT_VOLUNTARY=n 5 7 CONFIG_PREEMPT=n 6 8 CONFIG_HZ_PERIODIC=n
+2 -1
tools/testing/selftests/rcutorture/configs/rcuscale/TINY
··· 1 1 CONFIG_SMP=n 2 - CONFIG_PREEMPT_NONE=y 2 + CONFIG_PREEMPT_LAZY=y 3 + CONFIG_PREEMPT_NONE=n 3 4 CONFIG_PREEMPT_VOLUNTARY=n 4 5 CONFIG_PREEMPT=n 5 6 CONFIG_PREEMPT_DYNAMIC=n
+2 -1
tools/testing/selftests/rcutorture/configs/rcuscale/TRACE01
··· 1 1 CONFIG_SMP=y 2 - CONFIG_PREEMPT_NONE=y 2 + CONFIG_PREEMPT_LAZY=y 3 + CONFIG_PREEMPT_NONE=n 3 4 CONFIG_PREEMPT_VOLUNTARY=n 4 5 CONFIG_PREEMPT=n 5 6 CONFIG_PREEMPT_DYNAMIC=n
+2 -1
tools/testing/selftests/rcutorture/configs/refscale/NOPREEMPT
··· 1 1 CONFIG_SMP=y 2 - CONFIG_PREEMPT_NONE=y 2 + CONFIG_PREEMPT_LAZY=y 3 + CONFIG_PREEMPT_NONE=n 3 4 CONFIG_PREEMPT_VOLUNTARY=n 4 5 CONFIG_PREEMPT=n 5 6 CONFIG_PREEMPT_DYNAMIC=n
+2 -1
tools/testing/selftests/rcutorture/configs/refscale/TINY
··· 1 1 CONFIG_SMP=n 2 - CONFIG_PREEMPT_NONE=y 2 + CONFIG_PREEMPT_LAZY=y 3 + CONFIG_PREEMPT_NONE=n 3 4 CONFIG_PREEMPT_VOLUNTARY=n 4 5 CONFIG_PREEMPT=n 5 6 CONFIG_PREEMPT_DYNAMIC=n
+2 -1
tools/testing/selftests/rcutorture/configs/scf/NOPREEMPT
··· 1 1 CONFIG_SMP=y 2 - CONFIG_PREEMPT_NONE=y 2 + CONFIG_PREEMPT_LAZY=y 3 + CONFIG_PREEMPT_NONE=n 3 4 CONFIG_PREEMPT_VOLUNTARY=n 4 5 CONFIG_PREEMPT=n 5 6 CONFIG_PREEMPT_DYNAMIC=n
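These scenario fragments all move from CONFIG_PREEMPT_NONE to CONFIG_PREEMPT_LAZY. To check a rebuilt scenario locally, something along these lines should work, assuming the usual rcutorture runner flags (duration in minutes):

tools/testing/selftests/rcutorture/bin/kvm.sh --configs "SRCU-N TREE04" --duration 10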
+2
tools/testing/selftests/sched_ext/Makefile
··· 93 93 $(CLANG_SYS_INCLUDES) \ 94 94 -Wall -Wno-compare-distinct-pointer-types \ 95 95 -Wno-incompatible-function-pointer-types \ 96 + -Wno-microsoft-anon-tag \ 97 + -fms-extensions \ 96 98 -O2 -mcpu=v3 97 99 98 100 # sort removes libbpf duplicates when not cross-building
+2 -1
tools/testing/selftests/sched_ext/init_enable_count.c
··· 57 57 char buf; 58 58 59 59 close(pipe_fds[1]); 60 - read(pipe_fds[0], &buf, 1); 60 + if (read(pipe_fds[0], &buf, 1) < 0) 61 + exit(1); 61 62 close(pipe_fds[0]); 62 63 exit(0); 63 64 }
+2 -2
tools/testing/selftests/sched_ext/peek_dsq.bpf.c
··· 58 58 { 59 59 u32 slot_key; 60 60 long *slot_pid_ptr; 61 - int ix; 61 + u32 ix; 62 62 63 63 if (pid <= 0) 64 64 return; 65 65 66 66 /* Find an empty slot or one with the same PID */ 67 67 bpf_for(ix, 0, 10) { 68 - slot_key = (pid + ix) % MAX_SAMPLES; 68 + slot_key = ((u64)pid + ix) % MAX_SAMPLES; 69 69 slot_pid_ptr = bpf_map_lookup_elem(&peek_results, &slot_key); 70 70 if (!slot_pid_ptr) 71 71 continue;
-1
tools/testing/selftests/sched_ext/rt_stall.c
··· 15 15 #include <signal.h> 16 16 #include <bpf/bpf.h> 17 17 #include <scx/common.h> 18 - #include <unistd.h> 19 18 #include "rt_stall.bpf.skel.h" 20 19 #include "scx_test.h" 21 20 #include "../kselftest.h"
+3
tools/testing/selftests/sched_ext/runner.c
··· 166 166 enum scx_test_status status; 167 167 struct scx_test *test = &__scx_tests[i]; 168 168 169 + if (exit_req) 170 + break; 171 + 169 172 if (list) { 170 173 printf("%s\n", test->name); 171 174 if (i == (__scx_num_tests - 1))
+159
tools/testing/selftests/tc-testing/tc-tests/actions/ct.json
··· 505 505 "teardown": [ 506 506 "$TC qdisc del dev $DEV1 ingress" 507 507 ] 508 + }, 509 + { 510 + "id": "8883", 511 + "name": "Try to attach act_ct to an ets qdisc", 512 + "category": [ 513 + "actions", 514 + "ct" 515 + ], 516 + "plugins": { 517 + "requires": "nsPlugin" 518 + }, 519 + "setup": [ 520 + [ 521 + "$TC actions flush action ct", 522 + 0, 523 + 1, 524 + 255 525 + ], 526 + "$TC qdisc add dev $DEV1 root handle 1: ets bands 2" 527 + ], 528 + "cmdUnderTest": "$TC filter add dev $DEV1 parent 1: prio 1 protocol ip matchall action ct index 42", 529 + "expExitCode": "2", 530 + "verifyCmd": "$TC -j filter ls dev $DEV1 parent 1: prio 1 protocol ip", 531 + "matchJSON": [], 532 + "teardown": [ 533 + "$TC qdisc del dev $DEV1 root" 534 + ] 535 + }, 536 + { 537 + "id": "3b10", 538 + "name": "Attach act_ct to an ingress qdisc", 539 + "category": [ 540 + "actions", 541 + "ct" 542 + ], 543 + "plugins": { 544 + "requires": "nsPlugin" 545 + }, 546 + "setup": [ 547 + [ 548 + "$TC actions flush action ct", 549 + 0, 550 + 1, 551 + 255 552 + ], 553 + "$TC qdisc add dev $DEV1 ingress" 554 + ], 555 + "cmdUnderTest": "$TC filter add dev $DEV1 ingress prio 1 protocol ip matchall action ct index 42", 556 + "expExitCode": "0", 557 + "verifyCmd": "$TC -j filter ls dev $DEV1 ingress prio 1 protocol ip", 558 + "matchJSON": [ 559 + { 560 + "kind": "matchall" 561 + }, 562 + { 563 + "options": { 564 + "actions": [ 565 + { 566 + "order": 1, 567 + "kind": "ct", 568 + "index": 42, 569 + "ref": 1, 570 + "bind": 1 571 + } 572 + ] 573 + } 574 + } 575 + ], 576 + "teardown": [ 577 + "$TC qdisc del dev $DEV1 ingress" 578 + ] 579 + }, 580 + { 581 + "id": "0337", 582 + "name": "Attach act_ct to a clsact/egress qdisc", 583 + "category": [ 584 + "actions", 585 + "ct" 586 + ], 587 + "plugins": { 588 + "requires": "nsPlugin" 589 + }, 590 + "setup": [ 591 + [ 592 + "$TC actions flush action ct", 593 + 0, 594 + 1, 595 + 255 596 + ], 597 + "$TC qdisc add dev $DEV1 clsact" 598 + ], 599 + "cmdUnderTest": "$TC filter add dev $DEV1 egress prio 1 protocol ip matchall action ct index 42", 600 + "expExitCode": "0", 601 + "verifyCmd": "$TC -j filter ls dev $DEV1 egress prio 1 protocol ip", 602 + "matchJSON": [ 603 + { 604 + "kind": "matchall" 605 + }, 606 + { 607 + "options": { 608 + "actions": [ 609 + { 610 + "order": 1, 611 + "kind": "ct", 612 + "index": 42, 613 + "ref": 1, 614 + "bind": 1 615 + } 616 + ] 617 + } 618 + } 619 + ], 620 + "teardown": [ 621 + "$TC qdisc del dev $DEV1 clsact" 622 + ] 623 + }, 624 + { 625 + "id": "4f60", 626 + "name": "Attach act_ct to a shared block", 627 + "category": [ 628 + "actions", 629 + "ct" 630 + ], 631 + "plugins": { 632 + "requires": "nsPlugin" 633 + }, 634 + "setup": [ 635 + [ 636 + "$TC actions flush action ct", 637 + 0, 638 + 1, 639 + 255 640 + ], 641 + "$TC qdisc add dev $DEV1 ingress_block 21 clsact" 642 + ], 643 + "cmdUnderTest": "$TC filter add block 21 prio 1 protocol ip matchall action ct index 42", 644 + "expExitCode": "0", 645 + "verifyCmd": "$TC -j filter ls block 21 prio 1 protocol ip", 646 + "matchJSON": [ 647 + { 648 + "kind": "matchall" 649 + }, 650 + { 651 + "options": { 652 + "actions": [ 653 + { 654 + "order": 1, 655 + "kind": "ct", 656 + "index": 42, 657 + "ref": 1, 658 + "bind": 1 659 + } 660 + ] 661 + } 662 + } 663 + ], 664 + "teardown": [ 665 + "$TC qdisc del dev $DEV1 ingress_block 21 clsact" 666 + ] 508 667 } 509 668 ]
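The new cases can be run in isolation with tdc, assuming the usual category/id selectors (ids taken from the JSON above):

cd tools/testing/selftests/tc-testing
./tdc.py -c ct       # whole act_ct category
./tdc.py -e 8883     # just the ets attach-failure case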
+99
tools/testing/selftests/tc-testing/tc-tests/actions/ife.json
··· 1279 1279 "teardown": [ 1280 1280 "$TC actions flush action ife" 1281 1281 ] 1282 + }, 1283 + { 1284 + "id": "f2a0", 1285 + "name": "Update decode ife action with encode metadata", 1286 + "category": [ 1287 + "actions", 1288 + "ife" 1289 + ], 1290 + "plugins": { 1291 + "requires": "nsPlugin" 1292 + }, 1293 + "setup": [ 1294 + [ 1295 + "$TC actions flush action ife", 1296 + 0, 1297 + 1, 1298 + 255 1299 + ], 1300 + "$TC actions add action ife decode index 10" 1301 + ], 1302 + "cmdUnderTest": "$TC actions replace action ife encode use tcindex 1 index 10", 1303 + "expExitCode": "0", 1304 + "verifyCmd": "$TC -j actions get action ife index 10", 1305 + "matchJSON": [ 1306 + { 1307 + "total acts": 0 1308 + }, 1309 + { 1310 + "actions": [ 1311 + { 1312 + "order": 1, 1313 + "kind": "ife", 1314 + "mode": "encode", 1315 + "control_action": { 1316 + "type": "pipe" 1317 + }, 1318 + "type": "0xed3e", 1319 + "tcindex": 1, 1320 + "index": 10, 1321 + "ref": 1, 1322 + "bind": 0, 1323 + "not_in_hw": true 1324 + } 1325 + ] 1326 + } 1327 + ], 1328 + "teardown": [ 1329 + "$TC actions flush action ife" 1330 + ] 1331 + }, 1332 + { 1333 + "id": "d352", 1334 + "name": "Update decode ife action into encode with multiple metadata", 1335 + "category": [ 1336 + "actions", 1337 + "ife" 1338 + ], 1339 + "plugins": { 1340 + "requires": "nsPlugin" 1341 + }, 1342 + "setup": [ 1343 + [ 1344 + "$TC actions flush action ife", 1345 + 0, 1346 + 1, 1347 + 255 1348 + ], 1349 + "$TC actions add action ife decode index 10" 1350 + ], 1351 + "cmdUnderTest": "$TC actions replace action ife encode use tcindex 1 use mark 22 index 10", 1352 + "expExitCode": "0", 1353 + "verifyCmd": "$TC -j actions get action ife index 10", 1354 + "matchJSON": [ 1355 + { 1356 + "total acts": 0 1357 + }, 1358 + { 1359 + "actions": [ 1360 + { 1361 + "order": 1, 1362 + "kind": "ife", 1363 + "mode": "encode", 1364 + "control_action": { 1365 + "type": "pipe" 1366 + }, 1367 + "type": "0xed3e", 1368 + "tcindex": 1, 1369 + "mark": 22, 1370 + "index": 10, 1371 + "ref": 1, 1372 + "bind": 0, 1373 + "not_in_hw": true 1374 + } 1375 + ] 1376 + } 1377 + ], 1378 + "teardown": [ 1379 + "$TC actions flush action ife" 1380 + ] 1282 1381 } 1283 1382 ]
+7 -3
tools/testing/selftests/tc-testing/tdc_helper.py
··· 38 38 39 39 40 40 def list_categories(testlist): 41 - """ Show all categories that are present in a test case file. """ 42 - categories = set(map(lambda x: x['category'], testlist)) 41 + """Show all unique categories present in the test cases.""" 42 + categories = set() 43 + for t in testlist: 44 + if 'category' in t: 45 + categories.update(t['category']) 46 + 43 47 print("Available categories:") 44 - print(", ".join(str(s) for s in categories)) 48 + print(", ".join(sorted(categories))) 45 49 print("") 46 50 47 51
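A quick sanity check of the rewritten helper (run from the tc-testing directory so tdc_helper is importable):

python3 -c "from tdc_helper import list_categories; list_categories([{'category': ['actions', 'ct']}, {'category': ['qdisc']}])"
# expected output:
# Available categories:
# actions, ct, qdisc

The old version built a set of whole per-test category lists, which are unhashable and raised TypeError; the new one flattens them, skips tests without a category, and sorts the output.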