Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.14-rc8).

Conflicts:

tools/testing/selftests/net/Makefile:
03544faad761 ("selftest: net: add proc_net_pktgen")
3ed61b8938c6 ("selftests: net: test for lwtunnel dst ref loops")

tools/testing/selftests/net/config:
85cb3711acb8 ("selftests: net: Add test cases for link and peer netns")
3ed61b8938c6 ("selftests: net: test for lwtunnel dst ref loops")

Adjacent commits:

tools/testing/selftests/net/Makefile:
c935af429ec2 ("selftests: net: add support for testing SO_RCVMARK and SO_RCVPRIORITY")
355d940f4d5a ("Revert "selftests: Add IPv6 link-local address generation tests for GRE devices."")

Signed-off-by: Paolo Abeni <pabeni@redhat.com>

+1965 -1238
+1
.mailmap
··· 281 281 Herbert Xu <herbert@gondor.apana.org.au> 282 282 Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com> 283 283 Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn> 284 + Ike Panhc <ikepanhc@gmail.com> <ike.pan@canonical.com> 284 285 J. Bruce Fields <bfields@fieldses.org> <bfields@redhat.com> 285 286 J. Bruce Fields <bfields@fieldses.org> <bfields@citi.umich.edu> 286 287 Jacob Shin <Jacob.Shin@amd.com>
+1
Documentation/devicetree/bindings/input/touchscreen/imagis,ist3038c.yaml
··· 19 19 - imagis,ist3038 20 20 - imagis,ist3038b 21 21 - imagis,ist3038c 22 + - imagis,ist3038h 22 23 23 24 reg: 24 25 maxItems: 1
+1 -1
Documentation/devicetree/bindings/net/can/renesas,rcar-canfd.yaml
··· 170 170 const: renesas,r8a779h0-canfd 171 171 then: 172 172 patternProperties: 173 - "^channel[5-7]$": false 173 + "^channel[4-7]$": false 174 174 else: 175 175 if: 176 176 not:
+1 -1
Documentation/rust/quick-start.rst
··· 145 145 **************************** 146 146 147 147 The Rust standard library source is required because the build system will 148 - cross-compile ``core`` and ``alloc``. 148 + cross-compile ``core``. 149 149 150 150 If ``rustup`` is being used, run:: 151 151
+1 -1
Documentation/rust/testing.rst
··· 97 97 98 98 /// ``` 99 99 /// # use kernel::{spawn_work_item, workqueue}; 100 - /// spawn_work_item!(workqueue::system(), || pr_info!("x"))?; 100 + /// spawn_work_item!(workqueue::system(), || pr_info!("x\n"))?; 101 101 /// # Ok::<(), Error>(()) 102 102 /// ``` 103 103
+16 -8
MAINTAINERS
··· 2213 2213 M: Sven Peter <sven@svenpeter.dev> 2214 2214 M: Janne Grunau <j@jannau.net> 2215 2215 R: Alyssa Rosenzweig <alyssa@rosenzweig.io> 2216 + R: Neal Gompa <neal@gompa.dev> 2216 2217 L: asahi@lists.linux.dev 2217 2218 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 2218 2219 S: Maintained ··· 2238 2237 F: Documentation/devicetree/bindings/pinctrl/apple,pinctrl.yaml 2239 2238 F: Documentation/devicetree/bindings/power/apple* 2240 2239 F: Documentation/devicetree/bindings/pwm/apple,s5l-fpwm.yaml 2240 + F: Documentation/devicetree/bindings/spi/apple,spi.yaml 2241 2241 F: Documentation/devicetree/bindings/watchdog/apple,wdt.yaml 2242 2242 F: arch/arm64/boot/dts/apple/ 2243 2243 F: drivers/bluetooth/hci_bcm4377.c ··· 2256 2254 F: drivers/pinctrl/pinctrl-apple-gpio.c 2257 2255 F: drivers/pwm/pwm-apple.c 2258 2256 F: drivers/soc/apple/* 2257 + F: drivers/spi/spi-apple.c 2259 2258 F: drivers/watchdog/apple_wdt.c 2260 2259 F: include/dt-bindings/interrupt-controller/apple-aic.h 2261 2260 F: include/dt-bindings/pinctrl/apple.h ··· 8647 8644 8648 8645 EXEC & BINFMT API, ELF 8649 8646 M: Kees Cook <kees@kernel.org> 8650 - R: Eric Biederman <ebiederm@xmission.com> 8651 8647 L: linux-mm@kvack.org 8652 8648 S: Supported 8653 8649 T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/execve ··· 9831 9829 F: drivers/media/usb/go7007/ 9832 9830 9833 9831 GOODIX TOUCHSCREEN 9834 - M: Bastien Nocera <hadess@hadess.net> 9835 9832 M: Hans de Goede <hdegoede@redhat.com> 9836 9833 L: linux-input@vger.kernel.org 9837 9834 S: Maintained ··· 11142 11141 F: drivers/i2c/busses/i2c-icy.c 11143 11142 11144 11143 IDEAPAD LAPTOP EXTRAS DRIVER 11145 - M: Ike Panhc <ike.pan@canonical.com> 11144 + M: Ike Panhc <ikepanhc@gmail.com> 11146 11145 L: platform-driver-x86@vger.kernel.org 11147 11146 S: Maintained 11148 11147 W: http://launchpad.net/ideapad-laptop ··· 12827 12826 F: include/linux/kernfs.h 12828 12827 12829 12828 KEXEC 12830 - M: Eric Biederman <ebiederm@xmission.com> 12831 12829 L: kexec@lists.infradead.org 12832 - S: Maintained 12833 12830 W: http://kernel.org/pub/linux/utils/kernel/kexec/ 12834 12831 F: include/linux/kexec.h 12835 12832 F: include/uapi/linux/kexec.h ··· 13753 13754 13754 13755 LTC4286 HARDWARE MONITOR DRIVER 13755 13756 M: Delphine CC Chiu <Delphine_CC_Chiu@Wiwynn.com> 13756 - L: linux-i2c@vger.kernel.org 13757 + L: linux-hwmon@vger.kernel.org 13757 13758 S: Maintained 13758 13759 F: Documentation/devicetree/bindings/hwmon/lltc,ltc4286.yaml 13759 13760 F: Documentation/hwmon/ltc4286.rst 13760 - F: drivers/hwmon/pmbus/Kconfig 13761 - F: drivers/hwmon/pmbus/Makefile 13762 13761 F: drivers/hwmon/pmbus/ltc4286.c 13763 13762 13764 13763 LTC4306 I2C MULTIPLEXER DRIVER ··· 16661 16664 F: net/mptcp/ 16662 16665 F: tools/testing/selftests/bpf/*/*mptcp*.[ch] 16663 16666 F: tools/testing/selftests/net/mptcp/ 16667 + 16668 + NETWORKING [SRv6] 16669 + M: Andrea Mayer <andrea.mayer@uniroma2.it> 16670 + L: netdev@vger.kernel.org 16671 + S: Maintained 16672 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git 16673 + F: include/linux/seg6* 16674 + F: include/net/seg6* 16675 + F: include/uapi/linux/seg6* 16676 + F: net/ipv6/seg6* 16677 + F: tools/testing/selftests/net/srv6* 16664 16678 16665 16679 NETWORKING [TCP] 16666 16680 M: Eric Dumazet <edumazet@google.com>
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 14 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc6 5 + EXTRAVERSION = -rc7 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
-5
arch/arm/boot/dts/broadcom/bcm2711-rpi.dtsi
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #include "bcm2835-rpi.dtsi" 3 3 4 - #include <dt-bindings/power/raspberrypi-power.h> 5 4 #include <dt-bindings/reset/raspberrypi,firmware-reset.h> 6 5 7 6 / { ··· 99 100 100 101 &vchiq { 101 102 interrupts = <GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>; 102 - }; 103 - 104 - &xhci { 105 - power-domains = <&power RPI_POWER_DOMAIN_USB>; 106 103 };
+6 -6
arch/arm/boot/dts/broadcom/bcm2711.dtsi
··· 134 134 clocks = <&clocks BCM2835_CLOCK_UART>, 135 135 <&clocks BCM2835_CLOCK_VPU>; 136 136 clock-names = "uartclk", "apb_pclk"; 137 - arm,primecell-periphid = <0x00241011>; 137 + arm,primecell-periphid = <0x00341011>; 138 138 status = "disabled"; 139 139 }; 140 140 ··· 145 145 clocks = <&clocks BCM2835_CLOCK_UART>, 146 146 <&clocks BCM2835_CLOCK_VPU>; 147 147 clock-names = "uartclk", "apb_pclk"; 148 - arm,primecell-periphid = <0x00241011>; 148 + arm,primecell-periphid = <0x00341011>; 149 149 status = "disabled"; 150 150 }; 151 151 ··· 156 156 clocks = <&clocks BCM2835_CLOCK_UART>, 157 157 <&clocks BCM2835_CLOCK_VPU>; 158 158 clock-names = "uartclk", "apb_pclk"; 159 - arm,primecell-periphid = <0x00241011>; 159 + arm,primecell-periphid = <0x00341011>; 160 160 status = "disabled"; 161 161 }; 162 162 ··· 167 167 clocks = <&clocks BCM2835_CLOCK_UART>, 168 168 <&clocks BCM2835_CLOCK_VPU>; 169 169 clock-names = "uartclk", "apb_pclk"; 170 - arm,primecell-periphid = <0x00241011>; 170 + arm,primecell-periphid = <0x00341011>; 171 171 status = "disabled"; 172 172 }; 173 173 ··· 451 451 IRQ_TYPE_LEVEL_LOW)>, 452 452 <GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(4) | 453 453 IRQ_TYPE_LEVEL_LOW)>; 454 - /* This only applies to the ARMv7 stub */ 455 - arm,cpu-registers-not-fw-configured; 456 454 }; 457 455 458 456 cpus: cpus { ··· 608 610 #address-cells = <1>; 609 611 #size-cells = <0>; 610 612 interrupts = <GIC_SPI 176 IRQ_TYPE_LEVEL_HIGH>; 613 + power-domains = <&pm BCM2835_POWER_DOMAIN_USB>; 611 614 /* DWC2 and this IP block share the same USB PHY, 612 615 * enabling both at the same time results in lockups. 613 616 * So keep this node disabled and let the bootloader ··· 1176 1177 }; 1177 1178 1178 1179 &uart0 { 1180 + arm,primecell-periphid = <0x00341011>; 1179 1181 interrupts = <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>; 1180 1182 }; 1181 1183
+6 -6
arch/arm/boot/dts/broadcom/bcm4709-asus-rt-ac3200.dts
··· 124 124 }; 125 125 126 126 port@1 { 127 - label = "lan1"; 127 + label = "lan4"; 128 128 }; 129 129 130 130 port@2 { 131 - label = "lan2"; 132 - }; 133 - 134 - port@3 { 135 131 label = "lan3"; 136 132 }; 137 133 134 + port@3 { 135 + label = "lan2"; 136 + }; 137 + 138 138 port@4 { 139 - label = "lan4"; 139 + label = "lan1"; 140 140 }; 141 141 }; 142 142 };
+4 -4
arch/arm/boot/dts/broadcom/bcm47094-asus-rt-ac5300.dts
··· 126 126 127 127 ports { 128 128 port@0 { 129 - label = "lan4"; 129 + label = "wan"; 130 130 }; 131 131 132 132 port@1 { 133 - label = "lan3"; 133 + label = "lan1"; 134 134 }; 135 135 136 136 port@2 { ··· 138 138 }; 139 139 140 140 port@3 { 141 - label = "lan1"; 141 + label = "lan3"; 142 142 }; 143 143 144 144 port@4 { 145 - label = "wan"; 145 + label = "lan4"; 146 146 }; 147 147 }; 148 148 };
+5 -5
arch/arm/boot/dts/nxp/imx/imx6qdl-apalis.dtsi
··· 108 108 }; 109 109 }; 110 110 111 + poweroff { 112 + compatible = "regulator-poweroff"; 113 + cpu-supply = <&vgen2_reg>; 114 + }; 115 + 111 116 reg_module_3v3: regulator-module-3v3 { 112 117 compatible = "regulator-fixed"; 113 118 regulator-always-on; ··· 239 234 pinctrl-0 = <&pinctrl_flexcan2_default>; 240 235 pinctrl-1 = <&pinctrl_flexcan2_sleep>; 241 236 status = "disabled"; 242 - }; 243 - 244 - &clks { 245 - fsl,pmic-stby-poweroff; 246 237 }; 247 238 248 239 /* Apalis SPI1 */ ··· 528 527 529 528 pmic: pmic@8 { 530 529 compatible = "fsl,pfuze100"; 531 - fsl,pmic-stby-poweroff; 532 530 reg = <0x08>; 533 531 534 532 regulators {
+1
arch/arm/mach-davinci/Kconfig
··· 27 27 28 28 config ARCH_DAVINCI_DA850 29 29 bool "DA850/OMAP-L138/AM18x based system" 30 + select ARCH_DAVINCI_DA8XX 30 31 select DAVINCI_CP_INTC 31 32 32 33 config ARCH_DAVINCI_DA8XX
+1
arch/arm/mach-omap1/Kconfig
··· 8 8 select ARCH_OMAP 9 9 select CLKSRC_MMIO 10 10 select FORCE_PCI if PCCARD 11 + select GENERIC_IRQ_CHIP 11 12 select GPIOLIB 12 13 help 13 14 Support for older TI OMAP1 (omap7xx, omap15xx or omap16xx)
+1
arch/arm/mach-shmobile/headsmp.S
··· 136 136 .long shmobile_smp_arg - 1b 137 137 138 138 .bss 139 + .align 2 139 140 .globl shmobile_smp_mpidr 140 141 shmobile_smp_mpidr: 141 142 .space NR_CPUS * 4
+1 -1
arch/arm64/boot/dts/broadcom/bcm2712.dtsi
··· 227 227 interrupts = <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>; 228 228 clocks = <&clk_uart>, <&clk_vpu>; 229 229 clock-names = "uartclk", "apb_pclk"; 230 - arm,primecell-periphid = <0x00241011>; 230 + arm,primecell-periphid = <0x00341011>; 231 231 status = "disabled"; 232 232 }; 233 233
+3 -3
arch/arm64/boot/dts/freescale/imx8mm-verdin-dahlia.dtsi
··· 16 16 "Headphone Jack", "HPOUTR", 17 17 "IN2L", "Line In Jack", 18 18 "IN2R", "Line In Jack", 19 - "Headphone Jack", "MICBIAS", 20 - "IN1L", "Headphone Jack"; 19 + "Microphone Jack", "MICBIAS", 20 + "IN1L", "Microphone Jack"; 21 21 simple-audio-card,widgets = 22 - "Microphone", "Headphone Jack", 22 + "Microphone", "Microphone Jack", 23 23 "Headphone", "Headphone Jack", 24 24 "Line", "Line In Jack"; 25 25
+4 -12
arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql.dtsi
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later OR MIT 2 2 /* 3 - * Copyright 2021-2022 TQ-Systems GmbH 4 - * Author: Alexander Stein <alexander.stein@tq-group.com> 3 + * Copyright 2021-2025 TQ-Systems GmbH <linux@ew.tq-group.com>, 4 + * D-82229 Seefeld, Germany. 5 + * Author: Alexander Stein 5 6 */ 6 7 7 8 #include "imx8mp.dtsi" ··· 22 21 regulator-name = "VCC3V3"; 23 22 regulator-min-microvolt = <3300000>; 24 23 regulator-max-microvolt = <3300000>; 25 - regulator-always-on; 26 - }; 27 - 28 - /* e-MMC IO, needed for HS modes */ 29 - reg_vcc1v8: regulator-vcc1v8 { 30 - compatible = "regulator-fixed"; 31 - regulator-name = "VCC1V8"; 32 - regulator-min-microvolt = <1800000>; 33 - regulator-max-microvolt = <1800000>; 34 24 regulator-always-on; 35 25 }; 36 26 }; ··· 189 197 no-sd; 190 198 no-sdio; 191 199 vmmc-supply = <&reg_vcc3v3>; 192 - vqmmc-supply = <&reg_vcc1v8>; 200 + vqmmc-supply = <&buck5_reg>; 193 201 status = "okay"; 194 202 }; 195 203
+3 -3
arch/arm64/boot/dts/freescale/imx8mp-verdin-dahlia.dtsi
··· 28 28 "Headphone Jack", "HPOUTR", 29 29 "IN2L", "Line In Jack", 30 30 "IN2R", "Line In Jack", 31 - "Headphone Jack", "MICBIAS", 32 - "IN1L", "Headphone Jack"; 31 + "Microphone Jack", "MICBIAS", 32 + "IN1L", "Microphone Jack"; 33 33 simple-audio-card,widgets = 34 - "Microphone", "Headphone Jack", 34 + "Microphone", "Microphone Jack", 35 35 "Headphone", "Headphone Jack", 36 36 "Line", "Line In Jack"; 37 37
-1
arch/arm64/boot/dts/qcom/sdm845.dtsi
··· 5163 5163 <GIC_SPI 341 IRQ_TYPE_LEVEL_HIGH>, 5164 5164 <GIC_SPI 342 IRQ_TYPE_LEVEL_HIGH>, 5165 5165 <GIC_SPI 343 IRQ_TYPE_LEVEL_HIGH>; 5166 - dma-coherent; 5167 5166 }; 5168 5167 5169 5168 anoc_1_tbu: tbu@150c5000 {
+12
arch/arm64/boot/dts/rockchip/px30-ringneck-haikou.dts
··· 194 194 <3 RK_PB3 RK_FUNC_GPIO &pcfg_pull_none>; 195 195 }; 196 196 }; 197 + 198 + uart { 199 + uart5_rts_pin: uart5-rts-pin { 200 + rockchip,pins = 201 + <0 RK_PB5 RK_FUNC_GPIO &pcfg_pull_none>; 202 + }; 203 + }; 197 204 }; 198 205 199 206 &pwm0 { ··· 229 222 }; 230 223 231 224 &uart0 { 225 + pinctrl-names = "default"; 226 + pinctrl-0 = <&uart0_xfer>; 232 227 status = "okay"; 233 228 }; 234 229 235 230 &uart5 { 231 + /* Add pinmux for rts-gpios (uart5_rts_pin) */ 232 + pinctrl-names = "default"; 233 + pinctrl-0 = <&uart5_xfer &uart5_rts_pin>; 236 234 rts-gpios = <&gpio0 RK_PB5 GPIO_ACTIVE_HIGH>; 237 235 status = "okay"; 238 236 };
+1 -1
arch/arm64/boot/dts/rockchip/rk3399-nanopi-r4s.dtsi
··· 115 115 }; 116 116 117 117 &u2phy1_host { 118 - status = "disabled"; 118 + phy-supply = <&vdd_5v>; 119 119 }; 120 120 121 121 &uart0 {
+14
arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dtsi
··· 227 227 vin-supply = <&vcc12v_dcin>; 228 228 }; 229 229 230 + vcca_0v9: regulator-vcca-0v9 { 231 + compatible = "regulator-fixed"; 232 + regulator-name = "vcca_0v9"; 233 + regulator-always-on; 234 + regulator-boot-on; 235 + regulator-min-microvolt = <900000>; 236 + regulator-max-microvolt = <900000>; 237 + vin-supply = <&vcc3v3_sys>; 238 + }; 239 + 230 240 vdd_log: regulator-vdd-log { 231 241 compatible = "pwm-regulator"; 232 242 pwms = <&pwm2 0 25000 1>; ··· 322 312 }; 323 313 324 314 &hdmi { 315 + avdd-0v9-supply = <&vcca_0v9>; 316 + avdd-1v8-supply = <&vcc1v8_dvp>; 325 317 ddc-i2c-bus = <&i2c3>; 326 318 pinctrl-names = "default"; 327 319 pinctrl-0 = <&hdmi_cec>; ··· 673 661 num-lanes = <4>; 674 662 pinctrl-names = "default"; 675 663 pinctrl-0 = <&pcie_perst>; 664 + vpcie0v9-supply = <&vcca_0v9>; 665 + vpcie1v8-supply = <&vcca_1v8>; 676 666 vpcie12v-supply = <&vcc12v_dcin>; 677 667 vpcie3v3-supply = <&vcc3v3_pcie>; 678 668 status = "okay";
-1
arch/arm64/boot/dts/rockchip/rk3566-lubancat-1.dts
··· 512 512 513 513 &sdmmc0 { 514 514 max-frequency = <150000000>; 515 - supports-sd; 516 515 bus-width = <4>; 517 516 cap-mmc-highspeed; 518 517 cap-sd-highspeed;
-1
arch/arm64/boot/dts/rockchip/rk3588-jaguar.dts
··· 503 503 non-removable; 504 504 pinctrl-names = "default"; 505 505 pinctrl-0 = <&emmc_bus8 &emmc_cmd &emmc_clk &emmc_data_strobe>; 506 - supports-cqe; 507 506 vmmc-supply = <&vcc_3v3_s3>; 508 507 vqmmc-supply = <&vcc_1v8_s3>; 509 508 status = "okay";
+1 -2
arch/arm64/boot/dts/rockchip/rk3588-rock-5-itx.dts
··· 690 690 691 691 &sdhci { 692 692 bus-width = <8>; 693 - max-frequency = <200000000>; 693 + max-frequency = <150000000>; 694 694 mmc-hs400-1_8v; 695 695 mmc-hs400-enhanced-strobe; 696 - mmc-hs200-1_8v; 697 696 no-sdio; 698 697 no-sd; 699 698 non-removable;
-1
arch/arm64/boot/dts/rockchip/rk3588-tiger.dtsi
··· 386 386 non-removable; 387 387 pinctrl-names = "default"; 388 388 pinctrl-0 = <&emmc_bus8 &emmc_cmd &emmc_clk &emmc_data_strobe>; 389 - supports-cqe; 390 389 vmmc-supply = <&vcc_3v3_s3>; 391 390 vqmmc-supply = <&vcc_1v8_s3>; 392 391 status = "okay";
+12 -10
arch/arm64/include/asm/tlbflush.h
··· 396 396 #define __flush_tlb_range_op(op, start, pages, stride, \ 397 397 asid, tlb_level, tlbi_user, lpa2) \ 398 398 do { \ 399 + typeof(start) __flush_start = start; \ 400 + typeof(pages) __flush_pages = pages; \ 399 401 int num = 0; \ 400 402 int scale = 3; \ 401 403 int shift = lpa2 ? 16 : PAGE_SHIFT; \ 402 404 unsigned long addr; \ 403 405 \ 404 - while (pages > 0) { \ 406 + while (__flush_pages > 0) { \ 405 407 if (!system_supports_tlb_range() || \ 406 - pages == 1 || \ 407 - (lpa2 && start != ALIGN(start, SZ_64K))) { \ 408 - addr = __TLBI_VADDR(start, asid); \ 408 + __flush_pages == 1 || \ 409 + (lpa2 && __flush_start != ALIGN(__flush_start, SZ_64K))) { \ 410 + addr = __TLBI_VADDR(__flush_start, asid); \ 409 411 __tlbi_level(op, addr, tlb_level); \ 410 412 if (tlbi_user) \ 411 413 __tlbi_user_level(op, addr, tlb_level); \ 412 - start += stride; \ 413 - pages -= stride >> PAGE_SHIFT; \ 414 + __flush_start += stride; \ 415 + __flush_pages -= stride >> PAGE_SHIFT; \ 414 416 continue; \ 415 417 } \ 416 418 \ 417 - num = __TLBI_RANGE_NUM(pages, scale); \ 419 + num = __TLBI_RANGE_NUM(__flush_pages, scale); \ 418 420 if (num >= 0) { \ 419 - addr = __TLBI_VADDR_RANGE(start >> shift, asid, \ 421 + addr = __TLBI_VADDR_RANGE(__flush_start >> shift, asid, \ 420 422 scale, num, tlb_level); \ 421 423 __tlbi(r##op, addr); \ 422 424 if (tlbi_user) \ 423 425 __tlbi_user(r##op, addr); \ 424 - start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \ 425 - pages -= __TLBI_RANGE_PAGES(num, scale); \ 426 + __flush_start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \ 427 + __flush_pages -= __TLBI_RANGE_PAGES(num, scale);\ 426 428 } \ 427 429 scale--; \ 428 430 } \
+4 -1
arch/arm64/mm/mmu.c
··· 1177 1177 struct vmem_altmap *altmap) 1178 1178 { 1179 1179 WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END)); 1180 + /* [start, end] should be within one section */ 1181 + WARN_ON_ONCE(end - start > PAGES_PER_SECTION * sizeof(struct page)); 1180 1182 1181 - if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES)) 1183 + if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) || 1184 + (end - start < PAGES_PER_SECTION * sizeof(struct page))) 1182 1185 return vmemmap_populate_basepages(start, end, node, altmap); 1183 1186 else 1184 1187 return vmemmap_populate_hugepages(start, end, node, altmap);
+1 -1
arch/riscv/boot/dts/starfive/jh7110-pinfunc.h
··· 89 89 #define GPOUT_SYS_SDIO1_DATA1 59 90 90 #define GPOUT_SYS_SDIO1_DATA2 60 91 91 #define GPOUT_SYS_SDIO1_DATA3 61 92 - #define GPOUT_SYS_SDIO1_DATA4 63 92 + #define GPOUT_SYS_SDIO1_DATA4 62 93 93 #define GPOUT_SYS_SDIO1_DATA5 63 94 94 #define GPOUT_SYS_SDIO1_DATA6 64 95 95 #define GPOUT_SYS_SDIO1_DATA7 65
+4
arch/x86/kernel/cpu/vmware.c
··· 26 26 #include <linux/export.h> 27 27 #include <linux/clocksource.h> 28 28 #include <linux/cpu.h> 29 + #include <linux/efi.h> 29 30 #include <linux/reboot.h> 30 31 #include <linux/static_call.h> 31 32 #include <asm/div64.h> ··· 429 428 } else { 430 429 pr_warn("Failed to get TSC freq from the hypervisor\n"); 431 430 } 431 + 432 + if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP) && !efi_enabled(EFI_BOOT)) 433 + x86_init.mpparse.find_mptable = mpparse_find_mptable; 432 434 433 435 vmware_paravirt_ops_setup(); 434 436
+11 -3
drivers/ata/libata-core.c
··· 2845 2845 (id[ATA_ID_SATA_CAPABILITY] & 0xe) == 0x2) 2846 2846 dev->quirks |= ATA_QUIRK_NOLPM; 2847 2847 2848 + if (dev->quirks & ATA_QUIRK_NO_LPM_ON_ATI && 2849 + ata_dev_check_adapter(dev, PCI_VENDOR_ID_ATI)) 2850 + dev->quirks |= ATA_QUIRK_NOLPM; 2851 + 2848 2852 if (ap->flags & ATA_FLAG_NO_LPM) 2849 2853 dev->quirks |= ATA_QUIRK_NOLPM; 2850 2854 ··· 3901 3897 [__ATA_QUIRK_MAX_SEC_1024] = "maxsec1024", 3902 3898 [__ATA_QUIRK_MAX_TRIM_128M] = "maxtrim128m", 3903 3899 [__ATA_QUIRK_NO_NCQ_ON_ATI] = "noncqonati", 3900 + [__ATA_QUIRK_NO_LPM_ON_ATI] = "nolpmonati", 3904 3901 [__ATA_QUIRK_NO_ID_DEV_LOG] = "noiddevlog", 3905 3902 [__ATA_QUIRK_NO_LOG_DIR] = "nologdir", 3906 3903 [__ATA_QUIRK_NO_FUA] = "nofua", ··· 4147 4142 ATA_QUIRK_ZERO_AFTER_TRIM }, 4148 4143 { "Samsung SSD 860*", NULL, ATA_QUIRK_NO_NCQ_TRIM | 4149 4144 ATA_QUIRK_ZERO_AFTER_TRIM | 4150 - ATA_QUIRK_NO_NCQ_ON_ATI }, 4145 + ATA_QUIRK_NO_NCQ_ON_ATI | 4146 + ATA_QUIRK_NO_LPM_ON_ATI }, 4151 4147 { "Samsung SSD 870*", NULL, ATA_QUIRK_NO_NCQ_TRIM | 4152 4148 ATA_QUIRK_ZERO_AFTER_TRIM | 4153 - ATA_QUIRK_NO_NCQ_ON_ATI }, 4149 + ATA_QUIRK_NO_NCQ_ON_ATI | 4150 + ATA_QUIRK_NO_LPM_ON_ATI }, 4154 4151 { "SAMSUNG*MZ7LH*", NULL, ATA_QUIRK_NO_NCQ_TRIM | 4155 4152 ATA_QUIRK_ZERO_AFTER_TRIM | 4156 - ATA_QUIRK_NO_NCQ_ON_ATI, }, 4153 + ATA_QUIRK_NO_NCQ_ON_ATI | 4154 + ATA_QUIRK_NO_LPM_ON_ATI }, 4157 4155 { "FCCT*M500*", NULL, ATA_QUIRK_NO_NCQ_TRIM | 4158 4156 ATA_QUIRK_ZERO_AFTER_TRIM }, 4159 4157
+2 -2
drivers/block/null_blk/main.c
··· 1549 1549 cmd = blk_mq_rq_to_pdu(req); 1550 1550 cmd->error = null_process_cmd(cmd, req_op(req), blk_rq_pos(req), 1551 1551 blk_rq_sectors(req)); 1552 - if (!blk_mq_add_to_batch(req, iob, (__force int) cmd->error, 1553 - blk_mq_end_request_batch)) 1552 + if (!blk_mq_add_to_batch(req, iob, cmd->error != BLK_STS_OK, 1553 + blk_mq_end_request_batch)) 1554 1554 blk_mq_end_request(req, cmd->error); 1555 1555 nr++; 1556 1556 }
+3 -2
drivers/block/virtio_blk.c
··· 1207 1207 1208 1208 while ((vbr = virtqueue_get_buf(vq->vq, &len)) != NULL) { 1209 1209 struct request *req = blk_mq_rq_from_pdu(vbr); 1210 + u8 status = virtblk_vbr_status(vbr); 1210 1211 1211 1212 found++; 1212 1213 if (!blk_mq_complete_request_remote(req) && 1213 - !blk_mq_add_to_batch(req, iob, virtblk_vbr_status(vbr), 1214 - virtblk_complete_batch)) 1214 + !blk_mq_add_to_batch(req, iob, status != VIRTIO_BLK_S_OK, 1215 + virtblk_complete_batch)) 1215 1216 virtblk_request_done(req); 1216 1217 } 1217 1218
-2
drivers/clk/qcom/dispcc-sm8750.c
··· 827 827 &disp_cc_mdss_byte0_clk_src.clkr.hw, 828 828 }, 829 829 .num_parents = 1, 830 - .flags = CLK_SET_RATE_PARENT, 831 830 .ops = &clk_regmap_div_ops, 832 831 }, 833 832 }; ··· 841 842 &disp_cc_mdss_byte1_clk_src.clkr.hw, 842 843 }, 843 844 .num_parents = 1, 844 - .flags = CLK_SET_RATE_PARENT, 845 845 .ops = &clk_regmap_div_ops, 846 846 }, 847 847 };
-8
drivers/clk/samsung/clk-gs101.c
··· 382 382 EARLY_WAKEUP_DPU_DEST, 383 383 EARLY_WAKEUP_CSIS_DEST, 384 384 EARLY_WAKEUP_SW_TRIG_APM, 385 - EARLY_WAKEUP_SW_TRIG_APM_SET, 386 - EARLY_WAKEUP_SW_TRIG_APM_CLEAR, 387 385 EARLY_WAKEUP_SW_TRIG_CLUSTER0, 388 - EARLY_WAKEUP_SW_TRIG_CLUSTER0_SET, 389 - EARLY_WAKEUP_SW_TRIG_CLUSTER0_CLEAR, 390 386 EARLY_WAKEUP_SW_TRIG_DPU, 391 - EARLY_WAKEUP_SW_TRIG_DPU_SET, 392 - EARLY_WAKEUP_SW_TRIG_DPU_CLEAR, 393 387 EARLY_WAKEUP_SW_TRIG_CSIS, 394 - EARLY_WAKEUP_SW_TRIG_CSIS_SET, 395 - EARLY_WAKEUP_SW_TRIG_CSIS_CLEAR, 396 388 CLK_CON_MUX_MUX_CLKCMU_BO_BUS, 397 389 CLK_CON_MUX_MUX_CLKCMU_BUS0_BUS, 398 390 CLK_CON_MUX_MUX_CLKCMU_BUS1_BUS,
+6 -1
drivers/clk/samsung/clk-pll.c
··· 206 206 */ 207 207 /* Maximum lock time can be 270 * PDIV cycles */ 208 208 #define PLL35XX_LOCK_FACTOR (270) 209 + #define PLL142XX_LOCK_FACTOR (150) 209 210 210 211 #define PLL35XX_MDIV_MASK (0x3FF) 211 212 #define PLL35XX_PDIV_MASK (0x3F) ··· 273 272 } 274 273 275 274 /* Set PLL lock time. */ 276 - writel_relaxed(rate->pdiv * PLL35XX_LOCK_FACTOR, 275 + if (pll->type == pll_142xx) 276 + writel_relaxed(rate->pdiv * PLL142XX_LOCK_FACTOR, 277 + pll->lock_reg); 278 + else 279 + writel_relaxed(rate->pdiv * PLL35XX_LOCK_FACTOR, 277 280 pll->lock_reg); 278 281 279 282 /* Change PLL PMS values */
+1 -1
drivers/dpll/dpll_core.c
··· 508 508 xa_init_flags(&pin->parent_refs, XA_FLAGS_ALLOC); 509 509 ret = xa_alloc_cyclic(&dpll_pin_xa, &pin->id, pin, xa_limit_32b, 510 510 &dpll_pin_xa_id, GFP_KERNEL); 511 - if (ret) 511 + if (ret < 0) 512 512 goto err_xa_alloc; 513 513 return pin; 514 514 err_xa_alloc:
+4
drivers/firmware/efi/libstub/randomalloc.c
··· 75 75 if (align < EFI_ALLOC_ALIGN) 76 76 align = EFI_ALLOC_ALIGN; 77 77 78 + /* Avoid address 0x0, as it can be mistaken for NULL */ 79 + if (alloc_min == 0) 80 + alloc_min = align; 81 + 78 82 size = round_up(size, EFI_ALLOC_ALIGN); 79 83 80 84 /* count the suitable slots in each memory map entry */
+1
drivers/firmware/imx/imx-scu.c
··· 280 280 return ret; 281 281 282 282 sc_ipc->fast_ipc = of_device_is_compatible(args.np, "fsl,imx8-mu-scu"); 283 + of_node_put(args.np); 283 284 284 285 num_channel = sc_ipc->fast_ipc ? 2 : SCU_MU_CHAN_NUM; 285 286 for (i = 0; i < num_channel; i++) {
+9 -9
drivers/firmware/qcom/qcom_qseecom_uefisecapp.c
··· 814 814 815 815 qcuefi->client = container_of(aux_dev, struct qseecom_client, aux_dev); 816 816 817 - auxiliary_set_drvdata(aux_dev, qcuefi); 818 - status = qcuefi_set_reference(qcuefi); 819 - if (status) 820 - return status; 821 - 822 - status = efivars_register(&qcuefi->efivars, &qcom_efivar_ops); 823 - if (status) 824 - qcuefi_set_reference(NULL); 825 - 826 817 memset(&pool_config, 0, sizeof(pool_config)); 827 818 pool_config.initial_size = SZ_4K; 828 819 pool_config.policy = QCOM_TZMEM_POLICY_MULTIPLIER; ··· 823 832 qcuefi->mempool = devm_qcom_tzmem_pool_new(&aux_dev->dev, &pool_config); 824 833 if (IS_ERR(qcuefi->mempool)) 825 834 return PTR_ERR(qcuefi->mempool); 835 + 836 + auxiliary_set_drvdata(aux_dev, qcuefi); 837 + status = qcuefi_set_reference(qcuefi); 838 + if (status) 839 + return status; 840 + 841 + status = efivars_register(&qcuefi->efivars, &qcom_efivar_ops); 842 + if (status) 843 + qcuefi_set_reference(NULL); 826 844 827 845 return status; 828 846 }
+2 -2
drivers/firmware/qcom/qcom_scm.c
··· 2301 2301 2302 2302 __scm->mempool = devm_qcom_tzmem_pool_new(__scm->dev, &pool_config); 2303 2303 if (IS_ERR(__scm->mempool)) { 2304 - dev_err_probe(__scm->dev, PTR_ERR(__scm->mempool), 2305 - "Failed to create the SCM memory pool\n"); 2304 + ret = dev_err_probe(__scm->dev, PTR_ERR(__scm->mempool), 2305 + "Failed to create the SCM memory pool\n"); 2306 2306 goto err; 2307 2307 } 2308 2308
+9 -6
drivers/gpio/gpiolib-cdev.c
··· 2729 2729 cdev->gdev = gpio_device_get(gdev); 2730 2730 2731 2731 cdev->lineinfo_changed_nb.notifier_call = lineinfo_changed_notify; 2732 - ret = atomic_notifier_chain_register(&gdev->line_state_notifier, 2733 - &cdev->lineinfo_changed_nb); 2732 + scoped_guard(write_lock_irqsave, &gdev->line_state_lock) 2733 + ret = raw_notifier_chain_register(&gdev->line_state_notifier, 2734 + &cdev->lineinfo_changed_nb); 2734 2735 if (ret) 2735 2736 goto out_free_bitmap; 2736 2737 ··· 2755 2754 blocking_notifier_chain_unregister(&gdev->device_notifier, 2756 2755 &cdev->device_unregistered_nb); 2757 2756 out_unregister_line_notifier: 2758 - atomic_notifier_chain_unregister(&gdev->line_state_notifier, 2759 - &cdev->lineinfo_changed_nb); 2757 + scoped_guard(write_lock_irqsave, &gdev->line_state_lock) 2758 + raw_notifier_chain_unregister(&gdev->line_state_notifier, 2759 + &cdev->lineinfo_changed_nb); 2760 2760 out_free_bitmap: 2761 2761 gpio_device_put(gdev); 2762 2762 bitmap_free(cdev->watched_lines); ··· 2781 2779 2782 2780 blocking_notifier_chain_unregister(&gdev->device_notifier, 2783 2781 &cdev->device_unregistered_nb); 2784 - atomic_notifier_chain_unregister(&gdev->line_state_notifier, 2785 - &cdev->lineinfo_changed_nb); 2782 + scoped_guard(write_lock_irqsave, &gdev->line_state_lock) 2783 + raw_notifier_chain_unregister(&gdev->line_state_notifier, 2784 + &cdev->lineinfo_changed_nb); 2786 2785 bitmap_free(cdev->watched_lines); 2787 2786 gpio_device_put(gdev); 2788 2787 kfree(cdev);
+16 -19
drivers/gpio/gpiolib.c
··· 1025 1025 } 1026 1026 } 1027 1027 1028 - ATOMIC_INIT_NOTIFIER_HEAD(&gdev->line_state_notifier); 1028 + rwlock_init(&gdev->line_state_lock); 1029 + RAW_INIT_NOTIFIER_HEAD(&gdev->line_state_notifier); 1029 1030 BLOCKING_INIT_NOTIFIER_HEAD(&gdev->device_notifier); 1030 1031 1031 1032 ret = init_srcu_struct(&gdev->srcu); ··· 1057 1056 1058 1057 desc->gdev = gdev; 1059 1058 1060 - if (gc->get_direction && gpiochip_line_is_valid(gc, desc_index)) { 1061 - ret = gc->get_direction(gc, desc_index); 1062 - if (ret < 0) 1063 - /* 1064 - * FIXME: Bail-out here once all GPIO drivers 1065 - * are updated to not return errors in 1066 - * situations that can be considered normal 1067 - * operation. 1068 - */ 1069 - dev_warn(&gdev->dev, 1070 - "%s: get_direction failed: %d\n", 1071 - __func__, ret); 1072 - 1073 - assign_bit(FLAG_IS_OUT, &desc->flags, !ret); 1074 - } else { 1059 + /* 1060 + * We would typically want to check the return value of 1061 + * get_direction() here but we must not check the return value 1062 + * and bail-out as pin controllers can have pins configured to 1063 + * alternate functions and return -EINVAL. Also: there's no 1064 + * need to take the SRCU lock here. 1065 + */ 1066 + if (gc->get_direction && gpiochip_line_is_valid(gc, desc_index)) 1067 + assign_bit(FLAG_IS_OUT, &desc->flags, 1068 + !gc->get_direction(gc, desc_index)); 1069 + else 1075 1070 assign_bit(FLAG_IS_OUT, 1076 1071 &desc->flags, !gc->direction_input); 1077 - } 1078 1072 } 1079 1073 1080 1074 ret = of_gpiochip_add(gc); ··· 4189 4193 4190 4194 void gpiod_line_state_notify(struct gpio_desc *desc, unsigned long action) 4191 4195 { 4192 - atomic_notifier_call_chain(&desc->gdev->line_state_notifier, 4193 - action, desc); 4196 + guard(read_lock_irqsave)(&desc->gdev->line_state_lock); 4197 + 4198 + raw_notifier_call_chain(&desc->gdev->line_state_notifier, action, desc); 4194 4199 } 4195 4200 4196 4201 /**
+4 -1
drivers/gpio/gpiolib.h
··· 16 16 #include <linux/gpio/driver.h> 17 17 #include <linux/module.h> 18 18 #include <linux/notifier.h> 19 + #include <linux/spinlock.h> 19 20 #include <linux/srcu.h> 20 21 #include <linux/workqueue.h> 21 22 ··· 46 45 * @list: links gpio_device:s together for traversal 47 46 * @line_state_notifier: used to notify subscribers about lines being 48 47 * requested, released or reconfigured 48 + * @line_state_lock: RW-spinlock protecting the line state notifier 49 49 * @line_state_wq: used to emit line state events from a separate thread in 50 50 * process context 51 51 * @device_notifier: used to notify character device wait queues about the GPIO ··· 74 72 const char *label; 75 73 void *data; 76 74 struct list_head list; 77 - struct atomic_notifier_head line_state_notifier; 75 + struct raw_notifier_head line_state_notifier; 76 + rwlock_t line_state_lock; 78 77 struct workqueue_struct *line_state_wq; 79 78 struct blocking_notifier_head device_notifier; 80 79 struct srcu_struct srcu;
+9 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 2555 2555 int r; 2556 2556 2557 2557 r = amdgpu_device_suspend(drm_dev, true); 2558 - adev->in_s4 = false; 2559 2558 if (r) 2560 2559 return r; 2561 2560 ··· 2566 2567 static int amdgpu_pmops_thaw(struct device *dev) 2567 2568 { 2568 2569 struct drm_device *drm_dev = dev_get_drvdata(dev); 2570 + struct amdgpu_device *adev = drm_to_adev(drm_dev); 2571 + int r; 2569 2572 2570 - return amdgpu_device_resume(drm_dev, true); 2573 + r = amdgpu_device_resume(drm_dev, true); 2574 + adev->in_s4 = false; 2575 + 2576 + return r; 2571 2577 } 2572 2578 2573 2579 static int amdgpu_pmops_poweroff(struct device *dev) ··· 2585 2581 static int amdgpu_pmops_restore(struct device *dev) 2586 2582 { 2587 2583 struct drm_device *drm_dev = dev_get_drvdata(dev); 2584 + struct amdgpu_device *adev = drm_to_adev(drm_dev); 2585 + 2586 + adev->in_s4 = false; 2588 2587 2589 2588 return amdgpu_device_resume(drm_dev, true); 2590 2589 }
+3 -2
drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
··· 528 528 529 529 bo_adev = amdgpu_ttm_adev(bo->tbo.bdev); 530 530 coherent = bo->flags & AMDGPU_GEM_CREATE_COHERENT; 531 - is_system = (bo->tbo.resource->mem_type == TTM_PL_TT) || 532 - (bo->tbo.resource->mem_type == AMDGPU_PL_PREEMPT); 531 + is_system = bo->tbo.resource && 532 + (bo->tbo.resource->mem_type == TTM_PL_TT || 533 + bo->tbo.resource->mem_type == AMDGPU_PL_PREEMPT); 533 534 534 535 if (bo && bo->flags & AMDGPU_GEM_CREATE_GFX12_DCC) 535 536 *flags |= AMDGPU_PTE_DCC;
+1 -1
drivers/gpu/drm/amd/amdgpu/vce_v2_0.c
··· 284 284 return 0; 285 285 } 286 286 287 - ip_block = amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_VCN); 287 + ip_block = amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_VCE); 288 288 if (!ip_block) 289 289 return -EINVAL; 290 290
+5 -3
drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
··· 1230 1230 decrement_queue_count(dqm, qpd, q); 1231 1231 1232 1232 if (dqm->dev->kfd->shared_resources.enable_mes) { 1233 - retval = remove_queue_mes(dqm, q, qpd); 1234 - if (retval) { 1233 + int err; 1234 + 1235 + err = remove_queue_mes(dqm, q, qpd); 1236 + if (err) { 1235 1237 dev_err(dev, "Failed to evict queue %d\n", 1236 1238 q->properties.queue_id); 1237 - goto out; 1239 + retval = err; 1238 1240 } 1239 1241 } 1240 1242 }
+16 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 245 245 static void handle_hpd_irq_helper(struct amdgpu_dm_connector *aconnector); 246 246 static void handle_hpd_rx_irq(void *param); 247 247 248 + static void amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm, 249 + int bl_idx, 250 + u32 user_brightness); 251 + 248 252 static bool 249 253 is_timing_unchanged_for_freesync(struct drm_crtc_state *old_crtc_state, 250 254 struct drm_crtc_state *new_crtc_state); ··· 3375 3371 3376 3372 mutex_unlock(&dm->dc_lock); 3377 3373 3374 + /* set the backlight after a reset */ 3375 + for (i = 0; i < dm->num_of_edps; i++) { 3376 + if (dm->backlight_dev[i]) 3377 + amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]); 3378 + } 3379 + 3378 3380 return 0; 3379 3381 } 3382 + 3383 + /* leave display off for S4 sequence */ 3384 + if (adev->in_s4) 3385 + return 0; 3386 + 3380 3387 /* Recreate dc_state - DC invalidates it when setting power state to S3. */ 3381 3388 dc_state_release(dm_state->context); 3382 3389 dm_state->context = dc_state_create(dm->dc, NULL); ··· 4921 4906 dm->backlight_dev[aconnector->bl_idx] = 4922 4907 backlight_device_register(bl_name, aconnector->base.kdev, dm, 4923 4908 &amdgpu_dm_backlight_ops, &props); 4909 + dm->brightness[aconnector->bl_idx] = props.brightness; 4924 4910 4925 4911 if (IS_ERR(dm->backlight_dev[aconnector->bl_idx])) { 4926 4912 DRM_ERROR("DM: Backlight registration failed!\n"); ··· 4989 4973 aconnector->bl_idx = bl_idx; 4990 4974 4991 4975 amdgpu_dm_update_backlight_caps(dm, bl_idx); 4992 - dm->brightness[bl_idx] = AMDGPU_MAX_BL_LEVEL; 4993 4976 dm->backlight_link[bl_idx] = link; 4994 4977 dm->num_of_edps++; 4995 4978
+1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
··· 455 455 for (i = 0; i < hdcp_work->max_link; i++) { 456 456 cancel_delayed_work_sync(&hdcp_work[i].callback_dwork); 457 457 cancel_delayed_work_sync(&hdcp_work[i].watchdog_timer_dwork); 458 + cancel_delayed_work_sync(&hdcp_work[i].property_validate_dwork); 458 459 } 459 460 460 461 sysfs_remove_bin_file(kobj, &hdcp_work[0].attr);
+45 -19
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
··· 894 894 struct drm_device *dev = adev_to_drm(adev); 895 895 struct drm_connector *connector; 896 896 struct drm_connector_list_iter iter; 897 + int irq_type; 897 898 int i; 899 + 900 + /* First, clear all hpd and hpdrx interrupts */ 901 + for (i = DC_IRQ_SOURCE_HPD1; i <= DC_IRQ_SOURCE_HPD6RX; i++) { 902 + if (!dc_interrupt_set(adev->dm.dc, i, false)) 903 + drm_err(dev, "Failed to clear hpd(rx) source=%d on init\n", 904 + i); 905 + } 898 906 899 907 drm_connector_list_iter_begin(dev, &iter); 900 908 drm_for_each_connector_iter(connector, &iter) { ··· 916 908 917 909 dc_link = amdgpu_dm_connector->dc_link; 918 910 911 + /* 912 + * Get a base driver irq reference for hpd ints for the lifetime 913 + * of dm. Note that only hpd interrupt types are registered with 914 + * base driver; hpd_rx types aren't. IOW, amdgpu_irq_get/put on 915 + * hpd_rx isn't available. DM currently controls hpd_rx 916 + * explicitly with dc_interrupt_set() 917 + */ 919 918 if (dc_link->irq_source_hpd != DC_IRQ_SOURCE_INVALID) { 920 - dc_interrupt_set(adev->dm.dc, 921 - dc_link->irq_source_hpd, 922 - true); 919 + irq_type = dc_link->irq_source_hpd - DC_IRQ_SOURCE_HPD1; 920 + /* 921 + * TODO: There's a mismatch between mode_info.num_hpd 922 + * and what bios reports as the # of connectors with hpd 923 + * sources. Since the # of hpd source types registered 924 + * with base driver == mode_info.num_hpd, we have to 925 + * fallback to dc_interrupt_set for the remaining types. 926 + */ 927 + if (irq_type < adev->mode_info.num_hpd) { 928 + if (amdgpu_irq_get(adev, &adev->hpd_irq, irq_type)) 929 + drm_err(dev, "DM_IRQ: Failed get HPD for source=%d)!\n", 930 + dc_link->irq_source_hpd); 931 + } else { 932 + dc_interrupt_set(adev->dm.dc, 933 + dc_link->irq_source_hpd, 934 + true); 935 + } 923 936 } 924 937 925 938 if (dc_link->irq_source_hpd_rx != DC_IRQ_SOURCE_INVALID) { ··· 950 921 } 951 922 } 952 923 drm_connector_list_iter_end(&iter); 953 - 954 - /* Update reference counts for HPDs */ 955 - for (i = DC_IRQ_SOURCE_HPD1; i <= adev->mode_info.num_hpd; i++) { 956 - if (amdgpu_irq_get(adev, &adev->hpd_irq, i - DC_IRQ_SOURCE_HPD1)) 957 - drm_err(dev, "DM_IRQ: Failed get HPD for source=%d)!\n", i); 958 - } 959 924 } 960 925 961 926 /** ··· 965 942 struct drm_device *dev = adev_to_drm(adev); 966 943 struct drm_connector *connector; 967 944 struct drm_connector_list_iter iter; 968 - int i; 945 + int irq_type; 969 946 970 947 drm_connector_list_iter_begin(dev, &iter); 971 948 drm_for_each_connector_iter(connector, &iter) { ··· 979 956 dc_link = amdgpu_dm_connector->dc_link; 980 957 981 958 if (dc_link->irq_source_hpd != DC_IRQ_SOURCE_INVALID) { 982 - dc_interrupt_set(adev->dm.dc, 983 - dc_link->irq_source_hpd, 984 - false); 959 + irq_type = dc_link->irq_source_hpd - DC_IRQ_SOURCE_HPD1; 960 + 961 + /* TODO: See same TODO in amdgpu_dm_hpd_init() */ 962 + if (irq_type < adev->mode_info.num_hpd) { 963 + if (amdgpu_irq_put(adev, &adev->hpd_irq, irq_type)) 964 + drm_err(dev, "DM_IRQ: Failed put HPD for source=%d!\n", 965 + dc_link->irq_source_hpd); 966 + } else { 967 + dc_interrupt_set(adev->dm.dc, 968 + dc_link->irq_source_hpd, 969 + false); 970 + } 985 971 } 986 972 987 973 if (dc_link->irq_source_hpd_rx != DC_IRQ_SOURCE_INVALID) { ··· 1000 968 } 1001 969 } 1002 970 drm_connector_list_iter_end(&iter); 1003 - 1004 - /* Update reference counts for HPDs */ 1005 - for (i = DC_IRQ_SOURCE_HPD1; i <= adev->mode_info.num_hpd; i++) { 1006 - if (amdgpu_irq_put(adev, &adev->hpd_irq, i - DC_IRQ_SOURCE_HPD1)) 1007 - drm_err(dev, "DM_IRQ: Failed put HPD for source=%d!\n", i); 1008 - } 1009 971 }
+5 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
··· 277 277 if (!dcc->enable) 278 278 return 0; 279 279 280 - if (format >= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN || 281 - !dc->cap_funcs.get_dcc_compression_cap) 280 + if (adev->family < AMDGPU_FAMILY_GC_12_0_0 && 281 + format >= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN) 282 + return -EINVAL; 283 + 284 + if (!dc->cap_funcs.get_dcc_compression_cap) 282 285 return -EINVAL; 283 286 284 287 input.format = format;
+5 -2
drivers/gpu/drm/amd/display/dc/core/dc_resource.c
··· 3389 3389 break; 3390 3390 case COLOR_DEPTH_121212: 3391 3391 normalized_pix_clk = (pix_clk * 36) / 24; 3392 - break; 3392 + break; 3393 + case COLOR_DEPTH_141414: 3394 + normalized_pix_clk = (pix_clk * 42) / 24; 3395 + break; 3393 3396 case COLOR_DEPTH_161616: 3394 3397 normalized_pix_clk = (pix_clk * 48) / 24; 3395 - break; 3398 + break; 3396 3399 default: 3397 3400 ASSERT(0); 3398 3401 break;
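The dc_resource.c hunk above fills in the missing COLOR_DEPTH_141414 case. The multipliers follow one pattern: three color components at N bits each give 3*N bits per pixel, scaled against the 24-bpp baseline, so 12 bpc maps to 36/24, 14 bpc to 42/24, and 16 bpc to 48/24. A minimal standalone sketch of that arithmetic (the function name and integer types here are illustrative, not the driver's):

```c
#include <stdint.h>

/* Illustrative only: normalize a pixel clock against the 24-bpp
 * (8 bits per component) baseline. Three RGB components at
 * bits_per_component each give 3 * bpc bits per pixel, so the
 * multiplier for 12 bpc is 36/24, for 14 bpc 42/24, for 16 bpc 48/24. */
static uint32_t normalize_pix_clk(uint32_t pix_clk, uint32_t bits_per_component)
{
	return (pix_clk * bits_per_component * 3) / 24;
}
```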
+1
drivers/gpu/drm/amd/display/dc/dce60/dce60_timing_generator.c
··· 239 239 dce60_timing_generator_enable_advanced_request, 240 240 .configure_crc = dce60_configure_crc, 241 241 .get_crc = dce110_get_crc, 242 + .is_two_pixels_per_container = dce110_is_two_pixels_per_container, 242 243 }; 243 244 244 245 void dce60_timing_generator_construct(
+24 -16
drivers/gpu/drm/display/drm_dp_mst_topology.c
··· 4025 4025 return 0; 4026 4026 } 4027 4027 4028 + static bool primary_mstb_probing_is_done(struct drm_dp_mst_topology_mgr *mgr) 4029 + { 4030 + bool probing_done = false; 4031 + 4032 + mutex_lock(&mgr->lock); 4033 + 4034 + if (mgr->mst_primary && drm_dp_mst_topology_try_get_mstb(mgr->mst_primary)) { 4035 + probing_done = mgr->mst_primary->link_address_sent; 4036 + drm_dp_mst_topology_put_mstb(mgr->mst_primary); 4037 + } 4038 + 4039 + mutex_unlock(&mgr->lock); 4040 + 4041 + return probing_done; 4042 + } 4043 + 4028 4044 static inline bool 4029 4045 drm_dp_mst_process_up_req(struct drm_dp_mst_topology_mgr *mgr, 4030 4046 struct drm_dp_pending_up_req *up_req) ··· 4071 4055 4072 4056 /* TODO: Add missing handler for DP_RESOURCE_STATUS_NOTIFY events */ 4073 4057 if (msg->req_type == DP_CONNECTION_STATUS_NOTIFY) { 4074 - dowork = drm_dp_mst_handle_conn_stat(mstb, &msg->u.conn_stat); 4075 - hotplug = true; 4058 + if (!primary_mstb_probing_is_done(mgr)) { 4059 + drm_dbg_kms(mgr->dev, "Got CSN before finish topology probing. Skip it.\n"); 4060 + } else { 4061 + dowork = drm_dp_mst_handle_conn_stat(mstb, &msg->u.conn_stat); 4062 + hotplug = true; 4063 + } 4076 4064 } 4077 4065 4078 4066 drm_dp_mst_topology_put_mstb(mstb); ··· 4158 4138 drm_dp_send_up_ack_reply(mgr, mst_primary, up_req->msg.req_type, 4159 4139 false); 4160 4140 4141 + drm_dp_mst_topology_put_mstb(mst_primary); 4142 + 4161 4143 if (up_req->msg.req_type == DP_CONNECTION_STATUS_NOTIFY) { 4162 4144 const struct drm_dp_connection_status_notify *conn_stat = 4163 4145 &up_req->msg.u.conn_stat; 4164 - bool handle_csn; 4165 4146 4166 4147 drm_dbg_kms(mgr->dev, "Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d pdt: %d\n", 4167 4148 conn_stat->port_number, ··· 4171 4150 conn_stat->message_capability_status, 4172 4151 conn_stat->input_port, 4173 4152 conn_stat->peer_device_type); 4174 - 4175 - mutex_lock(&mgr->probe_lock); 4176 - handle_csn = mst_primary->link_address_sent; 4177 - mutex_unlock(&mgr->probe_lock); 4178 - 4179 - if (!handle_csn) { 4180 - drm_dbg_kms(mgr->dev, "Got CSN before finish topology probing. Skip it."); 4181 - kfree(up_req); 4182 - goto out_put_primary; 4183 - } 4184 4153 } else if (up_req->msg.req_type == DP_RESOURCE_STATUS_NOTIFY) { 4185 4154 const struct drm_dp_resource_status_notify *res_stat = 4186 4155 &up_req->msg.u.resource_stat; ··· 4185 4174 list_add_tail(&up_req->next, &mgr->up_req_list); 4186 4175 mutex_unlock(&mgr->up_req_lock); 4187 4176 queue_work(system_long_wq, &mgr->up_req_work); 4188 - 4189 - out_put_primary: 4190 - drm_dp_mst_topology_put_mstb(mst_primary); 4191 4177 out_clear_reply: 4192 4178 reset_msg_rx_state(&mgr->up_req_recv); 4193 4179 return ret;
+4
drivers/gpu/drm/drm_atomic_uapi.c
··· 956 956 957 957 if (mode != DRM_MODE_DPMS_ON) 958 958 mode = DRM_MODE_DPMS_OFF; 959 + 960 + if (connector->dpms == mode) 961 + goto out; 962 + 959 963 connector->dpms = mode; 960 964 961 965 crtc = connector->state->crtc;
+4
drivers/gpu/drm/drm_connector.c
··· 1427 1427 * callback. For atomic drivers the remapping to the "ACTIVE" property is 1428 1428 * implemented in the DRM core. 1429 1429 * 1430 + * On atomic drivers any DPMS setproperty ioctl where the value does not 1431 + * change is completely skipped, otherwise a full atomic commit will occur. 1432 + * On legacy drivers the exact behavior is driver specific. 1433 + * 1430 1434 * Note that this property cannot be set through the MODE_ATOMIC ioctl, 1431 1435 * userspace must use "ACTIVE" on the CRTC instead. 1432 1436 *
+8 -8
drivers/gpu/drm/drm_panic_qr.rs
··· 545 545 } 546 546 self.push(&mut offset, (MODE_STOP, 4)); 547 547 548 - let pad_offset = (offset + 7) / 8; 548 + let pad_offset = offset.div_ceil(8); 549 549 for i in pad_offset..self.version.max_data() { 550 550 self.data[i] = PADDING[(i & 1) ^ (pad_offset & 1)]; 551 551 } ··· 659 659 impl QrImage<'_> { 660 660 fn new<'a, 'b>(em: &'b EncodedMsg<'b>, qrdata: &'a mut [u8]) -> QrImage<'a> { 661 661 let width = em.version.width(); 662 - let stride = (width + 7) / 8; 662 + let stride = width.div_ceil(8); 663 663 let data = qrdata; 664 664 665 665 let mut qr_image = QrImage { ··· 911 911 /// 912 912 /// * `url`: The base URL of the QR code. It will be encoded as Binary segment. 913 913 /// * `data`: A pointer to the binary data, to be encoded. if URL is NULL, it 914 - /// will be encoded as binary segment, otherwise it will be encoded 915 - /// efficiently as a numeric segment, and appended to the URL. 914 + /// will be encoded as binary segment, otherwise it will be encoded 915 + /// efficiently as a numeric segment, and appended to the URL. 916 916 /// * `data_len`: Length of the data, that needs to be encoded, must be less 917 - /// than data_size. 917 + /// than data_size. 918 918 /// * `data_size`: Size of data buffer, it should be at least 4071 bytes to hold 919 - /// a V40 QR code. It will then be overwritten with the QR code image. 919 + /// a V40 QR code. It will then be overwritten with the QR code image. 920 920 /// * `tmp`: A temporary buffer that the QR code encoder will use, to write the 921 - /// segments and ECC. 921 + /// segments and ECC. 922 922 /// * `tmp_size`: Size of the temporary buffer, it must be at least 3706 bytes 923 - /// long for V40. 923 + /// long for V40. 924 924 /// 925 925 /// # Safety 926 926 ///
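The drm_panic_qr.rs hunk above swaps the hand-rolled `(x + 7) / 8` rounding for Rust's `div_ceil`. The same ceiling division shows up constantly in C too (the kernel spells it `DIV_ROUND_UP`); a hedged standalone sketch, using the overflow-safe subtraction form rather than the usual `(n + d - 1) / d`:

```c
#include <stddef.h>

/* Ceiling division for non-negative operands: how many d-sized chunks
 * are needed to cover n. Equivalent to Rust's n.div_ceil(d).
 * Note that the common (n + d - 1) / d form can overflow when n is near
 * SIZE_MAX; the subtraction form below avoids that for n > 0. */
static size_t div_ceil(size_t n, size_t d)
{
	return n == 0 ? 0 : 1 + (n - 1) / d;
}
```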
+5
drivers/gpu/drm/gma500/mid_bios.c
··· 279 279 0, PCI_DEVFN(2, 0)); 280 280 int ret = -1; 281 281 282 + if (pci_gfx_root == NULL) { 283 + WARN_ON(1); 284 + return; 285 + } 286 + 282 287 /* Get the address of the platform config vbt */ 283 288 pci_read_config_dword(pci_gfx_root, 0xFC, &addr); 284 289 pci_dev_put(pci_gfx_root);
+2 -3
drivers/gpu/drm/i915/display/intel_display.c
··· 7830 7830 7831 7831 intel_program_dpkgc_latency(state); 7832 7832 7833 - if (state->modeset) 7834 - intel_set_cdclk_post_plane_update(state); 7835 - 7836 7833 intel_wait_for_vblank_workers(state); 7837 7834 7838 7835 /* FIXME: We should call drm_atomic_helper_commit_hw_done() here ··· 7903 7906 intel_verify_planes(state); 7904 7907 7905 7908 intel_sagv_post_plane_update(state); 7909 + if (state->modeset) 7910 + intel_set_cdclk_post_plane_update(state); 7906 7911 intel_pmdemand_post_plane_update(state); 7907 7912 7908 7913 drm_atomic_helper_commit_hw_done(&state->base);
+4 -1
drivers/gpu/drm/i915/gem/i915_gem_mman.c
··· 164 164 * 4 - Support multiple fault handlers per object depending on object's 165 165 * backing storage (a.k.a. MMAP_OFFSET). 166 166 * 167 + * 5 - Support multiple partial mmaps(mmap part of BO + unmap a offset, multiple 168 + * times with different size and offset). 169 + * 167 170 * Restrictions: 168 171 * 169 172 * * snoopable objects cannot be accessed via the GTT. It can cause machine ··· 194 191 */ 195 192 int i915_gem_mmap_gtt_version(void) 196 193 { 197 - return 4; 194 + return 5; 198 195 } 199 196 200 197 static inline struct i915_gtt_view
+40 -13
drivers/gpu/drm/xe/xe_guc_pc.c
··· 6 6 #include "xe_guc_pc.h" 7 7 8 8 #include <linux/delay.h> 9 + #include <linux/ktime.h> 9 10 10 11 #include <drm/drm_managed.h> 11 12 #include <generated/xe_wa_oob.h> ··· 20 19 #include "xe_gt.h" 21 20 #include "xe_gt_idle.h" 22 21 #include "xe_gt_printk.h" 22 + #include "xe_gt_throttle.h" 23 23 #include "xe_gt_types.h" 24 24 #include "xe_guc.h" 25 25 #include "xe_guc_ct.h" ··· 50 48 51 49 #define LNL_MERT_FREQ_CAP 800 52 50 #define BMG_MERT_FREQ_CAP 2133 51 + 52 + #define SLPC_RESET_TIMEOUT_MS 5 /* roughly 5ms, but no need for precision */ 53 + #define SLPC_RESET_EXTENDED_TIMEOUT_MS 1000 /* To be used only at pc_start */ 53 54 54 55 /** 55 56 * DOC: GuC Power Conservation (PC) ··· 118 113 FIELD_PREP(HOST2GUC_PC_SLPC_REQUEST_MSG_1_EVENT_ARGC, count)) 119 114 120 115 static int wait_for_pc_state(struct xe_guc_pc *pc, 121 - enum slpc_global_state state) 116 + enum slpc_global_state state, 117 + int timeout_ms) 122 118 { 123 - int timeout_us = 5000; /* rought 5ms, but no need for precision */ 119 + int timeout_us = 1000 * timeout_ms; 124 120 int slept, wait = 10; 125 121 126 122 xe_device_assert_mem_access(pc_to_xe(pc)); ··· 170 164 }; 171 165 int ret; 172 166 173 - if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING)) 167 + if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING, 168 + SLPC_RESET_TIMEOUT_MS)) 174 169 return -EAGAIN; 175 170 176 171 /* Blocking here to ensure the results are ready before reading them */ ··· 194 187 }; 195 188 int ret; 196 189 197 - if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING)) 190 + if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING, 191 + SLPC_RESET_TIMEOUT_MS)) 198 192 return -EAGAIN; 199 193 200 194 ret = xe_guc_ct_send(ct, action, ARRAY_SIZE(action), 0, 0); ··· 216 208 struct xe_guc_ct *ct = &pc_to_guc(pc)->ct; 217 209 int ret; 218 210 219 - if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING)) 211 + if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING, 212 + SLPC_RESET_TIMEOUT_MS)) 220 213 return -EAGAIN; 221 214 222 215 ret = xe_guc_ct_send(ct, action, ARRAY_SIZE(action), 0, 0); ··· 449 440 return freq; 450 441 } 451 442 443 + static u32 get_cur_freq(struct xe_gt *gt) 444 + { 445 + u32 freq; 446 + 447 + freq = xe_mmio_read32(&gt->mmio, RPNSWREQ); 448 + freq = REG_FIELD_GET(REQ_RATIO_MASK, freq); 449 + return decode_freq(freq); 450 + } 451 + 452 452 /** 453 453 * xe_guc_pc_get_cur_freq - Get Current requested frequency 454 454 * @pc: The GuC PC ··· 481 463 return -ETIMEDOUT; 482 464 } 483 465 484 - *freq = xe_mmio_read32(&gt->mmio, RPNSWREQ); 485 - 486 - *freq = REG_FIELD_GET(REQ_RATIO_MASK, *freq); 487 - *freq = decode_freq(*freq); 466 + *freq = get_cur_freq(gt); 488 467 489 468 xe_force_wake_put(gt_to_fw(gt), fw_ref); 490 469 return 0; ··· 1017 1002 struct xe_gt *gt = pc_to_gt(pc); 1018 1003 u32 size = PAGE_ALIGN(sizeof(struct slpc_shared_data)); 1019 1004 unsigned int fw_ref; 1005 + ktime_t earlier; 1020 1006 int ret; 1021 1007 1022 1008 xe_gt_assert(gt, xe_device_uc_enabled(xe)); ··· 1042 1026 memset(pc->bo->vmap.vaddr, 0, size); 1043 1027 slpc_shared_data_write(pc, header.size, size); 1044 1028 1029 + earlier = ktime_get(); 1045 1030 ret = pc_action_reset(pc); 1046 1031 if (ret) 1047 1032 goto out; 1048 1033 1049 - if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING)) { 1050 - xe_gt_err(gt, "GuC PC Start failed\n"); 1051 - ret = -EIO; 1052 - goto out; 1034 + if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING, 1035 + SLPC_RESET_TIMEOUT_MS)) { 1036 + xe_gt_warn(gt, "GuC PC start taking longer than normal [freq = %dMHz (req = %dMHz), perf_limit_reasons = 0x%08X]\n", 1037 + xe_guc_pc_get_act_freq(pc), get_cur_freq(gt), 1038 + xe_gt_throttle_get_limit_reasons(gt)); 1039 + 1040 + if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING, 1041 + SLPC_RESET_EXTENDED_TIMEOUT_MS)) { 1042 + xe_gt_err(gt, "GuC PC Start failed: Dynamic GT frequency control and GT sleep states are now disabled.\n"); 1043 + goto out; 1044 + } 1045 + 1046 + xe_gt_warn(gt, "GuC PC excessive start time: %lldms", 1047 + ktime_ms_delta(ktime_get(), earlier)); 1053 1048 } 1054 1049 1055 1050 ret = pc_init_freqs(pc);
+1 -1
drivers/gpu/drm/xe/xe_guc_submit.c
··· 1246 1246 xe_pm_runtime_get(guc_to_xe(guc)); 1247 1247 trace_xe_exec_queue_destroy(q); 1248 1248 1249 + release_guc_id(guc, q); 1249 1250 if (xe_exec_queue_is_lr(q)) 1250 1251 cancel_work_sync(&ge->lr_tdr); 1251 1252 /* Confirm no work left behind accessing device structures */ 1252 1253 cancel_delayed_work_sync(&ge->sched.base.work_tdr); 1253 - release_guc_id(guc, q); 1254 1254 xe_sched_entity_fini(&ge->entity); 1255 1255 xe_sched_fini(&ge->sched); 1256 1256
+5 -1
drivers/gpu/drm/xe/xe_hmm.c
··· 138 138 i += size; 139 139 140 140 if (unlikely(j == st->nents - 1)) { 141 + xe_assert(xe, i >= npages); 141 142 if (i > npages) 142 143 size -= (i - npages); 144 + 143 145 sg_mark_end(sgl); 146 + } else { 147 + xe_assert(xe, i < npages); 144 148 } 149 + 145 150 sg_set_page(sgl, page, size << PAGE_SHIFT, 0); 146 151 } 147 - xe_assert(xe, i == npages); 148 152 149 153 return dma_map_sgtable(dev, st, write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE, 150 154 DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING);
+12 -1
drivers/gpu/drm/xe/xe_pm.c
··· 267 267 } 268 268 ALLOW_ERROR_INJECTION(xe_pm_init_early, ERRNO); /* See xe_pci_probe() */ 269 269 270 + static u32 vram_threshold_value(struct xe_device *xe) 271 + { 272 + /* FIXME: D3Cold temporarily disabled by default on BMG */ 273 + if (xe->info.platform == XE_BATTLEMAGE) 274 + return 0; 275 + 276 + return DEFAULT_VRAM_THRESHOLD; 277 + } 278 + 270 279 /** 271 280 * xe_pm_init - Initialize Xe Power Management 272 281 * @xe: xe device instance ··· 286 277 */ 287 278 int xe_pm_init(struct xe_device *xe) 288 279 { 280 + u32 vram_threshold; 289 281 int err; 290 282 291 283 /* For now suspend/resume is only allowed with GuC */ ··· 300 290 if (err) 301 291 return err; 302 292 303 - err = xe_pm_set_vram_threshold(xe, DEFAULT_VRAM_THRESHOLD); 293 + vram_threshold = vram_threshold_value(xe); 294 + err = xe_pm_set_vram_threshold(xe, vram_threshold); 304 295 if (err) 305 296 return err; 306 297 }
-3
drivers/gpu/drm/xe/xe_vm.c
··· 1809 1809 args->flags & DRM_XE_VM_CREATE_FLAG_FAULT_MODE)) 1810 1810 return -EINVAL; 1811 1811 1812 - if (XE_IOCTL_DBG(xe, args->extensions)) 1813 - return -EINVAL; 1814 - 1815 1812 if (args->flags & DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE) 1816 1813 flags |= XE_VM_FLAG_SCRATCH_PAGE; 1817 1814 if (args->flags & DRM_XE_VM_CREATE_FLAG_LR_MODE)
+2 -2
drivers/hwmon/nct6775-core.c
··· 273 273 static const u16 NCT6776_REG_TOLERANCE_H[] = { 274 274 0x10c, 0x20c, 0x30c, 0x80c, 0x90c, 0xa0c, 0xb0c }; 275 275 276 - static const u8 NCT6776_REG_PWM_MODE[] = { 0x04, 0, 0, 0, 0, 0 }; 277 - static const u8 NCT6776_PWM_MODE_MASK[] = { 0x01, 0, 0, 0, 0, 0 }; 276 + static const u8 NCT6776_REG_PWM_MODE[] = { 0x04, 0, 0, 0, 0, 0, 0 }; 277 + static const u8 NCT6776_PWM_MODE_MASK[] = { 0x01, 0, 0, 0, 0, 0, 0 }; 278 278 279 279 static const u16 NCT6776_REG_FAN_MIN[] = { 280 280 0x63a, 0x63c, 0x63e, 0x640, 0x642, 0x64a, 0x64c };
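The nct6775-core.c fix above grows two per-channel lookup tables from six to seven entries so that a seventh PWM channel no longer indexes past the end of the arrays. This class of mismatch can be caught at compile time; a sketch using C11 `_Static_assert` (the array names and channel count here are illustrative, not the driver's):

```c
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
#define NUM_PWM 7 /* illustrative channel count */

static const unsigned char reg_pwm_mode[]  = { 0x04, 0, 0, 0, 0, 0, 0 };
static const unsigned char pwm_mode_mask[] = { 0x01, 0, 0, 0, 0, 0, 0 };

/* Fail the build, not the runtime, if a table is shorter than the
 * number of channels it will be indexed by. */
_Static_assert(ARRAY_SIZE(reg_pwm_mode) == NUM_PWM, "reg_pwm_mode too short");
_Static_assert(ARRAY_SIZE(pwm_mode_mask) == NUM_PWM, "pwm_mode_mask too short");
```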
+11 -1
drivers/i2c/busses/i2c-ali1535.c
··· 485 485 486 486 static int ali1535_probe(struct pci_dev *dev, const struct pci_device_id *id) 487 487 { 488 + int ret; 489 + 488 490 if (ali1535_setup(dev)) { 489 491 dev_warn(&dev->dev, 490 492 "ALI1535 not detected, module not inserted.\n"); ··· 498 496 499 497 snprintf(ali1535_adapter.name, sizeof(ali1535_adapter.name), 500 498 "SMBus ALI1535 adapter at %04x", ali1535_offset); 501 - return i2c_add_adapter(&ali1535_adapter); 499 + ret = i2c_add_adapter(&ali1535_adapter); 500 + if (ret) 501 + goto release_region; 502 + 503 + return 0; 504 + 505 + release_region: 506 + release_region(ali1535_smba, ALI1535_SMB_IOSIZE); 507 + return ret; 502 508 } 503 509 504 510 static void ali1535_remove(struct pci_dev *dev)
+11 -1
drivers/i2c/busses/i2c-ali15x3.c
··· 472 472 473 473 static int ali15x3_probe(struct pci_dev *dev, const struct pci_device_id *id) 474 474 { 475 + int ret; 476 + 475 477 if (ali15x3_setup(dev)) { 476 478 dev_err(&dev->dev, 477 479 "ALI15X3 not detected, module not inserted.\n"); ··· 485 483 486 484 snprintf(ali15x3_adapter.name, sizeof(ali15x3_adapter.name), 487 485 "SMBus ALI15X3 adapter at %04x", ali15x3_smba); 488 - return i2c_add_adapter(&ali15x3_adapter); 486 + ret = i2c_add_adapter(&ali15x3_adapter); 487 + if (ret) 488 + goto release_region; 489 + 490 + return 0; 491 + 492 + release_region: 493 + release_region(ali15x3_smba, ALI15X3_SMB_IOSIZE); 494 + return ret; 489 495 } 490 496 491 497 static void ali15x3_remove(struct pci_dev *dev)
+7 -19
drivers/i2c/busses/i2c-omap.c
··· 1048 1048 return 0; 1049 1049 } 1050 1050 1051 - static irqreturn_t 1052 - omap_i2c_isr(int irq, void *dev_id) 1053 - { 1054 - struct omap_i2c_dev *omap = dev_id; 1055 - irqreturn_t ret = IRQ_HANDLED; 1056 - u16 mask; 1057 - u16 stat; 1058 - 1059 - stat = omap_i2c_read_reg(omap, OMAP_I2C_STAT_REG); 1060 - mask = omap_i2c_read_reg(omap, OMAP_I2C_IE_REG) & ~OMAP_I2C_STAT_NACK; 1061 - 1062 - if (stat & mask) 1063 - ret = IRQ_WAKE_THREAD; 1064 - 1065 - return ret; 1066 - } 1067 - 1068 1051 static int omap_i2c_xfer_data(struct omap_i2c_dev *omap) 1069 1052 { 1070 1053 u16 bits; ··· 1078 1095 } 1079 1096 1080 1097 if (stat & OMAP_I2C_STAT_NACK) { 1081 - err |= OMAP_I2C_STAT_NACK; 1098 + omap->cmd_err |= OMAP_I2C_STAT_NACK; 1082 1099 omap_i2c_ack_stat(omap, OMAP_I2C_STAT_NACK); 1100 + 1101 + if (!(stat & ~OMAP_I2C_STAT_NACK)) { 1102 + err = -EAGAIN; 1103 + break; 1104 + } 1083 1105 } 1084 1106 1085 1107 if (stat & OMAP_I2C_STAT_AL) { ··· 1460 1472 IRQF_NO_SUSPEND, pdev->name, omap); 1461 1473 else 1462 1474 r = devm_request_threaded_irq(&pdev->dev, omap->irq, 1463 - omap_i2c_isr, omap_i2c_isr_thread, 1475 + NULL, omap_i2c_isr_thread, 1464 1476 IRQF_NO_SUSPEND | IRQF_ONESHOT, 1465 1477 pdev->name, omap); 1466 1478
+11 -1
drivers/i2c/busses/i2c-sis630.c
··· 509 509 510 510 static int sis630_probe(struct pci_dev *dev, const struct pci_device_id *id) 511 511 { 512 + int ret; 513 + 512 514 if (sis630_setup(dev)) { 513 515 dev_err(&dev->dev, 514 516 "SIS630 compatible bus not detected, " ··· 524 522 snprintf(sis630_adapter.name, sizeof(sis630_adapter.name), 525 523 "SMBus SIS630 adapter at %04x", smbus_base + SMB_STS); 526 524 527 - return i2c_add_adapter(&sis630_adapter); 525 + ret = i2c_add_adapter(&sis630_adapter); 526 + if (ret) 527 + goto release_region; 528 + 529 + return 0; 530 + 531 + release_region: 532 + release_region(smbus_base + SMB_STS, SIS630_SMB_IOREGION); 533 + return ret; 528 534 } 529 535 530 536 static void sis630_remove(struct pci_dev *dev)
-6
drivers/infiniband/hw/bnxt_re/bnxt_re.h
··· 53 53 #define BNXT_RE_MAX_MR_SIZE_HIGH BIT_ULL(39) 54 54 #define BNXT_RE_MAX_MR_SIZE BNXT_RE_MAX_MR_SIZE_HIGH 55 55 56 - #define BNXT_RE_MAX_QPC_COUNT (64 * 1024) 57 - #define BNXT_RE_MAX_MRW_COUNT (64 * 1024) 58 - #define BNXT_RE_MAX_SRQC_COUNT (64 * 1024) 59 - #define BNXT_RE_MAX_CQ_COUNT (64 * 1024) 60 - #define BNXT_RE_MAX_MRW_COUNT_64K (64 * 1024) 61 - #define BNXT_RE_MAX_MRW_COUNT_256K (256 * 1024) 62 56 63 57 /* Number of MRs to reserve for PF, leaving remainder for VFs */ 64 58 #define BNXT_RE_RESVD_MR_FOR_PF (32 * 1024)
+1 -2
drivers/infiniband/hw/bnxt_re/main.c
··· 2130 2130 * memory for the function and all child VFs 2131 2131 */ 2132 2132 rc = bnxt_qplib_alloc_rcfw_channel(&rdev->qplib_res, &rdev->rcfw, 2133 - &rdev->qplib_ctx, 2134 - BNXT_RE_MAX_QPC_COUNT); 2133 + &rdev->qplib_ctx); 2135 2134 if (rc) { 2136 2135 ibdev_err(&rdev->ibdev, 2137 2136 "Failed to allocate RCFW Channel: %#x\n", rc);
-2
drivers/infiniband/hw/bnxt_re/qplib_fp.c
··· 1217 1217 qp->path_mtu = 1218 1218 CMDQ_MODIFY_QP_PATH_MTU_MTU_2048; 1219 1219 } 1220 - qp->modify_flags &= 1221 - ~CMDQ_MODIFY_QP_MODIFY_MASK_VLAN_ID; 1222 1220 /* Bono FW require the max_dest_rd_atomic to be >= 1 */ 1223 1221 if (qp->max_dest_rd_atomic < 1) 1224 1222 qp->max_dest_rd_atomic = 1;
+1 -9
drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
··· 915 915 916 916 void bnxt_qplib_free_rcfw_channel(struct bnxt_qplib_rcfw *rcfw) 917 917 { 918 - kfree(rcfw->qp_tbl); 919 918 kfree(rcfw->crsqe_tbl); 920 919 bnxt_qplib_free_hwq(rcfw->res, &rcfw->cmdq.hwq); 921 920 bnxt_qplib_free_hwq(rcfw->res, &rcfw->creq.hwq); ··· 923 924 924 925 int bnxt_qplib_alloc_rcfw_channel(struct bnxt_qplib_res *res, 925 926 struct bnxt_qplib_rcfw *rcfw, 926 - struct bnxt_qplib_ctx *ctx, 927 - int qp_tbl_sz) 927 + struct bnxt_qplib_ctx *ctx) 928 928 { 929 929 struct bnxt_qplib_hwq_attr hwq_attr = {}; 930 930 struct bnxt_qplib_sg_info sginfo = {}; ··· 967 969 if (!rcfw->crsqe_tbl) 968 970 goto fail; 969 971 970 - /* Allocate one extra to hold the QP1 entries */ 971 - rcfw->qp_tbl_size = qp_tbl_sz + 1; 972 - rcfw->qp_tbl = kcalloc(rcfw->qp_tbl_size, sizeof(struct bnxt_qplib_qp_node), 973 - GFP_KERNEL); 974 - if (!rcfw->qp_tbl) 975 - goto fail; 976 972 spin_lock_init(&rcfw->tbl_lock); 977 973 978 974 rcfw->max_timeout = res->cctx->hwrm_cmd_max_timeout;
+3 -3
drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
··· 262 262 void bnxt_qplib_free_rcfw_channel(struct bnxt_qplib_rcfw *rcfw); 263 263 int bnxt_qplib_alloc_rcfw_channel(struct bnxt_qplib_res *res, 264 264 struct bnxt_qplib_rcfw *rcfw, 265 - struct bnxt_qplib_ctx *ctx, 266 - int qp_tbl_sz); 265 + struct bnxt_qplib_ctx *ctx); 267 266 void bnxt_qplib_rcfw_stop_irq(struct bnxt_qplib_rcfw *rcfw, bool kill); 268 267 void bnxt_qplib_disable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw); 269 268 int bnxt_qplib_rcfw_start_irq(struct bnxt_qplib_rcfw *rcfw, int msix_vector, ··· 284 285 int bnxt_qplib_init_rcfw(struct bnxt_qplib_rcfw *rcfw, 285 286 struct bnxt_qplib_ctx *ctx, int is_virtfn); 286 287 void bnxt_qplib_mark_qp_error(void *qp_handle); 288 + 287 289 static inline u32 map_qp_id_to_tbl_indx(u32 qid, struct bnxt_qplib_rcfw *rcfw) 288 290 { 289 291 /* Last index of the qp_tbl is for QP1 ie. qp_tbl_size - 1*/ 290 - return (qid == 1) ? rcfw->qp_tbl_size - 1 : qid % rcfw->qp_tbl_size - 2; 292 + return (qid == 1) ? rcfw->qp_tbl_size - 1 : (qid % (rcfw->qp_tbl_size - 2)); 291 293 } 292 294 #endif /* __BNXT_QPLIB_RCFW_H__ */
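The map_qp_id_to_tbl_indx() change above is an operator-precedence fix: `%` binds tighter than `-`, so the old `qid % rcfw->qp_tbl_size - 2` evaluated as `(qid % qp_tbl_size) - 2`, which can go negative, instead of the intended `qid % (qp_tbl_size - 2)`. A standalone illustration of just that precedence difference (function names hypothetical; the real helper also special-cases QP1):

```c
/* Old expression: % binds tighter than -, so this parses as
 * (qid % size) - 2 and yields -2 whenever qid is a multiple of size. */
static int map_qid_buggy(int qid, int size)
{
	return qid % size - 2;
}

/* Fixed expression: reduce qid into the usable part of the table. */
static int map_qid_fixed(int qid, int size)
{
	return qid % (size - 2);
}
```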
+9
drivers/infiniband/hw/bnxt_re/qplib_res.c
··· 871 871 872 872 void bnxt_qplib_free_res(struct bnxt_qplib_res *res) 873 873 { 874 + kfree(res->rcfw->qp_tbl); 874 875 bnxt_qplib_free_sgid_tbl(res, &res->sgid_tbl); 875 876 bnxt_qplib_free_pd_tbl(&res->pd_tbl); 876 877 bnxt_qplib_free_dpi_tbl(res, &res->dpi_tbl); ··· 879 878 880 879 int bnxt_qplib_alloc_res(struct bnxt_qplib_res *res, struct net_device *netdev) 881 880 { 881 + struct bnxt_qplib_rcfw *rcfw = res->rcfw; 882 882 struct bnxt_qplib_dev_attr *dev_attr; 883 883 int rc; 884 884 885 885 res->netdev = netdev; 886 886 dev_attr = res->dattr; 887 + 888 + /* Allocate one extra to hold the QP1 entries */ 889 + rcfw->qp_tbl_size = max_t(u32, BNXT_RE_MAX_QPC_COUNT + 1, dev_attr->max_qp); 890 + rcfw->qp_tbl = kcalloc(rcfw->qp_tbl_size, sizeof(struct bnxt_qplib_qp_node), 891 + GFP_KERNEL); 892 + if (!rcfw->qp_tbl) 893 + return -ENOMEM; 887 894 888 895 rc = bnxt_qplib_alloc_sgid_tbl(res, &res->sgid_tbl, dev_attr->max_sgid); 889 896 if (rc)
+12
drivers/infiniband/hw/bnxt_re/qplib_res.h
··· 49 49 #define CHIP_NUM_58818 0xd818 50 50 #define CHIP_NUM_57608 0x1760 51 51 52 + #define BNXT_RE_MAX_QPC_COUNT (64 * 1024) 53 + #define BNXT_RE_MAX_MRW_COUNT (64 * 1024) 54 + #define BNXT_RE_MAX_SRQC_COUNT (64 * 1024) 55 + #define BNXT_RE_MAX_CQ_COUNT (64 * 1024) 56 + #define BNXT_RE_MAX_MRW_COUNT_64K (64 * 1024) 57 + #define BNXT_RE_MAX_MRW_COUNT_256K (256 * 1024) 58 + 52 59 #define BNXT_QPLIB_DBR_VALID (0x1UL << 26) 53 60 #define BNXT_QPLIB_DBR_EPOCH_SHIFT 24 54 61 #define BNXT_QPLIB_DBR_TOGGLE_SHIFT 25 ··· 605 598 static inline bool _is_cq_coalescing_supported(u16 dev_cap_ext_flags2) 606 599 { 607 600 return dev_cap_ext_flags2 & CREQ_QUERY_FUNC_RESP_SB_CQ_COALESCING_SUPPORTED; 601 + } 602 + 603 + static inline bool _is_max_srq_ext_supported(u16 dev_cap_ext_flags_2) 604 + { 605 + return !!(dev_cap_ext_flags_2 & CREQ_QUERY_FUNC_RESP_SB_MAX_SRQ_EXTENDED); 608 606 } 609 607 610 608 #endif /* __BNXT_QPLIB_RES_H__ */
+3
drivers/infiniband/hw/bnxt_re/qplib_sp.c
··· 176 176 attr->dev_cap_flags = le16_to_cpu(sb->dev_cap_flags); 177 177 attr->dev_cap_flags2 = le16_to_cpu(sb->dev_cap_ext_flags_2); 178 178 179 + if (_is_max_srq_ext_supported(attr->dev_cap_flags2)) 180 + attr->max_srq += le16_to_cpu(sb->max_srq_ext); 181 + 179 182 bnxt_qplib_query_version(rcfw, attr->fw_ver); 180 183 181 184 for (i = 0; i < MAX_TQM_ALLOC_REQ / 4; i++) {
+2 -1
drivers/infiniband/hw/bnxt_re/roce_hsi.h
··· 2215 2215 #define CREQ_QUERY_FUNC_RESP_SB_REQ_RETRANSMISSION_SUPPORT_IQM_MSN_TABLE (0x2UL << 4) 2216 2216 #define CREQ_QUERY_FUNC_RESP_SB_REQ_RETRANSMISSION_SUPPORT_LAST \ 2217 2217 CREQ_QUERY_FUNC_RESP_SB_REQ_RETRANSMISSION_SUPPORT_IQM_MSN_TABLE 2218 + #define CREQ_QUERY_FUNC_RESP_SB_MAX_SRQ_EXTENDED 0x40UL 2218 2219 #define CREQ_QUERY_FUNC_RESP_SB_MIN_RNR_RTR_RTS_OPT_SUPPORTED 0x1000UL 2219 2220 __le16 max_xp_qp_size; 2220 2221 __le16 create_qp_batch_size; 2221 2222 __le16 destroy_qp_batch_size; 2222 - __le16 reserved16; 2223 + __le16 max_srq_ext; 2223 2224 __le64 reserved64; 2224 2225 }; 2225 2226
+3 -1
drivers/infiniband/hw/hns/hns_roce_alloc.c
··· 175 175 if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_XRC) 176 176 ida_destroy(&hr_dev->xrcd_ida.ida); 177 177 178 - if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ) 178 + if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ) { 179 179 ida_destroy(&hr_dev->srq_table.srq_ida.ida); 180 + xa_destroy(&hr_dev->srq_table.xa); 181 + } 180 182 hns_roce_cleanup_qp_table(hr_dev); 181 183 hns_roce_cleanup_cq_table(hr_dev); 182 184 ida_destroy(&hr_dev->mr_table.mtpt_ida.ida);
+1
drivers/infiniband/hw/hns/hns_roce_cq.c
··· 537 537 538 538 for (i = 0; i < HNS_ROCE_CQ_BANK_NUM; i++) 539 539 ida_destroy(&hr_dev->cq_table.bank[i].ida); 540 + xa_destroy(&hr_dev->cq_table.array); 540 541 mutex_destroy(&hr_dev->cq_table.bank_mutex); 541 542 }
+15 -1
drivers/infiniband/hw/hns/hns_roce_hem.c
··· 1361 1361 return ret; 1362 1362 } 1363 1363 1364 + /* This is the bottom bt pages number of a 100G MR on 4K OS, assuming 1365 + * the bt page size is not expanded by cal_best_bt_pg_sz() 1366 + */ 1367 + #define RESCHED_LOOP_CNT_THRESHOLD_ON_4K 12800 1368 + 1364 1369 /* construct the base address table and link them by address hop config */ 1365 1370 int hns_roce_hem_list_request(struct hns_roce_dev *hr_dev, 1366 1371 struct hns_roce_hem_list *hem_list, ··· 1374 1369 { 1375 1370 const struct hns_roce_buf_region *r; 1376 1371 int ofs, end; 1372 + int loop; 1377 1373 int unit; 1378 1374 int ret; 1379 1375 int i; ··· 1392 1386 continue; 1393 1387 1394 1388 end = r->offset + r->count; 1395 - for (ofs = r->offset; ofs < end; ofs += unit) { 1389 + for (ofs = r->offset, loop = 1; ofs < end; ofs += unit, loop++) { 1390 + if (!(loop % RESCHED_LOOP_CNT_THRESHOLD_ON_4K)) 1391 + cond_resched(); 1392 + 1396 1393 ret = hem_list_alloc_mid_bt(hr_dev, r, unit, ofs, 1397 1394 hem_list->mid_bt[i], 1398 1395 &hem_list->btm_bt); ··· 1452 1443 struct list_head *head = &hem_list->btm_bt; 1453 1444 struct hns_roce_hem_item *hem, *temp_hem; 1454 1445 void *cpu_base = NULL; 1446 + int loop = 1; 1455 1447 int nr = 0; 1456 1448 1457 1449 list_for_each_entry_safe(hem, temp_hem, head, sibling) { 1450 + if (!(loop % RESCHED_LOOP_CNT_THRESHOLD_ON_4K)) 1451 + cond_resched(); 1452 + loop++; 1453 + 1458 1454 if (hem_list_page_is_in_range(hem, offset)) { 1459 1455 nr = offset - hem->start; 1460 1456 cpu_base = hem->addr + nr * BA_BYTE_LEN;
+1 -1
drivers/infiniband/hw/hns/hns_roce_main.c
··· 183 183 IB_DEVICE_RC_RNR_NAK_GEN; 184 184 props->max_send_sge = hr_dev->caps.max_sq_sg; 185 185 props->max_recv_sge = hr_dev->caps.max_rq_sg; 186 - props->max_sge_rd = 1; 186 + props->max_sge_rd = hr_dev->caps.max_sq_sg; 187 187 props->max_cq = hr_dev->caps.num_cqs; 188 188 props->max_cqe = hr_dev->caps.max_cqes; 189 189 props->max_mr = hr_dev->caps.num_mtpts;
+11 -9
drivers/infiniband/hw/hns/hns_roce_qp.c
··· 868 868 struct hns_roce_ib_create_qp *ucmd, 869 869 struct hns_roce_ib_create_qp_resp *resp) 870 870 { 871 + bool has_sdb = user_qp_has_sdb(hr_dev, init_attr, udata, resp, ucmd); 871 872 struct hns_roce_ucontext *uctx = rdma_udata_to_drv_context(udata, 872 873 struct hns_roce_ucontext, ibucontext); 874 + bool has_rdb = user_qp_has_rdb(hr_dev, init_attr, udata, resp); 873 875 struct ib_device *ibdev = &hr_dev->ib_dev; 874 876 int ret; 875 877 876 - if (user_qp_has_sdb(hr_dev, init_attr, udata, resp, ucmd)) { 878 + if (has_sdb) { 877 879 ret = hns_roce_db_map_user(uctx, ucmd->sdb_addr, &hr_qp->sdb); 878 880 if (ret) { 879 881 ibdev_err(ibdev, ··· 886 884 hr_qp->en_flags |= HNS_ROCE_QP_CAP_SQ_RECORD_DB; 887 885 } 888 886 889 - if (user_qp_has_rdb(hr_dev, init_attr, udata, resp)) { 887 + if (has_rdb) { 890 888 ret = hns_roce_db_map_user(uctx, ucmd->db_addr, &hr_qp->rdb); 891 889 if (ret) { 892 890 ibdev_err(ibdev, ··· 900 898 return 0; 901 899 902 900 err_sdb: 903 - if (hr_qp->en_flags & HNS_ROCE_QP_CAP_SQ_RECORD_DB) 901 + if (has_sdb) 904 902 hns_roce_db_unmap_user(uctx, &hr_qp->sdb); 905 903 err_out: 906 904 return ret; ··· 1121 1119 ibucontext); 1122 1120 hr_qp->config = uctx->config; 1123 1121 ret = set_user_sq_size(hr_dev, &init_attr->cap, hr_qp, ucmd); 1124 - if (ret) 1122 + if (ret) { 1125 1123 ibdev_err(ibdev, 1126 1124 "failed to set user SQ size, ret = %d.\n", 1127 1125 ret); 1126 + return ret; 1127 + } 1128 1128 1129 1129 ret = set_congest_param(hr_dev, hr_qp, ucmd); 1130 - if (ret) 1131 - return ret; 1132 1130 } else { 1133 1131 if (hr_dev->pci_dev->revision >= PCI_REVISION_ID_HIP09) 1134 1132 hr_qp->config = HNS_ROCE_EXSGE_FLAGS; 1133 + default_congest_type(hr_dev, hr_qp); 1135 1134 ret = set_kernel_sq_size(hr_dev, &init_attr->cap, hr_qp); 1136 1135 if (ret) 1137 1136 ibdev_err(ibdev, 1138 1137 "failed to set kernel SQ size, ret = %d.\n", 1139 1138 ret); 1140 - 1141 - default_congest_type(hr_dev, hr_qp); 1142 1139 } 1143 1140 1144 1141 return ret; ··· 
1220 1219 min(udata->outlen, sizeof(resp))); 1221 1220 if (ret) { 1222 1221 ibdev_err(ibdev, "copy qp resp failed!\n"); 1223 - goto err_store; 1222 + goto err_flow_ctrl; 1224 1223 } 1225 1224 } 1226 1225 ··· 1603 1602 for (i = 0; i < HNS_ROCE_QP_BANK_NUM; i++) 1604 1603 ida_destroy(&hr_dev->qp_table.bank[i].ida); 1605 1604 xa_destroy(&hr_dev->qp_table.dip_xa); 1605 + xa_destroy(&hr_dev->qp_table_xa); 1606 1606 mutex_destroy(&hr_dev->qp_table.bank_mutex); 1607 1607 mutex_destroy(&hr_dev->qp_table.scc_mutex); 1608 1608 }
+9 -5
drivers/infiniband/hw/mlx5/ah.c
··· 50 50 return sport; 51 51 } 52 52 53 - static void create_ib_ah(struct mlx5_ib_dev *dev, struct mlx5_ib_ah *ah, 53 + static int create_ib_ah(struct mlx5_ib_dev *dev, struct mlx5_ib_ah *ah, 54 54 struct rdma_ah_init_attr *init_attr) 55 55 { 56 56 struct rdma_ah_attr *ah_attr = init_attr->ah_attr; 57 57 enum ib_gid_type gid_type; 58 + int rate_val; 58 59 59 60 if (rdma_ah_get_ah_flags(ah_attr) & IB_AH_GRH) { 60 61 const struct ib_global_route *grh = rdma_ah_read_grh(ah_attr); ··· 68 67 ah->av.tclass = grh->traffic_class; 69 68 } 70 69 71 - ah->av.stat_rate_sl = 72 - (mlx5r_ib_rate(dev, rdma_ah_get_static_rate(ah_attr)) << 4); 70 + rate_val = mlx5r_ib_rate(dev, rdma_ah_get_static_rate(ah_attr)); 71 + if (rate_val < 0) 72 + return rate_val; 73 + ah->av.stat_rate_sl = rate_val << 4; 73 74 74 75 if (ah_attr->type == RDMA_AH_ATTR_TYPE_ROCE) { 75 76 if (init_attr->xmit_slave) ··· 92 89 ah->av.fl_mlid = rdma_ah_get_path_bits(ah_attr) & 0x7f; 93 90 ah->av.stat_rate_sl |= (rdma_ah_get_sl(ah_attr) & 0xf); 94 91 } 92 + 93 + return 0; 95 94 } 96 95 97 96 int mlx5_ib_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_attr, ··· 126 121 return err; 127 122 } 128 123 129 - create_ib_ah(dev, ah, init_attr); 130 - return 0; 124 + return create_ib_ah(dev, ah, init_attr); 131 125 } 132 126 133 127 int mlx5_ib_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr)
+6 -19
drivers/infiniband/sw/rxe/rxe.c
··· 38 38 } 39 39 40 40 /* initialize rxe device parameters */ 41 - static void rxe_init_device_param(struct rxe_dev *rxe) 41 + static void rxe_init_device_param(struct rxe_dev *rxe, struct net_device *ndev) 42 42 { 43 - struct net_device *ndev; 44 - 45 43 rxe->max_inline_data = RXE_MAX_INLINE_DATA; 46 44 47 45 rxe->attr.vendor_id = RXE_VENDOR_ID; ··· 72 74 rxe->attr.max_pkeys = RXE_MAX_PKEYS; 73 75 rxe->attr.local_ca_ack_delay = RXE_LOCAL_CA_ACK_DELAY; 74 76 75 - ndev = rxe_ib_device_get_netdev(&rxe->ib_dev); 76 - if (!ndev) 77 - return; 78 77 addrconf_addr_eui48((unsigned char *)&rxe->attr.sys_image_guid, 80 78 ndev->dev_addr); 81 - 82 - dev_put(ndev); 83 79 84 80 rxe->max_ucontext = RXE_MAX_UCONTEXT; 85 81 } ··· 107 115 /* initialize port state, note IB convention that HCA ports are always 108 116 * numbered from 1 109 117 */ 110 - static void rxe_init_ports(struct rxe_dev *rxe) 118 + static void rxe_init_ports(struct rxe_dev *rxe, struct net_device *ndev) 111 119 { 112 120 struct rxe_port *port = &rxe->port; 113 - struct net_device *ndev; 114 121 115 122 rxe_init_port_param(port); 116 - ndev = rxe_ib_device_get_netdev(&rxe->ib_dev); 117 - if (!ndev) 118 - return; 119 123 addrconf_addr_eui48((unsigned char *)&port->port_guid, 120 124 ndev->dev_addr); 121 - dev_put(ndev); 122 125 spin_lock_init(&port->port_lock); 123 126 } ··· 131 144 } 132 145 133 146 /* initialize rxe device state */ 134 - static void rxe_init(struct rxe_dev *rxe) 147 + static void rxe_init(struct rxe_dev *rxe, struct net_device *ndev) 135 148 { 136 149 /* init default device parameters */ 137 - rxe_init_device_param(rxe); 150 + rxe_init_device_param(rxe, ndev); 138 151 139 - rxe_init_ports(rxe); 152 + rxe_init_ports(rxe, ndev); 140 153 rxe_init_pools(rxe); 141 154 142 155 /* init pending mmap list */ ··· 171 184 int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name, 172 185 struct net_device *ndev) 173 186 { 174 - rxe_init(rxe); 187 + rxe_init(rxe, ndev); 175 188 rxe_set_mtu(rxe, mtu); 176 189 177 190 return rxe_register_device(rxe, ibdev_name, ndev);
+32 -7
drivers/input/joystick/xpad.c
··· 140 140 { 0x044f, 0x0f00, "Thrustmaster Wheel", 0, XTYPE_XBOX }, 141 141 { 0x044f, 0x0f03, "Thrustmaster Wheel", 0, XTYPE_XBOX }, 142 142 { 0x044f, 0x0f07, "Thrustmaster, Inc. Controller", 0, XTYPE_XBOX }, 143 + { 0x044f, 0xd01e, "ThrustMaster, Inc. ESWAP X 2 ELDEN RING EDITION", 0, XTYPE_XBOXONE }, 143 144 { 0x044f, 0x0f10, "Thrustmaster Modena GT Wheel", 0, XTYPE_XBOX }, 144 145 { 0x044f, 0xb326, "Thrustmaster Gamepad GP XID", 0, XTYPE_XBOX360 }, 145 146 { 0x045e, 0x0202, "Microsoft X-Box pad v1 (US)", 0, XTYPE_XBOX }, ··· 178 177 { 0x06a3, 0x0200, "Saitek Racing Wheel", 0, XTYPE_XBOX }, 179 178 { 0x06a3, 0x0201, "Saitek Adrenalin", 0, XTYPE_XBOX }, 180 179 { 0x06a3, 0xf51a, "Saitek P3600", 0, XTYPE_XBOX360 }, 180 + { 0x0738, 0x4503, "Mad Catz Racing Wheel", 0, XTYPE_XBOXONE }, 181 181 { 0x0738, 0x4506, "Mad Catz 4506 Wireless Controller", 0, XTYPE_XBOX }, 182 182 { 0x0738, 0x4516, "Mad Catz Control Pad", 0, XTYPE_XBOX }, 183 183 { 0x0738, 0x4520, "Mad Catz Control Pad Pro", 0, XTYPE_XBOX }, ··· 240 238 { 0x0e6f, 0x0146, "Rock Candy Wired Controller for Xbox One", 0, XTYPE_XBOXONE }, 241 239 { 0x0e6f, 0x0147, "PDP Marvel Xbox One Controller", 0, XTYPE_XBOXONE }, 242 240 { 0x0e6f, 0x015c, "PDP Xbox One Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE }, 241 + { 0x0e6f, 0x015d, "PDP Mirror's Edge Official Wired Controller for Xbox One", 0, XTYPE_XBOXONE }, 243 242 { 0x0e6f, 0x0161, "PDP Xbox One Controller", 0, XTYPE_XBOXONE }, 244 243 { 0x0e6f, 0x0162, "PDP Xbox One Controller", 0, XTYPE_XBOXONE }, 245 244 { 0x0e6f, 0x0163, "PDP Xbox One Controller", 0, XTYPE_XBOXONE }, ··· 279 276 { 0x0f0d, 0x0078, "Hori Real Arcade Pro V Kai Xbox One", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE }, 280 277 { 0x0f0d, 0x00c5, "Hori Fighting Commander ONE", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE }, 281 278 { 0x0f0d, 0x00dc, "HORIPAD FPS for Nintendo Switch", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 }, 279 + { 0x0f0d, 0x0151, "Hori Racing Wheel Overdrive for Xbox Series X", 0, XTYPE_XBOXONE }, 280 + { 0x0f0d, 0x0152, "Hori Racing Wheel Overdrive for Xbox Series X", 0, XTYPE_XBOXONE }, 282 281 { 0x0f30, 0x010b, "Philips Recoil", 0, XTYPE_XBOX }, 283 282 { 0x0f30, 0x0202, "Joytech Advanced Controller", 0, XTYPE_XBOX }, 284 283 { 0x0f30, 0x8888, "BigBen XBMiniPad Controller", 0, XTYPE_XBOX }, 285 284 { 0x102c, 0xff0c, "Joytech Wireless Advanced Controller", 0, XTYPE_XBOX }, 286 285 { 0x1038, 0x1430, "SteelSeries Stratus Duo", 0, XTYPE_XBOX360 }, 287 286 { 0x1038, 0x1431, "SteelSeries Stratus Duo", 0, XTYPE_XBOX360 }, 287 + { 0x10f5, 0x7005, "Turtle Beach Recon Controller", 0, XTYPE_XBOXONE }, 288 288 { 0x11c9, 0x55f0, "Nacon GC-100XF", 0, XTYPE_XBOX360 }, 289 289 { 0x11ff, 0x0511, "PXN V900", 0, XTYPE_XBOX360 }, 290 290 { 0x1209, 0x2882, "Ardwiino Controller", 0, XTYPE_XBOX360 }, ··· 312 306 { 0x1689, 0xfe00, "Razer Sabertooth", 0, XTYPE_XBOX360 }, 313 307 { 0x17ef, 0x6182, "Lenovo Legion Controller for Windows", 0, XTYPE_XBOX360 }, 314 308 { 0x1949, 0x041a, "Amazon Game Controller", 0, XTYPE_XBOX360 }, 315 - { 0x1a86, 0xe310, "QH Electronics Controller", 0, XTYPE_XBOX360 }, 309 + { 0x1a86, 0xe310, "Legion Go S", 0, XTYPE_XBOX360 }, 316 310 { 0x1bad, 0x0002, "Harmonix Rock Band Guitar", 0, XTYPE_XBOX360 }, 317 311 { 0x1bad, 0x0003, "Harmonix Rock Band Drumkit", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 }, 318 312 { 0x1bad, 0x0130, "Ion Drum Rocker", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 }, ··· 349 343 { 0x1bad, 0xfa01, "MadCatz GamePad", 0, XTYPE_XBOX360 }, 350 344 { 0x1bad, 0xfd00, "Razer Onza TE", 0, XTYPE_XBOX360 }, 351 345 { 0x1bad, 0xfd01, "Razer Onza", 0, XTYPE_XBOX360 }, 346 + { 0x1ee9, 0x1590, "ZOTAC Gaming Zone", 0, XTYPE_XBOX360 }, 352 347 { 0x20d6, 0x2001, "BDA Xbox Series X Wired Controller", 0, XTYPE_XBOXONE }, 353 348 { 0x20d6, 0x2009, "PowerA Enhanced Wired Controller for Xbox Series X|S", 0, XTYPE_XBOXONE }, 354 349 { 0x20d6, 0x281f, "PowerA Wired Controller For Xbox 360", 0, XTYPE_XBOX360 }, ··· 373 366 { 0x24c6, 0x5510, "Hori Fighting Commander ONE (Xbox 360/PC Mode)", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 }, 374 367 { 0x24c6, 0x551a, "PowerA FUSION Pro Controller", 0, XTYPE_XBOXONE }, 375 368 { 0x24c6, 0x561a, "PowerA FUSION Controller", 0, XTYPE_XBOXONE }, 369 + { 0x24c6, 0x581a, "ThrustMaster XB1 Classic Controller", 0, XTYPE_XBOXONE }, 376 370 { 0x24c6, 0x5b00, "ThrustMaster Ferrari 458 Racing Wheel", 0, XTYPE_XBOX360 }, 377 371 { 0x24c6, 0x5b02, "Thrustmaster, Inc. GPX Controller", 0, XTYPE_XBOX360 }, 378 372 { 0x24c6, 0x5b03, "Thrustmaster Ferrari 458 Racing Wheel", 0, XTYPE_XBOX360 }, ··· 382 374 { 0x2563, 0x058d, "OneXPlayer Gamepad", 0, XTYPE_XBOX360 }, 383 375 { 0x294b, 0x3303, "Snakebyte GAMEPAD BASE X", 0, XTYPE_XBOXONE }, 384 376 { 0x294b, 0x3404, "Snakebyte GAMEPAD RGB X", 0, XTYPE_XBOXONE }, 377 + { 0x2993, 0x2001, "TECNO Pocket Go", 0, XTYPE_XBOX360 }, 385 378 { 0x2dc8, 0x2000, "8BitDo Pro 2 Wired Controller fox Xbox", 0, XTYPE_XBOXONE }, 386 379 { 0x2dc8, 0x3106, "8BitDo Ultimate Wireless / Pro 2 Wired Controller", 0, XTYPE_XBOX360 }, 380 + { 0x2dc8, 0x3109, "8BitDo Ultimate Wireless Bluetooth", 0, XTYPE_XBOX360 }, 387 381 { 0x2dc8, 0x310a, "8BitDo Ultimate 2C Wireless Controller", 0, XTYPE_XBOX360 }, 382 + { 0x2dc8, 0x6001, "8BitDo SN30 Pro", 0, XTYPE_XBOX360 }, 388 383 { 0x2e24, 0x0652, "Hyperkin Duke X-Box One pad", 0, XTYPE_XBOXONE }, 384 + { 0x2e24, 0x1688, "Hyperkin X91 X-Box One pad", 0, XTYPE_XBOXONE }, 385 + { 0x2e95, 0x0504, "SCUF Gaming Controller", MAP_SELECT_BUTTON, XTYPE_XBOXONE }, 389 386 { 0x31e3, 0x1100, "Wooting One", 0, XTYPE_XBOX360 }, 390 387 { 0x31e3, 0x1200, "Wooting Two", 0, XTYPE_XBOX360 }, 391 388 { 0x31e3, 0x1210, "Wooting Lekker", 0, XTYPE_XBOX360 }, ··· 398 385 { 0x31e3, 0x1230, "Wooting Two HE (ARM)", 0, XTYPE_XBOX360 }, 399 386 { 0x31e3, 0x1300, "Wooting 60HE (AVR)", 0, XTYPE_XBOX360 }, 400 387 { 0x31e3, 0x1310, "Wooting 60HE (ARM)", 0, XTYPE_XBOX360 }, 388 + { 0x3285, 0x0603, "Nacon Pro Compact controller for Xbox", 0, XTYPE_XBOXONE },
401 389 { 0x3285, 0x0607, "Nacon GC-100", 0, XTYPE_XBOX360 }, 390 + { 0x3285, 0x0614, "Nacon Pro Compact", 0, XTYPE_XBOXONE }, 402 391 { 0x3285, 0x0646, "Nacon Pro Compact", 0, XTYPE_XBOXONE }, 392 + { 0x3285, 0x0662, "Nacon Revolution5 Pro", 0, XTYPE_XBOX360 }, 403 393 { 0x3285, 0x0663, "Nacon Evol-X", 0, XTYPE_XBOXONE }, 404 394 { 0x3537, 0x1004, "GameSir T4 Kaleid", 0, XTYPE_XBOX360 }, 395 + { 0x3537, 0x1010, "GameSir G7 SE", 0, XTYPE_XBOXONE }, 405 396 { 0x3767, 0x0101, "Fanatec Speedster 3 Forceshock Wheel", 0, XTYPE_XBOX }, 397 + { 0x413d, 0x2104, "Black Shark Green Ghost Gamepad", 0, XTYPE_XBOX360 }, 406 398 { 0xffff, 0xffff, "Chinese-made Xbox Controller", 0, XTYPE_XBOX }, 407 399 { 0x0000, 0x0000, "Generic X-Box pad", 0, XTYPE_UNKNOWN } 408 400 }; ··· 506 488 XPAD_XBOX360_VENDOR(0x03f0), /* HP HyperX Xbox 360 controllers */ 507 489 XPAD_XBOXONE_VENDOR(0x03f0), /* HP HyperX Xbox One controllers */ 508 490 XPAD_XBOX360_VENDOR(0x044f), /* Thrustmaster Xbox 360 controllers */ 491 + XPAD_XBOXONE_VENDOR(0x044f), /* Thrustmaster Xbox One controllers */ 509 492 XPAD_XBOX360_VENDOR(0x045e), /* Microsoft Xbox 360 controllers */ 510 493 XPAD_XBOXONE_VENDOR(0x045e), /* Microsoft Xbox One controllers */ 511 494 XPAD_XBOX360_VENDOR(0x046d), /* Logitech Xbox 360-style controllers */ ··· 538 519 XPAD_XBOX360_VENDOR(0x1689), /* Razer Onza */ 539 520 XPAD_XBOX360_VENDOR(0x17ef), /* Lenovo */ 540 521 XPAD_XBOX360_VENDOR(0x1949), /* Amazon controllers */ 541 - XPAD_XBOX360_VENDOR(0x1a86), /* QH Electronics */ 522 + XPAD_XBOX360_VENDOR(0x1a86), /* Nanjing Qinheng Microelectronics (WCH) */ 542 523 XPAD_XBOX360_VENDOR(0x1bad), /* Harmonix Rock Band guitar and drums */ 524 + XPAD_XBOX360_VENDOR(0x1ee9), /* ZOTAC Technology Limited */ 543 525 XPAD_XBOX360_VENDOR(0x20d6), /* PowerA controllers */ 544 526 XPAD_XBOXONE_VENDOR(0x20d6), /* PowerA controllers */ 545 527 XPAD_XBOX360_VENDOR(0x2345), /* Machenike Controllers */ ··· 548 528 XPAD_XBOXONE_VENDOR(0x24c6), /* PowerA controllers */ 549 529 XPAD_XBOX360_VENDOR(0x2563), /* OneXPlayer Gamepad */ 550 530 XPAD_XBOX360_VENDOR(0x260d), /* Dareu H101 */ 551 - XPAD_XBOXONE_VENDOR(0x294b), /* Snakebyte */ 531 + XPAD_XBOXONE_VENDOR(0x294b), /* Snakebyte */ 532 + XPAD_XBOX360_VENDOR(0x2993), /* TECNO Mobile */ 552 533 XPAD_XBOX360_VENDOR(0x2c22), /* Qanba Controllers */ 553 - XPAD_XBOX360_VENDOR(0x2dc8), /* 8BitDo Pro 2 Wired Controller */ 554 - XPAD_XBOXONE_VENDOR(0x2dc8), /* 8BitDo Pro 2 Wired Controller for Xbox */ 555 - XPAD_XBOXONE_VENDOR(0x2e24), /* Hyperkin Duke Xbox One pad */ 556 - XPAD_XBOX360_VENDOR(0x2f24), /* GameSir controllers */ 534 + XPAD_XBOX360_VENDOR(0x2dc8), /* 8BitDo Controllers */ 535 + XPAD_XBOXONE_VENDOR(0x2dc8), /* 8BitDo Controllers */ 536 + XPAD_XBOXONE_VENDOR(0x2e24), /* Hyperkin Controllers */ 537 + XPAD_XBOX360_VENDOR(0x2f24), /* GameSir Controllers */ 538 + XPAD_XBOXONE_VENDOR(0x2e95), /* SCUF Gaming Controller */ 557 539 XPAD_XBOX360_VENDOR(0x31e3), /* Wooting Keyboards */ 558 540 XPAD_XBOX360_VENDOR(0x3285), /* Nacon GC-100 */ 559 541 XPAD_XBOXONE_VENDOR(0x3285), /* Nacon Evol-X */ 560 542 XPAD_XBOX360_VENDOR(0x3537), /* GameSir Controllers */ 561 543 XPAD_XBOXONE_VENDOR(0x3537), /* GameSir Controllers */ 544 + XPAD_XBOX360_VENDOR(0x413d), /* Black Shark Green Ghost Controller */ 562 545 { } 563 546 }; ··· 714 691 XBOXONE_INIT_PKT(0x045e, 0x0b00, xboxone_s_init), 715 692 XBOXONE_INIT_PKT(0x045e, 0x0b00, extra_input_packet_init), 716 693 XBOXONE_INIT_PKT(0x0e6f, 0x0000, xboxone_pdp_led_on), 694 + XBOXONE_INIT_PKT(0x20d6, 0xa01a, xboxone_pdp_led_on), 717 695 XBOXONE_INIT_PKT(0x0e6f, 0x0000, xboxone_pdp_auth), 696 + XBOXONE_INIT_PKT(0x20d6, 0xa01a, xboxone_pdp_auth), 718 697 XBOXONE_INIT_PKT(0x24c6, 0x541a, xboxone_rumblebegin_init), 719 698 XBOXONE_INIT_PKT(0x24c6, 0x542a, xboxone_rumblebegin_init), 720 699 XBOXONE_INIT_PKT(0x24c6, 0x543a, xboxone_rumblebegin_init),
+22 -28
drivers/input/misc/iqs7222.c
··· 100 100 101 101 enum iqs7222_reg_grp_id { 102 102 IQS7222_REG_GRP_STAT, 103 - IQS7222_REG_GRP_FILT, 104 103 IQS7222_REG_GRP_CYCLE, 105 104 IQS7222_REG_GRP_GLBL, 106 105 IQS7222_REG_GRP_BTN, 107 106 IQS7222_REG_GRP_CHAN, 107 + IQS7222_REG_GRP_FILT, 108 108 IQS7222_REG_GRP_SLDR, 109 109 IQS7222_REG_GRP_TPAD, 110 110 IQS7222_REG_GRP_GPIO, ··· 286 286 287 287 struct iqs7222_reg_grp_desc { 288 288 u16 base; 289 + u16 val_len; 289 290 int num_row; 290 291 int num_col; 291 292 }; ··· 343 342 }, 344 343 [IQS7222_REG_GRP_FILT] = { 345 344 .base = 0xAC00, 345 + .val_len = 3, 346 346 .num_row = 1, 347 347 .num_col = 2, 348 348 }, ··· 402 400 }, 403 401 [IQS7222_REG_GRP_FILT] = { 404 402 .base = 0xAC00, 403 + .val_len = 3, 405 404 .num_row = 1, 406 405 .num_col = 2, 407 406 }, ··· 457 454 }, 458 455 [IQS7222_REG_GRP_FILT] = { 459 456 .base = 0xC400, 457 + .val_len = 3, 460 458 .num_row = 1, 461 459 .num_col = 2, 462 460 }, ··· 500 496 }, 501 497 [IQS7222_REG_GRP_FILT] = { 502 498 .base = 0xC400, 499 + .val_len = 3, 503 500 .num_row = 1, 504 501 .num_col = 2, 505 502 }, ··· 548 543 }, 549 544 [IQS7222_REG_GRP_FILT] = { 550 545 .base = 0xAA00, 546 + .val_len = 3, 551 547 .num_row = 1, 552 548 .num_col = 2, 553 549 }, ··· 606 600 }, 607 601 [IQS7222_REG_GRP_FILT] = { 608 602 .base = 0xAA00, 603 + .val_len = 3, 609 604 .num_row = 1, 610 605 .num_col = 2, 611 606 }, ··· 663 656 }, 664 657 [IQS7222_REG_GRP_FILT] = { 665 658 .base = 0xAE00, 659 + .val_len = 3, 666 660 .num_row = 1, 667 661 .num_col = 2, 668 662 }, ··· 720 712 }, 721 713 [IQS7222_REG_GRP_FILT] = { 722 714 .base = 0xAE00, 715 + .val_len = 3, 723 716 .num_row = 1, 724 717 .num_col = 2, 725 718 }, ··· 777 768 }, 778 769 [IQS7222_REG_GRP_FILT] = { 779 770 .base = 0xAE00, 771 + .val_len = 3, 780 772 .num_row = 1, 781 773 .num_col = 2, 782 774 }, ··· 1614 1604 } 1615 1605 1616 1606 static int iqs7222_read_burst(struct iqs7222_private *iqs7222, 1617 - u16 reg, void *val, u16 num_val) 1607 + u16 reg, void *val, u16 val_len) 1618 1608 { 1619 1609 u8 reg_buf[sizeof(__be16)]; 1620 1610 int ret, i; ··· 1629 1619 { 1630 1620 .addr = client->addr, 1631 1621 .flags = I2C_M_RD, 1632 - .len = num_val * sizeof(__le16), 1622 + .len = val_len, 1633 1623 .buf = (u8 *)val, 1634 1624 }, 1635 1625 }; ··· 1685 1675 __le16 val_buf; 1686 1676 int error; 1687 1677 1688 - error = iqs7222_read_burst(iqs7222, reg, &val_buf, 1); 1678 + error = iqs7222_read_burst(iqs7222, reg, &val_buf, sizeof(val_buf)); 1689 1679 if (error) 1690 1680 return error; ··· 1695 1685 } 1696 1686 1697 1687 static int iqs7222_write_burst(struct iqs7222_private *iqs7222, 1698 - u16 reg, const void *val, u16 num_val) 1688 + u16 reg, const void *val, u16 val_len) 1699 1689 { 1700 1690 int reg_len = reg > U8_MAX ? sizeof(reg) : sizeof(u8); 1701 - int val_len = num_val * sizeof(__le16); 1702 1691 int msg_len = reg_len + val_len; 1703 1692 int ret, i; 1704 1693 struct i2c_client *client = iqs7222->client; ··· 1756 1747 { 1757 1748 __le16 val_buf = cpu_to_le16(val); 1758 1749 1759 - return iqs7222_write_burst(iqs7222, reg, &val_buf, 1); 1750 + return iqs7222_write_burst(iqs7222, reg, &val_buf, sizeof(val_buf)); 1760 1751 } 1761 1752 1762 1753 static int iqs7222_ati_trigger(struct iqs7222_private *iqs7222) ··· 1840 1831 1841 1832 /* 1842 1833 * Acknowledge reset before writing any registers in case the device 1843 - * suffers a spurious reset during initialization. Because this step 1844 - * may change the reserved fields of the second filter beta register, 1845 - * its cache must be updated. 1846 - * 1847 - * Writing the second filter beta register, in turn, may clobber the 1848 - * system status register. As such, the filter beta register pair is 1849 - * written first to protect against this hazard. 1834 + * suffers a spurious reset during initialization.
1850 1835 */ 1851 1836 if (dir == WRITE) { 1852 - u16 reg = dev_desc->reg_grps[IQS7222_REG_GRP_FILT].base + 1; 1853 - u16 filt_setup; 1854 - 1855 1837 error = iqs7222_write_word(iqs7222, IQS7222_SYS_SETUP, 1856 1838 iqs7222->sys_setup[0] | 1857 1839 IQS7222_SYS_SETUP_ACK_RESET); 1858 1840 if (error) 1859 1841 return error; 1860 - 1861 - error = iqs7222_read_word(iqs7222, reg, &filt_setup); 1862 - if (error) 1863 - return error; 1864 - 1865 - iqs7222->filt_setup[1] &= GENMASK(7, 0); 1866 - iqs7222->filt_setup[1] |= (filt_setup & ~GENMASK(7, 0)); 1867 1842 } 1868 1843 1869 1844 /* ··· 1876 1883 int num_col = dev_desc->reg_grps[i].num_col; 1877 1884 u16 reg = dev_desc->reg_grps[i].base; 1878 1885 __le16 *val_buf; 1886 + u16 val_len = dev_desc->reg_grps[i].val_len ? : num_col * sizeof(*val_buf); 1879 1887 u16 *val; 1880 1888 1881 1889 if (!num_col) ··· 1894 1900 switch (dir) { 1895 1901 case READ: 1896 1902 error = iqs7222_read_burst(iqs7222, reg, 1897 - val_buf, num_col); 1903 + val_buf, val_len); 1898 1904 for (k = 0; k < num_col; k++) 1899 1905 val[k] = le16_to_cpu(val_buf[k]); 1900 1906 break; ··· 1903 1909 for (k = 0; k < num_col; k++) 1904 1910 val_buf[k] = cpu_to_le16(val[k]); 1905 1911 error = iqs7222_write_burst(iqs7222, reg, 1906 - val_buf, num_col); 1912 + val_buf, val_len); 1907 1913 break; 1908 1914 1909 1915 default: ··· 1956 1962 int error, i; 1957 1963 1958 1964 error = iqs7222_read_burst(iqs7222, IQS7222_PROD_NUM, dev_id, 1959 - ARRAY_SIZE(dev_id)); 1965 + sizeof(dev_id)); 1960 1966 if (error) 1961 1967 return error; 1962 1968 ··· 2909 2915 __le16 status[IQS7222_MAX_COLS_STAT]; 2910 2916 2911 2917 error = iqs7222_read_burst(iqs7222, IQS7222_SYS_STATUS, status, 2912 - num_stat); 2918 + num_stat * sizeof(*status)); 2913 2919 if (error) 2914 2920 return error; 2915 2921
+55 -56
drivers/input/serio/i8042-acpipnpio.h
··· 1080 1080 DMI_MATCH(DMI_BOARD_VENDOR, "TUXEDO"), 1081 1081 DMI_MATCH(DMI_BOARD_NAME, "AURA1501"), 1082 1082 }, 1083 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1084 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1083 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1085 1084 }, 1086 1085 { 1087 1086 .matches = { 1088 1087 DMI_MATCH(DMI_BOARD_VENDOR, "TUXEDO"), 1089 1088 DMI_MATCH(DMI_BOARD_NAME, "EDUBOOK1502"), 1090 1089 }, 1091 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1092 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1090 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1093 1091 }, 1094 1092 { 1095 1093 /* Mivvy M310 */ ··· 1157 1159 }, 1158 1160 /* 1159 1161 * A lot of modern Clevo barebones have touchpad and/or keyboard issues 1160 - * after suspend fixable with nomux + reset + noloop + nopnp. Luckily, 1161 - * none of them have an external PS/2 port so this can safely be set for 1162 - * all of them. 1162 + * after suspend fixable with the forcenorestore quirk. 1163 1163 * Clevo barebones come with board_vendor and/or system_vendor set to 1164 1164 * either the very generic string "Notebook" and/or a different value 1165 1165 * for each individual reseller. The only somewhat universal way to ··· 1167 1171 .matches = { 1168 1172 DMI_MATCH(DMI_BOARD_NAME, "LAPQC71A"), 1169 1173 }, 1170 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1171 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1174 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1172 1175 }, 1173 1176 { 1174 1177 .matches = { 1175 1178 DMI_MATCH(DMI_BOARD_NAME, "LAPQC71B"), 1176 1179 }, 1177 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1178 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1180 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1179 1181 }, 1180 1182 { 1181 1183 .matches = { 1182 1184 DMI_MATCH(DMI_BOARD_NAME, "N140CU"), 1183 1185 }, 1184 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1185 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1186 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1186 1187 }, 1187 1188 { 1188 1189 .matches = { 1189 1190 DMI_MATCH(DMI_BOARD_NAME, "N141CU"), 1190 1191 }, 1191 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1192 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1192 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1193 1193 }, 1194 1194 { 1195 1195 .matches = { ··· 1197 1205 .matches = { 1198 1206 DMI_MATCH(DMI_BOARD_NAME, "NH5xAx"), 1199 1207 }, 1200 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1201 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1208 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1202 1209 }, 1203 1210 { 1204 - /* 1205 - * Setting SERIO_QUIRK_NOMUX or SERIO_QUIRK_RESET_ALWAYS makes 1206 - * the keyboard very laggy for ~5 seconds after boot and 1207 - * sometimes also after resume. 1208 - * However both are required for the keyboard to not fail 1209 - * completely sometimes after boot or resume.
1210 - */ 1211 1211 .matches = { 1212 1212 DMI_MATCH(DMI_BOARD_NAME, "NHxxRZQ"), 1213 1213 }, 1214 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1215 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1214 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1216 1215 }, 1217 1216 { 1218 1217 .matches = { 1219 1218 DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"), 1220 1219 }, 1221 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1222 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1220 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1223 1221 }, 1224 1222 /* 1225 1223 * At least one modern Clevo barebone has the touchpad connected both ··· 1225 1243 .matches = { 1226 1244 DMI_MATCH(DMI_BOARD_NAME, "NS50MU"), 1227 1245 }, 1228 - .driver_data = (void *)(SERIO_QUIRK_NOAUX | SERIO_QUIRK_NOMUX | 1229 - SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOLOOP | 1230 - SERIO_QUIRK_NOPNP) 1246 + .driver_data = (void *)(SERIO_QUIRK_NOAUX | 1247 + SERIO_QUIRK_FORCENORESTORE) 1231 1248 }, 1232 1249 { 1233 1250 .matches = { 1234 1251 DMI_MATCH(DMI_BOARD_NAME, "NS50_70MU"), 1235 1252 }, 1236 - .driver_data = (void *)(SERIO_QUIRK_NOAUX | SERIO_QUIRK_NOMUX | 1237 - SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOLOOP | 1238 - SERIO_QUIRK_NOPNP) 1253 + .driver_data = (void *)(SERIO_QUIRK_NOAUX | 1254 + SERIO_QUIRK_FORCENORESTORE) 1239 1255 }, 1240 1256 { 1241 1257 .matches = { ··· 1245 1265 .matches = { 1246 1266 DMI_MATCH(DMI_BOARD_NAME, "NJ50_70CU"), 1247 1267 }, 1248 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1249 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1268 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1269 + }, 1270 + { 1271 + .matches = { 1272 + DMI_MATCH(DMI_BOARD_NAME, "P640RE"), 1273 + }, 1274 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1250 1275 }, 1251 1276 { 1252 1277 /* ··· 1262 1277 .matches = { 1263 1278 DMI_MATCH(DMI_PRODUCT_NAME, "P65xH"), 1264 1279 }, 1265 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1266 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1280 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1267 1281 }, 1268 1282 { 1269 1283 /* Clevo P650RS, 650RP6, Sager NP8152-S, and others */ 1270 1284 .matches = { 1271 1285 DMI_MATCH(DMI_PRODUCT_NAME, "P65xRP"), 1272 1286 }, 1273 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1274 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1287 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1275 1288 }, 1276 1289 { 1277 1290 /* ··· 1280 1297 .matches = { 1281 1298 DMI_MATCH(DMI_PRODUCT_NAME, "P65_P67H"), 1282 1299 }, 1283 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1284 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1300 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1285 1301 }, 1286 1302 { 1287 1303 /* ··· 1291 1309 .matches = { 1292 1310 DMI_MATCH(DMI_PRODUCT_NAME, "P65_67RP"), 1293 1311 }, 1294 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1295 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1312 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1296 1313 }, 1297 1314 { 1298 1315 /* ··· 1302 1321 .matches = { 1303 1322 DMI_MATCH(DMI_PRODUCT_NAME, "P65_67RS"), 1304 1323 }, 1305 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1306 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1324 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1307 1325 }, 1308 1326 { 1309 1327 /* ··· 1313 1333 .matches = { 1314 1334 DMI_MATCH(DMI_PRODUCT_NAME, "P67xRP"), 1315 1335 }, 1316 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1317 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1336 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1318 1337 }, 1319 1338 { 1320 1339 .matches = { 1321 1340 DMI_MATCH(DMI_BOARD_NAME, "PB50_70DFx,DDx"), 1322 1341 }, 1323 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1324 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1342 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1343 + }, 1344 + { 1345 + .matches = { 1346 + DMI_MATCH(DMI_BOARD_NAME, "PB51RF"), 1347 + }, 1348 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1349 + }, 1350 + { 1351 + .matches = { 1352 + DMI_MATCH(DMI_BOARD_NAME, "PB71RD"), 1353 + }, 1354 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1355 + }, 1356 + { 1357 + .matches = { 1358 + DMI_MATCH(DMI_BOARD_NAME, "PC70DR"), 1359 + }, 1360 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1325 1361 }, 1326 1362 { 1327 1363 .matches = { 1328 1364 DMI_MATCH(DMI_BOARD_NAME, "PCX0DX"), 1329 1365 }, 1330 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1331 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1366 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1367 + }, 1368 + { 1369 + .matches = { 1370 + DMI_MATCH(DMI_BOARD_NAME, "PCX0DX_GN20"), 1371 + }, 1372 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1332 1373 }, 1333 1374 /* See comment on TUXEDO InfinityBook S17 Gen6 / Clevo NS70MU above */ 1334 1375 { ··· 1362 1361 .matches = { 1363 1362 DMI_MATCH(DMI_BOARD_NAME, "X170SM"), 1364 1363 }, 1365 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1366 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1364 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1367 1365 }, 1368 1366 { 1369 1367 .matches = { 1370 1368 DMI_MATCH(DMI_BOARD_NAME, "X170KM-G"), 1371 1369 }, 1372 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1373 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1370 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1374 1371 }, 1375 1372 { 1376 1373 /*
+1 -1
drivers/input/touchscreen/ads7846.c
··· 1021 1021 if (pdata->get_pendown_state) { 1022 1022 ts->get_pendown_state = pdata->get_pendown_state; 1023 1023 } else { 1024 - ts->gpio_pendown = gpiod_get(&spi->dev, "pendown", GPIOD_IN); 1024 + ts->gpio_pendown = devm_gpiod_get(&spi->dev, "pendown", GPIOD_IN); 1025 1025 if (IS_ERR(ts->gpio_pendown)) { 1026 1026 dev_err(&spi->dev, "failed to request pendown GPIO\n"); 1027 1027 return PTR_ERR(ts->gpio_pendown);
+13 -13
drivers/input/touchscreen/goodix_berlin_core.c
··· 165 165 struct device *dev; 166 166 struct regmap *regmap; 167 167 struct regulator *avdd; 168 - struct regulator *iovdd; 168 + struct regulator *vddio; 169 169 struct gpio_desc *reset_gpio; 170 170 struct touchscreen_properties props; 171 171 struct goodix_berlin_fw_version fw_version; ··· 248 248 { 249 249 int error; 250 250 251 - error = regulator_enable(cd->iovdd); 251 + error = regulator_enable(cd->vddio); 252 252 if (error) { 253 - dev_err(cd->dev, "Failed to enable iovdd: %d\n", error); 253 + dev_err(cd->dev, "Failed to enable vddio: %d\n", error); 254 254 return error; 255 255 } 256 256 257 - /* Vendor waits 3ms for IOVDD to settle */ 257 + /* Vendor waits 3ms for VDDIO to settle */ 258 258 usleep_range(3000, 3100); 259 259 260 260 error = regulator_enable(cd->avdd); 261 261 if (error) { 262 262 dev_err(cd->dev, "Failed to enable avdd: %d\n", error); 263 - goto err_iovdd_disable; 263 + goto err_vddio_disable; 264 264 } 265 265 266 - /* Vendor waits 15ms for IOVDD to settle */ 266 + /* Vendor waits 15ms for AVDD to settle */ 267 267 usleep_range(15000, 15100); 268 268 269 269 gpiod_set_value_cansleep(cd->reset_gpio, 0); ··· 283 283 err_dev_reset: 284 284 gpiod_set_value_cansleep(cd->reset_gpio, 1); 285 285 regulator_disable(cd->avdd); 286 - err_iovdd_disable: 287 - regulator_disable(cd->iovdd); 286 + err_vddio_disable: 287 + regulator_disable(cd->vddio); 288 288 return error; 289 289 } 290 290 ··· 292 292 { 293 293 gpiod_set_value_cansleep(cd->reset_gpio, 1); 294 294 regulator_disable(cd->avdd); 295 - regulator_disable(cd->iovdd); 295 + regulator_disable(cd->vddio); 296 296 } 297 297 298 298 static int goodix_berlin_read_version(struct goodix_berlin_core *cd) ··· 744 744 return dev_err_probe(dev, PTR_ERR(cd->avdd), 745 745 "Failed to request avdd regulator\n"); 746 746 747 - cd->iovdd = devm_regulator_get(dev, "iovdd"); 748 - if (IS_ERR(cd->iovdd)) 749 - return dev_err_probe(dev, PTR_ERR(cd->iovdd), 750 - "Failed to request iovdd regulator\n"); 747 + cd->vddio = devm_regulator_get(dev, "vddio"); 748 + if (IS_ERR(cd->vddio)) 749 + return dev_err_probe(dev, PTR_ERR(cd->vddio), 750 + "Failed to request vddio regulator\n"); 751 751 752 752 error = goodix_berlin_power_on(cd); 753 753 if (error) {
+9
drivers/input/touchscreen/imagis.c
··· 22 22 23 23 #define IST3032C_WHOAMI 0x32c 24 24 #define IST3038C_WHOAMI 0x38c 25 + #define IST3038H_WHOAMI 0x38d 25 26 26 27 #define IST3038B_REG_CHIPID 0x30 27 28 #define IST3038B_WHOAMI 0x30380b ··· 429 428 .protocol_b = true, 430 429 }; 431 430 431 + static const struct imagis_properties imagis_3038h_data = { 432 + .interrupt_msg_cmd = IST3038C_REG_INTR_MESSAGE, 433 + .touch_coord_cmd = IST3038C_REG_TOUCH_COORD, 434 + .whoami_cmd = IST3038C_REG_CHIPID, 435 + .whoami_val = IST3038H_WHOAMI, 436 + }; 437 + 432 438 static const struct of_device_id imagis_of_match[] = { 433 439 { .compatible = "imagis,ist3032c", .data = &imagis_3032c_data }, 434 440 { .compatible = "imagis,ist3038", .data = &imagis_3038_data }, 435 441 { .compatible = "imagis,ist3038b", .data = &imagis_3038b_data }, 436 442 { .compatible = "imagis,ist3038c", .data = &imagis_3038c_data }, 443 + { .compatible = "imagis,ist3038h", .data = &imagis_3038h_data }, 437 444 { }, 438 445 }; 439 446 MODULE_DEVICE_TABLE(of, imagis_of_match);
+2
drivers/input/touchscreen/wdt87xx_i2c.c
··· 1153 1153 }; 1154 1154 MODULE_DEVICE_TABLE(i2c, wdt87xx_dev_id); 1155 1155 1156 + #ifdef CONFIG_ACPI 1156 1157 static const struct acpi_device_id wdt87xx_acpi_id[] = { 1157 1158 { "WDHT0001", 0 }, 1158 1159 { } 1159 1160 }; 1160 1161 MODULE_DEVICE_TABLE(acpi, wdt87xx_acpi_id); 1162 + #endif 1161 1163 1162 1164 static struct i2c_driver wdt87xx_driver = { 1163 1165 .probe = wdt87xx_ts_probe,
+10 -11
drivers/leds/leds-st1202.c
··· 261 261 int err, reg; 262 262 263 263 for_each_available_child_of_node_scoped(dev_of_node(dev), child) { 264 - struct led_init_data init_data = {}; 265 - 266 264 err = of_property_read_u32(child, "reg", &reg); 267 265 if (err) 268 266 return dev_err_probe(dev, err, "Invalid register\n"); ··· 274 276 led->led_cdev.pattern_set = st1202_led_pattern_set; 275 277 led->led_cdev.pattern_clear = st1202_led_pattern_clear; 276 278 led->led_cdev.default_trigger = "pattern"; 277 - 278 - init_data.fwnode = led->fwnode; 279 - init_data.devicename = "st1202"; 280 - init_data.default_label = ":"; 281 - 282 - err = devm_led_classdev_register_ext(dev, &led->led_cdev, &init_data); 283 - if (err < 0) 284 - return dev_err_probe(dev, err, "Failed to register LED class device\n"); 285 - 286 279 led->led_cdev.brightness_set = st1202_brightness_set; 287 280 led->led_cdev.brightness_get = st1202_brightness_get; 288 281 } ··· 357 368 return ret; 358 369 359 370 for (int i = 0; i < ST1202_MAX_LEDS; i++) { 371 + struct led_init_data init_data = {}; 360 372 led = &chip->leds[i]; 361 373 led->chip = chip; 362 374 led->led_num = i; ··· 374 384 if (ret < 0) 375 385 return dev_err_probe(&client->dev, ret, 376 386 "Failed to clear LED pattern\n"); 387 + 388 + init_data.fwnode = led->fwnode; 389 + init_data.devicename = "st1202"; 390 + init_data.default_label = ":"; 391 + 392 + ret = devm_led_classdev_register_ext(&client->dev, &led->led_cdev, &init_data); 393 + if (ret < 0) 394 + return dev_err_probe(&client->dev, ret, 395 + "Failed to register LED class device\n"); 377 396 } 378 397 379 398 return 0;
+1 -1
drivers/md/dm-flakey.c
··· 426 426 if (!clone) 427 427 return NULL; 428 428 429 - bio_init(clone, fc->dev->bdev, bio->bi_inline_vecs, nr_iovecs, bio->bi_opf); 429 + bio_init(clone, fc->dev->bdev, clone->bi_inline_vecs, nr_iovecs, bio->bi_opf); 430 430 431 431 clone->bi_iter.bi_sector = flakey_map_sector(ti, bio->bi_iter.bi_sector); 432 432 clone->bi_private = bio;
+1 -1
drivers/media/dvb-frontends/rtl2832_sdr.c
··· 1363 1363 dev->vb_queue.ops = &rtl2832_sdr_vb2_ops; 1364 1364 dev->vb_queue.mem_ops = &vb2_vmalloc_memops; 1365 1365 dev->vb_queue.timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC; 1366 + dev->vb_queue.lock = &dev->vb_queue_lock; 1366 1367 ret = vb2_queue_init(&dev->vb_queue); 1367 1368 if (ret) { 1368 1369 dev_err(&pdev->dev, "Could not initialize vb2 queue\n"); ··· 1422 1421 /* Init video_device structure */ 1423 1422 dev->vdev = rtl2832_sdr_template; 1424 1423 dev->vdev.queue = &dev->vb_queue; 1425 - dev->vdev.queue->lock = &dev->vb_queue_lock; 1426 1424 video_set_drvdata(&dev->vdev, dev); 1427 1425 1428 1426 /* Register the v4l2_device structure */
-20
drivers/memory/omap-gpmc.c
··· 2226 2226 goto err; 2227 2227 } 2228 2228 2229 - if (of_node_name_eq(child, "nand")) { 2230 - /* Warn about older DT blobs with no compatible property */ 2231 - if (!of_property_read_bool(child, "compatible")) { 2232 - dev_warn(&pdev->dev, 2233 - "Incompatible NAND node: missing compatible"); 2234 - ret = -EINVAL; 2235 - goto err; 2236 - } 2237 - } 2238 - 2239 - if (of_node_name_eq(child, "onenand")) { 2240 - /* Warn about older DT blobs with no compatible property */ 2241 - if (!of_property_read_bool(child, "compatible")) { 2242 - dev_warn(&pdev->dev, 2243 - "Incompatible OneNAND node: missing compatible"); 2244 - ret = -EINVAL; 2245 - goto err; 2246 - } 2247 - } 2248 - 2249 2229 if (of_match_node(omap_nand_ids, child)) { 2250 2230 /* NAND specific setup */ 2251 2231 val = 8;
+3 -1
drivers/mmc/host/atmel-mci.c
··· 2499 2499 /* Get MCI capabilities and set operations according to it */ 2500 2500 atmci_get_cap(host); 2501 2501 ret = atmci_configure_dma(host); 2502 - if (ret == -EPROBE_DEFER) 2502 + if (ret == -EPROBE_DEFER) { 2503 + clk_disable_unprepare(host->mck); 2503 2504 goto err_dma_probe_defer; 2505 + } 2504 2506 if (ret == 0) { 2505 2507 host->prepare_data = &atmci_prepare_data_dma; 2506 2508 host->submit_data = &atmci_submit_data_dma;
+10
drivers/mmc/host/sdhci-brcmstb.c
··· 503 503 struct sdhci_host *host = dev_get_drvdata(dev); 504 504 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 505 505 struct sdhci_brcmstb_priv *priv = sdhci_pltfm_priv(pltfm_host); 506 + int ret; 506 507 507 508 clk_disable_unprepare(priv->base_clk); 509 + if (host->mmc->caps2 & MMC_CAP2_CQE) { 510 + ret = cqhci_suspend(host->mmc); 511 + if (ret) 512 + return ret; 513 + } 514 + 508 515 return sdhci_pltfm_suspend(dev); 509 516 } 510 517 ··· 535 528 (clk_get_rate(priv->base_clk) != priv->base_freq_hz)) 536 529 ret = clk_set_rate(priv->base_clk, priv->base_freq_hz); 537 530 } 531 + 532 + if (host->mmc->caps2 & MMC_CAP2_CQE) 533 + ret = cqhci_resume(host->mmc); 538 534 539 535 return ret; 540 536 }
+15 -3
drivers/net/can/flexcan/flexcan-core.c
··· 2308 2308 2309 2309 flexcan_chip_interrupts_disable(dev); 2310 2310 2311 + err = flexcan_transceiver_disable(priv); 2312 + if (err) 2313 + return err; 2314 + 2311 2315 err = pinctrl_pm_select_sleep_state(device); 2312 2316 if (err) 2313 2317 return err; 2314 2318 } 2315 2319 netif_stop_queue(dev); 2316 2320 netif_device_detach(dev); 2321 + 2322 + priv->can.state = CAN_STATE_SLEEPING; 2317 2323 } 2318 - priv->can.state = CAN_STATE_SLEEPING; 2319 2324 2320 2325 return 0; 2321 2326 } ··· 2331 2326 struct flexcan_priv *priv = netdev_priv(dev); 2332 2327 int err; 2333 2328 2334 - priv->can.state = CAN_STATE_ERROR_ACTIVE; 2335 2329 if (netif_running(dev)) { 2336 2330 netif_device_attach(dev); 2337 2331 netif_start_queue(dev); ··· 2344 2340 if (err) 2345 2341 return err; 2346 2342 2347 - err = flexcan_chip_start(dev); 2343 + err = flexcan_transceiver_enable(priv); 2348 2344 if (err) 2349 2345 return err; 2350 2346 2347 + err = flexcan_chip_start(dev); 2348 + if (err) { 2349 + flexcan_transceiver_disable(priv); 2350 + return err; 2351 + } 2352 + 2351 2353 flexcan_chip_interrupts_enable(dev); 2352 2354 } 2355 + 2356 + priv->can.state = CAN_STATE_ERROR_ACTIVE; 2353 2357 } 2354 2358 2355 2359 return 0;
+11 -17
drivers/net/can/rcar/rcar_canfd.c
··· 787 787 } 788 788 789 789 static void rcar_canfd_configure_afl_rules(struct rcar_canfd_global *gpriv, 790 - u32 ch) 790 + u32 ch, u32 rule_entry) 791 791 { 792 - u32 cfg; 793 - int offset, start, page, num_rules = RCANFD_CHANNEL_NUMRULES; 792 + int offset, page, num_rules = RCANFD_CHANNEL_NUMRULES; 793 + u32 rule_entry_index = rule_entry % 16; 794 794 u32 ridx = ch + RCANFD_RFFIFO_IDX; 795 795 796 - if (ch == 0) { 797 - start = 0; /* Channel 0 always starts from 0th rule */ 798 - } else { 799 - /* Get number of Channel 0 rules and adjust */ 800 - cfg = rcar_canfd_read(gpriv->base, RCANFD_GAFLCFG(ch)); 801 - start = RCANFD_GAFLCFG_GETRNC(gpriv, 0, cfg); 802 - } 803 - 804 796 /* Enable write access to entry */ 805 797 page = RCANFD_GAFL_PAGENUM(start); 797 + page = RCANFD_GAFL_PAGENUM(rule_entry); 806 798 rcar_canfd_set_bit(gpriv->base, RCANFD_GAFLECTR, 807 799 (RCANFD_GAFLECTR_AFLPN(gpriv, page) | 808 800 RCANFD_GAFLECTR_AFLDAE)); ··· 810 818 offset = RCANFD_C_GAFL_OFFSET; 811 819 812 820 /* Accept all IDs */ 813 - rcar_canfd_write(gpriv->base, RCANFD_GAFLID(offset, start), 0); 821 + rcar_canfd_write(gpriv->base, RCANFD_GAFLID(offset, rule_entry_index), 0); 814 822 /* IDE or RTR is not considered for matching */ 815 - rcar_canfd_write(gpriv->base, RCANFD_GAFLM(offset, start), 0); 823 + rcar_canfd_write(gpriv->base, RCANFD_GAFLM(offset, rule_entry_index), 0); 816 824 /* Any data length accepted */ 817 - rcar_canfd_write(gpriv->base, RCANFD_GAFLP0(offset, start), 0); 825 + rcar_canfd_write(gpriv->base, RCANFD_GAFLP0(offset, rule_entry_index), 0); 818 826 /* Place the msg in corresponding Rx FIFO entry */ 819 - rcar_canfd_set_bit(gpriv->base, RCANFD_GAFLP1(offset, start), 827 + rcar_canfd_set_bit(gpriv->base, RCANFD_GAFLP1(offset, rule_entry_index), 820 828 RCANFD_GAFLP1_GAFLFDP(ridx)); 821 829 822 830 /* Disable write access to page */ ··· 1843 1851 unsigned long channels_mask = 0; 1844 1852 int err, ch_irq, g_irq; 1845 1853 int g_err_irq, g_recc_irq; 1854 + u32 rule_entry = 0; 1846 1855 bool fdmode = true; /* CAN FD only mode - default */ 1847 1856 char name[9] = "channelX"; 1848 1857 int i; ··· 2016 2023 rcar_canfd_configure_tx(gpriv, ch); 2017 2024 2018 2025 /* Configure receive rules */ 2019 - rcar_canfd_configure_afl_rules(gpriv, ch); 2026 + rcar_canfd_configure_afl_rules(gpriv, ch, rule_entry); 2027 + rule_entry += RCANFD_CHANNEL_NUMRULES; 2020 2028 } 2021 2029 2022 2030 /* Configure common interrupts */
+18 -25
drivers/net/can/usb/ucan.c
··· 186 186 */ 187 187 struct ucan_ctl_cmd_get_protocol_version cmd_get_protocol_version; 188 188 189 - u8 raw[128]; 189 + u8 fw_str[128]; 190 190 } __packed; 191 191 192 192 enum { ··· 424 424 UCAN_USB_CTL_PIPE_TIMEOUT); 425 425 } 426 426 427 - static int ucan_device_request_in(struct ucan_priv *up, 428 - u8 cmd, u16 subcmd, u16 datalen) 427 + static void ucan_get_fw_str(struct ucan_priv *up, char *fw_str, size_t size) 429 428 { 430 - return usb_control_msg(up->udev, 431 - usb_rcvctrlpipe(up->udev, 0), 432 - cmd, 433 - USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE, 434 - subcmd, 435 - 0, 436 - up->ctl_msg_buffer, 437 - datalen, 438 - UCAN_USB_CTL_PIPE_TIMEOUT); 429 + int ret; 430 + 431 + ret = usb_control_msg(up->udev, usb_rcvctrlpipe(up->udev, 0), 432 + UCAN_DEVICE_GET_FW_STRING, 433 + USB_DIR_IN | USB_TYPE_VENDOR | 434 + USB_RECIP_DEVICE, 435 + 0, 0, fw_str, size - 1, 436 + UCAN_USB_CTL_PIPE_TIMEOUT); 437 + if (ret > 0) 438 + fw_str[ret] = '\0'; 439 + else 440 + strscpy(fw_str, "unknown", size); 439 441 } 440 442 441 443 /* Parse the device information structure reported by the device and ··· 1316 1314 u8 in_ep_addr; 1317 1315 u8 out_ep_addr; 1318 1316 union ucan_ctl_payload *ctl_msg_buffer; 1319 - char firmware_str[sizeof(union ucan_ctl_payload) + 1]; 1320 1317 1321 1318 udev = interface_to_usbdev(intf); 1322 1319 ··· 1528 1527 */ 1529 1528 ucan_parse_device_info(up, &ctl_msg_buffer->cmd_get_device_info); 1530 1529 1531 - /* just print some device information - if available */ 1532 - ret = ucan_device_request_in(up, UCAN_DEVICE_GET_FW_STRING, 0, 1533 - sizeof(union ucan_ctl_payload)); 1534 - if (ret > 0) { 1535 - /* copy string while ensuring zero termination */ 1536 - strscpy(firmware_str, up->ctl_msg_buffer->raw, 1537 - sizeof(union ucan_ctl_payload) + 1); 1538 - } else { 1539 - strcpy(firmware_str, "unknown"); 1540 - } 1541 - 1542 1530 /* device is compatible, reset it */ 1543 1531 ret = ucan_ctrl_command_out(up, UCAN_COMMAND_RESET, 0, 0); 1544 1532 if (ret < 0) ··· 1545 1555 1546 1556 /* initialisation complete, log device info */ 1547 1557 netdev_info(up->netdev, "registered device\n"); 1548 - netdev_info(up->netdev, "firmware string: %s\n", firmware_str); 1558 + ucan_get_fw_str(up, up->ctl_msg_buffer->fw_str, 1559 + sizeof(up->ctl_msg_buffer->fw_str)); 1560 + netdev_info(up->netdev, "firmware string: %s\n", 1561 + up->ctl_msg_buffer->fw_str); 1549 1562 1550 1563 /* success */ 1551 1564 return 0;
+10 -4
drivers/net/ethernet/microsoft/mana/gdma_main.c
··· 134 134 struct gdma_list_devices_resp resp = {}; 135 135 struct gdma_general_req req = {}; 136 136 struct gdma_dev_id dev; 137 - u32 i, max_num_devs; 137 + int found_dev = 0; 138 138 u16 dev_type; 139 139 int err; 140 + u32 i; 140 141 141 142 mana_gd_init_req_hdr(&req.hdr, GDMA_LIST_DEVICES, sizeof(req), 142 143 sizeof(resp)); ··· 149 148 return err ? err : -EPROTO; 150 149 } 151 150 152 - max_num_devs = min_t(u32, MAX_NUM_GDMA_DEVICES, resp.num_of_devs); 153 - 154 - for (i = 0; i < max_num_devs; i++) { 151 + for (i = 0; i < GDMA_DEV_LIST_SIZE && 152 + found_dev < resp.num_of_devs; i++) { 155 153 dev = resp.devs[i]; 156 154 dev_type = dev.type; 155 + 156 + /* Skip empty devices */ 157 + if (dev.as_uint32 == 0) 158 + continue; 159 + 160 + found_dev++; 157 161 158 162 /* HWC is already detected in mana_hwc_create_channel(). */ 159 163 if (dev_type == GDMA_DEVICE_HWC)
+3 -1
drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c
··· 53 53 u32 a_index = 0; 54 54 55 55 if (!plat_dat->axi) { 56 - plat_dat->axi = kzalloc(sizeof(struct stmmac_axi), GFP_KERNEL); 56 + plat_dat->axi = devm_kzalloc(&pdev->dev, 57 + sizeof(struct stmmac_axi), 58 + GFP_KERNEL); 57 59 58 60 if (!plat_dat->axi) 59 61 return -ENOMEM;
+18 -14
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 2217 2217 static int am65_cpsw_nuss_ndev_add_tx_napi(struct am65_cpsw_common *common) 2218 2218 { 2219 2219 struct device *dev = common->dev; 2220 + struct am65_cpsw_tx_chn *tx_chn; 2220 2221 int i, ret = 0; 2221 2222 2222 2223 for (i = 0; i < common->tx_ch_num; i++) { 2223 - struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i]; 2224 + tx_chn = &common->tx_chns[i]; 2224 2225 2225 2226 hrtimer_init(&tx_chn->tx_hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_PINNED); 2226 2227 tx_chn->tx_hrtimer.function = &am65_cpsw_nuss_tx_timer_callback; 2228 + 2229 + netif_napi_add_tx(common->dma_ndev, &tx_chn->napi_tx, 2230 + am65_cpsw_nuss_tx_poll); 2227 2231 2228 2232 ret = devm_request_irq(dev, tx_chn->irq, 2229 2233 am65_cpsw_nuss_tx_irq, ··· 2238 2234 tx_chn->id, tx_chn->irq, ret); 2239 2235 goto err; 2240 2236 } 2241 - 2242 - netif_napi_add_tx(common->dma_ndev, &tx_chn->napi_tx, 2243 - am65_cpsw_nuss_tx_poll); 2244 2237 } 2245 2238 2246 2239 return 0; 2247 2240 2248 2241 err: 2249 - for (--i ; i >= 0 ; i--) { 2250 - struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i]; 2251 - 2252 - netif_napi_del(&tx_chn->napi_tx); 2242 + netif_napi_del(&tx_chn->napi_tx); 2243 + for (--i; i >= 0; i--) { 2244 + tx_chn = &common->tx_chns[i]; 2253 2245 devm_free_irq(dev, tx_chn->irq, tx_chn); 2246 + netif_napi_del(&tx_chn->napi_tx); 2254 2247 } 2255 2248 2256 2249 return ret; ··· 2481 2480 HRTIMER_MODE_REL_PINNED); 2482 2481 flow->rx_hrtimer.function = &am65_cpsw_nuss_rx_timer_callback; 2483 2482 2483 + netif_napi_add(common->dma_ndev, &flow->napi_rx, 2484 + am65_cpsw_nuss_rx_poll); 2485 + 2484 2486 ret = devm_request_irq(dev, flow->irq, 2485 2487 am65_cpsw_nuss_rx_irq, 2486 2488 IRQF_TRIGGER_HIGH, ··· 2492 2488 dev_err(dev, "failure requesting rx %d irq %u, %d\n", 2493 2489 i, flow->irq, ret); 2494 2490 flow->irq = -EINVAL; 2495 - goto err_flow; 2491 + goto err_request_irq; 2496 2492 } 2497 - 2498 - netif_napi_add(common->dma_ndev, &flow->napi_rx, 2499 - am65_cpsw_nuss_rx_poll); 2500 2493 } 2501 2494 2502 2495 /* setup classifier to route priorities to flows */ ··· 2501 2500 2502 2501 return 0; 2503 2502 2503 + err_request_irq: 2504 + netif_napi_del(&flow->napi_rx); 2505 + 2504 2506 err_flow: 2505 - for (--i; i >= 0 ; i--) { 2507 + for (--i; i >= 0; i--) { 2506 2508 flow = &rx_chn->flows[i]; 2507 - netif_napi_del(&flow->napi_rx); 2508 2509 devm_free_irq(dev, flow->irq, flow); 2510 + netif_napi_del(&flow->napi_rx); 2509 2511 } 2510 2512 2511 2513 err:
+1
drivers/net/ethernet/ti/icssg/icssg_prueth.c
··· 1806 1806 } 1807 1807 1808 1808 spin_lock_init(&prueth->vtbl_lock); 1809 + spin_lock_init(&prueth->stats_lock); 1809 1810 /* setup netdev interfaces */ 1810 1811 if (eth0_node) { 1811 1812 ret = prueth_netdev_init(prueth, eth0_node);
+2
drivers/net/ethernet/ti/icssg/icssg_prueth.h
··· 341 341 int default_vlan; 342 342 /** @vtbl_lock: Lock for vtbl in shared memory */ 343 343 spinlock_t vtbl_lock; 344 + /** @stats_lock: Lock for reading icssg stats */ 345 + spinlock_t stats_lock; 344 346 }; 345 347 346 348 struct emac_tx_ts_response {
+4
drivers/net/ethernet/ti/icssg/icssg_stats.c
··· 26 26 u32 val, reg; 27 27 int i; 28 28 29 + spin_lock(&prueth->stats_lock); 30 + 29 31 for (i = 0; i < ARRAY_SIZE(icssg_all_miig_stats); i++) { 30 32 regmap_read(prueth->miig_rt, 31 33 base + icssg_all_miig_stats[i].offset, ··· 53 51 emac->pa_stats[i] += val; 54 52 } 55 53 } 54 + 55 + spin_unlock(&prueth->stats_lock); 56 56 } 57 57 58 58 void icssg_stats_work_handler(struct work_struct *work)
+2 -1
drivers/nvme/host/apple.c
··· 599 599 } 600 600 601 601 if (!nvme_try_complete_req(req, cqe->status, cqe->result) && 602 - !blk_mq_add_to_batch(req, iob, nvme_req(req)->status, 602 + !blk_mq_add_to_batch(req, iob, 603 + nvme_req(req)->status != NVME_SC_SUCCESS, 603 604 apple_nvme_complete_batch)) 604 605 apple_nvme_complete_rq(req); 605 606 }
+6 -6
drivers/nvme/host/core.c
··· 431 431 432 432 static inline void __nvme_end_req(struct request *req) 433 433 { 434 + if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET))) { 435 + if (blk_rq_is_passthrough(req)) 436 + nvme_log_err_passthru(req); 437 + else 438 + nvme_log_error(req); 439 + } 434 440 nvme_end_req_zoned(req); 435 441 nvme_trace_bio_complete(req); 436 442 if (req->cmd_flags & REQ_NVME_MPATH) ··· 447 441 { 448 442 blk_status_t status = nvme_error_status(nvme_req(req)->status); 449 443 450 - if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET))) { 451 - if (blk_rq_is_passthrough(req)) 452 - nvme_log_err_passthru(req); 453 - else 454 - nvme_log_error(req); 455 - } 456 444 __nvme_end_req(req); 457 445 blk_mq_end_request(req, status); 458 446 }
+15 -3
drivers/nvme/host/pci.c
··· 1130 1130 1131 1131 trace_nvme_sq(req, cqe->sq_head, nvmeq->sq_tail); 1132 1132 if (!nvme_try_complete_req(req, cqe->status, cqe->result) && 1133 - !blk_mq_add_to_batch(req, iob, nvme_req(req)->status, 1134 - nvme_pci_complete_batch)) 1133 + !blk_mq_add_to_batch(req, iob, 1134 + nvme_req(req)->status != NVME_SC_SUCCESS, 1135 + nvme_pci_complete_batch)) 1135 1136 nvme_pci_complete_rq(req); 1136 1137 } 1137 1138 ··· 1412 1411 struct nvme_dev *dev = nvmeq->dev; 1413 1412 struct request *abort_req; 1414 1413 struct nvme_command cmd = { }; 1414 + struct pci_dev *pdev = to_pci_dev(dev->dev); 1415 1415 u32 csts = readl(dev->bar + NVME_REG_CSTS); 1416 1416 u8 opcode; 1417 1417 1418 + /* 1419 + * Shutdown the device immediately if we see it is disconnected. This 1420 + * unblocks PCIe error handling if the nvme driver is waiting in 1421 + * error_resume for a device that has been removed. We can't unbind the 1422 + * driver while the driver's error callback is waiting to complete, so 1423 + * we're relying on a timeout to break that deadlock if a removal 1424 + * occurs while reset work is running. 1425 + */ 1426 + if (pci_dev_is_disconnected(pdev)) 1427 + nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DELETING); 1418 1428 if (nvme_state_terminal(&dev->ctrl)) 1419 1429 goto disable; 1420 1430 ··· 1433 1421 * the recovery mechanism will surely fail. 1434 1422 */ 1435 1423 mb(); 1436 - if (pci_channel_offline(to_pci_dev(dev->dev))) 1424 + if (pci_channel_offline(pdev)) 1437 1425 return BLK_EH_RESET_TIMER; 1438 1426 1439 1427 /*
+14 -14
drivers/nvme/target/pci-epf.c
··· 1265 1265 struct nvmet_pci_epf_queue *cq = &ctrl->cq[cqid]; 1266 1266 u16 status; 1267 1267 1268 - if (test_and_set_bit(NVMET_PCI_EPF_Q_LIVE, &cq->flags)) 1268 + if (test_bit(NVMET_PCI_EPF_Q_LIVE, &cq->flags)) 1269 1269 return NVME_SC_QID_INVALID | NVME_STATUS_DNR; 1270 1270 1271 1271 if (!(flags & NVME_QUEUE_PHYS_CONTIG)) 1272 1272 return NVME_SC_INVALID_QUEUE | NVME_STATUS_DNR; 1273 - 1274 - if (flags & NVME_CQ_IRQ_ENABLED) 1275 - set_bit(NVMET_PCI_EPF_Q_IRQ_ENABLED, &cq->flags); 1276 1273 1277 1274 cq->pci_addr = pci_addr; 1278 1275 cq->qid = cqid; ··· 1287 1290 cq->qes = ctrl->io_cqes; 1288 1291 cq->pci_size = cq->qes * cq->depth; 1289 1292 1290 - cq->iv = nvmet_pci_epf_add_irq_vector(ctrl, vector); 1291 - if (!cq->iv) { 1292 - status = NVME_SC_INTERNAL | NVME_STATUS_DNR; 1293 - goto err; 1293 + if (flags & NVME_CQ_IRQ_ENABLED) { 1294 + cq->iv = nvmet_pci_epf_add_irq_vector(ctrl, vector); 1295 + if (!cq->iv) 1296 + return NVME_SC_INTERNAL | NVME_STATUS_DNR; 1297 + set_bit(NVMET_PCI_EPF_Q_IRQ_ENABLED, &cq->flags); 1294 1298 } 1295 1299 1296 1300 status = nvmet_cq_create(tctrl, &cq->nvme_cq, cqid, cq->depth); 1297 1301 if (status != NVME_SC_SUCCESS) 1298 1302 goto err; 1303 + 1304 + set_bit(NVMET_PCI_EPF_Q_LIVE, &cq->flags); 1299 1305 1300 1306 dev_dbg(ctrl->dev, "CQ[%u]: %u entries of %zu B, IRQ vector %u\n", 1301 1307 cqid, qsize, cq->qes, cq->vector); ··· 1306 1306 return NVME_SC_SUCCESS; 1307 1307 1308 1308 err: 1309 - clear_bit(NVMET_PCI_EPF_Q_IRQ_ENABLED, &cq->flags); 1310 - clear_bit(NVMET_PCI_EPF_Q_LIVE, &cq->flags); 1309 + if (test_and_clear_bit(NVMET_PCI_EPF_Q_IRQ_ENABLED, &cq->flags)) 1310 + nvmet_pci_epf_remove_irq_vector(ctrl, cq->vector); 1311 1311 return status; 1312 1312 } 1313 1313 ··· 1333 1333 struct nvmet_pci_epf_queue *sq = &ctrl->sq[sqid]; 1334 1334 u16 status; 1335 1335 1336 - if (test_and_set_bit(NVMET_PCI_EPF_Q_LIVE, &sq->flags)) 1336 + if (test_bit(NVMET_PCI_EPF_Q_LIVE, &sq->flags)) 1337 1337 return NVME_SC_QID_INVALID | NVME_STATUS_DNR; 1338 1338 1339 1339 if (!(flags & NVME_QUEUE_PHYS_CONTIG)) ··· 1355 1355 1356 1356 status = nvmet_sq_create(tctrl, &sq->nvme_sq, sqid, sq->depth); 1357 1357 if (status != NVME_SC_SUCCESS) 1358 - goto out_clear_bit; 1358 + return status; 1359 1359 1360 1360 sq->iod_wq = alloc_workqueue("sq%d_wq", WQ_UNBOUND, 1361 1361 min_t(int, sq->depth, WQ_MAX_ACTIVE), sqid); ··· 1365 1365 goto out_destroy_sq; 1366 1366 } 1367 1367 1368 + set_bit(NVMET_PCI_EPF_Q_LIVE, &sq->flags); 1369 + 1368 1370 dev_dbg(ctrl->dev, "SQ[%u]: %u entries of %zu B\n", 1369 1371 sqid, qsize, sq->qes); 1370 1372 ··· 1374 1372 1375 1373 out_destroy_sq: 1376 1374 nvmet_sq_destroy(&sq->nvme_sq); 1377 - out_clear_bit: 1378 - clear_bit(NVMET_PCI_EPF_Q_LIVE, &sq->flags); 1379 1375 return status; 1380 1376 }
+4 -1
drivers/platform/surface/surface_aggregator_registry.c
··· 371 371 NULL, 372 372 }; 373 373 374 - /* Devices for Surface Pro 9 (Intel/x86) and 10 */ 374 + /* Devices for Surface Pro 9, 10 and 11 (Intel/x86) */ 375 375 static const struct software_node *ssam_node_group_sp9[] = { 376 376 &ssam_node_root, 377 377 &ssam_node_hub_kip, ··· 429 429 430 430 /* Surface Pro 10 */ 431 431 { "MSHW0510", (unsigned long)ssam_node_group_sp9 }, 432 + 433 + /* Surface Pro 11 */ 434 + { "MSHW0583", (unsigned long)ssam_node_group_sp9 }, 432 435 433 436 /* Surface Book 2 */ 434 437 { "MSHW0107", (unsigned long)ssam_node_group_gen5 },
+2
drivers/platform/x86/amd/pmf/spc.c
··· 219 219 220 220 switch (dev->current_profile) { 221 221 case PLATFORM_PROFILE_PERFORMANCE: 222 + case PLATFORM_PROFILE_BALANCED_PERFORMANCE: 222 223 val = TA_BEST_PERFORMANCE; 223 224 break; 224 225 case PLATFORM_PROFILE_BALANCED: 225 226 val = TA_BETTER_PERFORMANCE; 226 227 break; 227 228 case PLATFORM_PROFILE_LOW_POWER: 229 + case PLATFORM_PROFILE_QUIET: 228 230 val = TA_BEST_BATTERY; 229 231 break; 230 232 default:
+25 -11
drivers/platform/x86/amd/pmf/tee-if.c
··· 510 510 511 511 ret = amd_pmf_set_dram_addr(dev, true); 512 512 if (ret) 513 - goto error; 513 + goto err_cancel_work; 514 514 515 515 dev->policy_base = devm_ioremap_resource(dev->dev, dev->res); 516 516 if (IS_ERR(dev->policy_base)) { 517 517 ret = PTR_ERR(dev->policy_base); 518 - goto error; 518 + goto err_free_dram_buf; 519 519 } 520 520 521 521 dev->policy_buf = kzalloc(dev->policy_sz, GFP_KERNEL); 522 522 if (!dev->policy_buf) { 523 523 ret = -ENOMEM; 524 - goto error; 524 + goto err_free_dram_buf; 525 525 } 526 526 527 527 memcpy_fromio(dev->policy_buf, dev->policy_base, dev->policy_sz); ··· 531 531 dev->prev_data = kzalloc(sizeof(*dev->prev_data), GFP_KERNEL); 532 532 if (!dev->prev_data) { 533 533 ret = -ENOMEM; 534 - goto error; 534 + goto err_free_policy; 535 535 } 536 536 537 537 for (i = 0; i < ARRAY_SIZE(amd_pmf_ta_uuid); i++) { 538 538 ret = amd_pmf_tee_init(dev, &amd_pmf_ta_uuid[i]); 539 539 if (ret) 540 - return ret; 540 + goto err_free_prev_data; 541 541 542 542 ret = amd_pmf_start_policy_engine(dev); 543 543 switch (ret) { ··· 550 550 status = false; 551 551 break; 552 552 default: 553 - goto error; 553 + ret = -EINVAL; 554 + amd_pmf_tee_deinit(dev); 555 + goto err_free_prev_data; 554 556 } 555 557 556 558 if (status) 557 559 break; 558 560 } 559 561 560 - if (!status && !pb_side_load) 561 - goto error; 562 + if (!status && !pb_side_load) { 563 + ret = -EINVAL; 564 + goto err_free_prev_data; 565 + } 562 566 563 567 if (pb_side_load) 564 568 amd_pmf_open_pb(dev, dev->dbgfs_dir); 565 569 566 570 ret = amd_pmf_register_input_device(dev); 567 571 if (ret) 568 - goto error; 572 + goto err_pmf_remove_pb; 569 573 570 574 return 0; 571 575 572 - error: 573 - amd_pmf_deinit_smart_pc(dev); 576 + err_pmf_remove_pb: 577 + if (pb_side_load && dev->esbin) 578 + amd_pmf_remove_pb(dev); 579 + amd_pmf_tee_deinit(dev); 580 + err_free_prev_data: 581 + kfree(dev->prev_data); 582 + err_free_policy: 583 + kfree(dev->policy_buf); 584 + err_free_dram_buf: 585 + kfree(dev->buf); 586 + err_cancel_work: 587 + cancel_delayed_work_sync(&dev->pb_work); 574 588 575 589 return ret; 576 590 }
+1 -1
drivers/pmdomain/amlogic/meson-secure-pwrc.c
··· 221 221 SEC_PD(T7_VI_CLK2, 0), 222 222 /* ETH is for ethernet online wakeup, and should be always on */ 223 223 SEC_PD(T7_ETH, GENPD_FLAG_ALWAYS_ON), 224 - SEC_PD(T7_ISP, 0), 224 + TOP_PD(T7_ISP, 0, PWRC_T7_MIPI_ISP_ID), 225 225 SEC_PD(T7_MIPI_ISP, 0), 226 226 TOP_PD(T7_GDC, 0, PWRC_T7_NIC3_ID), 227 227 TOP_PD(T7_DEWARP, 0, PWRC_T7_NIC3_ID),
+14 -5
drivers/reset/reset-microchip-sparx5.c
··· 8 8 */ 9 9 #include <linux/mfd/syscon.h> 10 10 #include <linux/of.h> 11 + #include <linux/of_address.h> 11 12 #include <linux/module.h> 12 13 #include <linux/platform_device.h> 13 14 #include <linux/property.h> ··· 73 72 struct device_node *syscon_np) 74 73 { 75 74 struct regmap_config regmap_config = mchp_lan966x_syscon_regmap_config; 76 - resource_size_t size; 75 + struct resource res; 77 76 void __iomem *base; 77 + int err; 78 78 79 - base = devm_of_iomap(dev, syscon_np, 0, &size); 80 - if (IS_ERR(base)) 81 - return ERR_CAST(base); 79 + err = of_address_to_resource(syscon_np, 0, &res); 80 + if (err) 81 + return ERR_PTR(err); 82 82 83 - regmap_config.max_register = size - 4; 83 + /* It is not possible to use devm_of_iomap because this resource is 84 + * shared with other drivers. 85 + */ 86 + base = devm_ioremap(dev, res.start, resource_size(&res)); 87 + if (!base) 88 + return ERR_PTR(-ENOMEM); 89 + 90 + regmap_config.max_register = resource_size(&res) - 4; 84 91 85 92 return devm_regmap_init_mmio(dev, base, &regmap_config); 86 93 }
+2 -2
drivers/soc/hisilicon/kunpeng_hccs.c
··· 1539 1539 u16 i; 1540 1540 1541 1541 for (i = 0; i < hdev->used_type_num - 1; i++) 1542 - len += sysfs_emit(&buf[len], "%s ", hdev->type_name_maps[i].name); 1543 - len += sysfs_emit(&buf[len], "%s\n", hdev->type_name_maps[i].name); 1542 + len += sysfs_emit_at(buf, len, "%s ", hdev->type_name_maps[i].name); 1543 + len += sysfs_emit_at(buf, len, "%s\n", hdev->type_name_maps[i].name); 1544 1544 1545 1545 return len; 1546 1546 }
+24 -2
drivers/soc/imx/soc-imx8m.c
··· 192 192 devm_kasprintf((dev), GFP_KERNEL, "%d.%d", ((soc_rev) >> 4) & 0xf, (soc_rev) & 0xf) : \ 193 193 "unknown" 194 194 195 + static void imx8m_unregister_soc(void *data) 196 + { 197 + soc_device_unregister(data); 198 + } 199 + 200 + static void imx8m_unregister_cpufreq(void *data) 201 + { 202 + platform_device_unregister(data); 203 + } 204 + 195 205 static int imx8m_soc_probe(struct platform_device *pdev) 196 206 { 197 207 struct soc_device_attribute *soc_dev_attr; 208 + struct platform_device *cpufreq_dev; 198 209 const struct imx8_soc_data *data; 199 210 struct device *dev = &pdev->dev; 200 211 const struct of_device_id *id; ··· 250 239 if (IS_ERR(soc_dev)) 251 240 return PTR_ERR(soc_dev); 252 241 242 + ret = devm_add_action(dev, imx8m_unregister_soc, soc_dev); 243 + if (ret) 244 + return ret; 245 + 253 246 pr_info("SoC: %s revision %s\n", soc_dev_attr->soc_id, 254 247 soc_dev_attr->revision); 255 248 256 - if (IS_ENABLED(CONFIG_ARM_IMX_CPUFREQ_DT)) 257 - platform_device_register_simple("imx-cpufreq-dt", -1, NULL, 0); 249 + if (IS_ENABLED(CONFIG_ARM_IMX_CPUFREQ_DT)) { 250 + cpufreq_dev = platform_device_register_simple("imx-cpufreq-dt", -1, NULL, 0); 251 + if (IS_ERR(cpufreq_dev)) 252 + return dev_err_probe(dev, PTR_ERR(cpufreq_dev), 253 + "Failed to register imx-cpufreq-dev device\n"); 254 + ret = devm_add_action(dev, imx8m_unregister_cpufreq, cpufreq_dev); 255 + if (ret) 256 + return ret; 257 + } 258 258 259 259 return 0; 260 260 }
+1 -7
drivers/soc/qcom/pdr_interface.c
··· 75 75 { 76 76 struct pdr_handle *pdr = container_of(qmi, struct pdr_handle, 77 77 locator_hdl); 78 - struct pdr_service *pds; 79 78 80 79 mutex_lock(&pdr->lock); 81 80 /* Create a local client port for QMI communication */ ··· 86 87 mutex_unlock(&pdr->lock); 87 88 88 89 /* Service pending lookup requests */ 89 - mutex_lock(&pdr->list_lock); 90 - list_for_each_entry(pds, &pdr->lookups, node) { 91 - if (pds->need_locator_lookup) 92 - schedule_work(&pdr->locator_work); 93 - } 94 - mutex_unlock(&pdr->list_lock); 90 + schedule_work(&pdr->locator_work); 95 91 96 92 return 0; 97 93 }
+1 -1
drivers/soc/qcom/pmic_glink.c
··· 233 233 234 234 static int pmic_glink_rpmsg_probe(struct rpmsg_device *rpdev) 235 235 { 236 - struct pmic_glink *pg = __pmic_glink; 236 + struct pmic_glink *pg; 237 237 238 238 guard(mutex)(&__pmic_glink_lock); 239 239 pg = __pmic_glink;
+8 -3
drivers/thunderbolt/tunnel.c
··· 1009 1009 */ 1010 1010 tb_tunnel_get(tunnel); 1011 1011 1012 + tunnel->dprx_started = true; 1013 + 1012 1014 if (tunnel->callback) { 1013 1015 tunnel->dprx_timeout = dprx_timeout_to_ktime(dprx_timeout); 1014 1016 queue_delayed_work(tunnel->tb->wq, &tunnel->dprx_work, 0); ··· 1023 1021 1024 1022 static void tb_dp_dprx_stop(struct tb_tunnel *tunnel) 1025 1023 { 1026 - tunnel->dprx_canceled = true; 1027 - cancel_delayed_work(&tunnel->dprx_work); 1028 - tb_tunnel_put(tunnel); 1024 + if (tunnel->dprx_started) { 1025 + tunnel->dprx_started = false; 1026 + tunnel->dprx_canceled = true; 1027 + cancel_delayed_work(&tunnel->dprx_work); 1028 + tb_tunnel_put(tunnel); 1029 + } 1029 1030 } 1030 1031 1031 1032 static int tb_dp_activate(struct tb_tunnel *tunnel, bool active)
+2
drivers/thunderbolt/tunnel.h
··· 63 63 * @allocated_down: Allocated downstream bandwidth (only for USB3) 64 64 * @bw_mode: DP bandwidth allocation mode registers can be used to 65 65 * determine consumed and allocated bandwidth 66 + * @dprx_started: DPRX negotiation was started (tb_dp_dprx_start() was called for it) 66 67 * @dprx_canceled: Was DPRX capabilities read poll canceled 67 68 * @dprx_timeout: If set DPRX capabilities read poll work will timeout after this passes 68 69 * @dprx_work: Worker that is scheduled to poll completion of DPRX capabilities read ··· 101 100 int allocated_up; 102 101 int allocated_down; 103 102 bool bw_mode; 103 + bool dprx_started; 104 104 bool dprx_canceled; 105 105 ktime_t dprx_timeout; 106 106 struct delayed_work dprx_work;
+14
drivers/usb/serial/ftdi_sio.c
··· 1079 1079 .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 1080 1080 /* GMC devices */ 1081 1081 { USB_DEVICE(GMC_VID, GMC_Z216C_PID) }, 1082 + /* Altera USB Blaster 3 */ 1083 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6022_PID, 1) }, 1084 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6025_PID, 2) }, 1085 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6026_PID, 2) }, 1086 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6026_PID, 3) }, 1087 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6029_PID, 2) }, 1088 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602A_PID, 2) }, 1089 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602A_PID, 3) }, 1090 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602C_PID, 1) }, 1091 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602D_PID, 1) }, 1092 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602D_PID, 2) }, 1093 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 1) }, 1094 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 2) }, 1095 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 3) }, 1082 1096 { } /* Terminating entry */ 1083 1097 }; 1084 1098
+13
drivers/usb/serial/ftdi_sio_ids.h
··· 1612 1612 */ 1613 1613 #define GMC_VID 0x1cd7 1614 1614 #define GMC_Z216C_PID 0x0217 /* GMC Z216C Adapter IR-USB */ 1615 + 1616 + /* 1617 + * Altera USB Blaster 3 (http://www.altera.com). 1618 + */ 1619 + #define ALTERA_VID 0x09fb 1620 + #define ALTERA_UB3_6022_PID 0x6022 1621 + #define ALTERA_UB3_6025_PID 0x6025 1622 + #define ALTERA_UB3_6026_PID 0x6026 1623 + #define ALTERA_UB3_6029_PID 0x6029 1624 + #define ALTERA_UB3_602A_PID 0x602a 1625 + #define ALTERA_UB3_602C_PID 0x602c 1626 + #define ALTERA_UB3_602D_PID 0x602d 1627 + #define ALTERA_UB3_602E_PID 0x602e
+32 -16
drivers/usb/serial/option.c
··· 1368 1368 .driver_info = NCTRL(0) | RSVD(1) }, 1369 1369 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff), /* Telit FN990A (PCIe) */ 1370 1370 .driver_info = RSVD(0) }, 1371 - { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff), /* Telit FE990 (rmnet) */ 1371 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff), /* Telit FE990A (rmnet) */ 1372 1372 .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, 1373 - { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1081, 0xff), /* Telit FE990 (MBIM) */ 1373 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1081, 0xff), /* Telit FE990A (MBIM) */ 1374 1374 .driver_info = NCTRL(0) | RSVD(1) }, 1375 - { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1082, 0xff), /* Telit FE990 (RNDIS) */ 1375 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1082, 0xff), /* Telit FE990A (RNDIS) */ 1376 1376 .driver_info = NCTRL(2) | RSVD(3) }, 1377 - { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1083, 0xff), /* Telit FE990 (ECM) */ 1377 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1083, 0xff), /* Telit FE990A (ECM) */ 1378 1378 .driver_info = NCTRL(0) | RSVD(1) }, 1379 1379 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a0, 0xff), /* Telit FN20C04 (rmnet) */ 1380 1380 .driver_info = RSVD(0) | NCTRL(3) }, ··· 1388 1388 .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) }, 1389 1389 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10aa, 0xff), /* Telit FN920C04 (MBIM) */ 1390 1390 .driver_info = NCTRL(3) | RSVD(4) | RSVD(5) }, 1391 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b0, 0xff, 0xff, 0x30), /* Telit FE990B (rmnet) */ 1392 + .driver_info = NCTRL(5) }, 1393 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b0, 0xff, 0xff, 0x40) }, 1394 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b0, 0xff, 0xff, 0x60) }, 1395 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b1, 0xff, 0xff, 0x30), /* Telit FE990B (MBIM) */ 1396 + .driver_info = NCTRL(6) }, 1397 + { 
USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b1, 0xff, 0xff, 0x40) }, 1398 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b1, 0xff, 0xff, 0x60) }, 1399 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b2, 0xff, 0xff, 0x30), /* Telit FE990B (RNDIS) */ 1400 + .driver_info = NCTRL(6) }, 1401 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b2, 0xff, 0xff, 0x40) }, 1402 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b2, 0xff, 0xff, 0x60) }, 1403 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b3, 0xff, 0xff, 0x30), /* Telit FE990B (ECM) */ 1404 + .driver_info = NCTRL(6) }, 1405 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b3, 0xff, 0xff, 0x40) }, 1406 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b3, 0xff, 0xff, 0x60) }, 1391 1407 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c0, 0xff), /* Telit FE910C04 (rmnet) */ 1392 1408 .driver_info = RSVD(0) | NCTRL(3) }, 1393 1409 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c4, 0xff), /* Telit FE910C04 (rmnet) */ 1394 1410 .driver_info = RSVD(0) | NCTRL(3) }, 1395 1411 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c8, 0xff), /* Telit FE910C04 (rmnet) */ 1396 1412 .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) }, 1397 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x60) }, /* Telit FN990B (rmnet) */ 1398 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x40) }, 1399 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x30), 1413 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x30), /* Telit FN990B (rmnet) */ 1400 1414 .driver_info = NCTRL(5) }, 1401 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x60) }, /* Telit FN990B (MBIM) */ 1402 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x40) }, 1403 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x30), 1415 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x40) }, 1416 + { 
USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x60) }, 1417 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x30), /* Telit FN990B (MBIM) */ 1404 1418 .driver_info = NCTRL(6) }, 1405 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x60) }, /* Telit FN990B (RNDIS) */ 1406 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x40) }, 1407 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x30), 1419 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x40) }, 1420 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x60) }, 1421 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d2, 0xff, 0xff, 0x30), /* Telit FN990B (RNDIS) */ 1408 1422 .driver_info = NCTRL(6) }, 1409 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x60) }, /* Telit FN990B (ECM) */ 1410 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x40) }, 1411 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x30), 1423 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d2, 0xff, 0xff, 0x40) }, 1424 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d2, 0xff, 0xff, 0x60) }, 1425 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d3, 0xff, 0xff, 0x30), /* Telit FN990B (ECM) */ 1412 1426 .driver_info = NCTRL(6) }, 1427 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d3, 0xff, 0xff, 0x40) }, 1428 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d3, 0xff, 0xff, 0x60) }, 1413 1429 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910), 1414 1430 .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) }, 1415 1431 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+4 -4
drivers/usb/typec/tcpm/tcpm.c
··· 5117 5117 */ 5118 5118 if (port->vbus_never_low) { 5119 5119 port->vbus_never_low = false; 5120 - tcpm_set_state(port, SNK_SOFT_RESET, 5121 - port->timings.sink_wait_cap_time); 5120 + upcoming_state = SNK_SOFT_RESET; 5122 5121 } else { 5123 5122 if (!port->self_powered) 5124 5123 upcoming_state = SNK_WAIT_CAPABILITIES_TIMEOUT; 5125 5124 else 5126 5125 upcoming_state = hard_reset_state(port); 5127 - tcpm_set_state(port, SNK_WAIT_CAPABILITIES_TIMEOUT, 5128 - port->timings.sink_wait_cap_time); 5129 5126 } 5127 + 5128 + tcpm_set_state(port, upcoming_state, 5129 + port->timings.sink_wait_cap_time); 5130 5130 break; 5131 5131 case SNK_WAIT_CAPABILITIES_TIMEOUT: 5132 5132 /*
+1 -1
fs/bcachefs/btree_io.c
··· 1186 1186 le64_to_cpu(i->journal_seq), 1187 1187 b->written, b->written + sectors, ptr_written); 1188 1188 1189 - b->written += sectors; 1189 + b->written = min(b->written + sectors, btree_sectors(c)); 1190 1190 1191 1191 if (blacklisted && !first) 1192 1192 continue;
+8
fs/bcachefs/btree_update.h
··· 126 126 127 127 int bch2_btree_insert_clone_trans(struct btree_trans *, enum btree_id, struct bkey_i *); 128 128 129 + int bch2_btree_write_buffer_insert_err(struct btree_trans *, 130 + enum btree_id, struct bkey_i *); 131 + 129 132 static inline int __must_check bch2_trans_update_buffered(struct btree_trans *trans, 130 133 enum btree_id btree, 131 134 struct bkey_i *k) 132 135 { 136 + if (unlikely(!btree_type_uses_write_buffer(btree))) { 137 + int ret = bch2_btree_write_buffer_insert_err(trans, btree, k); 138 + dump_stack(); 139 + return ret; 140 + } 133 141 /* 134 142 * Most updates skip the btree write buffer until journal replay is 135 143 * finished because synchronization with journal replay relies on having
+20 -1
fs/bcachefs/btree_write_buffer.c
··· 264 264 BUG_ON(wb->sorted.size < wb->flushing.keys.nr); 265 265 } 266 266 267 + int bch2_btree_write_buffer_insert_err(struct btree_trans *trans, 268 + enum btree_id btree, struct bkey_i *k) 269 + { 270 + struct bch_fs *c = trans->c; 271 + struct printbuf buf = PRINTBUF; 272 + 273 + prt_printf(&buf, "attempting to do write buffer update on non wb btree="); 274 + bch2_btree_id_to_text(&buf, btree); 275 + prt_str(&buf, "\n"); 276 + bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(k)); 277 + 278 + bch2_fs_inconsistent(c, "%s", buf.buf); 279 + printbuf_exit(&buf); 280 + return -EROFS; 281 + } 282 + 267 283 static int bch2_btree_write_buffer_flush_locked(struct btree_trans *trans) 268 284 { 269 285 struct bch_fs *c = trans->c; ··· 328 312 darray_for_each(wb->sorted, i) { 329 313 struct btree_write_buffered_key *k = &wb->flushing.keys.data[i->idx]; 330 314 331 - BUG_ON(!btree_type_uses_write_buffer(k->btree)); 315 + if (unlikely(!btree_type_uses_write_buffer(k->btree))) { 316 + ret = bch2_btree_write_buffer_insert_err(trans, k->btree, &k->k); 317 + goto err; 318 + } 332 319 333 320 for (struct wb_key_ref *n = i + 1; n < min(i + 4, &darray_top(wb->sorted)); n++) 334 321 prefetch(&wb->flushing.keys.data[n->idx]);
+1 -1
fs/bcachefs/extents.c
··· 99 99 100 100 /* Pick at random, biased in favor of the faster device: */ 101 101 102 - return bch2_rand_range(l1 + l2) > l1; 102 + return bch2_get_random_u64_below(l1 + l2) > l1; 103 103 } 104 104 105 105 if (bch2_force_reconstruct_read)
+1
fs/bcachefs/inode.c
··· 1198 1198 opts->_name##_from_inode = true; \ 1199 1199 } else { \ 1200 1200 opts->_name = c->opts._name; \ 1201 + opts->_name##_from_inode = false; \ 1201 1202 } 1202 1203 BCH_INODE_OPTS() 1203 1204 #undef x
+12 -7
fs/bcachefs/io_read.c
··· 59 59 } 60 60 rcu_read_unlock(); 61 61 62 - return bch2_rand_range(nr * CONGESTED_MAX) < total; 62 + return get_random_u32_below(nr * CONGESTED_MAX) < total; 63 63 } 64 64 65 65 #else ··· 951 951 goto retry_pick; 952 952 } 953 953 954 - /* 955 - * Unlock the iterator while the btree node's lock is still in 956 - * cache, before doing the IO: 957 - */ 958 - bch2_trans_unlock(trans); 959 - 960 954 if (flags & BCH_READ_NODECODE) { 961 955 /* 962 956 * can happen if we retry, and the extent we were going to read ··· 1107 1113 trace_and_count(c, read_split, &orig->bio); 1108 1114 } 1109 1115 1116 + /* 1117 + * Unlock the iterator while the btree node's lock is still in 1118 + * cache, before doing the IO: 1119 + */ 1120 + if (!(flags & BCH_READ_IN_RETRY)) 1121 + bch2_trans_unlock(trans); 1122 + else 1123 + bch2_trans_unlock_long(trans); 1124 + 1110 1125 if (!rbio->pick.idx) { 1111 1126 if (unlikely(!rbio->have_ioref)) { 1112 1127 struct printbuf buf = PRINTBUF; ··· 1163 1160 if (likely(!(flags & BCH_READ_IN_RETRY))) { 1164 1161 return 0; 1165 1162 } else { 1163 + bch2_trans_unlock(trans); 1164 + 1166 1165 int ret; 1167 1166 1168 1167 rbio->context = RBIO_CONTEXT_UNBOUND;
+6 -5
fs/bcachefs/super.c
··· 1811 1811 goto err_late; 1812 1812 1813 1813 up_write(&c->state_lock); 1814 - return 0; 1814 + out: 1815 + printbuf_exit(&label); 1816 + printbuf_exit(&errbuf); 1817 + bch_err_fn(c, ret); 1818 + return ret; 1815 1819 1816 1820 err_unlock: 1817 1821 mutex_unlock(&c->sb_lock); ··· 1824 1820 if (ca) 1825 1821 bch2_dev_free(ca); 1826 1822 bch2_free_super(&sb); 1827 - printbuf_exit(&label); 1828 - printbuf_exit(&errbuf); 1829 - bch_err_fn(c, ret); 1830 - return ret; 1823 + goto out; 1831 1824 err_late: 1832 1825 up_write(&c->state_lock); 1833 1826 ca = NULL;
+15 -9
fs/bcachefs/util.c
··· 653 653 return 0; 654 654 } 655 655 656 - size_t bch2_rand_range(size_t max) 656 + u64 bch2_get_random_u64_below(u64 ceil) 657 657 { 658 - size_t rand; 658 + if (ceil <= U32_MAX) 659 + return __get_random_u32_below(ceil); 659 660 660 - if (!max) 661 - return 0; 661 + /* this is the same (clever) algorithm as in __get_random_u32_below() */ 662 + u64 rand = get_random_u64(); 663 + u64 mult = ceil * rand; 662 664 663 - do { 664 - rand = get_random_long(); 665 - rand &= roundup_pow_of_two(max) - 1; 666 - } while (rand >= max); 665 + if (unlikely(mult < ceil)) { 666 + u64 bound; 667 + div64_u64_rem(-ceil, ceil, &bound); 668 + while (unlikely(mult < bound)) { 669 + rand = get_random_u64(); 670 + mult = ceil * rand; 671 + } 672 + } 667 673 668 - return rand; 674 + return mul_u64_u64_shr(ceil, rand, 64); 669 675 } 670 676 671 677 void memcpy_to_bio(struct bio *dst, struct bvec_iter dst_iter, const void *src)
+1 -1
fs/bcachefs/util.h
··· 401 401 _ret; \ 402 402 }) 403 403 404 - size_t bch2_rand_range(size_t); 404 + u64 bch2_get_random_u64_below(u64); 405 405 406 406 void memcpy_to_bio(struct bio *, struct bvec_iter, const void *); 407 407 void memcpy_from_bio(void *, struct bio *, struct bvec_iter);
+49 -3
fs/efivarfs/super.c
··· 421 421 if (err) 422 422 size = 0; 423 423 424 - inode_lock(inode); 424 + inode_lock_nested(inode, I_MUTEX_CHILD); 425 425 i_size_write(inode, size); 426 426 inode_unlock(inode); 427 427 ··· 474 474 return err; 475 475 } 476 476 477 + static void efivarfs_deactivate_super_work(struct work_struct *work) 478 + { 479 + struct super_block *s = container_of(work, struct super_block, 480 + destroy_work); 481 + /* 482 + * note: here s->destroy_work is free for reuse (which 483 + * will happen in deactivate_super) 484 + */ 485 + deactivate_super(s); 486 + } 487 + 488 + static struct file_system_type efivarfs_type; 489 + 477 490 static int efivarfs_pm_notify(struct notifier_block *nb, unsigned long action, 478 491 void *ptr) 479 492 { 480 493 struct efivarfs_fs_info *sfi = container_of(nb, struct efivarfs_fs_info, 481 494 pm_nb); 482 - struct path path = { .mnt = NULL, .dentry = sfi->sb->s_root, }; 495 + struct path path; 483 496 struct efivarfs_ctx ectx = { 484 497 .ctx = { 485 498 .actor = efivarfs_actor, ··· 500 487 .sb = sfi->sb, 501 488 }; 502 489 struct file *file; 490 + struct super_block *s = sfi->sb; 503 491 static bool rescan_done = true; 504 492 505 493 if (action == PM_HIBERNATION_PREPARE) { ··· 513 499 if (rescan_done) 514 500 return NOTIFY_DONE; 515 501 502 + /* ensure single superblock is alive and pin it */ 503 + if (!atomic_inc_not_zero(&s->s_active)) 504 + return NOTIFY_DONE; 505 + 516 506 pr_info("efivarfs: resyncing variable state\n"); 517 507 518 - /* O_NOATIME is required to prevent oops on NULL mnt */ 508 + path.dentry = sfi->sb->s_root; 509 + 510 + /* 511 + * do not add SB_KERNMOUNT which a single superblock could 512 + * expose to userspace and which also causes MNT_INTERNAL, see 513 + * below 514 + */ 515 + path.mnt = vfs_kern_mount(&efivarfs_type, 0, 516 + efivarfs_type.name, NULL); 517 + if (IS_ERR(path.mnt)) { 518 + pr_err("efivarfs: internal mount failed\n"); 519 + /* 520 + * We may be the last pinner of the superblock but 521 + * calling 
efivarfs_kill_sb from within the notifier 522 + * here would deadlock trying to unregister it 523 + */ 524 + INIT_WORK(&s->destroy_work, efivarfs_deactivate_super_work); 525 + schedule_work(&s->destroy_work); 526 + return PTR_ERR(path.mnt); 527 + } 528 + 529 + /* path.mnt now has pin on superblock, so this must be above one */ 530 + atomic_dec(&s->s_active); 531 + 519 532 file = kernel_file_open(&path, O_RDONLY | O_DIRECTORY | O_NOATIME, 520 533 current_cred()); 534 + /* 535 + * safe even if last put because no MNT_INTERNAL means this 536 + * will do delayed deactivate_super and not deadlock 537 + */ 538 + mntput(path.mnt); 521 539 if (IS_ERR(file)) 522 540 return NOTIFY_DONE; 523 541
-3
fs/ext4/file.c
··· 756 756 return VM_FAULT_SIGBUS; 757 757 } 758 758 } else { 759 - result = filemap_fsnotify_fault(vmf); 760 - if (unlikely(result)) 761 - return result; 762 759 filemap_invalidate_lock_shared(mapping); 763 760 } 764 761 result = dax_iomap_fault(vmf, order, &pfn, &error, &ext4_iomap_ops);
+9 -1
fs/proc/generic.c
··· 559 559 return p; 560 560 } 561 561 562 - static inline void pde_set_flags(struct proc_dir_entry *pde) 562 + static void pde_set_flags(struct proc_dir_entry *pde) 563 563 { 564 564 if (pde->proc_ops->proc_flags & PROC_ENTRY_PERMANENT) 565 565 pde->flags |= PROC_ENTRY_PERMANENT; 566 + if (pde->proc_ops->proc_read_iter) 567 + pde->flags |= PROC_ENTRY_proc_read_iter; 568 + #ifdef CONFIG_COMPAT 569 + if (pde->proc_ops->proc_compat_ioctl) 570 + pde->flags |= PROC_ENTRY_proc_compat_ioctl; 571 + #endif 566 572 } 567 573 568 574 struct proc_dir_entry *proc_create_data(const char *name, umode_t mode, ··· 632 626 p->proc_ops = &proc_seq_ops; 633 627 p->seq_ops = ops; 634 628 p->state_size = state_size; 629 + pde_set_flags(p); 635 630 return proc_register(parent, p); 636 631 } 637 632 EXPORT_SYMBOL(proc_create_seq_private); ··· 663 656 return NULL; 664 657 p->proc_ops = &proc_single_ops; 665 658 p->single_show = show; 659 + pde_set_flags(p); 666 660 return proc_register(parent, p); 667 661 } 668 662 EXPORT_SYMBOL(proc_create_single_data);
+3 -3
fs/proc/inode.c
··· 656 656 657 657 if (S_ISREG(inode->i_mode)) { 658 658 inode->i_op = de->proc_iops; 659 - if (de->proc_ops->proc_read_iter) 659 + if (pde_has_proc_read_iter(de)) 660 660 inode->i_fop = &proc_iter_file_ops; 661 661 else 662 662 inode->i_fop = &proc_reg_file_ops; 663 663 #ifdef CONFIG_COMPAT 664 - if (de->proc_ops->proc_compat_ioctl) { 665 - if (de->proc_ops->proc_read_iter) 664 + if (pde_has_proc_compat_ioctl(de)) { 665 + if (pde_has_proc_read_iter(de)) 666 666 inode->i_fop = &proc_iter_file_ops_compat; 667 667 else 668 668 inode->i_fop = &proc_reg_file_ops_compat;
+14
fs/proc/internal.h
··· 85 85 pde->flags |= PROC_ENTRY_PERMANENT; 86 86 } 87 87 88 + static inline bool pde_has_proc_read_iter(const struct proc_dir_entry *pde) 89 + { 90 + return pde->flags & PROC_ENTRY_proc_read_iter; 91 + } 92 + 93 + static inline bool pde_has_proc_compat_ioctl(const struct proc_dir_entry *pde) 94 + { 95 + #ifdef CONFIG_COMPAT 96 + return pde->flags & PROC_ENTRY_proc_compat_ioctl; 97 + #else 98 + return false; 99 + #endif 100 + } 101 + 88 102 extern struct kmem_cache *proc_dir_entry_cache; 89 103 void pde_free(struct proc_dir_entry *pde); 90 104
+12 -4
fs/smb/client/connect.c
··· 1825 1825 struct smb3_fs_context *ctx, 1826 1826 bool match_super) 1827 1827 { 1828 - if (ctx->sectype != Unspecified && 1829 - ctx->sectype != ses->sectype) 1830 - return 0; 1828 + struct TCP_Server_Info *server = ses->server; 1829 + enum securityEnum ctx_sec, ses_sec; 1831 1830 1832 1831 if (!match_super && ctx->dfs_root_ses != ses->dfs_root_ses) 1833 1832 return 0; ··· 1838 1839 if (ses->chan_max < ctx->max_channels) 1839 1840 return 0; 1840 1841 1841 - switch (ses->sectype) { 1842 + ctx_sec = server->ops->select_sectype(server, ctx->sectype); 1843 + ses_sec = server->ops->select_sectype(server, ses->sectype); 1844 + 1845 + if (ctx_sec != ses_sec) 1846 + return 0; 1847 + 1848 + switch (ctx_sec) { 1849 + case IAKerb: 1842 1850 case Kerberos: 1843 1851 if (!uid_eq(ctx->cred_uid, ses->cred_uid)) 1844 1852 return 0; 1845 1853 break; 1854 + case NTLMv2: 1855 + case RawNTLMSSP: 1846 1856 default: 1847 1857 /* NULL username means anonymous session */ 1848 1858 if (ses->user_name == NULL) {
+11 -7
fs/smb/client/fs_context.c
··· 171 171 fsparam_string("username", Opt_user), 172 172 fsparam_string("pass", Opt_pass), 173 173 fsparam_string("password", Opt_pass), 174 + fsparam_string("pass2", Opt_pass2), 174 175 fsparam_string("password2", Opt_pass2), 175 176 fsparam_string("ip", Opt_ip), 176 177 fsparam_string("addr", Opt_ip), ··· 1132 1131 } else if (!strcmp("user", param->key) || !strcmp("username", param->key)) { 1133 1132 skip_parsing = true; 1134 1133 opt = Opt_user; 1134 + } else if (!strcmp("pass2", param->key) || !strcmp("password2", param->key)) { 1135 + skip_parsing = true; 1136 + opt = Opt_pass2; 1135 1137 } 1136 1138 } 1137 1139 ··· 1344 1340 } 1345 1341 break; 1346 1342 case Opt_acregmax: 1347 - ctx->acregmax = HZ * result.uint_32; 1348 - if (ctx->acregmax > CIFS_MAX_ACTIMEO) { 1343 + if (result.uint_32 > CIFS_MAX_ACTIMEO / HZ) { 1349 1344 cifs_errorf(fc, "acregmax too large\n"); 1350 1345 goto cifs_parse_mount_err; 1351 1346 } 1347 + ctx->acregmax = HZ * result.uint_32; 1352 1348 break; 1353 1349 case Opt_acdirmax: 1354 - ctx->acdirmax = HZ * result.uint_32; 1355 - if (ctx->acdirmax > CIFS_MAX_ACTIMEO) { 1350 + if (result.uint_32 > CIFS_MAX_ACTIMEO / HZ) { 1356 1351 cifs_errorf(fc, "acdirmax too large\n"); 1357 1352 goto cifs_parse_mount_err; 1358 1353 } 1354 + ctx->acdirmax = HZ * result.uint_32; 1359 1355 break; 1360 1356 case Opt_actimeo: 1361 - if (HZ * result.uint_32 > CIFS_MAX_ACTIMEO) { 1357 + if (result.uint_32 > CIFS_MAX_ACTIMEO / HZ) { 1362 1358 cifs_errorf(fc, "timeout too large\n"); 1363 1359 goto cifs_parse_mount_err; 1364 1360 } ··· 1370 1366 ctx->acdirmax = ctx->acregmax = HZ * result.uint_32; 1371 1367 break; 1372 1368 case Opt_closetimeo: 1373 - ctx->closetimeo = HZ * result.uint_32; 1374 - if (ctx->closetimeo > SMB3_MAX_DCLOSETIMEO) { 1369 + if (result.uint_32 > SMB3_MAX_DCLOSETIMEO / HZ) { 1375 1370 cifs_errorf(fc, "closetimeo too large\n"); 1376 1371 goto cifs_parse_mount_err; 1377 1372 } 1373 + ctx->closetimeo = HZ * result.uint_32; 1378 1374 break; 
1379 1375 case Opt_echo_interval: 1380 1376 ctx->echo_interval = result.uint_32;
+20
fs/smb/server/connection.c
··· 433 433 default_conn_ops.terminate_fn = ops->terminate_fn; 434 434 } 435 435 436 + void ksmbd_conn_r_count_inc(struct ksmbd_conn *conn) 437 + { 438 + atomic_inc(&conn->r_count); 439 + } 440 + 441 + void ksmbd_conn_r_count_dec(struct ksmbd_conn *conn) 442 + { 443 + /* 444 + * Checking waitqueue to dropping pending requests on 445 + * disconnection. waitqueue_active is safe because it 446 + * uses atomic operation for condition. 447 + */ 448 + atomic_inc(&conn->refcnt); 449 + if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q)) 450 + wake_up(&conn->r_count_q); 451 + 452 + if (atomic_dec_and_test(&conn->refcnt)) 453 + kfree(conn); 454 + } 455 + 436 456 int ksmbd_conn_transport_init(void) 437 457 { 438 458 int ret;
+2
fs/smb/server/connection.h
··· 168 168 void ksmbd_conn_transport_destroy(void); 169 169 void ksmbd_conn_lock(struct ksmbd_conn *conn); 170 170 void ksmbd_conn_unlock(struct ksmbd_conn *conn); 171 + void ksmbd_conn_r_count_inc(struct ksmbd_conn *conn); 172 + void ksmbd_conn_r_count_dec(struct ksmbd_conn *conn); 171 173 172 174 /* 173 175 * WARNING
-3
fs/smb/server/ksmbd_work.c
··· 26 26 INIT_LIST_HEAD(&work->request_entry); 27 27 INIT_LIST_HEAD(&work->async_request_entry); 28 28 INIT_LIST_HEAD(&work->fp_entry); 29 - INIT_LIST_HEAD(&work->interim_entry); 30 29 INIT_LIST_HEAD(&work->aux_read_list); 31 30 work->iov_alloc_cnt = 4; 32 31 work->iov = kcalloc(work->iov_alloc_cnt, sizeof(struct kvec), ··· 55 56 kfree(work->tr_buf); 56 57 kvfree(work->request_buf); 57 58 kfree(work->iov); 58 - if (!list_empty(&work->interim_entry)) 59 - list_del(&work->interim_entry); 60 59 61 60 if (work->async_id) 62 61 ksmbd_release_id(&work->conn->async_ida, work->async_id);
-1
fs/smb/server/ksmbd_work.h
··· 89 89 /* List head at conn->async_requests */ 90 90 struct list_head async_request_entry; 91 91 struct list_head fp_entry; 92 - struct list_head interim_entry; 93 92 }; 94 93 95 94 /**
+21 -22
fs/smb/server/oplock.c
··· 46 46 opinfo->fid = id; 47 47 opinfo->Tid = Tid; 48 48 INIT_LIST_HEAD(&opinfo->op_entry); 49 - INIT_LIST_HEAD(&opinfo->interim_list); 50 49 init_waitqueue_head(&opinfo->oplock_q); 51 50 init_waitqueue_head(&opinfo->oplock_brk); 52 51 atomic_set(&opinfo->refcount, 1); ··· 634 635 { 635 636 struct smb2_oplock_break *rsp = NULL; 636 637 struct ksmbd_work *work = container_of(wk, struct ksmbd_work, work); 638 + struct ksmbd_conn *conn = work->conn; 637 639 struct oplock_break_info *br_info = work->request_buf; 638 640 struct smb2_hdr *rsp_hdr; 639 641 struct ksmbd_file *fp; ··· 690 690 691 691 out: 692 692 ksmbd_free_work_struct(work); 693 + ksmbd_conn_r_count_dec(conn); 693 694 } 694 695 695 696 /** ··· 725 724 work->sess = opinfo->sess; 726 725 727 726 if (opinfo->op_state == OPLOCK_ACK_WAIT) { 727 + ksmbd_conn_r_count_inc(conn); 728 728 INIT_WORK(&work->work, __smb2_oplock_break_noti); 729 729 ksmbd_queue_work(work); 730 730 ··· 747 745 { 748 746 struct smb2_lease_break *rsp = NULL; 749 747 struct ksmbd_work *work = container_of(wk, struct ksmbd_work, work); 748 + struct ksmbd_conn *conn = work->conn; 750 749 struct lease_break_info *br_info = work->request_buf; 751 750 struct smb2_hdr *rsp_hdr; 752 751 ··· 794 791 795 792 out: 796 793 ksmbd_free_work_struct(work); 794 + ksmbd_conn_r_count_dec(conn); 797 795 } 798 796 799 797 /** ··· 807 803 static int smb2_lease_break_noti(struct oplock_info *opinfo) 808 804 { 809 805 struct ksmbd_conn *conn = opinfo->conn; 810 - struct list_head *tmp, *t; 811 806 struct ksmbd_work *work; 812 807 struct lease_break_info *br_info; 813 808 struct lease *lease = opinfo->o_lease; ··· 834 831 work->sess = opinfo->sess; 835 832 836 833 if (opinfo->op_state == OPLOCK_ACK_WAIT) { 837 - list_for_each_safe(tmp, t, &opinfo->interim_list) { 838 - struct ksmbd_work *in_work; 839 - 840 - in_work = list_entry(tmp, struct ksmbd_work, 841 - interim_entry); 842 - setup_async_work(in_work, NULL, NULL); 843 - smb2_send_interim_resp(in_work, 
STATUS_PENDING); 844 - list_del_init(&in_work->interim_entry); 845 - release_async_work(in_work); 846 - } 834 + ksmbd_conn_r_count_inc(conn); 847 835 INIT_WORK(&work->work, __smb2_lease_break_noti); 848 836 ksmbd_queue_work(work); 849 837 wait_for_break_ack(opinfo); ··· 865 871 } 866 872 } 867 873 868 - static int oplock_break(struct oplock_info *brk_opinfo, int req_op_level) 874 + static int oplock_break(struct oplock_info *brk_opinfo, int req_op_level, 875 + struct ksmbd_work *in_work) 869 876 { 870 877 int err = 0; 871 878 ··· 909 914 } 910 915 911 916 if (lease->state & (SMB2_LEASE_WRITE_CACHING_LE | 912 - SMB2_LEASE_HANDLE_CACHING_LE)) 917 + SMB2_LEASE_HANDLE_CACHING_LE)) { 918 + if (in_work) { 919 + setup_async_work(in_work, NULL, NULL); 920 + smb2_send_interim_resp(in_work, STATUS_PENDING); 921 + release_async_work(in_work); 922 + } 923 + 913 924 brk_opinfo->op_state = OPLOCK_ACK_WAIT; 914 - else 925 + } else 915 926 atomic_dec(&brk_opinfo->breaking_cnt); 916 927 } else { 917 928 err = oplock_break_pending(brk_opinfo, req_op_level); ··· 1117 1116 if (ksmbd_conn_releasing(opinfo->conn)) 1118 1117 continue; 1119 1118 1120 - oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE); 1119 + oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE, NULL); 1121 1120 opinfo_put(opinfo); 1122 1121 } 1123 1122 } ··· 1153 1152 1154 1153 if (ksmbd_conn_releasing(opinfo->conn)) 1155 1154 continue; 1156 - oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE); 1155 + oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE, NULL); 1157 1156 opinfo_put(opinfo); 1158 1157 } 1159 1158 } ··· 1253 1252 goto op_break_not_needed; 1254 1253 } 1255 1254 1256 - list_add(&work->interim_entry, &prev_opinfo->interim_list); 1257 - err = oplock_break(prev_opinfo, SMB2_OPLOCK_LEVEL_II); 1255 + err = oplock_break(prev_opinfo, SMB2_OPLOCK_LEVEL_II, work); 1258 1256 opinfo_put(prev_opinfo); 1259 1257 if (err == -ENOENT) 1260 1258 goto set_lev; ··· 1322 1322 } 1323 1323 1324 1324 brk_opinfo->open_trunc = is_trunc; 1325 - 
list_add(&work->interim_entry, &brk_opinfo->interim_list); 1326 - oplock_break(brk_opinfo, SMB2_OPLOCK_LEVEL_II); 1325 + oplock_break(brk_opinfo, SMB2_OPLOCK_LEVEL_II, work); 1327 1326 opinfo_put(brk_opinfo); 1328 1327 } 1329 1328 ··· 1385 1386 SMB2_LEASE_KEY_SIZE)) 1386 1387 goto next; 1387 1388 brk_op->open_trunc = is_trunc; 1388 - oplock_break(brk_op, SMB2_OPLOCK_LEVEL_NONE); 1389 + oplock_break(brk_op, SMB2_OPLOCK_LEVEL_NONE, NULL); 1389 1390 next: 1390 1391 opinfo_put(brk_op); 1391 1392 rcu_read_lock();
-1
fs/smb/server/oplock.h
··· 67 67 bool is_lease; 68 68 bool open_trunc; /* truncate on open */ 69 69 struct lease *o_lease; 70 - struct list_head interim_list; 71 70 struct list_head op_entry; 72 71 struct list_head lease_entry; 73 72 wait_queue_head_t oplock_q; /* Other server threads */
+2 -12
fs/smb/server/server.c
··· 270 270 271 271 ksmbd_conn_try_dequeue_request(work); 272 272 ksmbd_free_work_struct(work); 273 - /* 274 - * Checking waitqueue to dropping pending requests on 275 - * disconnection. waitqueue_active is safe because it 276 - * uses atomic operation for condition. 277 - */ 278 - atomic_inc(&conn->refcnt); 279 - if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q)) 280 - wake_up(&conn->r_count_q); 281 - 282 - if (atomic_dec_and_test(&conn->refcnt)) 283 - kfree(conn); 273 + ksmbd_conn_r_count_dec(conn); 284 274 } 285 275 286 276 /** ··· 300 310 conn->request_buf = NULL; 301 311 302 312 ksmbd_conn_enqueue_request(work); 303 - atomic_inc(&conn->r_count); 313 + ksmbd_conn_r_count_inc(conn); 304 314 /* update activity on connection */ 305 315 conn->last_active = jiffies; 306 316 INIT_WORK(&work->work, handle_ksmbd_work);
+1 -1
fs/squashfs/cache.c
··· 198 198 { 199 199 int i, j; 200 200 201 - if (cache == NULL) 201 + if (IS_ERR(cache) || cache == NULL) 202 202 return; 203 203 204 204 for (i = 0; i < cache->entries; i++) {
+3 -5
fs/xfs/libxfs/xfs_alloc.c
··· 33 33 34 34 struct workqueue_struct *xfs_alloc_wq; 35 35 36 - #define XFS_ABSDIFF(a,b) (((a) <= (b)) ? ((b) - (a)) : ((a) - (b))) 37 - 38 36 #define XFSA_FIXUP_BNO_OK 1 39 37 #define XFSA_FIXUP_CNT_OK 2 40 38 ··· 408 410 if (newbno1 != NULLAGBLOCK && newbno2 != NULLAGBLOCK) { 409 411 if (newlen1 < newlen2 || 410 412 (newlen1 == newlen2 && 411 - XFS_ABSDIFF(newbno1, wantbno) > 412 - XFS_ABSDIFF(newbno2, wantbno))) 413 + abs_diff(newbno1, wantbno) > 414 + abs_diff(newbno2, wantbno))) 413 415 newbno1 = newbno2; 414 416 } else if (newbno2 != NULLAGBLOCK) 415 417 newbno1 = newbno2; ··· 425 427 } else 426 428 newbno1 = freeend - wantlen; 427 429 *newbnop = newbno1; 428 - return newbno1 == NULLAGBLOCK ? 0 : XFS_ABSDIFF(newbno1, wantbno); 430 + return newbno1 == NULLAGBLOCK ? 0 : abs_diff(newbno1, wantbno); 429 431 } 430 432 431 433 /*
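The xfs_alloc.c hunk above drops the private `XFS_ABSDIFF()` macro in favor of the kernel's generic `abs_diff()` helper. A minimal userspace sketch of the same overflow-safe pattern (the name `abs_diff_ul` is illustrative; this mirrors the idea, not the kernel's exact implementation):

```c
/*
 * Overflow-safe absolute difference for unsigned values: branch on
 * the comparison instead of computing (a - b) directly, which would
 * wrap around when a < b.
 */
static unsigned long abs_diff_ul(unsigned long a, unsigned long b)
{
	return a > b ? a - b : b - a;
}
```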
-13
fs/xfs/xfs_file.c
··· 1451 1451 1452 1452 trace_xfs_read_fault(ip, order); 1453 1453 1454 - ret = filemap_fsnotify_fault(vmf); 1455 - if (unlikely(ret)) 1456 - return ret; 1457 1454 xfs_ilock(ip, XFS_MMAPLOCK_SHARED); 1458 1455 ret = xfs_dax_fault_locked(vmf, order, false); 1459 1456 xfs_iunlock(ip, XFS_MMAPLOCK_SHARED); ··· 1479 1482 vm_fault_t ret; 1480 1483 1481 1484 trace_xfs_write_fault(ip, order); 1482 - /* 1483 - * Usually we get here from ->page_mkwrite callback but in case of DAX 1484 - * we will get here also for ordinary write fault. Handle HSM 1485 - * notifications for that case. 1486 - */ 1487 - if (IS_DAX(inode)) { 1488 - ret = filemap_fsnotify_fault(vmf); 1489 - if (unlikely(ret)) 1490 - return ret; 1491 - } 1492 1485 1493 1486 sb_start_pagefault(inode->i_sb); 1494 1487 file_update_time(vmf->vma->vm_file);
+12 -4
include/linux/blk-mq.h
··· 852 852 return rq->rq_flags & RQF_RESV; 853 853 } 854 854 855 - /* 855 + /** 856 + * blk_mq_add_to_batch() - add a request to the completion batch 857 + * @req: The request to add to batch 858 + * @iob: The batch to add the request to 859 + * @is_error: Specify true if the request failed with an error 860 + * @complete: The completion handler for the request 861 + * 856 862 * Batched completions only work when there is no I/O error and no special 857 863 * ->end_io handler. 864 + * 865 + * Return: true when the request was added to the batch, otherwise false 858 866 */ 859 867 static inline bool blk_mq_add_to_batch(struct request *req, 860 - struct io_comp_batch *iob, int ioerror, 868 + struct io_comp_batch *iob, bool is_error, 861 869 void (*complete)(struct io_comp_batch *)) 862 870 { ··· 873 865 * 1) No batch container 874 866 * 2) Has scheduler data attached 875 867 * 3) Not a passthrough request and end_io set 876 - * 4) Not a passthrough request and an ioerror 868 + * 4) Not a passthrough request and failed with an error 877 869 */ 878 870 if (!iob) 879 871 return false; ··· 882 874 if (!blk_rq_is_passthrough(req)) { 883 875 if (req->end_io) 884 876 return false; 885 - if (ioerror < 0) 877 + if (is_error) 886 878 return false; 887 879 } 888 880
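The kerneldoc above enumerates the cases in which a request must be completed individually rather than batched. As a hedged sketch of just that gating logic (toy struct and field names for illustration only, not the kernel's `struct request`):

```c
#include <stdbool.h>

/*
 * Toy model of the batching conditions described above: no batch
 * container, or a non-passthrough request that either carries its
 * own ->end_io handler or failed with an error, cannot be batched.
 */
struct toy_req {
	bool passthrough;	/* passthrough requests skip the checks */
	bool has_end_io;	/* request has a private completion handler */
	bool is_error;		/* request failed with an error */
};

static bool can_batch(const struct toy_req *rq, bool have_batch)
{
	if (!have_batch)
		return false;
	if (!rq->passthrough && (rq->has_end_io || rq->is_error))
		return false;
	return true;
}
```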
+1 -1
include/linux/cleanup.h
··· 212 212 { return val; } 213 213 214 214 #define no_free_ptr(p) \ 215 - ((typeof(p)) __must_check_fn(__get_and_null(p, NULL))) 215 + ((typeof(p)) __must_check_fn((__force const volatile void *)__get_and_null(p, NULL))) 216 216 217 217 #define return_ptr(p) return no_free_ptr(p) 218 218
+5
include/linux/damon.h
··· 470 470 unsigned long next_apply_sis; 471 471 /* informs if ongoing DAMOS walk for this scheme is finished */ 472 472 bool walk_completed; 473 + /* 474 + * If the current region in the filtering stage is allowed by core 475 + * layer-handled filters. If true, operations layer allows it, too. 476 + */ 477 + bool core_filters_allowed; 473 478 /* public: */ 474 479 struct damos_quota quota; 475 480 struct damos_watermarks wmarks;
+21
include/linux/fsnotify.h
··· 171 171 } 172 172 173 173 /* 174 + * fsnotify_mmap_perm - permission hook before mmap of file range 175 + */ 176 + static inline int fsnotify_mmap_perm(struct file *file, int prot, 177 + const loff_t off, size_t len) 178 + { 179 + /* 180 + * mmap() generates only pre-content events. 181 + */ 182 + if (!file || likely(!FMODE_FSNOTIFY_HSM(file->f_mode))) 183 + return 0; 184 + 185 + return fsnotify_pre_content(&file->f_path, &off, len); 186 + } 187 + 188 + /* 174 189 * fsnotify_truncate_perm - permission hook before file truncate 175 190 */ 176 191 static inline int fsnotify_truncate_perm(const struct path *path, loff_t length) ··· 234 219 235 220 static inline int fsnotify_file_area_perm(struct file *file, int perm_mask, 236 221 const loff_t *ppos, size_t count) 222 + { 223 + return 0; 224 + } 225 + 226 + static inline int fsnotify_mmap_perm(struct file *file, int prot, 227 + const loff_t off, size_t len) 237 228 { 238 229 return 0; 239 230 }
+2
include/linux/libata.h
··· 88 88 __ATA_QUIRK_MAX_SEC_1024, /* Limit max sects to 1024 */ 89 89 __ATA_QUIRK_MAX_TRIM_128M, /* Limit max trim size to 128M */ 90 90 __ATA_QUIRK_NO_NCQ_ON_ATI, /* Disable NCQ on ATI chipset */ 91 + __ATA_QUIRK_NO_LPM_ON_ATI, /* Disable LPM on ATI chipset */ 91 92 __ATA_QUIRK_NO_ID_DEV_LOG, /* Identify device log missing */ 92 93 __ATA_QUIRK_NO_LOG_DIR, /* Do not read log directory */ 93 94 __ATA_QUIRK_NO_FUA, /* Do not use FUA */ ··· 433 432 ATA_QUIRK_MAX_SEC_1024 = (1U << __ATA_QUIRK_MAX_SEC_1024), 434 433 ATA_QUIRK_MAX_TRIM_128M = (1U << __ATA_QUIRK_MAX_TRIM_128M), 435 434 ATA_QUIRK_NO_NCQ_ON_ATI = (1U << __ATA_QUIRK_NO_NCQ_ON_ATI), 435 + ATA_QUIRK_NO_LPM_ON_ATI = (1U << __ATA_QUIRK_NO_LPM_ON_ATI), 436 436 ATA_QUIRK_NO_ID_DEV_LOG = (1U << __ATA_QUIRK_NO_ID_DEV_LOG), 437 437 ATA_QUIRK_NO_LOG_DIR = (1U << __ATA_QUIRK_NO_LOG_DIR), 438 438 ATA_QUIRK_NO_FUA = (1U << __ATA_QUIRK_NO_FUA),
+7 -2
include/linux/mm.h
··· 1458 1458 1459 1459 static inline void get_page(struct page *page) 1460 1460 { 1461 - folio_get(page_folio(page)); 1461 + struct folio *folio = page_folio(page); 1462 + if (WARN_ON_ONCE(folio_test_slab(folio))) 1463 + return; 1464 + folio_get(folio); 1462 1465 } 1463 1466 1464 1467 static inline __must_check bool try_get_page(struct page *page) ··· 1554 1551 static inline void put_page(struct page *page) 1555 1552 { 1556 1553 struct folio *folio = page_folio(page); 1554 + 1555 + if (folio_test_slab(folio)) 1556 + return; 1557 1557 1558 1558 /* 1559 1559 * For some devmap managed pages we need to catch refcount transition ··· 3426 3420 extern vm_fault_t filemap_map_pages(struct vm_fault *vmf, 3427 3421 pgoff_t start_pgoff, pgoff_t end_pgoff); 3428 3422 extern vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf); 3429 - extern vm_fault_t filemap_fsnotify_fault(struct vm_fault *vmf); 3430 3423 3431 3424 extern unsigned long stack_guard_gap; 3432 3425 /* Generic expand stack which grows the stack according to GROWS{UP,DOWN} */
+5 -2
include/linux/proc_fs.h
··· 20 20 * If in doubt, ignore this flag. 21 21 */ 22 22 #ifdef MODULE 23 - PROC_ENTRY_PERMANENT = 0U, 23 + PROC_ENTRY_PERMANENT = 0U, 24 24 #else 25 - PROC_ENTRY_PERMANENT = 1U << 0, 25 + PROC_ENTRY_PERMANENT = 1U << 0, 26 26 #endif 27 + 28 + PROC_ENTRY_proc_read_iter = 1U << 1, 29 + PROC_ENTRY_proc_compat_ioctl = 1U << 2, 27 30 }; 28 31 29 32 struct proc_ops {
+2 -2
include/linux/swap_cgroup.h
··· 6 6 7 7 #if defined(CONFIG_MEMCG) && defined(CONFIG_SWAP) 8 8 9 - extern void swap_cgroup_record(struct folio *folio, swp_entry_t ent); 9 + extern void swap_cgroup_record(struct folio *folio, unsigned short id, swp_entry_t ent); 10 10 extern unsigned short swap_cgroup_clear(swp_entry_t ent, unsigned int nr_ents); 11 11 extern unsigned short lookup_swap_cgroup_id(swp_entry_t ent); 12 12 extern int swap_cgroup_swapon(int type, unsigned long max_pages); ··· 15 15 #else 16 16 17 17 static inline 18 - void swap_cgroup_record(struct folio *folio, swp_entry_t ent) 18 + void swap_cgroup_record(struct folio *folio, unsigned short id, swp_entry_t ent) 19 19 { 20 20 } 21 21
+1 -1
include/net/bluetooth/hci.h
··· 683 683 #define HCI_ERROR_REMOTE_POWER_OFF 0x15 684 684 #define HCI_ERROR_LOCAL_HOST_TERM 0x16 685 685 #define HCI_ERROR_PAIRING_NOT_ALLOWED 0x18 686 - #define HCI_ERROR_UNSUPPORTED_REMOTE_FEATURE 0x1e 686 + #define HCI_ERROR_UNSUPPORTED_REMOTE_FEATURE 0x1a 687 687 #define HCI_ERROR_INVALID_LL_PARAMS 0x1e 688 688 #define HCI_ERROR_UNSPECIFIED 0x1f 689 689 #define HCI_ERROR_ADVERTISING_TIMEOUT 0x3c
+7 -4
include/net/mana/gdma.h
··· 408 408 struct gdma_dev mana_ib; 409 409 }; 410 410 411 - #define MAX_NUM_GDMA_DEVICES 4 412 - 413 411 static inline bool mana_gd_is_mana(struct gdma_dev *gd) 414 412 { 415 413 return gd->dev_id.type == GDMA_DEVICE_MANA; ··· 554 556 #define GDMA_DRV_CAP_FLAG_1_HWC_TIMEOUT_RECONFIG BIT(3) 555 557 #define GDMA_DRV_CAP_FLAG_1_VARIABLE_INDIRECTION_TABLE_SUPPORT BIT(5) 556 558 559 + /* Driver can handle holes (zeros) in the device list */ 560 + #define GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP BIT(11) 561 + 557 562 #define GDMA_DRV_CAP_FLAGS1 \ 558 563 (GDMA_DRV_CAP_FLAG_1_EQ_SHARING_MULTI_VPORT | \ 559 564 GDMA_DRV_CAP_FLAG_1_NAPI_WKDONE_FIX | \ 560 565 GDMA_DRV_CAP_FLAG_1_HWC_TIMEOUT_RECONFIG | \ 561 - GDMA_DRV_CAP_FLAG_1_VARIABLE_INDIRECTION_TABLE_SUPPORT) 566 + GDMA_DRV_CAP_FLAG_1_VARIABLE_INDIRECTION_TABLE_SUPPORT | \ 567 + GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP) 562 568 563 569 #define GDMA_DRV_CAP_FLAGS2 0 564 570 ··· 623 621 }; /* HW DATA */ 624 622 625 623 /* GDMA_LIST_DEVICES */ 624 + #define GDMA_DEV_LIST_SIZE 64 626 625 struct gdma_list_devices_resp { 627 626 struct gdma_resp_hdr hdr; 628 627 u32 num_of_devs; 629 628 u32 reserved; 630 - struct gdma_dev_id devs[64]; 629 + struct gdma_dev_id devs[GDMA_DEV_LIST_SIZE]; 631 630 }; /* HW DATA */ 632 631 633 632 /* GDMA_REGISTER_DEVICE */
+4 -1
include/sound/soc.h
··· 1261 1261 1262 1262 /* mixer control */ 1263 1263 struct soc_mixer_control { 1264 - int min, max, platform_max; 1264 + /* Minimum and maximum specified as written to the hardware */ 1265 + int min, max; 1266 + /* Limited maximum value specified as presented through the control */ 1267 + int platform_max; 1265 1268 int reg, rreg; 1266 1269 unsigned int shift, rshift; 1267 1270 unsigned int sign_bit;
+1 -1
init/Kconfig
··· 1973 1973 depends on !MODVERSIONS || GENDWARFKSYMS 1974 1974 depends on !GCC_PLUGIN_RANDSTRUCT 1975 1975 depends on !RANDSTRUCT 1976 - depends on !DEBUG_INFO_BTF || PAHOLE_HAS_LANG_EXCLUDE 1976 + depends on !DEBUG_INFO_BTF || (PAHOLE_HAS_LANG_EXCLUDE && !LTO) 1977 1977 depends on !CFI_CLANG || HAVE_CFI_ICALL_NORMALIZE_INTEGERS_RUSTC 1978 1978 select CFI_ICALL_NORMALIZE_INTEGERS if CFI_CLANG 1979 1979 depends on !CALL_PADDING || RUSTC_VERSION >= 108100
+2 -2
kernel/locking/rtmutex_common.h
··· 59 59 }; 60 60 61 61 /** 62 - * rt_wake_q_head - Wrapper around regular wake_q_head to support 63 - * "sleeping" spinlocks on RT 62 + * struct rt_wake_q_head - Wrapper around regular wake_q_head to support 63 + * "sleeping" spinlocks on RT 64 64 * @head: The regular wake_q_head for sleeping lock variants 65 65 * @rtlock_task: Task pointer for RT lock (spin/rwlock) wakeups 66 66 */
+9 -4
kernel/locking/semaphore.c
··· 29 29 #include <linux/export.h> 30 30 #include <linux/sched.h> 31 31 #include <linux/sched/debug.h> 32 + #include <linux/sched/wake_q.h> 32 33 #include <linux/semaphore.h> 33 34 #include <linux/spinlock.h> 34 35 #include <linux/ftrace.h> ··· 39 38 static noinline int __down_interruptible(struct semaphore *sem); 40 39 static noinline int __down_killable(struct semaphore *sem); 41 40 static noinline int __down_timeout(struct semaphore *sem, long timeout); 42 - static noinline void __up(struct semaphore *sem); 41 + static noinline void __up(struct semaphore *sem, struct wake_q_head *wake_q); 43 42 44 43 /** 45 44 * down - acquire the semaphore ··· 184 183 void __sched up(struct semaphore *sem) 185 184 { 186 185 unsigned long flags; 186 + DEFINE_WAKE_Q(wake_q); 187 187 188 188 raw_spin_lock_irqsave(&sem->lock, flags); 189 189 if (likely(list_empty(&sem->wait_list))) 190 190 sem->count++; 191 191 else 192 - __up(sem); 192 + __up(sem, &wake_q); 193 193 raw_spin_unlock_irqrestore(&sem->lock, flags); 194 + if (!wake_q_empty(&wake_q)) 195 + wake_up_q(&wake_q); 194 196 } 195 197 EXPORT_SYMBOL(up); 196 198 ··· 273 269 return __down_common(sem, TASK_UNINTERRUPTIBLE, timeout); 274 270 } 275 271 276 - static noinline void __sched __up(struct semaphore *sem) 272 + static noinline void __sched __up(struct semaphore *sem, 273 + struct wake_q_head *wake_q) 277 274 { 278 275 struct semaphore_waiter *waiter = list_first_entry(&sem->wait_list, 279 276 struct semaphore_waiter, list); 280 277 list_del(&waiter->list); 281 278 waiter->up = true; 282 - wake_up_process(waiter->task); 279 + wake_q_add(wake_q, waiter->task); 283 280 }
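The semaphore.c change above defers wakeups: `__up()` now records the waiter on a wake_q instead of calling `wake_up_process()` under `sem->lock`, and `up()` performs the wakeups only after the raw spinlock is released, so a woken task cannot immediately contend on the lock still held by the waker. A toy userspace analogue of the record-then-wake pattern (names and the fixed-size queue are illustrative, not the kernel's wake_q API):

```c
#include <stddef.h>

/* Record wakeups while "holding the lock"; perform them after unlock. */
#define TOY_WAKE_Q_MAX 8

struct toy_wake_q {
	size_t n;
	int task_ids[TOY_WAKE_Q_MAX];
};

static void toy_wake_q_add(struct toy_wake_q *q, int task_id)
{
	if (q->n < TOY_WAKE_Q_MAX)
		q->task_ids[q->n++] = task_id;
}

/*
 * Meant to be called only after the lock protecting the wait list is
 * dropped; returns how many recorded tasks were "woken" and leaves
 * the queue empty for reuse.
 */
static size_t toy_wake_up_q(struct toy_wake_q *q)
{
	size_t woken = q->n;

	q->n = 0;
	return woken;
}
```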
+4 -4
kernel/sched/cputime.c
··· 9 9 10 10 #ifdef CONFIG_IRQ_TIME_ACCOUNTING 11 11 12 - DEFINE_STATIC_KEY_FALSE(sched_clock_irqtime); 13 - 14 12 /* 15 13 * There are no locks covering percpu hardirq/softirq time. 16 14 * They are only modified in vtime_account, on corresponding CPU ··· 22 24 */ 23 25 DEFINE_PER_CPU(struct irqtime, cpu_irqtime); 24 26 27 + int sched_clock_irqtime; 28 + 25 29 void enable_sched_clock_irqtime(void) 26 30 { 27 - static_branch_enable(&sched_clock_irqtime); 31 + sched_clock_irqtime = 1; 28 32 } 29 33 30 34 void disable_sched_clock_irqtime(void) 31 35 { 32 - static_branch_disable(&sched_clock_irqtime); 36 + sched_clock_irqtime = 0; 33 37 } 34 38 35 39 static void irqtime_account_delta(struct irqtime *irqtime, u64 delta,
+2 -2
kernel/sched/sched.h
··· 3259 3259 }; 3260 3260 3261 3261 DECLARE_PER_CPU(struct irqtime, cpu_irqtime); 3262 - DECLARE_STATIC_KEY_FALSE(sched_clock_irqtime); 3262 + extern int sched_clock_irqtime; 3263 3263 3264 3264 static inline int irqtime_enabled(void) 3265 3265 { 3266 - return static_branch_likely(&sched_clock_irqtime); 3266 + return sched_clock_irqtime; 3267 3267 } 3268 3268 3269 3269 /*
+18 -6
kernel/trace/trace_events_hist.c
··· 5689 5689 guard(mutex)(&event_mutex); 5690 5690 5691 5691 event_file = event_file_data(file); 5692 - if (!event_file) 5693 - return -ENODEV; 5692 + if (!event_file) { 5693 + ret = -ENODEV; 5694 + goto err; 5695 + } 5694 5696 5695 5697 hist_file = kzalloc(sizeof(*hist_file), GFP_KERNEL); 5696 - if (!hist_file) 5697 - return -ENOMEM; 5698 + if (!hist_file) { 5699 + ret = -ENOMEM; 5700 + goto err; 5701 + } 5698 5702 5699 5703 hist_file->file = file; 5700 5704 hist_file->last_act = get_hist_hit_count(event_file); ··· 5706 5702 /* Clear private_data to avoid warning in single_open() */ 5707 5703 file->private_data = NULL; 5708 5704 ret = single_open(file, hist_show, hist_file); 5709 - if (ret) 5705 + if (ret) { 5710 5706 kfree(hist_file); 5707 + goto err; 5708 + } 5711 5709 5710 + return 0; 5711 + err: 5712 + tracing_release_file_tr(inode, file); 5712 5713 return ret; 5713 5714 } 5714 5715 ··· 5988 5979 5989 5980 /* Clear private_data to avoid warning in single_open() */ 5990 5981 file->private_data = NULL; 5991 - return single_open(file, hist_debug_show, file); 5982 + ret = single_open(file, hist_debug_show, file); 5983 + if (ret) 5984 + tracing_release_file_tr(inode, file); 5985 + return ret; 5992 5986 } 5993 5987 5994 5988 const struct file_operations event_hist_debug_fops = {
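The trace_events_hist.c fix above routes every failure in the open paths through a single `err:` label so the reference taken earlier is always released. The same single-exit goto-unwind idiom, as a toy sketch (the two fake "resources" and all names here are made up for illustration):

```c
/* Track two fake resources so the unwind can be observed. */
static int have_ref_a, have_ref_b;

static int toy_open(int fail_at)
{
	int ret = 0;

	have_ref_a = 1;			/* acquire first resource */
	if (fail_at == 1) {
		ret = -1;
		goto err;
	}
	have_ref_b = 1;			/* acquire second resource */
	if (fail_at == 2) {
		ret = -2;
		goto err;
	}
	return 0;			/* success: keep both references */
err:
	have_ref_b = 0;			/* release in reverse order */
	have_ref_a = 0;
	return ret;
}
```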
+14 -16
kernel/trace/trace_fprobe.c
··· 920 920 921 921 if (!data->tpoint && !strcmp(data->tp_name, tp->name)) { 922 922 data->tpoint = tp; 923 - if (!data->mod) { 923 + if (!data->mod) 924 924 data->mod = mod; 925 - if (!try_module_get(data->mod)) { 926 - data->tpoint = NULL; 927 - data->mod = NULL; 928 - } 929 - } 930 925 } 931 926 } 932 927 ··· 933 938 data->tpoint = tp; 934 939 } 935 940 936 - /* 937 - * Find a tracepoint from kernel and module. If the tracepoint is in a module, 938 - * this increments the module refcount to prevent unloading until the 939 - * trace_fprobe is registered to the list. After registering the trace_fprobe 940 - * on the trace_fprobe list, the module refcount is decremented because 941 - * tracepoint_probe_module_cb will handle it. 942 - */ 941 + /* Find a tracepoint from kernel and module. */ 943 942 static struct tracepoint *find_tracepoint(const char *tp_name, 944 943 struct module **tp_mod) 945 944 { ··· 962 973 } 963 974 } 964 975 976 + /* Find a tracepoint from the specified module. */ 965 977 static struct tracepoint *find_tracepoint_in_module(struct module *mod, 966 978 const char *tp_name) 967 979 { ··· 998 1008 reenable_trace_fprobe(tf); 999 1009 } 1000 1010 } else if (val == MODULE_STATE_GOING && tp_mod->mod == tf->mod) { 1001 - tracepoint_probe_unregister(tf->tpoint, 1011 + unregister_fprobe(&tf->fp); 1012 + if (trace_fprobe_is_tracepoint(tf)) { 1013 + tracepoint_probe_unregister(tf->tpoint, 1002 1014 tf->tpoint->probestub, NULL); 1003 - tf->tpoint = NULL; 1015 + tf->tpoint = TRACEPOINT_STUB; 1004 - tf->mod = NULL; 1016 + tf->mod = NULL; 1017 + } 1005 1018 } 1006 1019 } 1007 1020 mutex_unlock(&event_mutex); ··· 1169 1176 if (is_tracepoint) { 1170 1177 ctx->flags |= TPARG_FL_TPOINT; 1171 1178 tpoint = find_tracepoint(symbol, &tp_mod); 1179 + /* Lock the module until this tprobe is registered. */ 1180 + if (tp_mod && !try_module_get(tp_mod)) { 1181 + tpoint = NULL; 1182 + tp_mod = NULL; 1183 + } 1172 1184 if (tpoint) { 1173 1185 ctx->funcname = kallsyms_lookup( 1174 1186 (unsigned long)tpoint->probestub,
+6 -2
lib/iov_iter.c
··· 1190 1190 if (!n) 1191 1191 return -ENOMEM; 1192 1192 p = *pages; 1193 - for (int k = 0; k < n; k++) 1194 - get_page(p[k] = page + k); 1193 + for (int k = 0; k < n; k++) { 1194 + struct folio *folio = page_folio(page); 1195 + p[k] = page + k; 1196 + if (!folio_test_slab(folio)) 1197 + folio_get(folio); 1198 + } 1195 1199 maxsize = min_t(size_t, maxsize, n * PAGE_SIZE - *start); 1196 1200 i->count -= maxsize; 1197 1201 i->iov_offset += maxsize;
+6 -1
mm/damon/core.c
··· 373 373 * or damon_attrs are updated. 374 374 */ 375 375 scheme->next_apply_sis = 0; 376 + scheme->walk_completed = false; 376 377 INIT_LIST_HEAD(&scheme->filters); 377 378 scheme->stat = (struct damos_stat){}; 378 379 INIT_LIST_HEAD(&scheme->list); ··· 1430 1429 { 1431 1430 struct damos_filter *filter; 1432 1431 1432 + s->core_filters_allowed = false; 1433 1433 damos_for_each_filter(filter, s) { 1434 - if (damos_filter_match(ctx, t, r, filter)) 1434 + if (damos_filter_match(ctx, t, r, filter)) { 1435 + if (filter->allow) 1436 + s->core_filters_allowed = true; 1435 1437 return !filter->allow; 1438 + } 1436 1439 } 1437 1440 return false; 1438 1441 }
+3
mm/damon/paddr.c
··· 236 236 { 237 237 struct damos_filter *filter; 238 238 239 + if (scheme->core_filters_allowed) 240 + return false; 241 + 239 242 damos_for_each_filter(filter, scheme) { 240 243 if (damos_pa_filter_match(filter, folio)) 241 244 return !filter->allow;
+28 -98
mm/filemap.c
··· 47 47 #include <linux/splice.h> 48 48 #include <linux/rcupdate_wait.h> 49 49 #include <linux/sched/mm.h> 50 - #include <linux/fsnotify.h> 51 50 #include <asm/pgalloc.h> 52 51 #include <asm/tlbflush.h> 53 52 #include "internal.h" ··· 1985 1986 1986 1987 if (err == -EEXIST) 1987 1988 goto repeat; 1988 - if (err) 1989 + if (err) { 1990 + /* 1991 + * When NOWAIT I/O fails to allocate folios this could 1992 + * be due to a nonblocking memory allocation and not 1993 + * because the system actually is out of memory. 1994 + * Return -EAGAIN so that the caller retries in a 1995 + * blocking fashion instead of propagating -ENOMEM 1996 + * to the application. 1997 + */ 1998 + if ((fgp_flags & FGP_NOWAIT) && err == -ENOMEM) 1999 + err = -EAGAIN; 1989 2000 return ERR_PTR(err); 2001 + } 1990 2002 /* 1991 2003 * filemap_add_folio locks the page, and for mmap 1992 2004 * we expect an unlocked page. ··· 3208 3198 unsigned long vm_flags = vmf->vma->vm_flags; 3209 3199 unsigned int mmap_miss; 3210 3200 3211 - /* 3212 - * If we have pre-content watches we need to disable readahead to make 3213 - * sure that we don't populate our mapping with 0 filled pages that we 3214 - * never emitted an event for. 3215 - */ 3216 - if (unlikely(FMODE_FSNOTIFY_HSM(file->f_mode))) 3217 - return fpin; 3218 - 3219 3201 #ifdef CONFIG_TRANSPARENT_HUGEPAGE 3220 3202 /* Use the readahead code, even if readahead is disabled */ 3221 3203 if ((vm_flags & VM_HUGEPAGE) && HPAGE_PMD_ORDER <= MAX_PAGECACHE_ORDER) { ··· 3276 3274 struct file *fpin = NULL; 3277 3275 unsigned int mmap_miss; 3278 3276 3279 - /* See comment in do_sync_mmap_readahead. 
*/ 3280 - if (unlikely(FMODE_FSNOTIFY_HSM(file->f_mode))) 3281 - return fpin; 3282 - 3283 3277 /* If we don't want any read-ahead, don't bother */ 3284 3278 if (vmf->vma->vm_flags & VM_RAND_READ || !ra->ra_pages) 3285 3279 return fpin; ··· 3333 3335 pte_unmap(ptep); 3334 3336 return ret; 3335 3337 } 3336 - 3337 - /** 3338 - * filemap_fsnotify_fault - maybe emit a pre-content event. 3339 - * @vmf: struct vm_fault containing details of the fault. 3340 - * 3341 - * If we have a pre-content watch on this file we will emit an event for this 3342 - * range. If we return anything the fault caller should return immediately, we 3343 - * will return VM_FAULT_RETRY if we had to emit an event, which will trigger the 3344 - * fault again and then the fault handler will run the second time through. 3345 - * 3346 - * Return: a bitwise-OR of %VM_FAULT_ codes, 0 if nothing happened. 3347 - */ 3348 - vm_fault_t filemap_fsnotify_fault(struct vm_fault *vmf) 3349 - { 3350 - struct file *fpin = NULL; 3351 - int mask = (vmf->flags & FAULT_FLAG_WRITE) ? MAY_WRITE : MAY_ACCESS; 3352 - loff_t pos = vmf->pgoff >> PAGE_SHIFT; 3353 - size_t count = PAGE_SIZE; 3354 - int err; 3355 - 3356 - /* 3357 - * We already did this and now we're retrying with everything locked, 3358 - * don't emit the event and continue. 3359 - */ 3360 - if (vmf->flags & FAULT_FLAG_TRIED) 3361 - return 0; 3362 - 3363 - /* No watches, we're done. */ 3364 - if (likely(!FMODE_FSNOTIFY_HSM(vmf->vma->vm_file->f_mode))) 3365 - return 0; 3366 - 3367 - fpin = maybe_unlock_mmap_for_io(vmf, fpin); 3368 - if (!fpin) 3369 - return VM_FAULT_SIGBUS; 3370 - 3371 - err = fsnotify_file_area_perm(fpin, mask, &pos, count); 3372 - fput(fpin); 3373 - if (err) 3374 - return VM_FAULT_SIGBUS; 3375 - return VM_FAULT_RETRY; 3376 - } 3377 - EXPORT_SYMBOL_GPL(filemap_fsnotify_fault); 3378 3338 3379 3339 /** 3380 3340 * filemap_fault - read in file data for page fault handling ··· 3437 3481 * or because readahead was otherwise unable to retrieve it. 
3438 3482 */ 3439 3483 if (unlikely(!folio_test_uptodate(folio))) { 3440 - /* 3441 - * If this is a precontent file we have can now emit an event to 3442 - * try and populate the folio. 3443 - */ 3444 - if (!(vmf->flags & FAULT_FLAG_TRIED) && 3445 - unlikely(FMODE_FSNOTIFY_HSM(file->f_mode))) { 3446 - loff_t pos = folio_pos(folio); 3447 - size_t count = folio_size(folio); 3448 - 3449 - /* We're NOWAIT, we have to retry. */ 3450 - if (vmf->flags & FAULT_FLAG_RETRY_NOWAIT) { 3451 - folio_unlock(folio); 3452 - goto out_retry; 3453 - } 3454 - 3455 - if (mapping_locked) 3456 - filemap_invalidate_unlock_shared(mapping); 3457 - mapping_locked = false; 3458 - 3459 - folio_unlock(folio); 3460 - fpin = maybe_unlock_mmap_for_io(vmf, fpin); 3461 - if (!fpin) 3462 - goto out_retry; 3463 - 3464 - error = fsnotify_file_area_perm(fpin, MAY_ACCESS, &pos, 3465 - count); 3466 - if (error) 3467 - ret = VM_FAULT_SIGBUS; 3468 - goto out_retry; 3469 - } 3470 - 3471 3484 /* 3472 3485 * If the invalidate lock is not held, the folio was in cache 3473 3486 * and uptodate and now it is not. Strange but possible since we ··· 4094 4169 bytes = min(chunk - offset, bytes); 4095 4170 balance_dirty_pages_ratelimited(mapping); 4096 4171 4097 - /* 4098 - * Bring in the user page that we will copy from _first_. 4099 - * Otherwise there's a nasty deadlock on copying from the 4100 - * same page as we're writing to, without it being marked 4101 - * up-to-date. 4102 - */ 4103 - if (unlikely(fault_in_iov_iter_readable(i, bytes) == bytes)) { 4104 - status = -EFAULT; 4105 - break; 4106 - } 4107 - 4108 4172 if (fatal_signal_pending(current)) { 4109 4173 status = -EINTR; 4110 4174 break; ··· 4111 4197 if (mapping_writably_mapped(mapping)) 4112 4198 flush_dcache_folio(folio); 4113 4199 4200 + /* 4201 + * Faults here on mmap()s can recurse into arbitrary 4202 + * filesystem code. Lots of locks are held that can 4203 + * deadlock. Use an atomic copy to avoid deadlocking 4204 + * in page fault handling. 
4205 + */ 4114 4206 copied = copy_folio_from_iter_atomic(folio, offset, bytes, i); 4115 4207 flush_dcache_folio(folio); 4116 4208 ··· 4141 4221 if (copied) { 4142 4222 bytes = copied; 4143 4223 goto retry; 4224 + } 4225 + 4226 + /* 4227 + * 'folio' is now unlocked and faults on it can be 4228 + * handled. Ensure forward progress by trying to 4229 + * fault it in now. 4230 + */ 4231 + if (fault_in_iov_iter_readable(i, bytes) == bytes) { 4232 + status = -EFAULT; 4233 + break; 4144 4234 } 4145 4235 } else { 4146 4236 pos += status;
+1 -1
mm/huge_memory.c
··· 3304 3304 folio_account_cleaned(tail, 3305 3305 inode_to_wb(folio->mapping->host)); 3306 3306 __filemap_remove_folio(tail, NULL); 3307 - folio_put(tail); 3307 + folio_put_refs(tail, folio_nr_pages(tail)); 3308 3308 } else if (!folio_test_anon(folio)) { 3309 3309 __xa_store(&folio->mapping->i_pages, tail->index, 3310 3310 tail, 0);
+6 -2
mm/hugetlb.c
··· 2135 2135 2136 2136 if (!folio_ref_count(folio)) { 2137 2137 struct hstate *h = folio_hstate(folio); 2138 + bool adjust_surplus = false; 2139 + 2138 2140 if (!available_huge_pages(h)) 2139 2141 goto out; 2140 2142 ··· 2159 2157 goto retry; 2160 2158 } 2161 2159 2162 - remove_hugetlb_folio(h, folio, false); 2160 + if (h->surplus_huge_pages_node[folio_nid(folio)]) 2161 + adjust_surplus = true; 2162 + remove_hugetlb_folio(h, folio, adjust_surplus); 2163 2163 h->max_huge_pages--; 2164 2164 spin_unlock_irq(&hugetlb_lock); 2165 2165 ··· 2181 2177 rc = hugetlb_vmemmap_restore_folio(h, folio); 2182 2178 if (rc) { 2183 2179 spin_lock_irq(&hugetlb_lock); 2184 - add_hugetlb_folio(h, folio, false); 2180 + add_hugetlb_folio(h, folio, adjust_surplus); 2185 2181 h->max_huge_pages++; 2186 2182 goto out; 2187 2183 }
+11 -2
mm/memcontrol.c
··· 1921 1921 static int memcg_hotplug_cpu_dead(unsigned int cpu) 1922 1922 { 1923 1923 struct memcg_stock_pcp *stock; 1924 + struct obj_cgroup *old; 1925 + unsigned long flags; 1924 1926 1925 1927 stock = &per_cpu(memcg_stock, cpu); 1928 + 1929 + /* drain_obj_stock requires stock_lock */ 1930 + local_lock_irqsave(&memcg_stock.stock_lock, flags); 1931 + old = drain_obj_stock(stock); 1932 + local_unlock_irqrestore(&memcg_stock.stock_lock, flags); 1933 + 1926 1934 drain_stock(stock); 1935 + obj_cgroup_put(old); 1927 1936 1928 1937 return 0; 1929 1938 } ··· 5002 4993 mem_cgroup_id_get_many(swap_memcg, nr_entries - 1); 5003 4994 mod_memcg_state(swap_memcg, MEMCG_SWAP, nr_entries); 5004 4995 5005 - swap_cgroup_record(folio, entry); 4996 + swap_cgroup_record(folio, mem_cgroup_id(swap_memcg), entry); 5006 4997 5007 4998 folio_unqueue_deferred_split(folio); 5008 4999 folio->memcg_data = 0; ··· 5064 5055 mem_cgroup_id_get_many(memcg, nr_pages - 1); 5065 5056 mod_memcg_state(memcg, MEMCG_SWAP, nr_pages); 5066 5057 5067 - swap_cgroup_record(folio, entry); 5058 + swap_cgroup_record(folio, mem_cgroup_id(memcg), entry); 5068 5059 5069 5060 return 0; 5070 5061 }
-19
mm/memory.c
··· 76 76 #include <linux/ptrace.h> 77 77 #include <linux/vmalloc.h> 78 78 #include <linux/sched/sysctl.h> 79 - #include <linux/fsnotify.h> 80 79 81 80 #include <trace/events/kmem.h> 82 81 ··· 5749 5750 static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf) 5750 5751 { 5751 5752 struct vm_area_struct *vma = vmf->vma; 5752 - 5753 5753 if (vma_is_anonymous(vma)) 5754 5754 return do_huge_pmd_anonymous_page(vmf); 5755 - /* 5756 - * Currently we just emit PAGE_SIZE for our fault events, so don't allow 5757 - * a huge fault if we have a pre content watch on this file. This would 5758 - * be trivial to support, but there would need to be tests to ensure 5759 - * this works properly and those don't exist currently. 5760 - */ 5761 - if (unlikely(FMODE_FSNOTIFY_HSM(vma->vm_file->f_mode))) 5762 - return VM_FAULT_FALLBACK; 5763 5755 if (vma->vm_ops->huge_fault) 5764 5756 return vma->vm_ops->huge_fault(vmf, PMD_ORDER); 5765 5757 return VM_FAULT_FALLBACK; ··· 5774 5784 } 5775 5785 5776 5786 if (vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) { 5777 - /* See comment in create_huge_pmd. */ 5778 - if (unlikely(FMODE_FSNOTIFY_HSM(vma->vm_file->f_mode))) 5779 - goto split; 5780 5787 if (vma->vm_ops->huge_fault) { 5781 5788 ret = vma->vm_ops->huge_fault(vmf, PMD_ORDER); 5782 5789 if (!(ret & VM_FAULT_FALLBACK)) ··· 5796 5809 /* No support for anonymous transparent PUD pages yet */ 5797 5810 if (vma_is_anonymous(vma)) 5798 5811 return VM_FAULT_FALLBACK; 5799 - /* See comment in create_huge_pmd. */ 5800 - if (unlikely(FMODE_FSNOTIFY_HSM(vma->vm_file->f_mode))) 5801 - return VM_FAULT_FALLBACK; 5802 5812 if (vma->vm_ops->huge_fault) 5803 5813 return vma->vm_ops->huge_fault(vmf, PUD_ORDER); 5804 5814 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ ··· 5813 5829 if (vma_is_anonymous(vma)) 5814 5830 goto split; 5815 5831 if (vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) { 5816 - /* See comment in create_huge_pmd. 
*/ 5817 - if (unlikely(FMODE_FSNOTIFY_HSM(vma->vm_file->f_mode))) 5818 - goto split; 5819 5832 if (vma->vm_ops->huge_fault) { 5820 5833 ret = vma->vm_ops->huge_fault(vmf, PUD_ORDER); 5821 5834 if (!(ret & VM_FAULT_FALLBACK))
+4 -6
mm/migrate.c
··· 518 518 if (folio_test_anon(folio) && folio_test_large(folio)) 519 519 mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON, 1); 520 520 folio_ref_add(newfolio, nr); /* add cache reference */ 521 - if (folio_test_swapbacked(folio)) { 521 + if (folio_test_swapbacked(folio)) 522 522 __folio_set_swapbacked(newfolio); 523 - if (folio_test_swapcache(folio)) { 524 - folio_set_swapcache(newfolio); 525 - newfolio->private = folio_get_private(folio); 526 - } 523 + if (folio_test_swapcache(folio)) { 524 + folio_set_swapcache(newfolio); 525 + newfolio->private = folio_get_private(folio); 527 526 entries = nr; 528 527 } else { 529 - VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio); 530 528 entries = 1; 531 529 } 532 530
-7
mm/nommu.c
··· 1613 1613 } 1614 1614 EXPORT_SYMBOL(remap_vmalloc_range); 1615 1615 1616 - vm_fault_t filemap_fsnotify_fault(struct vm_fault *vmf) 1617 - { 1618 - BUG(); 1619 - return 0; 1620 - } 1621 - EXPORT_SYMBOL_GPL(filemap_fsnotify_fault); 1622 - 1623 1616 vm_fault_t filemap_fault(struct vm_fault *vmf) 1624 1617 { 1625 1618 BUG();
+12 -2
mm/page_alloc.c
··· 7004 7004 7005 7005 static bool cond_accept_memory(struct zone *zone, unsigned int order) 7006 7006 { 7007 - long to_accept; 7007 + long to_accept, wmark; 7008 7008 bool ret = false; 7009 7009 7010 7010 if (!has_unaccepted_memory()) ··· 7013 7013 if (list_empty(&zone->unaccepted_pages)) 7014 7014 return false; 7015 7015 7016 + wmark = promo_wmark_pages(zone); 7017 + 7018 + /* 7019 + * Watermarks have not been initialized yet. 7020 + * 7021 + * Accepting one MAX_ORDER page to ensure progress. 7022 + */ 7023 + if (!wmark) 7024 + return try_to_accept_memory_one(zone); 7025 + 7016 7026 /* How much to accept to get to promo watermark? */ 7017 - to_accept = promo_wmark_pages(zone) - 7027 + to_accept = wmark - 7018 7028 (zone_page_state(zone, NR_FREE_PAGES) - 7019 7029 __zone_watermark_unusable_free(zone, order, 0) - 7020 7030 zone_page_state(zone, NR_UNACCEPTED));
-14
mm/readahead.c
··· 128 128 #include <linux/blk-cgroup.h> 129 129 #include <linux/fadvise.h> 130 130 #include <linux/sched/mm.h> 131 - #include <linux/fsnotify.h> 132 131 133 132 #include "internal.h" 134 133 ··· 558 559 pgoff_t prev_index, miss; 559 560 560 561 /* 561 - * If we have pre-content watches we need to disable readahead to make 562 - * sure that we don't find 0 filled pages in cache that we never emitted 563 - * events for. Filesystems supporting HSM must make sure to not call 564 - * this function with ractl->file unset for files handled by HSM. 565 - */ 566 - if (ractl->file && unlikely(FMODE_FSNOTIFY_HSM(ractl->file->f_mode))) 567 - return; 568 - 569 - /* 570 562 * Even if readahead is disabled, issue this request as readahead 571 563 * as we'll need it to satisfy the requested range. The forced 572 564 * readahead will do the right thing and limit the read to just the ··· 633 643 634 644 /* no readahead */ 635 645 if (!ra->ra_pages) 636 - return; 637 - 638 - /* See the comment in page_cache_sync_ra. */ 639 - if (ractl->file && unlikely(FMODE_FSNOTIFY_HSM(ractl->file->f_mode))) 640 646 return; 641 647 642 648 /*
+4 -3
mm/swap_cgroup.c
··· 58 58 * entries must not have been charged 59 59 * 60 60 * @folio: the folio that the swap entry belongs to 61 + * @id: mem_cgroup ID to be recorded 61 62 * @ent: the first swap entry to be recorded 62 63 */ 63 - void swap_cgroup_record(struct folio *folio, swp_entry_t ent) 64 + void swap_cgroup_record(struct folio *folio, unsigned short id, 65 + swp_entry_t ent) 64 66 { 65 67 unsigned int nr_ents = folio_nr_pages(folio); 66 68 struct swap_cgroup *map; ··· 74 72 map = swap_cgroup_ctrl[swp_type(ent)].map; 75 73 76 74 do { 77 - old = __swap_cgroup_id_xchg(map, offset, 78 - mem_cgroup_id(folio_memcg(folio))); 75 + old = __swap_cgroup_id_xchg(map, offset, id); 79 76 VM_BUG_ON(old); 80 77 } while (++offset != end); 81 78 }
+3
mm/util.c
··· 23 23 #include <linux/processor.h> 24 24 #include <linux/sizes.h> 25 25 #include <linux/compat.h> 26 + #include <linux/fsnotify.h> 26 27 27 28 #include <linux/uaccess.h> 28 29 ··· 570 569 LIST_HEAD(uf); 571 570 572 571 ret = security_mmap_file(file, prot, flag); 572 + if (!ret) 573 + ret = fsnotify_mmap_perm(file, prot, pgoff >> PAGE_SHIFT, len); 573 574 if (!ret) { 574 575 if (mmap_write_lock_killable(mm)) 575 576 return -EINTR;
+2 -1
mm/vma.c
··· 2381 2381 * vma_merge_new_range() calls khugepaged_enter_vma() too, the below 2382 2382 * call covers the non-merge case. 2383 2383 */ 2384 - khugepaged_enter_vma(vma, map->flags); 2384 + if (!vma_is_anonymous(vma)) 2385 + khugepaged_enter_vma(vma, map->flags); 2385 2386 ksm_add_vma(vma); 2386 2387 *vmap = vma; 2387 2388 return 0;
+2 -1
net/atm/lec.c
··· 181 181 lec_send(struct atm_vcc *vcc, struct sk_buff *skb) 182 182 { 183 183 struct net_device *dev = skb->dev; 184 + unsigned int len = skb->len; 184 185 185 186 ATM_SKB(skb)->vcc = vcc; 186 187 atm_account_tx(vcc, skb); ··· 192 191 } 193 192 194 193 dev->stats.tx_packets++; 195 - dev->stats.tx_bytes += skb->len; 194 + dev->stats.tx_bytes += len; 196 195 } 197 196 198 197 static void lec_tx_timeout(struct net_device *dev, unsigned int txqueue)
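The `lec_send()` fix above follows a general pattern: once a buffer is handed to the transmit path it may already be freed, so statistics must use a length saved beforehand. A minimal userspace sketch (all names here are illustrative, not kernel API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct buf { size_t len; char *data; };

static unsigned long tx_bytes; /* mirrors dev->stats.tx_bytes */

/* Consumes (frees) the buffer, like vcc->send(vcc, skb). */
static void xmit_consume(struct buf *b)
{
        free(b->data);
        free(b);
}

static void send_frame(struct buf *b)
{
        size_t len = b->len; /* snapshot, as the patch adds */

        xmit_consume(b);     /* b must not be touched after this */
        tx_bytes += len;     /* reading b->len here would be use-after-free */
}

static unsigned long demo_send(size_t n)
{
        struct buf *b = malloc(sizeof(*b));

        b->len = n;
        b->data = malloc(n);
        send_frame(b);
        return tx_bytes;
}
```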
+1 -2
net/batman-adv/bat_iv_ogm.c
··· 326 326 /* check if there is enough space for the optional TVLV */ 327 327 next_buff_pos += ntohs(ogm_packet->tvlv_len); 328 328 329 - return (next_buff_pos <= packet_len) && 330 - (next_buff_pos <= BATADV_MAX_AGGREGATION_BYTES); 329 + return next_buff_pos <= packet_len; 331 330 } 332 331 333 332 /* send a batman ogm to a given interface */
+1 -2
net/batman-adv/bat_v_ogm.c
··· 839 839 /* check if there is enough space for the optional TVLV */ 840 840 next_buff_pos += ntohs(ogm2_packet->tvlv_len); 841 841 842 - return (next_buff_pos <= packet_len) && 843 - (next_buff_pos <= BATADV_MAX_AGGREGATION_BYTES); 842 + return next_buff_pos <= packet_len; 844 843 } 845 844 846 845 /**
+6 -1
net/bluetooth/6lowpan.c
··· 826 826 unsigned long hdr_len, 827 827 unsigned long len, int nb) 828 828 { 829 + struct sk_buff *skb; 830 + 829 831 /* Note that we must allocate using GFP_ATOMIC here as 830 832 * this function is called originally from netdev hard xmit 831 833 * function in atomic context. 832 834 */ 833 - return bt_skb_alloc(hdr_len + len, GFP_ATOMIC); 835 + skb = bt_skb_alloc(hdr_len + len, GFP_ATOMIC); 836 + if (!skb) 837 + return ERR_PTR(-ENOMEM); 838 + return skb; 834 839 } 835 840 836 841 static void chan_suspend_cb(struct l2cap_chan *chan)
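The hunk above returns `ERR_PTR(-ENOMEM)` instead of `NULL` because the caller distinguishes error pointers from valid ones. A userspace sketch of the kernel's `ERR_PTR()`/`IS_ERR()` convention: small negative errno values are encoded in the last, never-mappable 4095 bytes of the address space, so one return value can carry either a valid pointer or an error code.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
        return (void *)error;          /* e.g. -ENOMEM -> 0xffff...fff4 */
}

static inline long PTR_ERR(const void *ptr)
{
        return (long)ptr;              /* recover the errno value */
}

static inline bool IS_ERR(const void *ptr)
{
        /* true only for addresses in the top MAX_ERRNO bytes */
        return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```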
+6 -6
net/can/af_can.c
··· 289 289 netif_rx(newskb); 290 290 291 291 /* update statistics */ 292 - pkg_stats->tx_frames++; 293 - pkg_stats->tx_frames_delta++; 292 + atomic_long_inc(&pkg_stats->tx_frames); 293 + atomic_long_inc(&pkg_stats->tx_frames_delta); 294 294 295 295 return 0; 296 296 ··· 649 649 int matches; 650 650 651 651 /* update statistics */ 652 - pkg_stats->rx_frames++; 653 - pkg_stats->rx_frames_delta++; 652 + atomic_long_inc(&pkg_stats->rx_frames); 653 + atomic_long_inc(&pkg_stats->rx_frames_delta); 654 654 655 655 /* create non-zero unique skb identifier together with *skb */ 656 656 while (!(can_skb_prv(skb)->skbcnt)) ··· 671 671 consume_skb(skb); 672 672 673 673 if (matches > 0) { 674 - pkg_stats->matches++; 675 - pkg_stats->matches_delta++; 674 + atomic_long_inc(&pkg_stats->matches); 675 + atomic_long_inc(&pkg_stats->matches_delta); 676 676 } 677 677 } 678 678
+6 -6
net/can/af_can.h
··· 66 66 struct can_pkg_stats { 67 67 unsigned long jiffies_init; 68 68 69 - unsigned long rx_frames; 70 - unsigned long tx_frames; 71 - unsigned long matches; 69 + atomic_long_t rx_frames; 70 + atomic_long_t tx_frames; 71 + atomic_long_t matches; 72 72 73 73 unsigned long total_rx_rate; 74 74 unsigned long total_tx_rate; ··· 82 82 unsigned long max_tx_rate; 83 83 unsigned long max_rx_match_ratio; 84 84 85 - unsigned long rx_frames_delta; 86 - unsigned long tx_frames_delta; 87 - unsigned long matches_delta; 85 + atomic_long_t rx_frames_delta; 86 + atomic_long_t tx_frames_delta; 87 + atomic_long_t matches_delta; 88 88 }; 89 89 90 90 /* persistent statistics */
+27 -19
net/can/proc.c
··· 118 118 struct can_pkg_stats *pkg_stats = net->can.pkg_stats; 119 119 unsigned long j = jiffies; /* snapshot */ 120 120 121 + long rx_frames = atomic_long_read(&pkg_stats->rx_frames); 122 + long tx_frames = atomic_long_read(&pkg_stats->tx_frames); 123 + long matches = atomic_long_read(&pkg_stats->matches); 124 + long rx_frames_delta = atomic_long_read(&pkg_stats->rx_frames_delta); 125 + long tx_frames_delta = atomic_long_read(&pkg_stats->tx_frames_delta); 126 + long matches_delta = atomic_long_read(&pkg_stats->matches_delta); 127 + 121 128 /* restart counting in timer context on user request */ 122 129 if (user_reset) 123 130 can_init_stats(net); ··· 134 127 can_init_stats(net); 135 128 136 129 /* prevent overflow in calc_rate() */ 137 - if (pkg_stats->rx_frames > (ULONG_MAX / HZ)) 130 + if (rx_frames > (LONG_MAX / HZ)) 138 131 can_init_stats(net); 139 132 140 133 /* prevent overflow in calc_rate() */ 141 - if (pkg_stats->tx_frames > (ULONG_MAX / HZ)) 134 + if (tx_frames > (LONG_MAX / HZ)) 142 135 can_init_stats(net); 143 136 144 137 /* matches overflow - very improbable */ 145 - if (pkg_stats->matches > (ULONG_MAX / 100)) 138 + if (matches > (LONG_MAX / 100)) 146 139 can_init_stats(net); 147 140 148 141 /* calc total values */ 149 - if (pkg_stats->rx_frames) 150 - pkg_stats->total_rx_match_ratio = (pkg_stats->matches * 100) / 151 - pkg_stats->rx_frames; 142 + if (rx_frames) 143 + pkg_stats->total_rx_match_ratio = (matches * 100) / rx_frames; 152 144 153 145 pkg_stats->total_tx_rate = calc_rate(pkg_stats->jiffies_init, j, 154 - pkg_stats->tx_frames); 146 + tx_frames); 155 147 pkg_stats->total_rx_rate = calc_rate(pkg_stats->jiffies_init, j, 156 - pkg_stats->rx_frames); 148 + rx_frames); 157 149 158 150 /* calc current values */ 159 - if (pkg_stats->rx_frames_delta) 151 + if (rx_frames_delta) 160 152 pkg_stats->current_rx_match_ratio = 161 - (pkg_stats->matches_delta * 100) / 162 - pkg_stats->rx_frames_delta; 153 + (matches_delta * 100) / rx_frames_delta; 163 154 164 - pkg_stats->current_tx_rate = calc_rate(0, HZ, pkg_stats->tx_frames_delta); 165 - pkg_stats->current_rx_rate = calc_rate(0, HZ, pkg_stats->rx_frames_delta); 155 + pkg_stats->current_tx_rate = calc_rate(0, HZ, tx_frames_delta); 156 + pkg_stats->current_rx_rate = calc_rate(0, HZ, rx_frames_delta); 166 157 167 158 /* check / update maximum values */ 168 159 if (pkg_stats->max_tx_rate < pkg_stats->current_tx_rate) ··· 173 168 pkg_stats->max_rx_match_ratio = pkg_stats->current_rx_match_ratio; 174 169 175 170 /* clear values for 'current rate' calculation */ 176 - pkg_stats->tx_frames_delta = 0; 177 - pkg_stats->rx_frames_delta = 0; 178 - pkg_stats->matches_delta = 0; 171 + atomic_long_set(&pkg_stats->tx_frames_delta, 0); 172 + atomic_long_set(&pkg_stats->rx_frames_delta, 0); 173 + atomic_long_set(&pkg_stats->matches_delta, 0); 179 174 180 175 /* restart timer (one second) */ 181 176 mod_timer(&net->can.stattimer, round_jiffies(jiffies + HZ)); ··· 219 214 struct can_rcv_lists_stats *rcv_lists_stats = net->can.rcv_lists_stats; 220 215 221 216 seq_putc(m, '\n'); 222 - seq_printf(m, " %8ld transmitted frames (TXF)\n", pkg_stats->tx_frames); 223 - seq_printf(m, " %8ld received frames (RXF)\n", pkg_stats->rx_frames); 224 - seq_printf(m, " %8ld matched frames (RXMF)\n", pkg_stats->matches); 217 + seq_printf(m, " %8ld transmitted frames (TXF)\n", 218 + atomic_long_read(&pkg_stats->tx_frames)); 219 + seq_printf(m, " %8ld received frames (RXF)\n", 220 + atomic_long_read(&pkg_stats->rx_frames)); 221 + seq_printf(m, " %8ld matched frames (RXMF)\n", 222 + atomic_long_read(&pkg_stats->matches)); 225 223 226 224 seq_putc(m, '\n');
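The af_can.c/af_can.h/proc.c hunks above convert plain `unsigned long` counters to `atomic_long_t` so concurrent RX/TX paths can bump them locklessly, and readers take coherent snapshots. A userspace sketch of the same pattern with C11 `<stdatomic.h>` (names are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_long rx_frames;       /* running total */
static atomic_long rx_frames_delta; /* per-second delta */

/* Datapath side: analogous to atomic_long_inc(). Relaxed ordering is
 * enough for independent statistics counters. */
static void stats_rx_frame(void)
{
        atomic_fetch_add_explicit(&rx_frames, 1, memory_order_relaxed);
        atomic_fetch_add_explicit(&rx_frames_delta, 1, memory_order_relaxed);
}

/* Timer side: read the delta, then clear it, as can_stat_update() now
 * does with atomic_long_read()/atomic_long_set(). */
static long stats_take_delta(void)
{
        long delta = atomic_load(&rx_frames_delta);

        atomic_store(&rx_frames_delta, 0);
        return delta;
}

static int demo_counters(void)
{
        for (int i = 0; i < 3; i++)
                stats_rx_frame();

        return stats_take_delta() == 3 &&    /* delta consumed */
               stats_take_delta() == 0 &&    /* ...and cleared */
               atomic_load(&rx_frames) == 3; /* total persists */
}
```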
+53 -12
net/core/lwtunnel.c
··· 23 23 #include <net/ip6_fib.h> 24 24 #include <net/rtnh.h> 25 25 26 + #include "dev.h" 27 + 26 28 DEFINE_STATIC_KEY_FALSE(nf_hooks_lwtunnel_enabled); 27 29 EXPORT_SYMBOL_GPL(nf_hooks_lwtunnel_enabled); 28 30 ··· 328 326 329 327 int lwtunnel_output(struct net *net, struct sock *sk, struct sk_buff *skb) 330 328 { 331 - struct dst_entry *dst = skb_dst(skb); 332 329 const struct lwtunnel_encap_ops *ops; 333 330 struct lwtunnel_state *lwtstate; 334 - int ret = -EINVAL; 331 + struct dst_entry *dst; 332 + int ret; 335 333 336 - if (!dst) 334 + if (dev_xmit_recursion()) { 335 + net_crit_ratelimited("%s(): recursion limit reached on datapath\n", 336 + __func__); 337 + ret = -ENETDOWN; 337 338 goto drop; 339 + } 340 + 341 + dst = skb_dst(skb); 342 + if (!dst) { 343 + ret = -EINVAL; 344 + goto drop; 345 + } 338 346 lwtstate = dst->lwtstate; 339 347 340 348 if (lwtstate->type == LWTUNNEL_ENCAP_NONE || ··· 354 342 ret = -EOPNOTSUPP; 355 343 rcu_read_lock(); 356 344 ops = rcu_dereference(lwtun_encaps[lwtstate->type]); 357 - if (likely(ops && ops->output)) 345 + if (likely(ops && ops->output)) { 346 + dev_xmit_recursion_inc(); 358 347 ret = ops->output(net, sk, skb); 348 + dev_xmit_recursion_dec(); 349 + } 359 350 rcu_read_unlock(); 360 351 361 352 if (ret == -EOPNOTSUPP) ··· 375 360 376 361 int lwtunnel_xmit(struct sk_buff *skb) 377 362 { 378 - struct dst_entry *dst = skb_dst(skb); 379 363 const struct lwtunnel_encap_ops *ops; 380 364 struct lwtunnel_state *lwtstate; 381 - int ret = -EINVAL; 365 + struct dst_entry *dst; 366 + int ret; 382 367 383 - if (!dst) 368 + if (dev_xmit_recursion()) { 369 + net_crit_ratelimited("%s(): recursion limit reached on datapath\n", 370 + __func__); 371 + ret = -ENETDOWN; 384 372 goto drop; 373 + } 374 + 375 + dst = skb_dst(skb); 376 + if (!dst) { 377 + ret = -EINVAL; 378 + goto drop; 379 + } 385 380 386 381 lwtstate = dst->lwtstate; 387 382 ··· 402 377 ret = -EOPNOTSUPP; 403 378 rcu_read_lock(); 404 379 ops = rcu_dereference(lwtun_encaps[lwtstate->type]); 405 - if (likely(ops && ops->xmit)) 380 + if (likely(ops && ops->xmit)) { 381 + dev_xmit_recursion_inc(); 406 382 ret = ops->xmit(skb); 383 + dev_xmit_recursion_dec(); 384 + } 407 385 rcu_read_unlock(); 408 386 409 387 if (ret == -EOPNOTSUPP) ··· 423 395 424 396 int lwtunnel_input(struct sk_buff *skb) 425 397 { 426 - struct dst_entry *dst = skb_dst(skb); 427 398 const struct lwtunnel_encap_ops *ops; 428 399 struct lwtunnel_state *lwtstate; 429 - int ret = -EINVAL; 400 + struct dst_entry *dst; 401 + int ret; 430 402 431 - if (!dst) 403 + if (dev_xmit_recursion()) { 404 + net_crit_ratelimited("%s(): recursion limit reached on datapath\n", 405 + __func__); 406 + ret = -ENETDOWN; 432 407 goto drop; 408 + } 409 + 410 + dst = skb_dst(skb); 411 + if (!dst) { 412 + ret = -EINVAL; 413 + goto drop; 414 + } 433 415 lwtstate = dst->lwtstate; 434 416 435 417 if (lwtstate->type == LWTUNNEL_ENCAP_NONE || ··· 449 411 ret = -EOPNOTSUPP; 450 412 rcu_read_lock(); 451 413 ops = rcu_dereference(lwtun_encaps[lwtstate->type]); 452 - if (likely(ops && ops->input)) 414 + if (likely(ops && ops->input)) { 415 + dev_xmit_recursion_inc(); 453 416 ret = ops->input(skb); 417 + dev_xmit_recursion_dec(); 418 + } 454 419 rcu_read_unlock(); 455 420 456 421 if (ret == -EOPNOTSUPP)
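The recursion guard added above can be illustrated with a small userspace sketch. The kernel uses a per-CPU counter (`dev_xmit_recursion()`); a thread-local variable stands in for it here, and `lwt_output()` is a hypothetical name, not the kernel function.

```c
#include <assert.h>
#include <errno.h>

#define XMIT_RECURSION_LIMIT 8

static _Thread_local int xmit_recursion;

/* Refuse to re-enter the encap output path past a fixed depth instead of
 * letting a dst that loops back on itself recurse until the stack
 * overflows; -ENETDOWN matches the error code in the hunk above. */
static int lwt_output(int remaining_hops)
{
        int ret;

        if (xmit_recursion >= XMIT_RECURSION_LIMIT)
                return -ENETDOWN;

        if (remaining_hops == 0)
                return 0; /* packet left the tunnel chain */

        xmit_recursion++;
        ret = lwt_output(remaining_hops - 1); /* ops->output() re-entering */
        xmit_recursion--;
        return ret;
}
```

A benign short chain completes normally; a looping chain is cut off at the limit with the counter restored on the way out.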
+1
net/core/neighbour.c
··· 2243 2243 static const struct nla_policy nl_ntbl_parm_policy[NDTPA_MAX+1] = { 2244 2244 [NDTPA_IFINDEX] = { .type = NLA_U32 }, 2245 2245 [NDTPA_QUEUE_LEN] = { .type = NLA_U32 }, 2246 + [NDTPA_QUEUE_LENBYTES] = { .type = NLA_U32 }, 2246 2247 [NDTPA_PROXY_QLEN] = { .type = NLA_U32 }, 2247 2248 [NDTPA_APP_PROBES] = { .type = NLA_U32 }, 2248 2249 [NDTPA_UCAST_PROBES] = { .type = NLA_U32 },
+1 -1
net/devlink/core.c
··· 117 117 118 118 err = xa_alloc_cyclic(&devlink_rels, &rel->index, rel, 119 119 xa_limit_32b, &next, GFP_KERNEL); 120 - if (err) { 120 + if (err < 0) { 121 121 kfree(rel); 122 122 return ERR_PTR(err); 123 123 }
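The one-character fix above matters because `xa_alloc_cyclic()` returns a positive value (1) when the cyclic counter wraps around, which is not an error. A hypothetical allocator in the same spirit (this is a sketch of the calling convention, not the XArray implementation):

```c
#include <assert.h>

/* Returns 0 on plain success, 1 on the benign "counter wrapped" case.
 * A caller checking `if (err)` would wrongly reject the wrap; the fix
 * checks `if (err < 0)` so only negative errnos are treated as failure. */
static int alloc_cyclic(unsigned int *id, unsigned int *next,
                        unsigned int max)
{
        int wrapped = 0;

        if (*next > max) { /* cyclic counter passed the limit */
                *next = 0;
                wrapped = 1;
        }
        *id = (*next)++;
        return wrapped;
}

static int demo_wraparound(void)
{
        unsigned int id, next = 0;
        int ok;

        ok = alloc_cyclic(&id, &next, 1) == 0 && id == 0;
        ok = ok && alloc_cyclic(&id, &next, 1) == 0 && id == 1;
        /* third allocation wraps: positive return, still a valid id */
        ok = ok && alloc_cyclic(&id, &next, 1) == 1 && id == 0;
        return ok;
}
```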
+6 -9
net/ipv6/addrconf.c
··· 3209 3209 struct in6_addr addr; 3210 3210 struct net_device *dev; 3211 3211 struct net *net = dev_net(idev->dev); 3212 - int scope, plen; 3212 + int scope, plen, offset = 0; 3213 3213 u32 pflags = 0; 3214 3214 3215 3215 ASSERT_RTNL(); 3216 3216 3217 3217 memset(&addr, 0, sizeof(struct in6_addr)); 3218 - memcpy(&addr.s6_addr32[3], idev->dev->dev_addr, 4); 3218 + /* in case of IP6GRE the dev_addr is an IPv6 and therefore we use only the last 4 bytes */ 3219 + if (idev->dev->addr_len == sizeof(struct in6_addr)) 3220 + offset = sizeof(struct in6_addr) - 4; 3221 + memcpy(&addr.s6_addr32[3], idev->dev->dev_addr + offset, 4); 3219 3222 3220 3223 if (!(idev->dev->flags & IFF_POINTOPOINT) && idev->dev->type == ARPHRD_SIT) { 3221 3224 scope = IPV6_ADDR_COMPATv4; ··· 3529 3526 return; 3530 3527 } 3531 3528 3532 - /* Generate the IPv6 link-local address using addrconf_addr_gen(), 3533 - * unless we have an IPv4 GRE device not bound to an IP address and 3534 - * which is in EUI64 mode (as __ipv6_isatap_ifid() would fail in this 3535 - * case). Such devices fall back to add_v4_addrs() instead. 3536 - */ 3537 - if (!(dev->type == ARPHRD_IPGRE && *(__be32 *)dev->dev_addr == 0 && 3538 - idev->cnf.addr_gen_mode == IN6_ADDR_GEN_MODE_EUI64)) { 3529 + if (dev->type == ARPHRD_ETHER) { 3539 3530 addrconf_addr_gen(idev, true); 3540 3531 return; 3541 3532 }
+4 -4
net/ipv6/ioam6_iptunnel.c
··· 337 337 static int ioam6_output(struct net *net, struct sock *sk, struct sk_buff *skb) 338 338 { 339 339 struct dst_entry *dst = skb_dst(skb), *cache_dst = NULL; 340 - struct in6_addr orig_daddr; 341 340 struct ioam6_lwt *ilwt; 342 341 int err = -EINVAL; 343 342 u32 pkt_cnt; ··· 350 351 pkt_cnt = atomic_fetch_inc(&ilwt->pkt_cnt); 351 352 if (pkt_cnt % ilwt->freq.n >= ilwt->freq.k) 352 353 goto out; 353 - 354 - orig_daddr = ipv6_hdr(skb)->daddr; 355 354 356 355 local_bh_disable(); 357 356 cache_dst = dst_cache_get(&ilwt->cache); ··· 419 422 goto drop; 420 423 } 421 424 422 - if (!ipv6_addr_equal(&orig_daddr, &ipv6_hdr(skb)->daddr)) { 425 + /* avoid lwtunnel_output() reentry loop when destination is the same 426 + * after transformation (e.g., with the inline mode) 427 + */ 428 + if (dst->lwtstate != cache_dst->lwtstate) { 423 429 skb_dst_drop(skb); 424 430 skb_dst_set(skb, cache_dst); 425 431 return dst_output(net, sk, skb);
+4 -1
net/ipv6/route.c
··· 3644 3644 in6_dev_put(idev); 3645 3645 3646 3646 if (err) { 3647 - lwtstate_put(fib6_nh->fib_nh_lws); 3647 + fib_nh_common_release(&fib6_nh->nh_common); 3648 + fib6_nh->nh_common.nhc_pcpu_rth_output = NULL; 3648 3649 fib6_nh->fib_nh_lws = NULL; 3649 3650 netdev_put(dev, dev_tracker); 3650 3651 } ··· 3803 3802 if (nh) { 3804 3803 if (rt->fib6_src.plen) { 3805 3804 NL_SET_ERR_MSG(extack, "Nexthops can not be used with source routing"); 3805 + err = -EINVAL; 3806 3806 goto out_free; 3807 3807 } 3808 3808 if (!nexthop_get(nh)) { 3809 3809 NL_SET_ERR_MSG(extack, "Nexthop has been deleted"); 3810 + err = -ENOENT; 3810 3811 goto out_free; 3811 3812 } 3812 3813 rt->nh = nh;
+15 -6
net/ipv6/tcpv6_offload.c
··· 94 94 } 95 95 96 96 static void __tcpv6_gso_segment_csum(struct sk_buff *seg, 97 + struct in6_addr *oldip, 98 + const struct in6_addr *newip, 97 99 __be16 *oldport, __be16 newport) 98 100 { 99 - struct tcphdr *th; 101 + struct tcphdr *th = tcp_hdr(seg); 102 + 103 + if (!ipv6_addr_equal(oldip, newip)) { 104 + inet_proto_csum_replace16(&th->check, seg, 105 + oldip->s6_addr32, 106 + newip->s6_addr32, 107 + true); 108 + *oldip = *newip; 109 + } 100 110 101 111 if (*oldport == newport) 102 112 return; 103 113 104 - th = tcp_hdr(seg); 105 114 inet_proto_csum_replace2(&th->check, seg, *oldport, newport, false); 106 115 *oldport = newport; 107 116 } ··· 138 129 th2 = tcp_hdr(seg); 139 130 iph2 = ipv6_hdr(seg); 140 131 141 - iph2->saddr = iph->saddr; 142 - iph2->daddr = iph->daddr; 143 - __tcpv6_gso_segment_csum(seg, &th2->source, th->source); 144 - __tcpv6_gso_segment_csum(seg, &th2->dest, th->dest); 132 + __tcpv6_gso_segment_csum(seg, &iph2->saddr, &iph->saddr, 133 + &th2->source, th->source); 134 + __tcpv6_gso_segment_csum(seg, &iph2->daddr, &iph->daddr, 135 + &th2->dest, th->dest); 145 136 } 146 137 147 138 return segs;
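The `inet_proto_csum_replace*()` calls above update a ones'-complement checksum incrementally when a field is rewritten, rather than recomputing it over the whole packet. A userspace sketch of the underlying RFC 1624 identity `HC' = ~(~HC + ~m + m')` (helper names are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Fold a 32-bit accumulator into 16 bits with end-around carry. */
static uint16_t csum_fold(uint32_t sum)
{
        while (sum >> 16)
                sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)sum;
}

/* Full ones'-complement checksum over 16-bit words. */
static uint16_t csum_full(const uint16_t *words, size_t n)
{
        uint32_t sum = 0;

        for (size_t i = 0; i < n; i++)
                sum += words[i];
        return (uint16_t)~csum_fold(sum);
}

/* Incremental update when one word changes: HC' = ~(~HC + ~m + m'). */
static uint16_t csum_replace(uint16_t check, uint16_t old_word,
                             uint16_t new_word)
{
        uint32_t sum = (uint16_t)~check;

        sum += (uint16_t)~old_word;
        sum += new_word;
        return (uint16_t)~csum_fold(sum);
}

static int demo_incremental_update(void)
{
        uint16_t words[4] = { 0x1234, 0xabcd, 0x0000, 0x4500 };
        uint16_t before = csum_full(words, 4);
        uint16_t incr, full;

        /* rewrite one field, as the GSO helper rewrites address/port */
        incr = csum_replace(before, words[2], 0x5678);
        words[2] = 0x5678;
        full = csum_full(words, 4);
        return incr == full;
}
```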
+4 -2
net/mptcp/options.c
··· 651 651 struct mptcp_sock *msk = mptcp_sk(subflow->conn); 652 652 bool drop_other_suboptions = false; 653 653 unsigned int opt_size = *size; 654 + struct mptcp_addr_info addr; 654 655 bool echo; 655 656 int len; 656 657 ··· 660 659 */ 661 660 if (!mptcp_pm_should_add_signal(msk) || 662 661 (opts->suboptions & (OPTION_MPTCP_MPJ_ACK | OPTION_MPTCP_MPC_ACK)) || 663 - !mptcp_pm_add_addr_signal(msk, skb, opt_size, remaining, &opts->addr, 662 + !mptcp_pm_add_addr_signal(msk, skb, opt_size, remaining, &addr, 664 663 &echo, &drop_other_suboptions)) 665 664 return false; 666 665 ··· 673 672 else if (opts->suboptions & OPTION_MPTCP_DSS) 674 673 return false; 675 674 676 - len = mptcp_add_addr_len(opts->addr.family, echo, !!opts->addr.port); 675 + len = mptcp_add_addr_len(addr.family, echo, !!addr.port); 677 676 if (remaining < len) 678 677 return false; 679 678 ··· 690 689 opts->ahmac = 0; 691 690 *size -= opt_size; 692 691 } 692 + opts->addr = addr; 693 693 opts->suboptions |= OPTION_MPTCP_ADD_ADDR; 694 694 if (!echo) { 695 695 MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_ADDADDRTX);
+1 -1
net/xdp/xsk_buff_pool.c
··· 107 107 if (pool->unaligned) 108 108 pool->free_heads[i] = xskb; 109 109 else 110 - xp_init_xskb_addr(xskb, pool, i * pool->chunk_size); 110 + xp_init_xskb_addr(xskb, pool, (u64)i * pool->chunk_size); 111 111 } 112 112 113 113 return pool;
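The `(u64)i` cast above exists because multiplying two 32-bit values yields a 32-bit product that silently wraps before any widening assignment. A minimal C demonstration (assuming the usual 32-bit `unsigned int`):

```c
#include <assert.h>
#include <stdint.h>

/* Buggy form: the product is computed in 32 bits, then widened too late. */
static uint64_t addr_wrapping(uint32_t i, uint32_t chunk_size)
{
        return i * chunk_size;
}

/* Fixed form, as in the hunk above: widen one operand first so the
 * multiplication itself happens in 64 bits. */
static uint64_t addr_correct(uint32_t i, uint32_t chunk_size)
{
        return (uint64_t)i * chunk_size;
}
```

With `i = 70000` and a 64 KiB chunk size, the true offset 4587520000 exceeds 2^32, so the unwidened product wraps around.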
+42 -1
net/xfrm/xfrm_output.c
··· 612 612 } 613 613 EXPORT_SYMBOL_GPL(xfrm_output_resume); 614 614 615 + static int xfrm_dev_direct_output(struct sock *sk, struct xfrm_state *x, 616 + struct sk_buff *skb) 617 + { 618 + struct dst_entry *dst = skb_dst(skb); 619 + struct net *net = xs_net(x); 620 + int err; 621 + 622 + dst = skb_dst_pop(skb); 623 + if (!dst) { 624 + XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTERROR); 625 + kfree_skb(skb); 626 + return -EHOSTUNREACH; 627 + } 628 + skb_dst_set(skb, dst); 629 + nf_reset_ct(skb); 630 + 631 + err = skb_dst(skb)->ops->local_out(net, sk, skb); 632 + if (unlikely(err != 1)) { 633 + kfree_skb(skb); 634 + return err; 635 + } 636 + 637 + /* In transport mode, network destination is 638 + * directly reachable, while in tunnel mode, 639 + * inner packet network may not be. In packet 640 + * offload type, HW is responsible for hard 641 + * header packet mangling so directly xmit skb 642 + * to netdevice. 643 + */ 644 + skb->dev = x->xso.dev; 645 + __skb_push(skb, skb->dev->hard_header_len); 646 + return dev_queue_xmit(skb); 647 + } 648 + 615 649 static int xfrm_output2(struct net *net, struct sock *sk, struct sk_buff *skb) 616 650 { 617 651 return xfrm_output_resume(sk, skb, 1); ··· 769 735 return -EHOSTUNREACH; 770 736 } 771 737 738 + /* Exclusive direct xmit for tunnel mode, as 739 + * some filtering or matching rules may apply 740 + * in transport mode. 741 + */ 742 + if (x->props.mode == XFRM_MODE_TUNNEL) 743 + return xfrm_dev_direct_output(sk, x, skb); 744 + 772 745 return xfrm_output_resume(sk, skb, 0); 773 746 } 774 747 ··· 799 758 skb->encapsulation = 1; 800 759 801 760 if (skb_is_gso(skb)) { 802 - if (skb->inner_protocol) 761 + if (skb->inner_protocol && x->props.mode == XFRM_MODE_TUNNEL) 803 762 return xfrm_output_gso(net, sk, skb); 804 763 805 764 skb_shinfo(skb)->gso_type |= SKB_GSO_ESP;
+18
rust/kernel/alloc/allocator_test.rs
··· 62 62 )); 63 63 } 64 64 65 + // ISO C (ISO/IEC 9899:2011) defines `aligned_alloc`: 66 + // 67 + // > The value of alignment shall be a valid alignment supported by the implementation 68 + // [...]. 69 + // 70 + // As an example of the "supported by the implementation" requirement, POSIX.1-2001 (IEEE 71 + // 1003.1-2001) defines `posix_memalign`: 72 + // 73 + // > The value of alignment shall be a power of two multiple of sizeof (void *). 74 + // 75 + // and POSIX-based implementations of `aligned_alloc` inherit this requirement. At the time 76 + // of writing, this is known to be the case on macOS (but not in glibc). 77 + // 78 + // Satisfy the stricter requirement to avoid spurious test failures on some platforms. 79 + let min_align = core::mem::size_of::<*const crate::ffi::c_void>(); 80 + let layout = layout.align_to(min_align).map_err(|_| AllocError)?; 81 + let layout = layout.pad_to_align(); 82 + 65 83 // SAFETY: Returns either NULL or a pointer to a memory allocation that satisfies or 66 84 // exceeds the given size and alignment requirements. 67 85 let dst = unsafe { libc_aligned_alloc(layout.align(), layout.size()) } as *mut u8;
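The Rust hunk above rounds the requested alignment up before calling the libc allocator. The same constraint can be shown directly in C: POSIX requires the alignment passed to `posix_memalign()` to be a power-of-two multiple of `sizeof(void *)`, so undersized alignments are rounded up first (helper names are illustrative; this sketch does not handle non-power-of-two alignments above that minimum).

```c
#define _POSIX_C_SOURCE 200112L
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

static size_t fix_align(size_t align)
{
        return align < sizeof(void *) ? sizeof(void *) : align;
}

static void *alloc_aligned(size_t align, size_t size)
{
        void *p = NULL;

        if (posix_memalign(&p, fix_align(align), size) != 0)
                return NULL; /* e.g. EINVAL for an unsupported alignment */
        return p;
}

static int demo_alignment(void)
{
        void *a = alloc_aligned(4, 32); /* undersized: rounded up */
        void *b = alloc_aligned(64, 128);
        int ok = a && b &&
                 (uintptr_t)a % sizeof(void *) == 0 &&
                 (uintptr_t)b % 64 == 0;

        free(a);
        free(b);
        return ok;
}
```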
+1 -1
rust/kernel/error.rs
··· 107 107 } else { 108 108 // TODO: Make it a `WARN_ONCE` once available. 109 109 crate::pr_warn!( 110 - "attempted to create `Error` with out of range `errno`: {}", 110 + "attempted to create `Error` with out of range `errno`: {}\n", 111 111 errno 112 112 ); 113 113 code::EINVAL
+10 -13
rust/kernel/init.rs
··· 259 259 /// }, 260 260 /// })); 261 261 /// let foo: Pin<&mut Foo> = foo; 262 - /// pr_info!("a: {}", &*foo.a.lock()); 262 + /// pr_info!("a: {}\n", &*foo.a.lock()); 263 263 /// ``` 264 264 /// 265 265 /// # Syntax ··· 319 319 /// }, GFP_KERNEL)?, 320 320 /// })); 321 321 /// let foo = foo.unwrap(); 322 - /// pr_info!("a: {}", &*foo.a.lock()); 322 + /// pr_info!("a: {}\n", &*foo.a.lock()); 323 323 /// ``` 324 324 /// 325 325 /// ```rust,ignore ··· 352 352 /// x: 64, 353 353 /// }, GFP_KERNEL)?, 354 354 /// })); 355 - /// pr_info!("a: {}", &*foo.a.lock()); 355 + /// pr_info!("a: {}\n", &*foo.a.lock()); 356 356 /// # Ok::<_, AllocError>(()) 357 357 /// ``` 358 358 /// ··· 882 882 /// 883 883 /// impl Foo { 884 884 /// fn setup(self: Pin<&mut Self>) { 885 - /// pr_info!("Setting up foo"); 885 + /// pr_info!("Setting up foo\n"); 886 886 /// } 887 887 /// } 888 888 /// ··· 986 986 /// 987 987 /// impl Foo { 988 988 /// fn setup(&mut self) { 989 - /// pr_info!("Setting up foo"); 989 + /// pr_info!("Setting up foo\n"); 990 990 /// } 991 991 /// } 992 992 /// ··· 1336 1336 /// #[pinned_drop] 1337 1337 /// impl PinnedDrop for Foo { 1338 1338 /// fn drop(self: Pin<&mut Self>) { 1339 - /// pr_info!("Foo is being dropped!"); 1339 + /// pr_info!("Foo is being dropped!\n"); 1340 1340 /// } 1341 1341 /// } 1342 1342 /// ``` ··· 1418 1418 // SAFETY: `T: Zeroable` and `UnsafeCell` is `repr(transparent)`. 1419 1419 {<T: ?Sized + Zeroable>} UnsafeCell<T>, 1420 1420 1421 - // SAFETY: All zeros is equivalent to `None` (option layout optimization guarantee). 1421 + // SAFETY: All zeros is equivalent to `None` (option layout optimization guarantee: 1422 + // https://doc.rust-lang.org/stable/std/option/index.html#representation). 
1422 1423 Option<NonZeroU8>, Option<NonZeroU16>, Option<NonZeroU32>, Option<NonZeroU64>, 1423 1424 Option<NonZeroU128>, Option<NonZeroUsize>, 1424 1425 Option<NonZeroI8>, Option<NonZeroI16>, Option<NonZeroI32>, Option<NonZeroI64>, 1425 1426 Option<NonZeroI128>, Option<NonZeroIsize>, 1426 - 1427 - // SAFETY: All zeros is equivalent to `None` (option layout optimization guarantee). 1428 - // 1429 - // In this case we are allowed to use `T: ?Sized`, since all zeros is the `None` variant. 1430 - {<T: ?Sized>} Option<NonNull<T>>, 1431 - {<T: ?Sized>} Option<KBox<T>>, 1427 + {<T>} Option<NonNull<T>>, 1428 + {<T>} Option<KBox<T>>, 1432 1429 1433 1430 // SAFETY: `null` pointer is valid. 1434 1431 //
+3 -3
rust/kernel/init/macros.rs
··· 45 45 //! #[pinned_drop] 46 46 //! impl PinnedDrop for Foo { 47 47 //! fn drop(self: Pin<&mut Self>) { 48 - //! pr_info!("{self:p} is getting dropped."); 48 + //! pr_info!("{self:p} is getting dropped.\n"); 49 49 //! } 50 50 //! } 51 51 //! ··· 412 412 //! #[pinned_drop] 413 413 //! impl PinnedDrop for Foo { 414 414 //! fn drop(self: Pin<&mut Self>) { 415 - //! pr_info!("{self:p} is getting dropped."); 415 + //! pr_info!("{self:p} is getting dropped.\n"); 416 416 //! } 417 417 //! } 418 418 //! ``` ··· 423 423 //! // `unsafe`, full path and the token parameter are added, everything else stays the same. 424 424 //! unsafe impl ::kernel::init::PinnedDrop for Foo { 425 425 //! fn drop(self: Pin<&mut Self>, _: ::kernel::init::__internal::OnlyCallFromDrop) { 426 - //! pr_info!("{self:p} is getting dropped."); 426 + //! pr_info!("{self:p} is getting dropped.\n"); 427 427 //! } 428 428 //! } 429 429 //! ```
+1 -1
rust/kernel/lib.rs
··· 6 6 //! usage by Rust code in the kernel and is shared by all of them. 7 7 //! 8 8 //! In other words, all the rest of the Rust code in the kernel (e.g. kernel 9 - //! modules written in Rust) depends on [`core`], [`alloc`] and this crate. 9 + //! modules written in Rust) depends on [`core`] and this crate. 10 10 //! 11 11 //! If you need a kernel C API that is not ported or wrapped yet here, then 12 12 //! do so first instead of bypassing this crate.
+4 -12
rust/kernel/sync.rs
··· 30 30 unsafe impl Sync for LockClassKey {} 31 31 32 32 impl LockClassKey { 33 - /// Creates a new lock class key. 34 - pub const fn new() -> Self { 35 - Self(Opaque::uninit()) 36 - } 37 - 38 33 pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key { 39 34 self.0.get() 40 - } 41 - } 42 - 43 - impl Default for LockClassKey { 44 - fn default() -> Self { 45 - Self::new() 46 35 } 47 36 } 48 37 ··· 40 51 #[macro_export] 41 52 macro_rules! static_lock_class { 42 53 () => {{ 43 - static CLASS: $crate::sync::LockClassKey = $crate::sync::LockClassKey::new(); 54 + static CLASS: $crate::sync::LockClassKey = 55 + // SAFETY: lockdep expects uninitialized memory when it's handed a statically allocated 56 + // lock_class_key 57 + unsafe { ::core::mem::MaybeUninit::uninit().assume_init() }; 44 58 &CLASS 45 59 }}; 46 60 }
+1 -1
rust/kernel/sync/locked_by.rs
··· 55 55 /// fn print_bytes_used(dir: &Directory, file: &File) { 56 56 /// let guard = dir.inner.lock(); 57 57 /// let inner_file = file.inner.access(&guard); 58 - /// pr_info!("{} {}", guard.bytes_used, inner_file.bytes_used); 58 + /// pr_info!("{} {}\n", guard.bytes_used, inner_file.bytes_used); 59 59 /// } 60 60 /// 61 61 /// /// Increments `bytes_used` for both the directory and file.
+1 -1
rust/kernel/task.rs
··· 320 320 321 321 /// Wakes up the task. 322 322 pub fn wake_up(&self) { 323 - // SAFETY: It's always safe to call `signal_pending` on a valid task, even if the task 323 + // SAFETY: It's always safe to call `wake_up_process` on a valid task, even if the task 324 324 // running. 325 325 unsafe { bindings::wake_up_process(self.as_ptr()) }; 326 326 }
+3 -3
rust/kernel/workqueue.rs
··· 60 60 //! type Pointer = Arc<MyStruct>; 61 61 //! 62 62 //! fn run(this: Arc<MyStruct>) { 63 - //! pr_info!("The value is: {}", this.value); 63 + //! pr_info!("The value is: {}\n", this.value); 64 64 //! } 65 65 //! } 66 66 //! ··· 108 108 //! type Pointer = Arc<MyStruct>; 109 109 //! 110 110 //! fn run(this: Arc<MyStruct>) { 111 - //! pr_info!("The value is: {}", this.value_1); 111 + //! pr_info!("The value is: {}\n", this.value_1); 112 112 //! } 113 113 //! } 114 114 //! ··· 116 116 //! type Pointer = Arc<MyStruct>; 117 117 //! 118 118 //! fn run(this: Arc<MyStruct>) { 119 - //! pr_info!("The second value is: {}", this.value_2); 119 + //! pr_info!("The second value is: {}\n", this.value_2); 120 120 //! } 121 121 //! } 122 122 //!
+42 -29
scripts/generate_rust_analyzer.py
··· 57 57 crates_indexes[display_name] = len(crates) 58 58 crates.append(crate) 59 59 60 - # First, the ones in `rust/` since they are a bit special. 61 - append_crate( 62 - "core", 63 - sysroot_src / "core" / "src" / "lib.rs", 64 - [], 65 - cfg=crates_cfgs.get("core", []), 66 - is_workspace_member=False, 67 - ) 60 + def append_sysroot_crate( 61 + display_name, 62 + deps, 63 + cfg=[], 64 + ): 65 + append_crate( 66 + display_name, 67 + sysroot_src / display_name / "src" / "lib.rs", 68 + deps, 69 + cfg, 70 + is_workspace_member=False, 71 + ) 72 + 73 + # NB: sysroot crates reexport items from one another so setting up our transitive dependencies 74 + # here is important for ensuring that rust-analyzer can resolve symbols. The sources of truth 75 + # for this dependency graph are `(sysroot_src / crate / "Cargo.toml" for crate in crates)`. 76 + append_sysroot_crate("core", [], cfg=crates_cfgs.get("core", [])) 77 + append_sysroot_crate("alloc", ["core"]) 78 + append_sysroot_crate("std", ["alloc", "core"]) 79 + append_sysroot_crate("proc_macro", ["core", "std"]) 68 80 69 81 append_crate( 70 82 "compiler_builtins", ··· 87 75 append_crate( 88 76 "macros", 89 77 srctree / "rust" / "macros" / "lib.rs", 90 - [], 78 + ["std", "proc_macro"], 91 79 is_proc_macro=True, 92 80 ) 93 81 ··· 97 85 append_crate( 98 86 ["core", "compiler_builtins"], 99 87 ) 100 88 101 - append_crate( 102 - "bindings", 103 - srctree / "rust"/ "bindings" / "lib.rs", 104 - ["core"], 105 - cfg=cfg, 106 - ) 107 - crates[-1]["env"]["OBJTREE"] = str(objtree.resolve(True)) 88 + def append_crate_with_generated( 89 + display_name, 90 + deps, 91 + ): 92 + append_crate( 93 + display_name, 94 + srctree / "rust"/ display_name / "lib.rs", 95 + deps, 96 + cfg=cfg, 97 + ) 98 + crates[-1]["env"]["OBJTREE"] = str(objtree.resolve(True)) 99 + crates[-1]["source"] = { 100 + "include_dirs": [ 101 + str(srctree / "rust" / display_name), 102 + str(objtree / "rust") 103 + ], 104 + "exclude_dirs": [], 105 + } 107 106 108 - append_crate( 109 - "kernel", 110 - srctree / "rust" / "kernel" / "lib.rs", 111 - ["core", "macros", "build_error", "bindings"], 112 - cfg=cfg, 113 - ) 114 - crates[-1]["source"] = { 115 - "include_dirs": [ 116 - str(srctree / "rust" / "kernel"), 117 - str(objtree / "rust") 118 - ], 119 - "exclude_dirs": [], 120 - } 107 + append_crate_with_generated("bindings", ["core"]) 108 + append_crate_with_generated("uapi", ["core"]) 109 + append_crate_with_generated("kernel", ["core", "macros", "build_error", "bindings", "uapi"]) 121 110 122 111 def is_root_crate(build_file, target): 123 112 try:
+2 -2
scripts/rustdoc_test_gen.rs
··· 15 15 //! - Test code should be able to define functions and call them, without having to carry 16 16 //! the context. 17 17 //! 18 - //! - Later on, we may want to be able to test non-kernel code (e.g. `core`, `alloc` or 19 - //! third-party crates) which likely use the standard library `assert*!` macros. 18 + //! - Later on, we may want to be able to test non-kernel code (e.g. `core` or third-party 19 + //! crates) which likely use the standard library `assert*!` macros. 20 20 //! 21 21 //! For this reason, instead of the passed context, `kunit_get_current_test()` is used instead 22 22 //! (i.e. `current->kunit_test`).
+21
sound/pci/hda/patch_realtek.c
··· 4790 4790 } 4791 4791 4792 4792 4793 + static void alc295_fixup_hp_mute_led_coefbit11(struct hda_codec *codec, 4794 + const struct hda_fixup *fix, int action) 4795 + { 4796 + struct alc_spec *spec = codec->spec; 4797 + 4798 + if (action == HDA_FIXUP_ACT_PRE_PROBE) { 4799 + spec->mute_led_polarity = 0; 4800 + spec->mute_led_coef.idx = 0xb; 4801 + spec->mute_led_coef.mask = 3 << 3; 4802 + spec->mute_led_coef.on = 1 << 3; 4803 + spec->mute_led_coef.off = 1 << 4; 4804 + snd_hda_gen_add_mute_led_cdev(codec, coef_mute_led_set); 4805 + } 4806 + } 4807 + 4793 4808 static void alc285_fixup_hp_mute_led(struct hda_codec *codec, 4794 4809 const struct hda_fixup *fix, int action) 4795 4810 { ··· 7671 7656 ALC290_FIXUP_MONO_SPEAKERS_HSJACK, 7672 7657 ALC290_FIXUP_SUBWOOFER, 7673 7658 ALC290_FIXUP_SUBWOOFER_HSJACK, 7659 + ALC295_FIXUP_HP_MUTE_LED_COEFBIT11, 7674 7660 ALC269_FIXUP_THINKPAD_ACPI, 7675 7661 ALC269_FIXUP_LENOVO_XPAD_ACPI, 7676 7662 ALC269_FIXUP_DMIC_THINKPAD_ACPI, ··· 9417 9401 .chained = true, 9418 9402 .chain_id = ALC283_FIXUP_INT_MIC, 9419 9403 }, 9404 + [ALC295_FIXUP_HP_MUTE_LED_COEFBIT11] = { 9405 + .type = HDA_FIXUP_FUNC, 9406 + .v.func = alc295_fixup_hp_mute_led_coefbit11, 9407 + }, 9420 9408 [ALC298_FIXUP_SAMSUNG_AMP] = { 9421 9409 .type = HDA_FIXUP_FUNC, 9422 9410 .v.func = alc298_fixup_samsung_amp, ··· 10471 10451 SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3), 10472 10452 SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360), 10473 10453 SND_PCI_QUIRK(0x103c, 0x8537, "HP ProBook 440 G6", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 10454 + SND_PCI_QUIRK(0x103c, 0x85c6, "HP Pavilion x360 Convertible 14-dy1xxx", ALC295_FIXUP_HP_MUTE_LED_COEFBIT11), 10474 10455 SND_PCI_QUIRK(0x103c, 0x85de, "HP Envy x360 13-ar0xxx", ALC285_FIXUP_HP_ENVY_X360), 10475 10456 SND_PCI_QUIRK(0x103c, 0x860f, "HP ZBook 15 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT), 10476 10457 SND_PCI_QUIRK(0x103c, 0x861f, "HP Elite Dragonfly G1", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+7
sound/soc/amd/yc/acp6x-mach.c
···
252 252 .driver_data = &acp6x_card,
253 253 .matches = {
254 254 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
255 + DMI_MATCH(DMI_PRODUCT_NAME, "21M6"),
256 + }
257 + },
258 + {
259 + .driver_data = &acp6x_card,
260 + .matches = {
261 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
255 262 DMI_MATCH(DMI_PRODUCT_NAME, "21ME"),
256 263 }
257 264 },
+10 -3
sound/soc/codecs/cs42l43-jack.c
···
167 167 autocontrol |= 0x3 << CS42L43_JACKDET_MODE_SHIFT;
168 168
169 169 ret = cs42l43_find_index(priv, "cirrus,tip-fall-db-ms", 500,
170 - NULL, cs42l43_accdet_db_ms,
170 + &priv->tip_fall_db_ms, cs42l43_accdet_db_ms,
171 171 ARRAY_SIZE(cs42l43_accdet_db_ms));
172 172 if (ret < 0)
173 173 goto error;
···
175 175 tip_deb |= ret << CS42L43_TIPSENSE_FALLING_DB_TIME_SHIFT;
176 176
177 177 ret = cs42l43_find_index(priv, "cirrus,tip-rise-db-ms", 500,
178 - NULL, cs42l43_accdet_db_ms,
178 + &priv->tip_rise_db_ms, cs42l43_accdet_db_ms,
179 179 ARRAY_SIZE(cs42l43_accdet_db_ms));
180 180 if (ret < 0)
181 181 goto error;
···
764 764 error:
765 765 mutex_unlock(&priv->jack_lock);
766 766
767 + priv->suspend_jack_debounce = false;
768 +
767 769 pm_runtime_mark_last_busy(priv->dev);
768 770 pm_runtime_put_autosuspend(priv->dev);
769 771 }
···
773 771 irqreturn_t cs42l43_tip_sense(int irq, void *data)
774 772 {
775 773 struct cs42l43_codec *priv = data;
774 + unsigned int db_delay = priv->tip_debounce_ms;
776 775
777 776 cancel_delayed_work(&priv->bias_sense_timeout);
778 777 cancel_delayed_work(&priv->tip_sense_work);
779 778 cancel_delayed_work(&priv->button_press_work);
780 779 cancel_work(&priv->button_release_work);
781 780
781 + // Ensure delay after suspend is long enough to avoid false detection
782 + if (priv->suspend_jack_debounce)
783 + db_delay += priv->tip_fall_db_ms + priv->tip_rise_db_ms;
784 +
782 785 queue_delayed_work(system_long_wq, &priv->tip_sense_work,
783 - msecs_to_jiffies(priv->tip_debounce_ms));
786 + msecs_to_jiffies(db_delay));
784 787
785 788 return IRQ_HANDLED;
786 789 }
+15 -2
sound/soc/codecs/cs42l43.c
···
1146 1146
1147 1147 SOC_DOUBLE_R_SX_TLV("ADC Volume", CS42L43_ADC_B_CTRL1, CS42L43_ADC_B_CTRL2,
1148 1148 CS42L43_ADC_PGA_GAIN_SHIFT,
1149 - 0xF, 5, cs42l43_adc_tlv),
1149 + 0xF, 4, cs42l43_adc_tlv),
1150 1150
1151 1151 SOC_DOUBLE("PDM1 Invert Switch", CS42L43_DMIC_PDM_CTRL,
1152 1152 CS42L43_PDM1L_INV_SHIFT, CS42L43_PDM1R_INV_SHIFT, 1, 0),
···
2402 2402 return 0;
2403 2403 }
2404 2404
2405 + static int cs42l43_codec_runtime_force_suspend(struct device *dev)
2406 + {
2407 + struct cs42l43_codec *priv = dev_get_drvdata(dev);
2408 +
2409 + dev_dbg(priv->dev, "Runtime suspend\n");
2410 +
2411 + priv->suspend_jack_debounce = true;
2412 +
2413 + pm_runtime_force_suspend(dev);
2414 +
2415 + return 0;
2416 + }
2417 +
2405 2418 static const struct dev_pm_ops cs42l43_codec_pm_ops = {
2406 2419 RUNTIME_PM_OPS(NULL, cs42l43_codec_runtime_resume, NULL)
2407 - SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
2420 + SYSTEM_SLEEP_PM_OPS(cs42l43_codec_runtime_force_suspend, pm_runtime_force_resume)
2408 2421 };
2409 2422
2410 2423 static const struct platform_device_id cs42l43_codec_id_table[] = {
+3
sound/soc/codecs/cs42l43.h
···
78 78
79 79 bool use_ring_sense;
80 80 unsigned int tip_debounce_ms;
81 + unsigned int tip_fall_db_ms;
82 + unsigned int tip_rise_db_ms;
81 83 unsigned int bias_low;
82 84 unsigned int bias_sense_ua;
83 85 unsigned int bias_ramp_ms;
···
97 95 bool button_detect_running;
98 96 bool jack_present;
99 97 int jack_override;
98 + bool suspend_jack_debounce;
100 99
101 100 struct work_struct hp_ilimit_work;
102 101 struct delayed_work hp_ilimit_clear_work;
+3
sound/soc/codecs/rt1320-sdw.c
···
535 535 /* set the timeout values */
536 536 prop->clk_stop_timeout = 64;
537 537
538 + /* BIOS may set wake_capable. Make sure it is 0 as wake events are disabled. */
539 + prop->wake_capable = 0;
540 +
538 541 return 0;
539 542 }
540 543
+4
sound/soc/codecs/rt722-sdca-sdw.c
···
86 86 case 0x6100067:
87 87 case 0x6100070 ... 0x610007c:
88 88 case 0x6100080:
89 + case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_FU15, RT722_SDCA_CTL_FU_CH_GAIN,
90 + CH_01) ...
91 + SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_FU15, RT722_SDCA_CTL_FU_CH_GAIN,
92 + CH_04):
89 93 case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_USER_FU1E, RT722_SDCA_CTL_FU_VOLUME,
90 94 CH_01):
91 95 case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_USER_FU1E, RT722_SDCA_CTL_FU_VOLUME,
+11 -2
sound/soc/codecs/wm0010.c
···
920 920 if (ret) {
921 921 dev_err(wm0010->dev, "Failed to set IRQ %d as wake source: %d\n",
922 922 irq, ret);
923 - return ret;
923 + goto free_irq;
924 924 }
925 925
926 926 if (spi->max_speed_hz)
···
932 932 &soc_component_dev_wm0010, wm0010_dai,
933 933 ARRAY_SIZE(wm0010_dai));
934 934 if (ret < 0)
935 - return ret;
935 + goto disable_irq_wake;
936 936
937 937 return 0;
938 +
939 + disable_irq_wake:
940 + irq_set_irq_wake(wm0010->irq, 0);
941 +
942 + free_irq:
943 + if (wm0010->irq)
944 + free_irq(wm0010->irq, wm0010);
945 +
946 + return ret;
938 947 }
939 948
940 949 static void wm0010_spi_remove(struct spi_device *spi)
+2 -2
sound/soc/codecs/wsa884x.c
···
1875 1875 * Reading temperature is possible only when Power Amplifier is
1876 1876 * off. Report last cached data.
1877 1877 */
1878 - *temp = wsa884x->temperature;
1878 + *temp = wsa884x->temperature * 1000;
1879 1879 return 0;
1880 1880 }
1881 1881
···
1934 1934 if ((val > WSA884X_LOW_TEMP_THRESHOLD) &&
1935 1935 (val < WSA884X_HIGH_TEMP_THRESHOLD)) {
1936 1936 wsa884x->temperature = val;
1937 - *temp = val;
1937 + *temp = val * 1000;
1938 1938 ret = 0;
1939 1939 } else {
1940 1940 ret = -EAGAIN;
+1 -1
sound/soc/intel/boards/sof_sdw.c
···
954 954
955 955 /* generate DAI links by each sdw link */
956 956 while (sof_dais->initialised) {
957 - int current_be_id;
957 + int current_be_id = 0;
958 958
959 959 ret = create_sdw_dailink(card, sof_dais, dai_links,
960 960 &current_be_id, codec_conf);
+7 -8
sound/soc/soc-ops.c
···
337 337 if (ucontrol->value.integer.value[0] < 0)
338 338 return -EINVAL;
339 339 val = ucontrol->value.integer.value[0];
340 - if (mc->platform_max && ((int)val + min) > mc->platform_max)
340 + if (mc->platform_max && val > mc->platform_max)
341 341 return -EINVAL;
342 342 if (val > max - min)
343 343 return -EINVAL;
···
350 350 if (ucontrol->value.integer.value[1] < 0)
351 351 return -EINVAL;
352 352 val2 = ucontrol->value.integer.value[1];
353 - if (mc->platform_max && ((int)val2 + min) > mc->platform_max)
353 + if (mc->platform_max && val2 > mc->platform_max)
354 354 return -EINVAL;
355 355 if (val2 > max - min)
356 356 return -EINVAL;
···
503 503 {
504 504 struct soc_mixer_control *mc =
505 505 (struct soc_mixer_control *)kcontrol->private_value;
506 - int platform_max;
507 - int min = mc->min;
506 + int max;
508 507
509 - if (!mc->platform_max)
510 - mc->platform_max = mc->max;
511 - platform_max = mc->platform_max;
508 + max = mc->max - mc->min;
509 + if (mc->platform_max && mc->platform_max < max)
510 + max = mc->platform_max;
512 511
513 512 uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
514 513 uinfo->count = snd_soc_volsw_is_stereo(mc) ? 2 : 1;
515 514 uinfo->value.integer.min = 0;
516 - uinfo->value.integer.max = platform_max - min;
515 + uinfo->value.integer.max = max;
517 516
518 517 return 0;
519 518 }
+2 -2
sound/soc/tegra/tegra210_adx.c
···
264 264 .rates = SNDRV_PCM_RATE_8000_192000, \
265 265 .formats = SNDRV_PCM_FMTBIT_S8 | \
266 266 SNDRV_PCM_FMTBIT_S16_LE | \
267 - SNDRV_PCM_FMTBIT_S16_LE | \
267 + SNDRV_PCM_FMTBIT_S24_LE | \
268 268 SNDRV_PCM_FMTBIT_S32_LE, \
269 269 }, \
270 270 .capture = { \
···
274 274 .rates = SNDRV_PCM_RATE_8000_192000, \
275 275 .formats = SNDRV_PCM_FMTBIT_S8 | \
276 276 SNDRV_PCM_FMTBIT_S16_LE | \
277 - SNDRV_PCM_FMTBIT_S16_LE | \
277 + SNDRV_PCM_FMTBIT_S24_LE | \
278 278 SNDRV_PCM_FMTBIT_S32_LE, \
279 279 }, \
280 280 .ops = &tegra210_adx_out_dai_ops, \
+19 -2
tools/include/uapi/asm-generic/socket.h
···
119 119
120 120 #define SO_DETACH_REUSEPORT_BPF 68
121 121
122 + #define SO_PREFER_BUSY_POLL 69
123 + #define SO_BUSY_POLL_BUDGET 70
124 +
125 + #define SO_NETNS_COOKIE 71
126 +
127 + #define SO_BUF_LOCK 72
128 +
129 + #define SO_RESERVE_MEM 73
130 +
131 + #define SO_TXREHASH 74
132 +
122 133 #define SO_RCVMARK 75
123 134
124 135 #define SO_PASSPIDFD 76
125 136 #define SO_PEERPIDFD 77
126 137
127 - #define SCM_TS_OPT_ID 78
138 + #define SO_DEVMEM_LINEAR 78
139 + #define SCM_DEVMEM_LINEAR SO_DEVMEM_LINEAR
140 + #define SO_DEVMEM_DMABUF 79
141 + #define SCM_DEVMEM_DMABUF SO_DEVMEM_DMABUF
142 + #define SO_DEVMEM_DONTNEED 80
128 143
129 - #define SCM_TS_OPT_ID 81
144 + #define SCM_TS_OPT_ID 81
145 +
130 147 #define SO_RCVPRIORITY 82
131 148
132 149 #if !defined(__KERNEL__)
+8 -8
tools/testing/selftests/drivers/net/ping.py
···
7 7 from lib.py import ksft_eq, KsftSkipEx, KsftFailEx
8 8 from lib.py import EthtoolFamily, NetDrvEpEnv
9 9 from lib.py import bkg, cmd, wait_port_listen, rand_port
10 - from lib.py import ethtool, ip
10 + from lib.py import defer, ethtool, ip
11 11
12 12 remote_ifname=""
13 13 no_sleep=False
···
60 60 prog = test_dir + "/../../net/lib/xdp_dummy.bpf.o"
61 61 cmd(f"ip link set dev {remote_ifname} mtu 1500", shell=True, host=cfg.remote)
62 62 cmd(f"ip link set dev {cfg.ifname} mtu 1500 xdpgeneric obj {prog} sec xdp", shell=True)
63 + defer(cmd, f"ip link set dev {cfg.ifname} xdpgeneric off")
63 64
64 65 if no_sleep != True:
65 66 time.sleep(10)
···
69 68 test_dir = os.path.dirname(os.path.realpath(__file__))
70 69 prog = test_dir + "/../../net/lib/xdp_dummy.bpf.o"
71 70 cmd(f"ip link set dev {remote_ifname} mtu 9000", shell=True, host=cfg.remote)
71 + defer(ip, f"link set dev {remote_ifname} mtu 1500", host=cfg.remote)
72 72 ip("link set dev %s mtu 9000 xdpgeneric obj %s sec xdp.frags" % (cfg.ifname, prog))
73 + defer(ip, f"link set dev {cfg.ifname} mtu 1500 xdpgeneric off")
73 74
74 75 if no_sleep != True:
75 76 time.sleep(10)
···
81 78 prog = test_dir + "/../../net/lib/xdp_dummy.bpf.o"
82 79 cmd(f"ip link set dev {remote_ifname} mtu 1500", shell=True, host=cfg.remote)
83 80 cmd(f"ip -j link set dev {cfg.ifname} mtu 1500 xdp obj {prog} sec xdp", shell=True)
81 + defer(ip, f"link set dev {cfg.ifname} mtu 1500 xdp off")
84 82 xdp_info = ip("-d link show %s" % (cfg.ifname), json=True)[0]
85 83 if xdp_info['xdp']['mode'] != 1:
86 84 """
···
98 94 test_dir = os.path.dirname(os.path.realpath(__file__))
99 95 prog = test_dir + "/../../net/lib/xdp_dummy.bpf.o"
100 96 cmd(f"ip link set dev {remote_ifname} mtu 9000", shell=True, host=cfg.remote)
97 + defer(ip, f"link set dev {remote_ifname} mtu 1500", host=cfg.remote)
101 98 try:
102 99 cmd(f"ip link set dev {cfg.ifname} mtu 9000 xdp obj {prog} sec xdp.frags", shell=True)
100 + defer(ip, f"link set dev {cfg.ifname} mtu 1500 xdp off")
103 101 except Exception as e:
104 - cmd(f"ip link set dev {remote_ifname} mtu 1500", shell=True, host=cfg.remote)
105 102 raise KsftSkipEx('device does not support native-multi-buffer XDP')
106 103
107 104 if no_sleep != True:
···
116 111 cmd(f"ip link set dev {cfg.ifname} xdpoffload obj {prog} sec xdp", shell=True)
117 112 except Exception as e:
118 113 raise KsftSkipEx('device does not support offloaded XDP')
114 + defer(ip, f"link set dev {cfg.ifname} xdpoffload off")
119 115 cmd(f"ip link set dev {remote_ifname} mtu 1500", shell=True, host=cfg.remote)
120 116
121 117 if no_sleep != True:
···
163 157 _test_v4(cfg)
164 158 _test_v6(cfg)
165 159 _test_tcp(cfg)
166 - ip("link set dev %s xdpgeneric off" % cfg.ifname)
167 160
168 161 def test_xdp_generic_mb(cfg, netnl) -> None:
169 162 _set_xdp_generic_mb_on(cfg)
···
174 169 _test_v4(cfg)
175 170 _test_v6(cfg)
176 171 _test_tcp(cfg)
177 - ip("link set dev %s xdpgeneric off" % cfg.ifname)
178 172
179 173 def test_xdp_native_sb(cfg, netnl) -> None:
180 174 _set_xdp_native_sb_on(cfg)
···
185 181 _test_v4(cfg)
186 182 _test_v6(cfg)
187 183 _test_tcp(cfg)
188 - ip("link set dev %s xdp off" % cfg.ifname)
189 184
190 185 def test_xdp_native_mb(cfg, netnl) -> None:
191 186 _set_xdp_native_mb_on(cfg)
···
196 193 _test_v4(cfg)
197 194 _test_v6(cfg)
198 195 _test_tcp(cfg)
199 - ip("link set dev %s xdp off" % cfg.ifname)
200 196
201 197 def test_xdp_offload(cfg, netnl) -> None:
202 198 _set_xdp_offload_on(cfg)
203 199 _test_v4(cfg)
204 200 _test_v6(cfg)
205 201 _test_tcp(cfg)
206 - ip("link set dev %s xdpoffload off" % cfg.ifname)
207 202
208 203 def main() -> None:
209 204 with NetDrvEpEnv(__file__) as cfg:
···
214 213 test_xdp_native_mb,
215 214 test_xdp_offload],
216 215 args=(cfg, EthtoolFamily()))
217 - set_interface_init(cfg)
218 216 ksft_exit()
219 217
220 218
+3 -1
tools/testing/selftests/mm/run_vmtests.sh
···
304 304 CATEGORY="userfaultfd" run_test ${uffd_stress_bin} anon 20 16
305 305 # Hugetlb tests require source and destination huge pages. Pass in half
306 306 # the size of the free pages we have, which is used for *each*.
307 - half_ufd_size_MB=$((freepgs / 2))
307 + # uffd-stress expects a region expressed in MiB, so we adjust
308 + # half_ufd_size_MB accordingly.
309 + half_ufd_size_MB=$(((freepgs * hpgsize_KB) / 1024 / 2))
308 310 CATEGORY="userfaultfd" run_test ${uffd_stress_bin} hugetlb "$half_ufd_size_MB" 32
309 311 CATEGORY="userfaultfd" run_test ${uffd_stress_bin} hugetlb-private "$half_ufd_size_MB" 32
310 312 CATEGORY="userfaultfd" run_test ${uffd_stress_bin} shmem 20 16
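The run_vmtests.sh change above converts a count of free huge pages into a size in MiB before halving it. The arithmetic can be sanity-checked in isolation; the `freepgs` and `hpgsize_KB` values below are illustrative placeholders, not taken from an actual test run:

```shell
# Sketch of the MiB conversion used by the fixed line.
# Hypothetical values: 64 free huge pages of 2048 KiB (2 MiB) each.
freepgs=64
hpgsize_KB=2048

# pages * KiB-per-page -> total KiB; /1024 -> MiB; /2 -> half for each of
# the source and destination regions used by uffd-stress.
half_ufd_size_MB=$(((freepgs * hpgsize_KB) / 1024 / 2))

echo "$half_ufd_size_MB"   # prints 64 (half of 128 MiB total)
```

The old expression `$((freepgs / 2))` halved a page count but passed it where MiB were expected, hence the fix.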
+1 -1
tools/testing/selftests/net/Makefile
···
31 31 TEST_PROGS += ioam6.sh
32 32 TEST_PROGS += gro.sh
33 33 TEST_PROGS += gre_gso.sh
34 - TEST_PROGS += gre_ipv6_lladdr.sh
35 34 TEST_PROGS += cmsg_so_mark.sh
36 35 TEST_PROGS += cmsg_so_priority.sh
37 36 TEST_PROGS += test_so_rcv.sh
···
105 106 TEST_PROGS += ipv6_route_update_soft_lockup.sh
106 107 TEST_PROGS += busy_poll_test.sh
107 108 TEST_GEN_PROGS += proc_net_pktgen
109 + TEST_PROGS += lwt_dst_cache_ref_loop.sh
108 110
109 111 # YNL files, must be before "include ..lib.mk"
110 112 YNL_GEN_FILES := busy_poller netlink-dumps
+2
tools/testing/selftests/net/config
···
115 115 CONFIG_CAN_VXCAN=m
116 116 CONFIG_NETKIT=y
117 117 CONFIG_NET_PKTGEN=m
118 + CONFIG_IPV6_ILA=m
119 + CONFIG_IPV6_RPL_LWTUNNEL=y
-177
tools/testing/selftests/net/gre_ipv6_lladdr.sh
···
1 - #!/bin/bash
2 - # SPDX-License-Identifier: GPL-2.0
3 -
4 - source ./lib.sh
5 -
6 - PAUSE_ON_FAIL="no"
7 -
8 - # The trap function handler
9 - #
10 - exit_cleanup_all()
11 - {
12 - cleanup_all_ns
13 -
14 - exit "${EXIT_STATUS}"
15 - }
16 -
17 - # Add fake IPv4 and IPv6 networks on the loopback device, to be used as
18 - # underlay by future GRE devices.
19 - #
20 - setup_basenet()
21 - {
22 - ip -netns "${NS0}" link set dev lo up
23 - ip -netns "${NS0}" address add dev lo 192.0.2.10/24
24 - ip -netns "${NS0}" address add dev lo 2001:db8::10/64 nodad
25 - }
26 -
27 - # Check if network device has an IPv6 link-local address assigned.
28 - #
29 - # Parameters:
30 - #
31 - # * $1: The network device to test
32 - # * $2: An extra regular expression that should be matched (to verify the
33 - # presence of extra attributes)
34 - # * $3: The expected return code from grep (to allow checking the absence of
35 - # a link-local address)
36 - # * $4: The user visible name for the scenario being tested
37 - #
38 - check_ipv6_ll_addr()
39 - {
40 - local DEV="$1"
41 - local EXTRA_MATCH="$2"
42 - local XRET="$3"
43 - local MSG="$4"
44 -
45 - RET=0
46 - set +e
47 - ip -netns "${NS0}" -6 address show dev "${DEV}" scope link | grep "fe80::" | grep -q "${EXTRA_MATCH}"
48 - check_err_fail "${XRET}" $? ""
49 - log_test "${MSG}"
50 - set -e
51 - }
52 -
53 - # Create a GRE device and verify that it gets an IPv6 link-local address as
54 - # expected.
55 - #
56 - # Parameters:
57 - #
58 - # * $1: The device type (gre, ip6gre, gretap or ip6gretap)
59 - # * $2: The local underlay IP address (can be an IPv4, an IPv6 or "any")
60 - # * $3: The remote underlay IP address (can be an IPv4, an IPv6 or "any")
61 - # * $4: The IPv6 interface identifier generation mode to use for the GRE
62 - # device (eui64, none, stable-privacy or random).
63 - #
64 - test_gre_device()
65 - {
66 - local GRE_TYPE="$1"
67 - local LOCAL_IP="$2"
68 - local REMOTE_IP="$3"
69 - local MODE="$4"
70 - local ADDR_GEN_MODE
71 - local MATCH_REGEXP
72 - local MSG
73 -
74 - ip link add netns "${NS0}" name gretest type "${GRE_TYPE}" local "${LOCAL_IP}" remote "${REMOTE_IP}"
75 -
76 - case "${MODE}" in
77 - "eui64")
78 - ADDR_GEN_MODE=0
79 - MATCH_REGEXP=""
80 - MSG="${GRE_TYPE}, mode: 0 (EUI64), ${LOCAL_IP} -> ${REMOTE_IP}"
81 - XRET=0
82 - ;;
83 - "none")
84 - ADDR_GEN_MODE=1
85 - MATCH_REGEXP=""
86 - MSG="${GRE_TYPE}, mode: 1 (none), ${LOCAL_IP} -> ${REMOTE_IP}"
87 - XRET=1 # No link-local address should be generated
88 - ;;
89 - "stable-privacy")
90 - ADDR_GEN_MODE=2
91 - MATCH_REGEXP="stable-privacy"
92 - MSG="${GRE_TYPE}, mode: 2 (stable privacy), ${LOCAL_IP} -> ${REMOTE_IP}"
93 - XRET=0
94 - # Initialise stable_secret (required for stable-privacy mode)
95 - ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.stable_secret="2001:db8::abcd"
96 - ;;
97 - "random")
98 - ADDR_GEN_MODE=3
99 - MATCH_REGEXP="stable-privacy"
100 - MSG="${GRE_TYPE}, mode: 3 (random), ${LOCAL_IP} -> ${REMOTE_IP}"
101 - XRET=0
102 - ;;
103 - esac
104 -
105 - # Check that IPv6 link-local address is generated when device goes up
106 - ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.addr_gen_mode="${ADDR_GEN_MODE}"
107 - ip -netns "${NS0}" link set dev gretest up
108 - check_ipv6_ll_addr gretest "${MATCH_REGEXP}" "${XRET}" "config: ${MSG}"
109 -
110 - # Now disable link-local address generation
111 - ip -netns "${NS0}" link set dev gretest down
112 - ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.addr_gen_mode=1
113 - ip -netns "${NS0}" link set dev gretest up
114 -
115 - # Check that link-local address generation works when re-enabled while
116 - # the device is already up
117 - ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.addr_gen_mode="${ADDR_GEN_MODE}"
118 - check_ipv6_ll_addr gretest "${MATCH_REGEXP}" "${XRET}" "update: ${MSG}"
119 -
120 - ip -netns "${NS0}" link del dev gretest
121 - }
122 -
123 - test_gre4()
124 - {
125 - local GRE_TYPE
126 - local MODE
127 -
128 - for GRE_TYPE in "gre" "gretap"; do
129 - printf "\n####\nTesting IPv6 link-local address generation on ${GRE_TYPE} devices\n####\n\n"
130 -
131 - for MODE in "eui64" "none" "stable-privacy" "random"; do
132 - test_gre_device "${GRE_TYPE}" 192.0.2.10 192.0.2.11 "${MODE}"
133 - test_gre_device "${GRE_TYPE}" any 192.0.2.11 "${MODE}"
134 - test_gre_device "${GRE_TYPE}" 192.0.2.10 any "${MODE}"
135 - done
136 - done
137 - }
138 -
139 - test_gre6()
140 - {
141 - local GRE_TYPE
142 - local MODE
143 -
144 - for GRE_TYPE in "ip6gre" "ip6gretap"; do
145 - printf "\n####\nTesting IPv6 link-local address generation on ${GRE_TYPE} devices\n####\n\n"
146 -
147 - for MODE in "eui64" "none" "stable-privacy" "random"; do
148 - test_gre_device "${GRE_TYPE}" 2001:db8::10 2001:db8::11 "${MODE}"
149 - test_gre_device "${GRE_TYPE}" any 2001:db8::11 "${MODE}"
150 - test_gre_device "${GRE_TYPE}" 2001:db8::10 any "${MODE}"
151 - done
152 - done
153 - }
154 -
155 - usage()
156 - {
157 - echo "Usage: $0 [-p]"
158 - exit 1
159 - }
160 -
161 - while getopts :p o
162 - do
163 - case $o in
164 - p) PAUSE_ON_FAIL="yes";;
165 - *) usage;;
166 - esac
167 - done
168 -
169 - setup_ns NS0
170 -
171 - set -e
172 - trap exit_cleanup_all EXIT
173 -
174 - setup_basenet
175 -
176 - test_gre4
177 - test_gre6
+246
tools/testing/selftests/net/lwt_dst_cache_ref_loop.sh
···
1 + #!/bin/bash
2 + # SPDX-License-Identifier: GPL-2.0+
3 + #
4 + # Author: Justin Iurman <justin.iurman@uliege.be>
5 + #
6 + # WARNING
7 + # -------
8 + # This is just a dummy script that triggers encap cases with possible dst cache
9 + # reference loops in affected lwt users (see list below). Some cases are
10 + # pathological configurations for simplicity, others are valid. Overall, we
11 + # don't want this issue to happen, no matter what. In order to catch any
12 + # reference loops, kmemleak MUST be used. The results alone are always blindly
13 + # successful, don't rely on them. Note that the following tests may crash the
14 + # kernel if the fix to prevent lwtunnel_{input|output|xmit}() reentry loops is
15 + # not present.
16 + #
17 + # Affected lwt users so far (please update accordingly if needed):
18 + # - ila_lwt (output only)
19 + # - ioam6_iptunnel (output only)
20 + # - rpl_iptunnel (both input and output)
21 + # - seg6_iptunnel (both input and output)
22 +
23 + source lib.sh
24 +
25 + check_compatibility()
26 + {
27 + setup_ns tmp_node &>/dev/null
28 + if [ $? != 0 ]; then
29 + echo "SKIP: Cannot create netns."
30 + exit $ksft_skip
31 + fi
32 +
33 + ip link add name veth0 netns $tmp_node type veth \
34 + peer name veth1 netns $tmp_node &>/dev/null
35 + local ret=$?
36 +
37 + ip -netns $tmp_node link set veth0 up &>/dev/null
38 + ret=$((ret + $?))
39 +
40 + ip -netns $tmp_node link set veth1 up &>/dev/null
41 + ret=$((ret + $?))
42 +
43 + if [ $ret != 0 ]; then
44 + echo "SKIP: Cannot configure links."
45 + cleanup_ns $tmp_node
46 + exit $ksft_skip
47 + fi
48 +
49 + lsmod 2>/dev/null | grep -q "ila"
50 + ila_lsmod=$?
51 + [ $ila_lsmod != 0 ] && modprobe ila &>/dev/null
52 +
53 + ip -netns $tmp_node route add 2001:db8:1::/64 \
54 + encap ila 1:2:3:4 csum-mode no-action ident-type luid \
55 + hook-type output \
56 + dev veth0 &>/dev/null
57 +
58 + ip -netns $tmp_node route add 2001:db8:2::/64 \
59 + encap ioam6 trace prealloc type 0x800000 ns 0 size 4 \
60 + dev veth0 &>/dev/null
61 +
62 + ip -netns $tmp_node route add 2001:db8:3::/64 \
63 + encap rpl segs 2001:db8:3::1 dev veth0 &>/dev/null
64 +
65 + ip -netns $tmp_node route add 2001:db8:4::/64 \
66 + encap seg6 mode inline segs 2001:db8:4::1 dev veth0 &>/dev/null
67 +
68 + ip -netns $tmp_node -6 route 2>/dev/null | grep -q "encap ila"
69 + skip_ila=$?
70 +
71 + ip -netns $tmp_node -6 route 2>/dev/null | grep -q "encap ioam6"
72 + skip_ioam6=$?
73 +
74 + ip -netns $tmp_node -6 route 2>/dev/null | grep -q "encap rpl"
75 + skip_rpl=$?
76 +
77 + ip -netns $tmp_node -6 route 2>/dev/null | grep -q "encap seg6"
78 + skip_seg6=$?
79 +
80 + cleanup_ns $tmp_node
81 + }
82 +
83 + setup()
84 + {
85 + setup_ns alpha beta gamma &>/dev/null
86 +
87 + ip link add name veth-alpha netns $alpha type veth \
88 + peer name veth-betaL netns $beta &>/dev/null
89 +
90 + ip link add name veth-betaR netns $beta type veth \
91 + peer name veth-gamma netns $gamma &>/dev/null
92 +
93 + ip -netns $alpha link set veth-alpha name veth0 &>/dev/null
94 + ip -netns $beta link set veth-betaL name veth0 &>/dev/null
95 + ip -netns $beta link set veth-betaR name veth1 &>/dev/null
96 + ip -netns $gamma link set veth-gamma name veth0 &>/dev/null
97 +
98 + ip -netns $alpha addr add 2001:db8:1::2/64 dev veth0 &>/dev/null
99 + ip -netns $alpha link set veth0 up &>/dev/null
100 + ip -netns $alpha link set lo up &>/dev/null
101 + ip -netns $alpha route add 2001:db8:2::/64 \
102 + via 2001:db8:1::1 dev veth0 &>/dev/null
103 +
104 + ip -netns $beta addr add 2001:db8:1::1/64 dev veth0 &>/dev/null
105 + ip -netns $beta addr add 2001:db8:2::1/64 dev veth1 &>/dev/null
106 + ip -netns $beta link set veth0 up &>/dev/null
107 + ip -netns $beta link set veth1 up &>/dev/null
108 + ip -netns $beta link set lo up &>/dev/null
109 + ip -netns $beta route del 2001:db8:2::/64
110 + ip -netns $beta route add 2001:db8:2::/64 dev veth1
111 + ip netns exec $beta \
112 + sysctl -wq net.ipv6.conf.all.forwarding=1 &>/dev/null
113 +
114 + ip -netns $gamma addr add 2001:db8:2::2/64 dev veth0 &>/dev/null
115 + ip -netns $gamma link set veth0 up &>/dev/null
116 + ip -netns $gamma link set lo up &>/dev/null
117 + ip -netns $gamma route add 2001:db8:1::/64 \
118 + via 2001:db8:2::1 dev veth0 &>/dev/null
119 +
120 + sleep 1
121 +
122 + ip netns exec $alpha ping6 -c 5 -W 1 2001:db8:2::2 &>/dev/null
123 + if [ $? != 0 ]; then
124 + echo "SKIP: Setup failed."
125 + exit $ksft_skip
126 + fi
127 +
128 + sleep 1
129 + }
130 +
131 + cleanup()
132 + {
133 + cleanup_ns $alpha $beta $gamma
134 + [ $ila_lsmod != 0 ] && modprobe -r ila &>/dev/null
135 + }
136 +
137 + run_ila()
138 + {
139 + if [ $skip_ila != 0 ]; then
140 + echo "SKIP: ila (output)"
141 + return
142 + fi
143 +
144 + ip -netns $beta route del 2001:db8:2::/64
145 + ip -netns $beta route add 2001:db8:2:0:0:0:0:2/128 \
146 + encap ila 2001:db8:2:0 csum-mode no-action ident-type luid \
147 + hook-type output \
148 + dev veth1 &>/dev/null
149 + sleep 1
150 +
151 + echo "TEST: ila (output)"
152 + ip netns exec $beta ping6 -c 2 -W 1 2001:db8:2::2 &>/dev/null
153 + sleep 1
154 +
155 + ip -netns $beta route del 2001:db8:2:0:0:0:0:2/128
156 + ip -netns $beta route add 2001:db8:2::/64 dev veth1
157 + sleep 1
158 + }
159 +
160 + run_ioam6()
161 + {
162 + if [ $skip_ioam6 != 0 ]; then
163 + echo "SKIP: ioam6 (output)"
164 + return
165 + fi
166 +
167 + ip -netns $beta route change 2001:db8:2::/64 \
168 + encap ioam6 trace prealloc type 0x800000 ns 1 size 4 \
169 + dev veth1 &>/dev/null
170 + sleep 1
171 +
172 + echo "TEST: ioam6 (output)"
173 + ip netns exec $beta ping6 -c 2 -W 1 2001:db8:2::2 &>/dev/null
174 + sleep 1
175 + }
176 +
177 + run_rpl()
178 + {
179 + if [ $skip_rpl != 0 ]; then
180 + echo "SKIP: rpl (input)"
181 + echo "SKIP: rpl (output)"
182 + return
183 + fi
184 +
185 + ip -netns $beta route change 2001:db8:2::/64 \
186 + encap rpl segs 2001:db8:2::2 \
187 + dev veth1 &>/dev/null
188 + sleep 1
189 +
190 + echo "TEST: rpl (input)"
191 + ip netns exec $alpha ping6 -c 2 -W 1 2001:db8:2::2 &>/dev/null
192 + sleep 1
193 +
194 + echo "TEST: rpl (output)"
195 + ip netns exec $beta ping6 -c 2 -W 1 2001:db8:2::2 &>/dev/null
196 + sleep 1
197 + }
198 +
199 + run_seg6()
200 + {
201 + if [ $skip_seg6 != 0 ]; then
202 + echo "SKIP: seg6 (input)"
203 + echo "SKIP: seg6 (output)"
204 + return
205 + fi
206 +
207 + ip -netns $beta route change 2001:db8:2::/64 \
208 + encap seg6 mode inline segs 2001:db8:2::2 \
209 + dev veth1 &>/dev/null
210 + sleep 1
211 +
212 + echo "TEST: seg6 (input)"
213 + ip netns exec $alpha ping6 -c 2 -W 1 2001:db8:2::2 &>/dev/null
214 + sleep 1
215 +
216 + echo "TEST: seg6 (output)"
217 + ip netns exec $beta ping6 -c 2 -W 1 2001:db8:2::2 &>/dev/null
218 + sleep 1
219 + }
220 +
221 + run()
222 + {
223 + run_ila
224 + run_ioam6
225 + run_rpl
226 + run_seg6
227 + }
228 +
229 + if [ "$(id -u)" -ne 0 ]; then
230 + echo "SKIP: Need root privileges."
231 + exit $ksft_skip
232 + fi
233 +
234 + if [ ! -x "$(command -v ip)" ]; then
235 + echo "SKIP: Could not run test without ip tool."
236 + exit $ksft_skip
237 + fi
238 +
239 + check_compatibility
240 +
241 + trap cleanup EXIT
242 +
243 + setup
244 + run
245 +
246 + exit $ksft_pass