Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

spi: aspeed: Improve handling of shared SPI

Merge series from Chin-Ting Kuo <chin-ting_kuo@aspeedtech.com>:

This patch series improves handling of SPI controllers that are
shared by spi-mem devices and other SPI peripherals.

The primary goal of this series is to support non-spi-mem devices in
the ASPEED FMC/SPI controller driver. It also addresses an issue in
the spi-mem framework observed when different types of SPI devices
operate concurrently on the same controller, ensuring that spi-mem
operations are properly serialized.
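
The series itself is not reproduced here. As a minimal sketch of the serialization requirement it describes (illustrative types and names, not the actual spi-mem framework code), both the regular SPI message path and the spi-mem path hold one controller-wide mutex for the full duration of an operation, so an opcode/address/data sequence can never be interleaved with another device's transfer:

    #include <linux/mutex.h>

    /* Illustrative only: one lock guards all bus activity. */
    struct spi_controller_state {
            struct mutex io_mutex;
    };

    /* Regular SPI message path. */
    static void do_spi_message(struct spi_controller_state *st)
    {
            mutex_lock(&st->io_mutex);
            /* ... clock the message out on the bus ... */
            mutex_unlock(&st->io_mutex);
    }

    /* spi-mem path: the whole op must go out as one unit. */
    static int do_spi_mem_op(struct spi_controller_state *st)
    {
            mutex_lock(&st->io_mutex);
            /* ... issue opcode, address and data cycles ... */
            mutex_unlock(&st->io_mutex);
            return 0;
    }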

+4009 -1720
+3
.mailmap
···
 #
 Aaron Durbin <adurbin@google.com>
 Abel Vesa <abelvesa@kernel.org> <abel.vesa@nxp.com>
+Abel Vesa <abelvesa@kernel.org> <abel.vesa@linaro.org>
 Abel Vesa <abelvesa@kernel.org> <abelvesa@gmail.com>
 Abhijeet Dharmapurikar <quic_adharmap@quicinc.com> <adharmap@codeaurora.org>
 Abhinav Kumar <quic_abhinavk@quicinc.com> <abhinavk@codeaurora.org>
···
 Yakir Yang <kuankuan.y@gmail.com> <ykk@rock-chips.com>
 Yanteng Si <si.yanteng@linux.dev> <siyanteng@loongson.cn>
 Ying Huang <huang.ying.caritas@gmail.com> <ying.huang@intel.com>
+Yixun Lan <dlan@kernel.org> <dlan@gentoo.org>
+Yixun Lan <dlan@kernel.org> <yixun.lan@amlogic.com>
 Yosry Ahmed <yosry.ahmed@linux.dev> <yosryahmed@google.com>
 Yu-Chun Lin <eleanor.lin@realtek.com> <eleanor15x@gmail.com>
 Yusuke Goda <goda.yusuke@renesas.com>
+4
CREDITS
···
 S: L3R 8B2
 S: Canada
 
+N: Krzysztof Kozlowski
+E: krzk@kernel.org
+D: NFC network subsystem and drivers maintainer
+
 N: Christian Krafft
 D: PowerPC Cell support
 
+1 -1
Documentation/admin-guide/laptops/alienware-wmi.rst
···
 
 Manual fan control on the other hand, is not exposed directly by the AWCC
 interface. Instead it let's us control a fan `boost` value. This `boost` value
-has the following aproximate behavior over the fan pwm:
+has the following approximate behavior over the fan pwm:
 
 ::
 
+4
Documentation/admin-guide/sysctl/vm.rst
···
 
 The default value depends on CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT.
 
+When CONFIG_MEM_ALLOC_PROFILING_DEBUG=y, this control is read-only to avoid
+warnings produced by allocations made while profiling is disabled and freed
+when it's enabled.
+
 
 memory_failure_early_kill
 =========================
+3 -1
Documentation/arch/riscv/uabi.rst
···
 ------------------------------------
 
 The canonical order of ISA extension names in the ISA string is defined in
-chapter 27 of the unprivileged specification.
+Chapter 27 of the RISC-V Instruction Set Manual Volume I Unprivileged ISA
+(Document Version 20191213).
+
 The specification uses vague wording, such as should, when it comes to ordering,
 so for our purposes the following rules apply:
 
+2 -2
Documentation/arch/x86/amd_hsmp.rst
···
 
 More details on the interface can be found in chapter
 "7 Host System Management Port (HSMP)" of the family/model PPR
-Eg: https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/programmer-references/55898_B1_pub_0_50.zip
+Eg: https://docs.amd.com/v/u/en-US/55898_B1_pub_0_50
 
 
 HSMP interface is supported on EPYC line of server CPUs and MI300A (APU).
···
 
 More details on the interface and message definitions can be found in chapter
 "7 Host System Management Port (HSMP)" of the respective family/model PPR
-eg: https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/programmer-references/55898_B1_pub_0_50.zip
+eg: https://docs.amd.com/v/u/en-US/55898_B1_pub_0_50
 
 User space C-APIs are made available by linking against the esmi library,
 which is provided by the E-SMS project https://www.amd.com/en/developer/e-sms.html.
+1 -1
Documentation/devicetree/bindings/display/mediatek/mediatek,dp.yaml
···
   - Jitao shi <jitao.shi@mediatek.com>
 
 description: |
-  MediaTek DP and eDP are different hardwares and there are some features
+  MediaTek DP and eDP are different hardware and there are some features
   which are not supported for eDP. For example, audio is not supported for
   eDP. Therefore, we need to use two different compatibles to describe them.
   In addition, We just need to enable the power domain of DP, so the clock
+31
Documentation/devicetree/bindings/interconnect/qcom,sa8775p-rpmh.yaml
···
           - description: aggre UFS CARD AXI clock
           - description: RPMH CC IPA clock
 
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - qcom,sa8775p-config-noc
+              - qcom,sa8775p-dc-noc
+              - qcom,sa8775p-gem-noc
+              - qcom,sa8775p-gpdsp-anoc
+              - qcom,sa8775p-lpass-ag-noc
+              - qcom,sa8775p-mmss-noc
+              - qcom,sa8775p-nspa-noc
+              - qcom,sa8775p-nspb-noc
+              - qcom,sa8775p-pcie-anoc
+              - qcom,sa8775p-system-noc
+    then:
+      properties:
+        clocks: false
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - qcom,sa8775p-clk-virt
+              - qcom,sa8775p-mc-virt
+    then:
+      properties:
+        reg: false
+        clocks: false
+
 unevaluatedProperties: false
 
 examples:
+1 -1
Documentation/misc-devices/amd-sbi.rst
···
 More details on the interface can be found in chapter
 "5 Advanced Platform Management Link (APML)" of the family/model PPR [1]_.
 
-.. [1] https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/programmer-references/55898_B1_pub_0_50.zip
+.. [1] https://docs.amd.com/v/u/en-US/55898_B1_pub_0_50
 
 
 SBRMI device
+10
Documentation/mm/allocation-profiling.rst
···
 sysctl:
 /proc/sys/vm/mem_profiling
 
+1: Enable memory profiling.
+
+0: Disable memory profiling.
+
+The default value depends on CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT.
+
+When CONFIG_MEM_ALLOC_PROFILING_DEBUG=y, this control is read-only to avoid
+warnings produced by allocations made while profiling is disabled and freed
+when it's enabled.
+
 Runtime info:
 /proc/allocinfo
 
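
A usage note on the control documented above (a minimal user-space sketch, not part of the patch; the file path comes from the documentation itself): toggling it is a plain procfs write, which will fail once CONFIG_MEM_ALLOC_PROFILING_DEBUG=y makes the file read-only.

    #include <stdio.h>

    /* Write "1" or "0" to /proc/sys/vm/mem_profiling. */
    static int set_mem_profiling(int enable)
    {
            FILE *f = fopen("/proc/sys/vm/mem_profiling", "w");

            if (!f)
                    return -1;
            fprintf(f, "%d\n", enable);
            return fclose(f) ? -1 : 0;
    }

    int main(void)
    {
            if (set_mem_profiling(1))
                    perror("mem_profiling");  /* e.g. read-only under DEBUG */
            return 0;
    }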
+2
Documentation/netlink/specs/fou.yaml
···
       -
         name: ipproto
         type: u8
+        checks:
+          min: 1
       -
         name: type
         type: u8
+41
Documentation/process/conclave.rst
+.. SPDX-License-Identifier: GPL-2.0
+
+Linux kernel project continuity
+===============================
+
+The Linux kernel development project is widely distributed, with over
+100 maintainers each working to keep changes moving through their own
+repositories. The final step, though, is a centralized one where changes
+are pulled into the mainline repository. That is normally done by Linus
+Torvalds but, as was demonstrated by the 4.19 release in 2018, there are
+others who can do that work when the need arises.
+
+Should the maintainers of that repository become unwilling or unable to
+do that work going forward (including facilitating a transition), the
+project will need to find one or more replacements without delay. The
+process by which that will be done is listed below. $ORGANIZER is the
+last Maintainer Summit organizer or the current Linux Foundation (LF)
+Technical Advisory Board (TAB) Chair as a backup.
+
+- Within 72 hours, $ORGANIZER will open a discussion with the invitees
+  of the most recently concluded Maintainers Summit. A meeting of those
+  invitees and the TAB, either online or in-person, will be set as soon
+  as possible in a way that maximizes the number of people who can
+  participate.
+
+- If there has been no Maintainers Summit in the last 15 months, the set of
+  invitees for this meeting will be determined by the TAB.
+
+- The invitees to this meeting may bring in other maintainers as needed.
+
+- This meeting, chaired by $ORGANIZER, will consider options for the
+  ongoing management of the top-level kernel repository consistent with
+  the expectation that it maximizes the long term health of the project
+  and its community.
+
+- Within two weeks, a representative of this group will communicate to the
+  broader community, using the ksummit@lists.linux.dev mailing list, what
+  the next steps will be.
+
+The Linux Foundation, as guided by the TAB, will take the steps
+necessary to support and implement this plan.
+1
Documentation/process/index.rst
···
    stable-kernel-rules
    management-style
    researcher-guidelines
+   conclave
 
 Dealing with bugs
 -----------------
+12
Documentation/process/maintainer-netdev.rst
···
 with better review coverage. Re-posting large series also increases the mailing
 list traffic.
 
+Limit patches outstanding on mailing list
+-----------------------------------------
+
+Avoid having more than 15 patches, across all series, outstanding for
+review on the mailing list for a single tree. In other words, a maximum of
+15 patches under review on net, and a maximum of 15 patches under review on
+net-next.
+
+This limit is intended to focus developer effort on testing patches before
+upstream review. Aiding the quality of upstream submissions, and easing the
+load on reviewers.
+
 .. _rcs:
 
 Local variable ordering ("reverse xmas tree", "RCS")
+9 -3
MAINTAINERS
···
 K: ma35d1
 
 ARM/NUVOTON NPCM ARCHITECTURE
+M: Andrew Jeffery <andrew@codeconstruct.com.au>
 M: Avi Fishman <avifishman70@gmail.com>
 M: Tomer Maimon <tmaimon77@gmail.com>
 M: Tali Perry <tali.perry1@gmail.com>
···
 R: Benjamin Fair <benjaminfair@google.com>
 L: openbmc@lists.ozlabs.org (moderated for non-subscribers)
 S: Supported
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/bmc/linux.git
 F: Documentation/devicetree/bindings/*/*/*npcm*
 F: Documentation/devicetree/bindings/*/*npcm*
 F: Documentation/devicetree/bindings/rtc/nuvoton,nct3018y.yaml
···
 F: Documentation/driver-api/interconnect.rst
 F: drivers/interconnect/
 F: include/dt-bindings/interconnect/
+F: include/linux/interconnect-clk.h
 F: include/linux/interconnect-provider.h
 F: include/linux/interconnect.h
···
 F: net/ipv4/nexthop.c
 
 NFC SUBSYSTEM
-M: Krzysztof Kozlowski <krzk@kernel.org>
 L: netdev@vger.kernel.org
-S: Maintained
+S: Orphan
 F: Documentation/devicetree/bindings/net/nfc/
 F: drivers/nfc/
 F: include/net/nfc/
···
 F: rust/helpers/pwm.c
 F: rust/kernel/pwm.rs
 
+PWM SUBSYSTEM DRIVERS [RUST]
+R: Michal Wilczynski <m.wilczynski@samsung.com>
+F: drivers/pwm/*.rs
+
 PXA GPIO DRIVER
 M: Robert Jarzmik <robert.jarzmik@free.fr>
 L: linux-gpio@vger.kernel.org
···
 F: include/linux/mailbox/riscv-rpmi-message.h
 
 RISC-V SPACEMIT SoC Support
-M: Yixun Lan <dlan@gentoo.org>
+M: Yixun Lan <dlan@kernel.org>
 L: linux-riscv@lists.infradead.org
 L: spacemit@lists.linux.dev
 S: Maintained
+1 -1
Makefile
···
 VERSION = 6
 PATCHLEVEL = 19
 SUBLEVEL = 0
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc7
 NAME = Baby Opossum Posse
 
 # *DOCUMENTATION*
+1
arch/arm/boot/dts/microchip/lan966x-pcb8290.dts
···
 &mdio0 {
     pinctrl-0 = <&miim_a_pins>;
     pinctrl-names = "default";
+    reset-gpios = <&gpio 53 GPIO_ACTIVE_LOW>;
     status = "okay";
 
     ext_phy0: ethernet-phy@7 {
+2 -2
arch/arm/boot/dts/microchip/sama7d65.dtsi
···
     interrupts = <GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>;
     clocks = <&pmc PMC_TYPE_PERIPHERAL 37>;
     #address-cells = <1>;
-    #size-cells = <1>;
+    #size-cells = <0>;
     dmas = <&dma0 AT91_XDMAC_DT_PERID(12)>,
            <&dma0 AT91_XDMAC_DT_PERID(11)>;
     dma-names = "tx", "rx";
···
 flx9: flexcom@e2820000 {
     compatible = "microchip,sama7d65-flexcom", "atmel,sama5d2-flexcom";
     reg = <0xe2820000 0x200>;
-    ranges = <0x0 0xe281c000 0x800>;
+    ranges = <0x0 0xe2820000 0x800>;
     clocks = <&pmc PMC_TYPE_PERIPHERAL 43>;
     #address-cells = <1>;
     #size-cells = <1>;
-1
arch/arm/mach-npcm/Kconfig
···
     select ARM_ERRATA_764369 if SMP
     select ARM_ERRATA_720789
     select ARM_ERRATA_754322
-    select ARM_ERRATA_794072
     select PL310_ERRATA_588369
     select PL310_ERRATA_727915
     select MFD_SYSCON
-24
arch/arm64/boot/dts/nvidia/tegra210.dtsi
···
 
     nvidia,outputs = <&dsia &dsib &sor0 &sor1>;
     nvidia,head = <0>;
-
-    interconnects = <&mc TEGRA210_MC_DISPLAY0A &emc>,
-                    <&mc TEGRA210_MC_DISPLAY0B &emc>,
-                    <&mc TEGRA210_MC_DISPLAY0C &emc>,
-                    <&mc TEGRA210_MC_DISPLAYHC &emc>,
-                    <&mc TEGRA210_MC_DISPLAYD &emc>,
-                    <&mc TEGRA210_MC_DISPLAYT &emc>;
-    interconnect-names = "wina",
-                         "winb",
-                         "winc",
-                         "cursor",
-                         "wind",
-                         "wint";
 };
 
 dc@54240000 {
···
 
     nvidia,outputs = <&dsia &dsib &sor0 &sor1>;
     nvidia,head = <1>;
-
-    interconnects = <&mc TEGRA210_MC_DISPLAY0AB &emc>,
-                    <&mc TEGRA210_MC_DISPLAY0BB &emc>,
-                    <&mc TEGRA210_MC_DISPLAY0CB &emc>,
-                    <&mc TEGRA210_MC_DISPLAYHCB &emc>;
-    interconnect-names = "wina",
-                         "winb",
-                         "winc",
-                         "cursor";
 };
 
 dsia: dsi@54300000 {
···
 
     #iommu-cells = <1>;
     #reset-cells = <1>;
-    #interconnect-cells = <1>;
 };
 
 emc: external-memory-controller@7001b000 {
···
     nvidia,memory-controller = <&mc>;
     operating-points-v2 = <&emc_icc_dvfs_opp_table>;
 
-    #interconnect-cells = <0>;
     #cooling-cells = <2>;
 
+12 -4
arch/arm64/boot/dts/qcom/sc8280xp.dtsi
···
     clocks = <&rpmhcc RPMH_CXO_CLK>;
     clock-names = "xo";
 
-    power-domains = <&rpmhpd SC8280XP_NSP>;
-    power-domain-names = "nsp";
+    power-domains = <&rpmhpd SC8280XP_NSP>,
+                    <&rpmhpd SC8280XP_CX>,
+                    <&rpmhpd SC8280XP_MXC>;
+    power-domain-names = "nsp",
+                         "cx",
+                         "mxc";
 
     memory-region = <&pil_nsp0_mem>;
 
···
     clocks = <&rpmhcc RPMH_CXO_CLK>;
     clock-names = "xo";
 
-    power-domains = <&rpmhpd SC8280XP_NSP>;
-    power-domain-names = "nsp";
+    power-domains = <&rpmhpd SC8280XP_NSP>,
+                    <&rpmhpd SC8280XP_CX>,
+                    <&rpmhpd SC8280XP_MXC>;
+    power-domain-names = "nsp",
+                         "cx",
+                         "mxc";
 
     memory-region = <&pil_nsp1_mem>;
 
+2 -2
arch/arm64/boot/dts/qcom/sdm845-oneplus-enchilada.dts
···
 };
 
 &display_panel {
-    status = "okay";
+    compatible = "samsung,sofef00-ams628nw01", "samsung,sofef00";
 
-    compatible = "samsung,sofef00";
+    status = "okay";
 };
 
 &bq27441_fg {
-2
arch/arm64/boot/dts/qcom/sm8550.dtsi
···
 usb_1: usb@a600000 {
     compatible = "qcom,sm8550-dwc3", "qcom,snps-dwc3";
     reg = <0x0 0x0a600000 0x0 0xfc100>;
-    #address-cells = <1>;
-    #size-cells = <0>;
 
     clocks = <&gcc GCC_CFG_NOC_USB3_PRIM_AXI_CLK>,
              <&gcc GCC_USB30_PRIM_MASTER_CLK>,
-3
arch/arm64/boot/dts/qcom/sm8650.dtsi
···
 
     dma-coherent;
 
-    #address-cells = <1>;
-    #size-cells = <0>;
-
     status = "disabled";
 
     ports {
+2 -2
arch/arm64/boot/dts/qcom/talos.dtsi
···
          <&gcc GCC_AGGRE_UFS_PHY_AXI_CLK>,
          <&gcc GCC_UFS_PHY_AHB_CLK>,
          <&gcc GCC_UFS_PHY_UNIPRO_CORE_CLK>,
-         <&gcc GCC_UFS_PHY_ICE_CORE_CLK>,
          <&rpmhcc RPMH_CXO_CLK>,
          <&gcc GCC_UFS_PHY_TX_SYMBOL_0_CLK>,
-         <&gcc GCC_UFS_PHY_RX_SYMBOL_0_CLK>;
+         <&gcc GCC_UFS_PHY_RX_SYMBOL_0_CLK>,
+         <&gcc GCC_UFS_PHY_ICE_CORE_CLK>;
     clock-names = "core_clk",
                   "bus_aggr_clk",
                   "iface_clk",
+1 -1
arch/arm64/boot/dts/rockchip/rk3308-sakurapi-rk3308b.dts
···
     compatible = "brcm,bcm43455-fmac", "brcm,bcm4329-fmac";
     reg = <1>;
     interrupt-parent = <&gpio0>;
-    interrupts = <RK_PA3 GPIO_ACTIVE_HIGH>;
+    interrupts = <RK_PA3 IRQ_TYPE_LEVEL_HIGH>;
     interrupt-names = "host-wake";
     pinctrl-names = "default";
     pinctrl-0 = <&wifi_host_wake>;
+2 -1
arch/arm64/boot/dts/rockchip/rk3326-odroid-go3.dts
···
 
 joystick_mux_controller: mux-controller {
     compatible = "gpio-mux";
-    pinctrl = <&mux_en_pins>;
+    pinctrl-0 = <&mux_en_pins>;
+    pinctrl-names = "default";
     #mux-control-cells = <0>;
 
     mux-gpios = <&gpio3 RK_PB3 GPIO_ACTIVE_LOW>,
-2
arch/arm64/boot/dts/rockchip/rk3399-kobol-helios64.dts
···
 
 &pcie0 {
     ep-gpios = <&gpio2 RK_PD4 GPIO_ACTIVE_HIGH>;
-    max-link-speed = <2>;
     num-lanes = <2>;
-    pinctrl-names = "default";
     status = "okay";
 
     vpcie12v-supply = <&vcc12v_dcin>;
-1
arch/arm64/boot/dts/rockchip/rk3399-nanopi-r4s.dtsi
···
 };
 
 &pcie0 {
-    max-link-speed = <1>;
     num-lanes = <1>;
     vpcie3v3-supply = <&vcc3v3_sys>;
 };
-1
arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
···
 };
 
 &spi1 {
-    max-freq = <10000000>;
     status = "okay";
 
     spiflash: flash@0 {
+2 -2
arch/arm64/boot/dts/rockchip/rk3399-pinephone-pro.dts
···
 button-up {
     label = "Volume Up";
     linux,code = <KEY_VOLUMEUP>;
-    press-threshold-microvolt = <100000>;
+    press-threshold-microvolt = <2000>;
 };
 
 button-down {
     label = "Volume Down";
     linux,code = <KEY_VOLUMEDOWN>;
-    press-threshold-microvolt = <600000>;
+    press-threshold-microvolt = <300000>;
 };
 
+1 -1
arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
···
 pinctrl-names = "default";
 pinctrl-0 = <&q7_thermal_pin &bios_disable_override_hog_pin>;
 
-gpios {
+gpio-pins {
     bios_disable_override_hog_pin: bios-disable-override-hog-pin {
         rockchip,pins =
             <3 RK_PD5 RK_FUNC_GPIO &pcfg_pull_down>;
+2 -2
arch/arm64/boot/dts/rockchip/rk3399-rock-4c-plus.dts
···
     rockchip,pins = <1 RK_PC5 RK_FUNC_GPIO &pcfg_pull_up>;
 };
 
-vsel1_gpio: vsel1-gpio {
+vsel1_gpio: vsel1-gpio-pin {
     rockchip,pins = <1 RK_PC1 RK_FUNC_GPIO &pcfg_pull_down>;
 };
 
-vsel2_gpio: vsel2-gpio {
+vsel2_gpio: vsel2-gpio-pin {
     rockchip,pins = <1 RK_PB6 RK_FUNC_GPIO &pcfg_pull_down>;
 };
+1 -2
arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5-display-vz.dtso
···
 #include "rk3568-wolfvision-pf5-display.dtsi"
 
 &st7789 {
-    compatible = "jasonic,jt240mhqs-hwt-ek-e3",
-                 "sitronix,st7789v";
+    compatible = "jasonic,jt240mhqs-hwt-ek-e3";
     rotation = <270>;
 };
+8 -4
arch/arm64/boot/dts/rockchip/rk3576-nanopi-m5.dts
···
     pinctrl-names = "default";
     pinctrl-0 = <&hp_det_l>;
 
+    simple-audio-card,bitclock-master = <&masterdai>;
     simple-audio-card,format = "i2s";
     simple-audio-card,hp-det-gpios = <&gpio2 RK_PD6 GPIO_ACTIVE_LOW>;
     simple-audio-card,mclk-fs = <256>;
···
         "Headphones", "HPOR",
         "IN1P", "Microphone Jack";
     simple-audio-card,widgets =
-        "Headphone", "Headphone Jack",
+        "Headphone", "Headphones",
         "Microphone", "Microphone Jack";
 
     simple-audio-card,codec {
         sound-dai = <&rt5616>;
     };
 
-    simple-audio-card,cpu {
+    masterdai: simple-audio-card,cpu {
         sound-dai = <&sai2>;
+        system-clock-frequency = <12288000>;
     };
 };
···
 rt5616: audio-codec@1b {
     compatible = "realtek,rt5616";
     reg = <0x1b>;
-    assigned-clocks = <&cru CLK_SAI2_MCLKOUT>;
+    assigned-clocks = <&cru CLK_SAI2_MCLKOUT_TO_IO>;
     assigned-clock-rates = <12288000>;
-    clocks = <&cru CLK_SAI2_MCLKOUT>;
+    clocks = <&cru CLK_SAI2_MCLKOUT_TO_IO>;
     clock-names = "mclk";
+    pinctrl-0 = <&sai2m0_mclk>;
+    pinctrl-names = "default";
     #sound-dai-cells = <0>;
 };
+1 -1
arch/arm64/boot/dts/rockchip/rk3576.dtsi
···
 
 gpu: gpu@27800000 {
     compatible = "rockchip,rk3576-mali", "arm,mali-bifrost";
-    reg = <0x0 0x27800000 0x0 0x200000>;
+    reg = <0x0 0x27800000 0x0 0x20000>;
     assigned-clocks = <&scmi_clk SCMI_CLK_GPU>;
     assigned-clock-rates = <198000000>;
     clocks = <&cru CLK_GPU>;
+2 -2
arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
···
     status = "disabled";
 };
 
-rknn_mmu_1: iommu@fdac9000 {
+rknn_mmu_1: iommu@fdaca000 {
     compatible = "rockchip,rk3588-iommu", "rockchip,rk3568-iommu";
     reg = <0x0 0xfdaca000 0x0 0x100>;
     interrupts = <GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH 0>;
···
     status = "disabled";
 };
 
-rknn_mmu_2: iommu@fdad9000 {
+rknn_mmu_2: iommu@fdada000 {
     compatible = "rockchip,rk3588-iommu", "rockchip,rk3568-iommu";
     reg = <0x0 0xfdada000 0x0 0x100>;
     interrupts = <GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH 0>;
+2
arch/arm64/include/asm/kvm_asm.h
···
                            __le32 *origptr, __le32 *updptr, int nr_inst);
 void kvm_compute_final_ctr_el0(struct alt_instr *alt,
                                __le32 *origptr, __le32 *updptr, int nr_inst);
+void kvm_pan_patch_el2_entry(struct alt_instr *alt,
+                             __le32 *origptr, __le32 *updptr, int nr_inst);
 void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr_virt,
                                               u64 elr_phys, u64 par, uintptr_t vcpu, u64 far, u64 hpfar);
 
-16
arch/arm64/include/asm/kvm_emulate.h
···
     return (unsigned long *)&vcpu->arch.hcr_el2;
 }
 
-static inline void vcpu_clear_wfx_traps(struct kvm_vcpu *vcpu)
-{
-    vcpu->arch.hcr_el2 &= ~HCR_TWE;
-    if (atomic_read(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vlpi_count) ||
-        vcpu->kvm->arch.vgic.nassgireq)
-        vcpu->arch.hcr_el2 &= ~HCR_TWI;
-    else
-        vcpu->arch.hcr_el2 |= HCR_TWI;
-}
-
-static inline void vcpu_set_wfx_traps(struct kvm_vcpu *vcpu)
-{
-    vcpu->arch.hcr_el2 |= HCR_TWE;
-    vcpu->arch.hcr_el2 |= HCR_TWI;
-}
-
 static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu)
 {
     return vcpu->arch.vsesr_el2;
+12 -4
arch/arm64/include/asm/kvm_pgtable.h
···
 
 #define KVM_PTE_LEAF_ATTR_HI_SW         GENMASK(58, 55)
 
-#define KVM_PTE_LEAF_ATTR_HI_S1_XN      BIT(54)
+#define __KVM_PTE_LEAF_ATTR_HI_S1_XN    BIT(54)
+#define __KVM_PTE_LEAF_ATTR_HI_S1_UXN   BIT(54)
+#define __KVM_PTE_LEAF_ATTR_HI_S1_PXN   BIT(53)
+
+#define KVM_PTE_LEAF_ATTR_HI_S1_XN                      \
+    ({ cpus_have_final_cap(ARM64_KVM_HVHE) ?            \
+       (__KVM_PTE_LEAF_ATTR_HI_S1_UXN |                 \
+        __KVM_PTE_LEAF_ATTR_HI_S1_PXN) :                \
+       __KVM_PTE_LEAF_ATTR_HI_S1_XN; })
 
 #define KVM_PTE_LEAF_ATTR_HI_S2_XN      GENMASK(54, 53)
 
···
  *                  children.
  * @KVM_PGTABLE_WALK_SHARED:            Indicates the page-tables may be shared
  *                                      with other software walkers.
- * @KVM_PGTABLE_WALK_HANDLE_FAULT:      Indicates the page-table walk was
- *                                      invoked from a fault handler.
+ * @KVM_PGTABLE_WALK_IGNORE_EAGAIN:     Don't terminate the walk early if
+ *                                      the walker returns -EAGAIN.
  * @KVM_PGTABLE_WALK_SKIP_BBM_TLBI:     Visit and update table entries
  *                                      without Break-before-make's
  *                                      TLB invalidation.
···
     KVM_PGTABLE_WALK_TABLE_PRE          = BIT(1),
     KVM_PGTABLE_WALK_TABLE_POST         = BIT(2),
     KVM_PGTABLE_WALK_SHARED             = BIT(3),
-    KVM_PGTABLE_WALK_HANDLE_FAULT       = BIT(4),
+    KVM_PGTABLE_WALK_IGNORE_EAGAIN      = BIT(4),
     KVM_PGTABLE_WALK_SKIP_BBM_TLBI      = BIT(5),
     KVM_PGTABLE_WALK_SKIP_CMO           = BIT(6),
 };
+2 -1
arch/arm64/include/asm/sysreg.h
···
  */
 #define pstate_field(op1, op2)  ((op1) << Op1_shift | (op2) << Op2_shift)
 #define PSTATE_Imm_shift        CRm_shift
-#define SET_PSTATE(x, r)        __emit_inst(0xd500401f | PSTATE_ ## r | ((!!x) << PSTATE_Imm_shift))
+#define ENCODE_PSTATE(x, r)     (0xd500401f | PSTATE_ ## r | ((!!x) << PSTATE_Imm_shift))
+#define SET_PSTATE(x, r)        __emit_inst(ENCODE_PSTATE(x, r))
 
 #define PSTATE_PAN              pstate_field(0, 4)
 #define PSTATE_UAO              pstate_field(0, 3)
+1 -1
arch/arm64/kernel/hibernate.c
···
  * Memory allocated by get_safe_page() will be dealt with by the hibernate code,
  * we don't need to free it here.
  */
-int swsusp_arch_resume(void)
+int __nocfi swsusp_arch_resume(void)
 {
     int rc;
     void *zero_page;
+1
arch/arm64/kernel/image-vars.h
···
 KVM_NVHE_ALIAS(kvm_update_va_mask);
 KVM_NVHE_ALIAS(kvm_get_kimage_voffset);
 KVM_NVHE_ALIAS(kvm_compute_final_ctr_el0);
+KVM_NVHE_ALIAS(kvm_pan_patch_el2_entry);
 KVM_NVHE_ALIAS(spectre_bhb_patch_loop_iter);
 KVM_NVHE_ALIAS(spectre_bhb_patch_loop_mitigation_enable);
 KVM_NVHE_ALIAS(spectre_bhb_patch_wa3);
+12 -14
arch/arm64/kernel/ptrace.c
···
     vq = sve_vq_from_vl(task_get_vl(target, type));
 
     /* Enter/exit streaming mode */
-    if (system_supports_sme()) {
-        switch (type) {
-        case ARM64_VEC_SVE:
-            target->thread.svcr &= ~SVCR_SM_MASK;
-            set_tsk_thread_flag(target, TIF_SVE);
-            break;
-        case ARM64_VEC_SME:
-            target->thread.svcr |= SVCR_SM_MASK;
-            set_tsk_thread_flag(target, TIF_SME);
-            break;
-        default:
-            WARN_ON_ONCE(1);
-            return -EINVAL;
-        }
+    switch (type) {
+    case ARM64_VEC_SVE:
+        target->thread.svcr &= ~SVCR_SM_MASK;
+        set_tsk_thread_flag(target, TIF_SVE);
+        break;
+    case ARM64_VEC_SME:
+        target->thread.svcr |= SVCR_SM_MASK;
+        set_tsk_thread_flag(target, TIF_SME);
+        break;
+    default:
+        WARN_ON_ONCE(1);
+        return -EINVAL;
     }
 
     /* Always zero V regs, FPSR, and FPCR */
+20 -6
arch/arm64/kernel/signal.c
···
     if (user->sve_size < SVE_SIG_CONTEXT_SIZE(vq))
         return -EINVAL;
 
+    if (sm) {
+        sme_alloc(current, false);
+        if (!current->thread.sme_state)
+            return -ENOMEM;
+    }
+
     sve_alloc(current, true);
     if (!current->thread.sve_state) {
         clear_thread_flag(TIF_SVE);
         return -ENOMEM;
     }
+
+    if (sm) {
+        current->thread.svcr |= SVCR_SM_MASK;
+        set_thread_flag(TIF_SME);
+    } else {
+        current->thread.svcr &= ~SVCR_SM_MASK;
+        set_thread_flag(TIF_SVE);
+    }
+
+    current->thread.fp_type = FP_STATE_SVE;
 
     err = __copy_from_user(current->thread.sve_state,
                            (char __user const *)user->sve +
···
                            SVE_SIG_REGS_SIZE(vq));
     if (err)
         return -EFAULT;
-
-    if (flags & SVE_SIG_FLAG_SM)
-        current->thread.svcr |= SVCR_SM_MASK;
-    else
-        set_thread_flag(TIF_SVE);
-    current->thread.fp_type = FP_STATE_SVE;
 
     err = read_fpsimd_context(&fpsimd, user);
     if (err)
···
 
     if (user->za_size < ZA_SIG_CONTEXT_SIZE(vq))
         return -EINVAL;
+
+    sve_alloc(current, false);
+    if (!current->thread.sve_state)
+        return -ENOMEM;
 
     sme_alloc(current, true);
     if (!current->thread.sme_state) {
+1
arch/arm64/kvm/arm.c
···
         return kvm_wfi_trap_policy == KVM_WFX_NOTRAP;
 
     return single_task_running() &&
+           vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 &&
            (atomic_read(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vlpi_count) ||
             vcpu->kvm->arch.vgic.nassgireq);
 }
+6 -2
arch/arm64/kvm/at.c
···
                       struct s1_walk_result *wr, u64 va)
 {
     u64 va_top, va_bottom, baddr, desc, new_desc, ipa;
+    struct kvm_s2_trans s2_trans = {};
     int level, stride, ret;
 
     level = wi->sl;
···
     ipa = baddr | index;
 
     if (wi->s2) {
-        struct kvm_s2_trans s2_trans = {};
-
         ret = kvm_walk_nested_s2(vcpu, ipa, &s2_trans);
         if (ret) {
             fail_s1_walk(wr,
···
         new_desc |= PTE_AF;
 
     if (new_desc != desc) {
+        if (wi->s2 && !kvm_s2_trans_writable(&s2_trans)) {
+            fail_s1_walk(wr, ESR_ELx_FSC_PERM_L(level), true);
+            return -EPERM;
+        }
+
         ret = kvm_swap_s1_desc(vcpu, ipa, desc, new_desc, wi);
         if (ret)
             return ret;
+3 -1
arch/arm64/kvm/hyp/entry.S
···
 
     add x1, x1, #VCPU_CONTEXT
 
-ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN)
+alternative_cb ARM64_ALWAYS_SYSTEM, kvm_pan_patch_el2_entry
+    nop
+alternative_cb_end
 
     // Store the guest regs x2 and x3
     stp x2, x3, [x1, #CPU_XREG_OFFSET(2)]
+1 -1
arch/arm64/kvm/hyp/include/hyp/switch.h
···
     return false;
 }
 
-static inline void synchronize_vcpu_pstate(struct kvm_vcpu *vcpu, u64 *exit_code)
+static inline void synchronize_vcpu_pstate(struct kvm_vcpu *vcpu)
 {
     /*
      * Check for the conditions of Cortex-A510's #2077057. When these occur
+3
arch/arm64/kvm/hyp/nvhe/hyp-main.c
···
         /* Propagate WFx trapping flags */
         hyp_vcpu->vcpu.arch.hcr_el2 &= ~(HCR_TWE | HCR_TWI);
         hyp_vcpu->vcpu.arch.hcr_el2 |= hcr_el2 & (HCR_TWE | HCR_TWI);
+    } else {
+        memcpy(&hyp_vcpu->vcpu.arch.fgt, hyp_vcpu->host_vcpu->arch.fgt,
+               sizeof(hyp_vcpu->vcpu.arch.fgt));
     }
 }
 
-1
arch/arm64/kvm/hyp/nvhe/pkvm.c
···
 
     /* Trust the host for non-protected vcpu features. */
     vcpu->arch.hcrx_el2 = host_vcpu->arch.hcrx_el2;
-    memcpy(vcpu->arch.fgt, host_vcpu->arch.fgt, sizeof(vcpu->arch.fgt));
     return 0;
 }
 
+1 -1
arch/arm64/kvm/hyp/nvhe/switch.c
···
 {
     const exit_handler_fn *handlers = kvm_get_exit_handler_array(vcpu);
 
-    synchronize_vcpu_pstate(vcpu, exit_code);
+    synchronize_vcpu_pstate(vcpu);
 
     /*
      * Some guests (e.g., protected VMs) are not be allowed to run in
+3 -2
arch/arm64/kvm/hyp/pgtable.c
···
      * page table walk.
      */
     if (r == -EAGAIN)
-        return !(walker->flags & KVM_PGTABLE_WALK_HANDLE_FAULT);
+        return walker->flags & KVM_PGTABLE_WALK_IGNORE_EAGAIN;
 
     return !r;
 }
···
 {
     return stage2_update_leaf_attrs(pgt, addr, size, 0,
                                     KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W,
-                                    NULL, NULL, 0);
+                                    NULL, NULL,
+                                    KVM_PGTABLE_WALK_IGNORE_EAGAIN);
 }
 
 void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr,
+1 -1
arch/arm64/kvm/hyp/vhe/switch.c
···
 
 static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
-    synchronize_vcpu_pstate(vcpu, exit_code);
+    synchronize_vcpu_pstate(vcpu);
 
     /*
      * If we were in HYP context on entry, adjust the PSTATE view
+5 -7
arch/arm64/kvm/mmu.c
···
     this->count = 1;
     rb_link_node(&this->node, parent, node);
     rb_insert_color(&this->node, &hyp_shared_pfns);
-    ret = kvm_call_hyp_nvhe(__pkvm_host_share_hyp, pfn, 1);
+    ret = kvm_call_hyp_nvhe(__pkvm_host_share_hyp, pfn);
 unlock:
     mutex_unlock(&hyp_shared_pfns_lock);
 
···
 
     rb_erase(&this->node, &hyp_shared_pfns);
     kfree(this);
-    ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, pfn, 1);
+    ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, pfn);
 unlock:
     mutex_unlock(&hyp_shared_pfns_lock);
 
···
         *prot &= ~KVM_PGTABLE_PROT_PX;
 }
 
-#define KVM_PGTABLE_WALK_MEMABORT_FLAGS (KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED)
-
 static int gmem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
                       struct kvm_s2_trans *nested,
                       struct kvm_memory_slot *memslot, bool is_perm)
 {
     bool write_fault, exec_fault, writable;
-    enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_MEMABORT_FLAGS;
+    enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_SHARED;
     enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
     struct kvm_pgtable *pgt = vcpu->arch.hw_mmu->pgt;
     unsigned long mmu_seq;
···
     struct kvm_pgtable *pgt;
     struct page *page;
     vm_flags_t vm_flags;
-    enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_MEMABORT_FLAGS;
+    enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_SHARED;
 
     if (fault_is_perm)
         fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
···
 /* Resolve the access fault by making the page young again. */
 static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 {
-    enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
+    enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_SHARED;
     struct kvm_s2_mmu *mmu;
 
     trace_kvm_access_fault(fault_ipa);
+4 -1
arch/arm64/kvm/sys_regs.c
···
      * that we don't know how to handle. This certainly qualifies
      * as a gross bug that should be fixed right away.
      */
-    BUG_ON(!r->access);
+    if (!r->access) {
+        bad_trap(vcpu, params, r, "register access");
+        return;
+    }
 
     /* Skip instruction if instructed so */
     if (likely(r->access(vcpu, params, r)))
+28
arch/arm64/kvm/va_layout.c
···
     generate_mov_q(read_sanitised_ftr_reg(SYS_CTR_EL0),
                    origptr, updptr, nr_inst);
 }
+
+void kvm_pan_patch_el2_entry(struct alt_instr *alt,
+                             __le32 *origptr, __le32 *updptr, int nr_inst)
+{
+    /*
+     * If we're running at EL1 without hVHE, then SCTLR_EL2.SPAN means
+     * nothing to us (it is RES1), and we don't need to set PSTATE.PAN
+     * to anything useful.
+     */
+    if (!is_kernel_in_hyp_mode() && !cpus_have_cap(ARM64_KVM_HVHE))
+        return;
+
+    /*
+     * Leap of faith: at this point, we must be running VHE one way or
+     * another, and FEAT_PAN is required to be implemented. If KVM
+     * explodes at runtime because your system does not abide by this
+     * requirement, call your favourite HW vendor, they have screwed up.
+     *
+     * We don't expect hVHE to access any userspace mapping, so always
+     * set PSTATE.PAN on enty. Same thing if we have PAN enabled on an
+     * EL2 kernel. Only force it to 0 if we have not configured PAN in
+     * the kernel (and you know this is really silly).
+     */
+    if (cpus_have_cap(ARM64_KVM_HVHE) || IS_ENABLED(CONFIG_ARM64_PAN))
+        *updptr = cpu_to_le32(ENCODE_PSTATE(1, PAN));
+    else
+        *updptr = cpu_to_le32(ENCODE_PSTATE(0, PAN));
+}
+1
arch/riscv/Kconfig.errata
···
     select DMA_GLOBAL_POOL
     select RISCV_DMA_NONCOHERENT
     select RISCV_NONSTANDARD_CACHE_OPS
+    select CACHEMAINT_FOR_DMA
     select SIFIVE_CCACHE
     default n
     help
+12 -2
arch/riscv/include/asm/uaccess.h
···
  */
 
 #ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
+/*
+ * Use a temporary variable for the output of the asm goto to avoid a
+ * triggering an LLVM assertion due to sign extending the output when
+ * it is used in later function calls:
+ * https://github.com/llvm/llvm-project/issues/143795
+ */
 #define __get_user_asm(insn, x, ptr, label)             \
+do {                                                    \
+    u64 __tmp;                                          \
     asm_goto_output(                                    \
         "1:\n"                                          \
         "   " insn " %0, %1\n"                          \
         _ASM_EXTABLE_UACCESS_ERR(1b, %l2, %0)           \
-        : "=&r" (x)                                     \
-        : "m" (*(ptr)) : : label)
+        : "=&r" (__tmp)                                 \
+        : "m" (*(ptr)) : : label);                      \
+    (x) = (__typeof__(x))(unsigned long)__tmp;          \
+} while (0)
 #else /* !CONFIG_CC_HAS_ASM_GOTO_OUTPUT */
 #define __get_user_asm(insn, x, ptr, label)             \
 do {                                                    \
+2 -1
arch/riscv/kernel/suspend.c
···
 
 #ifdef CONFIG_MMU
     if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SSTC)) {
-        csr_write(CSR_STIMECMP, context->stimecmp);
 #if __riscv_xlen < 64
+        csr_write(CSR_STIMECMP, ULONG_MAX);
         csr_write(CSR_STIMECMPH, context->stimecmph);
 #endif
+        csr_write(CSR_STIMECMP, context->stimecmp);
     }
 
     csr_write(CSR_SATP, context->satp);
+4 -2
arch/riscv/kvm/vcpu_timer.c
···
 static int kvm_riscv_vcpu_update_vstimecmp(struct kvm_vcpu *vcpu, u64 ncycles)
 {
 #if defined(CONFIG_32BIT)
-    ncsr_write(CSR_VSTIMECMP, ncycles & 0xFFFFFFFF);
+    ncsr_write(CSR_VSTIMECMP, ULONG_MAX);
     ncsr_write(CSR_VSTIMECMPH, ncycles >> 32);
+    ncsr_write(CSR_VSTIMECMP, (u32)ncycles);
 #else
     ncsr_write(CSR_VSTIMECMP, ncycles);
 #endif
···
         return;
 
 #if defined(CONFIG_32BIT)
-    ncsr_write(CSR_VSTIMECMP, (u32)t->next_cycles);
+    ncsr_write(CSR_VSTIMECMP, ULONG_MAX);
     ncsr_write(CSR_VSTIMECMPH, (u32)(t->next_cycles >> 32));
+    ncsr_write(CSR_VSTIMECMP, (u32)(t->next_cycles));
 #else
     ncsr_write(CSR_VSTIMECMP, t->next_cycles);
 #endif
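
The two RISC-V timer fixes above (and the matching clocksource fix later in this listing) share one pattern. A hedged sketch of the assumed rationale: on 32-bit, the 64-bit compare value is written as two halves, and writing the final low half first can transiently pair the old high half with a smaller low half, firing the timer early. Writing all-ones to the low half first keeps each intermediate value at least as large as the deadline it temporarily represents, so no spurious early interrupt can fire:

    /* Illustrative stand-alone sketch, not the kernel CSR accessors. */
    static inline void write_timecmp64(volatile unsigned int *lo,
                                       volatile unsigned int *hi,
                                       unsigned long long ncycles)
    {
            *lo = 0xFFFFFFFFu;                    /* step 1: low = all-ones */
            *hi = (unsigned int)(ncycles >> 32);  /* step 2: final high half */
            *lo = (unsigned int)ncycles;          /* step 3: final low half */
    }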
+9 -8
arch/s390/boot/vmlinux.lds.S
···
     }
     _end = .;
 
+    /* Sections to be discarded */
+    /DISCARD/ : {
+        COMMON_DISCARDS
+        *(.eh_frame)
+        *(*__ksymtab*)
+        *(___kcrctab*)
+        *(.modinfo)
+    }
+
     DWARF_DEBUG
     ELF_DETAILS
 
···
         *(.rela.*) *(.rela_*)
     }
     ASSERT(SIZEOF(.rela.dyn) == 0, "Unexpected run-time relocations (.rela) detected!")
-
-    /* Sections to be discarded */
-    /DISCARD/ : {
-        COMMON_DISCARDS
-        *(.eh_frame)
-        *(*__ksymtab*)
-        *(___kcrctab*)
-    }
 }
+1 -1
arch/s390/kernel/vdso/Makefile
···
 KBUILD_CFLAGS_VDSO := $(filter-out -munaligned-symbols,$(KBUILD_CFLAGS_VDSO))
 KBUILD_CFLAGS_VDSO := $(filter-out -fno-asynchronous-unwind-tables,$(KBUILD_CFLAGS_VDSO))
 KBUILD_CFLAGS_VDSO += -fPIC -fno-common -fno-builtin -fasynchronous-unwind-tables
-KBUILD_CFLAGS_VDSO += -fno-stack-protector
+KBUILD_CFLAGS_VDSO += -fno-stack-protector $(DISABLE_KSTACK_ERASE)
 ldflags-y := -shared -soname=linux-vdso.so.1 \
     --hash-style=both --build-id=sha1 -T
 
+11 -2
arch/x86/events/perf_event.h
···
     struct hw_perf_event *hwc = &event->hw;
     unsigned int hw_event, bts_event;
 
-    if (event->attr.freq)
+    /*
+     * Only use BTS for fixed rate period==1 events.
+     */
+    if (event->attr.freq || period != 1)
+        return false;
+
+    /*
+     * BTS doesn't virtualize.
+     */
+    if (event->attr.exclude_host)
         return false;
 
     hw_event = hwc->config & INTEL_ARCH_EVENT_MASK;
     bts_event = x86_pmu.event_map(PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
 
-    return hw_event == bts_event && period == 1;
+    return hw_event == bts_event;
 }
 
 static inline bool intel_pmu_has_bts(struct perf_event *event)
+24 -5
arch/x86/include/asm/kfence.h
···
 {
     unsigned int level;
     pte_t *pte = lookup_address(addr, &level);
+    pteval_t val;
 
     if (WARN_ON(!pte || level != PG_LEVEL_4K))
         return false;
+
+    val = pte_val(*pte);
+
+    /*
+     * protect requires making the page not-present. If the PTE is
+     * already in the right state, there's nothing to do.
+     */
+    if (protect != !!(val & _PAGE_PRESENT))
+        return true;
+
+    /*
+     * Otherwise, invert the entire PTE. This avoids writing out an
+     * L1TF-vulnerable PTE (not present, without the high address bits
+     * set).
+     */
+    set_pte(pte, __pte(~val));
+
+    /*
+     * If the page was protected (non-present) and we're making it
+     * present, there is no need to flush the TLB at all.
+     */
+    if (!protect)
+        return true;
 
     /*
      * We need to avoid IPIs, as we may get KFENCE allocations or faults
···
      * does not flush TLBs on all CPUs. We can tolerate some inaccuracy;
      * lazy fault handling takes care of faults after the page is PRESENT.
      */
-
-    if (protect)
-        set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
-    else
-        set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));
 
     /*
      * Flush this CPU's TLB, assuming whoever did the allocation/free is
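
A worked example of the inversion property the kfence change relies on (illustrative values compiled stand-alone, not against kernel headers): flipping every bit both clears the present bit and sets the high address bits, and a second flip restores the original PTE exactly.

    #include <stdint.h>
    #include <assert.h>

    #define PAGE_PRESENT 0x1ULL  /* stands in for _PAGE_PRESENT */

    int main(void)
    {
            uint64_t pte = 0x00000001234f1025ULL;  /* present leaf PTE */
            uint64_t hidden = ~pte;                /* protect: invert */

            assert(!(hidden & PAGE_PRESENT));      /* now not-present */
            assert(~hidden == pte);                /* unprotect restores it */
            return 0;
    }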
+5 -10
arch/x86/mm/fault.c
···
         force_sig_pkuerr((void __user *)address, pkey);
     else
         force_sig_fault(SIGSEGV, si_code, (void __user *)address);
-
-    local_irq_disable();
 }
 
 static noinline void
···
         do_kern_addr_fault(regs, error_code, address);
     } else {
         do_user_addr_fault(regs, error_code, address);
-        /*
-         * User address page fault handling might have reenabled
-         * interrupts. Fixing up all potential exit points of
-         * do_user_addr_fault() and its leaf functions is just not
-         * doable w/o creating an unholy mess or turning the code
-         * upside down.
-         */
-        local_irq_disable();
     }
+    /*
+     * page fault handling might have reenabled interrupts,
+     * make sure to disable them again.
+     */
+    local_irq_disable();
 }
 
 DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
+1 -1
block/blk-mq.c
···
 static void blk_rq_poll_completion(struct request *rq, struct completion *wait)
 {
     do {
-        blk_hctx_poll(rq->q, rq->mq_hctx, NULL, 0);
+        blk_hctx_poll(rq->q, rq->mq_hctx, NULL, BLK_POLL_ONESHOT);
         cond_resched();
     } while (!completion_done(wait));
 }
+1
block/blk-zoned.c
···
 
     disk->nr_zones = args->nr_zones;
     if (args->nr_conv_zones >= disk->nr_zones) {
+        queue_limits_cancel_update(q);
         pr_warn("%s: Invalid number of conventional zones %u / %u\n",
                 disk->disk_name, args->nr_conv_zones, disk->nr_zones);
         ret = -ENODEV;
+6
crypto/authencesn.c
···
     struct scatterlist *src, *dst;
     int err;
 
+    if (assoclen < 8)
+        return -EINVAL;
+
     sg_init_table(areq_ctx->src, 2);
     src = scatterwalk_ffwd(areq_ctx->src, req->src, assoclen);
     dst = src;
···
     struct scatterlist *dst = req->dst;
     u32 tmp[2];
     int err;
+
+    if (assoclen < 8)
+        return -EINVAL;
 
     cryptlen -= authsize;
 
+5 -5
drivers/ata/ahci.c
···
         if (ap->flags & ATA_FLAG_EM)
             ap->em_message_type = hpriv->em_msg_type;
 
-        ahci_mark_external_port(ap);
-
-        ahci_update_initial_lpm_policy(ap);
-
         /* disabled/not-implemented port */
-        if (!(hpriv->port_map & (1 << i)))
+        if (!(hpriv->port_map & (1 << i))) {
             ap->ops = &ata_dummy_port_ops;
+        } else {
+            ahci_mark_external_port(ap);
+            ahci_update_initial_lpm_policy(ap);
+        }
     }
 
     /* apply workaround for ASUS P5W DH Deluxe mainboard */
+7 -1
drivers/ata/libata-core.c
···
 
 static void ata_dev_print_features(struct ata_device *dev)
 {
-    if (!(dev->flags & ATA_DFLAG_FEATURES_MASK))
+    if (!(dev->flags & ATA_DFLAG_FEATURES_MASK) && !dev->cpr_log &&
+        !ata_id_has_hipm(dev->id) && !ata_id_has_dipm(dev->id))
         return;
 
     ata_dev_info(dev,
···
                 ata_mode_string(xfer_mask),
                 cdb_intr_string, atapi_an_string,
                 dma_dir_string);
+
+    ata_dev_config_lpm(dev);
+
+    if (print_info)
+        ata_dev_print_features(dev);
 
     /* determine max_sectors */
+1 -1
drivers/ata/libata-sata.c
···
     struct ata_link *link;
     struct ata_device *dev;
 
-    if (ap->flags & ATA_FLAG_NO_LPM)
+    if ((ap->flags & ATA_FLAG_NO_LPM) || !ap->ops->set_lpm)
         return false;
 
     ata_for_each_link(link, ap, EDGE) {
+2
drivers/base/dd.c
···
 static void device_unbind_cleanup(struct device *dev)
 {
     devres_release_all(dev);
+    if (dev->driver->p_cb.post_unbind_rust)
+        dev->driver->p_cb.post_unbind_rust(dev);
     arch_teardown_dma_ops(dev);
     kfree(dev->dma_range_map);
     dev->dma_range_map = NULL;
+6 -5
drivers/base/regmap/regcache-maple.c
···
 
     mas_unlock(&mas);
 
-    if (ret == 0) {
-        kfree(lower);
-        kfree(upper);
+    if (ret) {
+        kfree(entry);
+        return ret;
     }
-
-    return ret;
+    kfree(lower);
+    kfree(upper);
+    return 0;
 }
 
 static int regcache_maple_drop(struct regmap *map, unsigned int min,
+3 -1
drivers/base/regmap/regmap.c
···
 static void regmap_lock_hwlock_irqsave(void *__map)
 {
     struct regmap *map = __map;
+    unsigned long flags = 0;
 
     hwspin_lock_timeout_irqsave(map->hwlock, UINT_MAX,
-                                &map->spinlock_flags);
+                                &flags);
+    map->spinlock_flags = flags;
 }
 
 static void regmap_unlock_hwlock(void *__map)
+34 -5
drivers/block/ublk_drv.c
···
     return ub;
 }
 
+static bool ublk_validate_user_pid(struct ublk_device *ub, pid_t ublksrv_pid)
+{
+    rcu_read_lock();
+    ublksrv_pid = pid_nr(find_vpid(ublksrv_pid));
+    rcu_read_unlock();
+
+    return ub->ublksrv_tgid == ublksrv_pid;
+}
+
 static int ublk_ctrl_start_dev(struct ublk_device *ub,
                                const struct ublksrv_ctrl_cmd *header)
 {
···
     if (wait_for_completion_interruptible(&ub->completion) != 0)
         return -EINTR;
 
-    if (ub->ublksrv_tgid != ublksrv_pid)
+    if (!ublk_validate_user_pid(ub, ublksrv_pid))
         return -EINVAL;
 
     mutex_lock(&ub->mutex);
···
     disk->fops = &ub_fops;
     disk->private_data = ub;
 
-    ub->dev_info.ublksrv_pid = ublksrv_pid;
+    ub->dev_info.ublksrv_pid = ub->ublksrv_tgid;
     ub->ub_disk = disk;
 
     ublk_apply_params(ub);
···
 static int ublk_ctrl_get_dev_info(struct ublk_device *ub,
                                   const struct ublksrv_ctrl_cmd *header)
 {
+    struct task_struct *p;
+    struct pid *pid;
+    struct ublksrv_ctrl_dev_info dev_info;
+    pid_t init_ublksrv_tgid = ub->dev_info.ublksrv_pid;
     void __user *argp = (void __user *)(unsigned long)header->addr;
 
     if (header->len < sizeof(struct ublksrv_ctrl_dev_info) || !header->addr)
         return -EINVAL;
 
-    if (copy_to_user(argp, &ub->dev_info, sizeof(ub->dev_info)))
+    memcpy(&dev_info, &ub->dev_info, sizeof(dev_info));
+    dev_info.ublksrv_pid = -1;
+
+    if (init_ublksrv_tgid > 0) {
+        rcu_read_lock();
+        pid = find_pid_ns(init_ublksrv_tgid, &init_pid_ns);
+        p = pid_task(pid, PIDTYPE_TGID);
+        if (p) {
+            int vnr = task_tgid_vnr(p);
+
+            if (vnr)
+                dev_info.ublksrv_pid = vnr;
+        }
+        rcu_read_unlock();
+    }
+
+    if (copy_to_user(argp, &dev_info, sizeof(dev_info)))
         return -EFAULT;
 
     return 0;
···
     pr_devel("%s: All FETCH_REQs received, dev id %d\n", __func__,
              header->dev_id);
 
-    if (ub->ublksrv_tgid != ublksrv_pid)
+    if (!ublk_validate_user_pid(ub, ublksrv_pid))
         return -EINVAL;
 
     mutex_lock(&ub->mutex);
···
         ret = -EBUSY;
         goto out_unlock;
     }
-    ub->dev_info.ublksrv_pid = ublksrv_pid;
+    ub->dev_info.ublksrv_pid = ub->ublksrv_tgid;
     ub->dev_info.state = UBLK_S_DEV_LIVE;
     pr_devel("%s: new ublksrv_pid %d, dev id %d\n",
              __func__, ublksrv_pid, header->dev_id);
+2 -1
drivers/clocksource/timer-riscv.c
···
 
     if (static_branch_likely(&riscv_sstc_available)) {
 #if defined(CONFIG_32BIT)
-        csr_write(CSR_STIMECMP, next_tval & 0xFFFFFFFF);
+        csr_write(CSR_STIMECMP, ULONG_MAX);
         csr_write(CSR_STIMECMPH, next_tval >> 32);
+        csr_write(CSR_STIMECMP, next_tval & 0xFFFFFFFF);
 #else
         csr_write(CSR_STIMECMP, next_tval);
 #endif
+1 -1
drivers/comedi/comedi_fops.c
···
     for (i = 0; i < s->n_chan; i++) {
         int x;
 
-        x = (dev->minor << 28) | (it->subdev << 24) | (i << 16) |
+        x = (it->subdev << 24) | (i << 16) |
             (s->range_table_list[i]->length);
         if (put_user(x, it->rangelist + i))
             return -EFAULT;
+30 -2
drivers/comedi/drivers/dmm32at.c
···
 
 static void dmm32at_setaitimer(struct comedi_device *dev, unsigned int nansec)
 {
+    unsigned long irq_flags;
     unsigned char lo1, lo2, hi2;
     unsigned short both2;
 
···
 
     /* set counter clocks to 10MHz, disable all aux dio */
     outb(0, dev->iobase + DMM32AT_CTRDIO_CFG_REG);
+
+    /* serialize access to control register and paged registers */
+    spin_lock_irqsave(&dev->spinlock, irq_flags);
 
     /* get access to the clock regs */
     outb(DMM32AT_CTRL_PAGE_8254, dev->iobase + DMM32AT_CTRL_REG);
···
     outb(lo2, dev->iobase + DMM32AT_CLK2);
     outb(hi2, dev->iobase + DMM32AT_CLK2);
 
+    spin_unlock_irqrestore(&dev->spinlock, irq_flags);
+
     /* enable the ai conversion interrupt and the clock to start scans */
     outb(DMM32AT_INTCLK_ADINT |
          DMM32AT_INTCLK_CLKEN | DMM32AT_INTCLK_CLKSEL,
···
 static int dmm32at_ai_cmd(struct comedi_device *dev, struct comedi_subdevice *s)
 {
     struct comedi_cmd *cmd = &s->async->cmd;
+    unsigned long irq_flags;
     int ret;
 
     dmm32at_ai_set_chanspec(dev, s, cmd->chanlist[0], cmd->chanlist_len);
 
+    /* serialize access to control register and paged registers */
+    spin_lock_irqsave(&dev->spinlock, irq_flags);
+
     /* reset the interrupt just in case */
     outb(DMM32AT_CTRL_INTRST, dev->iobase + DMM32AT_CTRL_REG);
+
+    spin_unlock_irqrestore(&dev->spinlock, irq_flags);
 
     /*
      * wait for circuit to settle
···
         comedi_handle_events(dev, s);
     }
 
+    /* serialize access to control register and paged registers */
+    spin_lock(&dev->spinlock);
+
     /* reset the interrupt */
     outb(DMM32AT_CTRL_INTRST, dev->iobase + DMM32AT_CTRL_REG);
+
+    spin_unlock(&dev->spinlock);
     return IRQ_HANDLED;
 }
 
···
 static int dmm32at_8255_io(struct comedi_device *dev,
                            int dir, int port, int data, unsigned long regbase)
 {
+    unsigned long irq_flags;
+    int ret;
+
+    /* serialize access to control register and paged registers */
+    spin_lock_irqsave(&dev->spinlock, irq_flags);
+
     /* get access to the DIO regs */
     outb(DMM32AT_CTRL_PAGE_8255, dev->iobase + DMM32AT_CTRL_REG);
 
     if (dir) {
         outb(data, dev->iobase + regbase + port);
-        return 0;
+        ret = 0;
+    } else {
+        ret = inb(dev->iobase + regbase + port);
     }
-    return inb(dev->iobase + regbase + port);
+
+    spin_unlock_irqrestore(&dev->spinlock, irq_flags);
+
+    return ret;
 }
 
 /* Make sure the board is there and put it to a known state */
+1 -1
drivers/comedi/range.c
···
     const struct comedi_lrange *lr;
     struct comedi_subdevice *s;
 
-    subd = (it->range_type >> 24) & 0xf;
+    subd = (it->range_type >> 24) & 0xff;
     chan = (it->range_type >> 16) & 0xff;
 
     if (!dev->attached)
+4 -8
drivers/dpll/dpll_core.c
···
         if (ref->pin != pin)
             continue;
         reg = dpll_pin_registration_find(ref, ops, priv, cookie);
-        if (reg) {
-            refcount_inc(&ref->refcount);
-            return 0;
-        }
+        if (reg)
+            return -EEXIST;
         ref_exists = true;
         break;
     }
···
         if (ref->dpll != dpll)
             continue;
         reg = dpll_pin_registration_find(ref, ops, priv, cookie);
-        if (reg) {
-            refcount_inc(&ref->refcount);
-            return 0;
-        }
+        if (reg)
+            return -EEXIST;
         ref_exists = true;
         break;
     }
+1 -1
drivers/firmware/efi/efi.c
···
     .page_table_lock = __SPIN_LOCK_UNLOCKED(efi_mm.page_table_lock),
     .mmlist          = LIST_HEAD_INIT(efi_mm.mmlist),
     .user_ns         = &init_user_ns,
-    .cpu_bitmap      = { [BITS_TO_LONGS(NR_CPUS)] = 0},
 #ifdef CONFIG_SCHED_MM_CID
     .mm_cid.lock     = __RAW_SPIN_LOCK_UNLOCKED(efi_mm.mm_cid.lock),
 #endif
+    .flexible_array  = MM_STRUCT_FLEXIBLE_ARRAY_INIT,
 };
 
 struct workqueue_struct *efi_rts_wq;
+9 -3
drivers/gpio/gpiolib-cdev.c
···
     ctx = kzalloc(sizeof(*ctx), GFP_ATOMIC);
     if (!ctx) {
         pr_err("Failed to allocate memory for line info notification\n");
+        fput(fp);
         return NOTIFY_DONE;
     }
 
···
 
     cdev = kzalloc(sizeof(*cdev), GFP_KERNEL);
     if (!cdev)
-        return -ENODEV;
+        return -ENOMEM;
 
     cdev->watched_lines = bitmap_zalloc(gdev->ngpio, GFP_KERNEL);
     if (!cdev->watched_lines)
···
         return -ENOMEM;
 
     ret = cdev_device_add(&gdev->chrdev, &gdev->dev);
-    if (ret)
+    if (ret) {
+        destroy_workqueue(gdev->line_state_wq);
         return ret;
+    }
 
     guard(srcu)(&gdev->srcu);
     gc = srcu_dereference(gdev->chip, &gdev->srcu);
-    if (!gc)
+    if (!gc) {
+        cdev_device_del(&gdev->chrdev, &gdev->dev);
+        destroy_workqueue(gdev->line_state_wq);
         return -ENODEV;
+    }
 
     gpiochip_dbg(gc, "added GPIO chardev (%d:%d)\n", MAJOR(devt), gdev->id);
 
+11 -5
drivers/gpio/gpiolib-shared.c
···
 {
     struct gpio_shared_entry *entry;
     struct gpio_shared_ref *ref;
-    unsigned long *flags;
+    struct gpio_desc *desc;
     int ret;
 
     list_for_each_entry(entry, &gpio_shared_list, list) {
···
         if (list_count_nodes(&entry->refs) <= 1)
             continue;
 
-        flags = &gdev->descs[entry->offset].flags;
+        desc = &gdev->descs[entry->offset];
 
-        __set_bit(GPIOD_FLAG_SHARED, flags);
+        __set_bit(GPIOD_FLAG_SHARED, &desc->flags);
         /*
          * Shared GPIOs are not requested via the normal path. Make
          * them inaccessible to anyone even before we register the
          * chip.
          */
-        __set_bit(GPIOD_FLAG_REQUESTED, flags);
+        ret = gpiod_request_commit(desc, "shared");
+        if (ret)
+            return ret;
 
         pr_debug("GPIO %u owned by %s is shared by multiple consumers\n",
                  entry->offset, gpio_device_get_label(gdev));
···
                      ref->con_id ?: "(none)");
 
             ret = gpio_shared_make_adev(gdev, entry, ref);
-            if (ret)
+            if (ret) {
+                gpiod_free_commit(desc);
                 return ret;
+            }
         }
     }
 
···
     list_for_each_entry(entry, &gpio_shared_list, list) {
         if (!device_match_fwnode(&gdev->dev, entry->fwnode))
             continue;
+
+        gpiod_free_commit(&gdev->descs[entry->offset]);
 
         list_for_each_entry(ref, &entry->refs, list) {
             guard(mutex)(&ref->lock);
+2 -2
drivers/gpio/gpiolib.c
···
  * on each other, and help provide better diagnostics in debugfs.
  * They're called even less than the "set direction" calls.
  */
-static int gpiod_request_commit(struct gpio_desc *desc, const char *label)
+int gpiod_request_commit(struct gpio_desc *desc, const char *label)
 {
     unsigned int offset;
     int ret;
···
     return ret;
 }
 
-static void gpiod_free_commit(struct gpio_desc *desc)
+void gpiod_free_commit(struct gpio_desc *desc)
 {
     unsigned long flags;
 
+2
drivers/gpio/gpiolib.h
···
                          struct gpio_desc *desc)
 
 int gpiod_request(struct gpio_desc *desc, const char *label);
+int gpiod_request_commit(struct gpio_desc *desc, const char *label);
 void gpiod_free(struct gpio_desc *desc);
+void gpiod_free_commit(struct gpio_desc *desc);
 
 static inline int gpiod_request_user(struct gpio_desc *desc, const char *label)
 {
+1 -1
drivers/gpu/drm/Kconfig
··· 210 210 211 211 config DRM_GPUSVM 212 212 tristate 213 - depends on DRM && DEVICE_PRIVATE 213 + depends on DRM 214 214 select HMM_MIRROR 215 215 select MMU_NOTIFIER 216 216 help
+3 -1
drivers/gpu/drm/Makefile
··· 108 108 obj-$(CONFIG_DRM_GPUVM) += drm_gpuvm.o 109 109 110 110 drm_gpusvm_helper-y := \ 111 - drm_gpusvm.o\ 111 + drm_gpusvm.o 112 + drm_gpusvm_helper-$(CONFIG_ZONE_DEVICE) += \ 112 113 drm_pagemap.o 114 + 113 115 obj-$(CONFIG_DRM_GPUSVM) += drm_gpusvm_helper.o 114 116 115 117 obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
··· 763 763 } 764 764 765 765 static void amdgpu_ring_backup_unprocessed_command(struct amdgpu_ring *ring, 766 - u64 start_wptr, u32 end_wptr) 766 + u64 start_wptr, u64 end_wptr) 767 767 { 768 768 unsigned int first_idx = start_wptr & ring->buf_mask; 769 769 unsigned int last_idx = end_wptr & ring->buf_mask;
+4 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
··· 733 733 734 734 if (!adev->gmc.flush_pasid_uses_kiq || !ring->sched.ready) { 735 735 736 - if (!adev->gmc.gmc_funcs->flush_gpu_tlb_pasid) 737 - return 0; 736 + if (!adev->gmc.gmc_funcs->flush_gpu_tlb_pasid) { 737 + r = 0; 738 + goto error_unlock_reset; 739 + } 738 740 739 741 if (adev->gmc.flush_tlb_needs_extra_type_2) 740 742 adev->gmc.gmc_funcs->flush_gpu_tlb_pasid(adev, pasid,
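The gmc hunk turns an early `return 0` into `goto error_unlock_reset` so that even the "nothing to do" exit runs the cleanup behind that label (presumably dropping a reset-domain lock). A generic analogue of the rule with a plain mutex, illustrative names only:

#include <linux/mutex.h>

struct flush_state {
	struct mutex lock;
	int (*flush)(struct flush_state *s);	/* may be NULL */
};

static int do_flush(struct flush_state *s)
{
	int r = 0;

	mutex_lock(&s->lock);

	/* Even the no-op exit must go through the unlock label. */
	if (!s->flush)
		goto out_unlock;

	r = s->flush(s);

out_unlock:
	mutex_unlock(&s->lock);
	return r;
}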
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
··· 302 302 if (job && job->vmid) 303 303 amdgpu_vmid_reset(adev, ring->vm_hub, job->vmid); 304 304 amdgpu_ring_undo(ring); 305 - return r; 305 + goto free_fence; 306 306 } 307 307 *f = &af->base; 308 308 /* get a ref for the job */
+5 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
··· 217 217 if (!entity) 218 218 return 0; 219 219 220 - return drm_sched_job_init(&(*job)->base, entity, 1, owner, 221 - drm_client_id); 220 + r = drm_sched_job_init(&(*job)->base, entity, 1, owner, drm_client_id); 221 + if (!r) 222 + return 0; 223 + 224 + kfree((*job)->hw_vm_fence); 222 225 223 226 err_fence: 224 227 kfree((*job)->hw_fence);
-12
drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
··· 278 278 u32 sh_num, u32 instance, int xcc_id); 279 279 static u32 gfx_v12_0_get_wgp_active_bitmap_per_sh(struct amdgpu_device *adev); 280 280 281 - static void gfx_v12_0_ring_emit_frame_cntl(struct amdgpu_ring *ring, bool start, bool secure); 282 281 static void gfx_v12_0_ring_emit_wreg(struct amdgpu_ring *ring, uint32_t reg, 283 282 uint32_t val); 284 283 static int gfx_v12_0_wait_for_rlc_autoload_complete(struct amdgpu_device *adev); ··· 4633 4634 return r; 4634 4635 } 4635 4636 4636 - static void gfx_v12_0_ring_emit_frame_cntl(struct amdgpu_ring *ring, 4637 - bool start, 4638 - bool secure) 4639 - { 4640 - uint32_t v = secure ? FRAME_TMZ : 0; 4641 - 4642 - amdgpu_ring_write(ring, PACKET3(PACKET3_FRAME_CONTROL, 0)); 4643 - amdgpu_ring_write(ring, v | FRAME_CMD(start ? 0 : 1)); 4644 - } 4645 - 4646 4637 static void gfx_v12_0_ring_emit_rreg(struct amdgpu_ring *ring, uint32_t reg, 4647 4638 uint32_t reg_val_offs) 4648 4639 { ··· 5509 5520 .emit_cntxcntl = gfx_v12_0_ring_emit_cntxcntl, 5510 5521 .init_cond_exec = gfx_v12_0_ring_emit_init_cond_exec, 5511 5522 .preempt_ib = gfx_v12_0_ring_preempt_ib, 5512 - .emit_frame_cntl = gfx_v12_0_ring_emit_frame_cntl, 5513 5523 .emit_wreg = gfx_v12_0_ring_emit_wreg, 5514 5524 .emit_reg_wait = gfx_v12_0_ring_emit_reg_wait, 5515 5525 .emit_reg_write_reg_wait = gfx_v12_0_ring_emit_reg_write_reg_wait,
+1 -2
drivers/gpu/drm/amd/amdkfd/kfd_debug.h
··· 120 120 && dev->kfd->mec2_fw_version < 0x1b6) || 121 121 (KFD_GC_VERSION(dev) == IP_VERSION(9, 4, 1) 122 122 && dev->kfd->mec2_fw_version < 0x30) || 123 - (KFD_GC_VERSION(dev) >= IP_VERSION(11, 0, 0) && 124 - KFD_GC_VERSION(dev) < IP_VERSION(12, 0, 0))) 123 + kfd_dbg_has_cwsr_workaround(dev)) 125 124 return false; 126 125 127 126 /* Assume debugging and cooperative launch supported otherwise. */
+3 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_colorop.c
··· 79 79 goto cleanup; 80 80 81 81 list->type = ops[i]->base.id; 82 - list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", ops[i]->base.id); 83 82 84 83 i++; 85 84 ··· 196 197 goto cleanup; 197 198 198 199 drm_colorop_set_next_property(ops[i-1], ops[i]); 200 + 201 + list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", ops[0]->base.id); 202 + 199 203 return 0; 200 204 201 205 cleanup:
-11
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
··· 248 248 struct vblank_control_work *vblank_work = 249 249 container_of(work, struct vblank_control_work, work); 250 250 struct amdgpu_display_manager *dm = vblank_work->dm; 251 - struct amdgpu_device *adev = drm_to_adev(dm->ddev); 252 - int r; 253 251 254 252 mutex_lock(&dm->dc_lock); 255 253 ··· 277 279 278 280 if (dm->active_vblank_irq_count == 0) { 279 281 dc_post_update_surfaces_to_stream(dm->dc); 280 - 281 - r = amdgpu_dpm_pause_power_profile(adev, true); 282 - if (r) 283 - dev_warn(adev->dev, "failed to set default power profile mode\n"); 284 - 285 282 dc_allow_idle_optimizations(dm->dc, true); 286 - 287 - r = amdgpu_dpm_pause_power_profile(adev, false); 288 - if (r) 289 - dev_warn(adev->dev, "failed to restore the power profile mode\n"); 290 283 } 291 284 292 285 mutex_unlock(&dm->dc_lock);
+8 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
··· 915 915 struct amdgpu_dm_connector *amdgpu_dm_connector;
 916 916 const struct dc_link *dc_link;
 917 917 
 918 - use_polling |= connector->polled != DRM_CONNECTOR_POLL_HPD;
 919 - 
 920 918 if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK)
 921 919 continue;
 922 920 
 923 921 amdgpu_dm_connector = to_amdgpu_dm_connector(connector);
 922 + 
 923 + /*
 924 + * Unlike other connector types that lack HPD support, analog
 925 + * connectors may still be hot-plugged. Only poll analog connectors.
 926 + */
 927 + use_polling |=
 928 + amdgpu_dm_connector->dc_link &&
 929 + dc_connector_supports_analog(amdgpu_dm_connector->dc_link->link_id.id);
 924 930 
 925 931 dc_link = amdgpu_dm_connector->dc_link;
 926 932 
+9 -4
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
··· 1790 1790 static int 1791 1791 dm_plane_init_colorops(struct drm_plane *plane) 1792 1792 { 1793 - struct drm_prop_enum_list pipelines[MAX_COLOR_PIPELINES]; 1793 + struct drm_prop_enum_list pipelines[MAX_COLOR_PIPELINES] = {}; 1794 1794 struct drm_device *dev = plane->dev; 1795 1795 struct amdgpu_device *adev = drm_to_adev(dev); 1796 1796 struct dc *dc = adev->dm.dc; 1797 1797 int len = 0; 1798 - int ret; 1798 + int ret = 0; 1799 + int i; 1799 1800 1800 1801 if (plane->type == DRM_PLANE_TYPE_CURSOR) 1801 1802 return 0; ··· 1807 1806 if (ret) { 1808 1807 drm_err(plane->dev, "Failed to create color pipeline for plane %d: %d\n", 1809 1808 plane->base.id, ret); 1810 - return ret; 1809 + goto out; 1811 1810 } 1812 1811 len++; 1813 1812 ··· 1815 1814 drm_plane_create_color_pipeline_property(plane, pipelines, len); 1816 1815 } 1817 1816 1818 - return 0; 1817 + out: 1818 + for (i = 0; i < len; i++) 1819 + kfree(pipelines[i].name); 1820 + 1821 + return ret; 1819 1822 } 1820 1823 #endif 1821 1824
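Both amdgpu_dm color-pipeline hunks above fix the same leak around `struct drm_prop_enum_list`: the `kasprintf()`'d pipeline names are only borrowed by the property code (enum names are copied at creation time), so the array is zero-initialized and every name is freed once the property exists, on success and failure alike. A condensed sketch of that ownership pattern; `MAX_COLOR_PIPELINES` and the single inline entry are illustrative, and `drm_plane_create_color_pipeline_property()` is the in-flight color pipeline API these drivers use:

#include <drm/drm_plane.h>
#include <drm/drm_property.h>
#include <linux/slab.h>

#define MAX_COLOR_PIPELINES 4	/* illustrative */

static int init_pipeline_prop(struct drm_plane *plane)
{
	/* Zero-init so unused slots have NULL names and kfree() is safe. */
	struct drm_prop_enum_list pipelines[MAX_COLOR_PIPELINES] = {};
	int i, len = 0, ret;

	pipelines[len].type = 1;	/* normally the first colorop's id */
	pipelines[len].name = kasprintf(GFP_KERNEL, "Color Pipeline %d",
					pipelines[len].type);
	if (!pipelines[len].name)
		return -ENOMEM;
	len++;

	ret = drm_plane_create_color_pipeline_property(plane, pipelines, len);

	/* The property keeps its own copy; ours is freed either way. */
	for (i = 0; i < len; i++)
		kfree(pipelines[i].name);

	return ret;
}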
+16 -15
drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
··· 2273 2273 if (scaling_factor == 0) 2274 2274 return -EINVAL; 2275 2275 2276 - memset(smc_table, 0, sizeof(SISLANDS_SMC_STATETABLE)); 2277 - 2278 2276 ret = si_calculate_adjusted_tdp_limits(adev, 2279 2277 false, /* ??? */ 2280 2278 adev->pm.dpm.tdp_adjustment, ··· 2280 2282 &near_tdp_limit); 2281 2283 if (ret) 2282 2284 return ret; 2285 + 2286 + if (adev->pdev->device == 0x6611 && adev->pdev->revision == 0x87) { 2287 + /* Workaround buggy powertune on Radeon 430 and 520. */ 2288 + tdp_limit = 32; 2289 + near_tdp_limit = 28; 2290 + } 2283 2291 2284 2292 smc_table->dpm2Params.TDPLimit = 2285 2293 cpu_to_be32(si_scale_power_for_smc(tdp_limit, scaling_factor) * 1000); ··· 2332 2328 2333 2329 if (ni_pi->enable_power_containment) { 2334 2330 SISLANDS_SMC_STATETABLE *smc_table = &si_pi->smc_statetable; 2335 - u32 scaling_factor = si_get_smc_power_scaling_factor(adev); 2336 2331 int ret; 2337 - 2338 - memset(smc_table, 0, sizeof(SISLANDS_SMC_STATETABLE)); 2339 - 2340 - smc_table->dpm2Params.NearTDPLimit = 2341 - cpu_to_be32(si_scale_power_for_smc(adev->pm.dpm.near_tdp_limit_adjusted, scaling_factor) * 1000); 2342 - smc_table->dpm2Params.SafePowerLimit = 2343 - cpu_to_be32(si_scale_power_for_smc((adev->pm.dpm.near_tdp_limit_adjusted * SISLANDS_DPM2_TDP_SAFE_LIMIT_PERCENT) / 100, scaling_factor) * 1000); 2344 2332 2345 2333 ret = amdgpu_si_copy_bytes_to_smc(adev, 2346 2334 (si_pi->state_table_start + ··· 3469 3473 (adev->pdev->revision == 0x80) || 3470 3474 (adev->pdev->revision == 0x81) || 3471 3475 (adev->pdev->revision == 0x83) || 3472 - (adev->pdev->revision == 0x87) || 3476 + (adev->pdev->revision == 0x87 && 3477 + adev->pdev->device != 0x6611) || 3473 3478 (adev->pdev->device == 0x6604) || 3474 3479 (adev->pdev->device == 0x6605)) { 3475 3480 max_sclk = 75000; 3481 + } else if (adev->pdev->revision == 0x87 && 3482 + adev->pdev->device == 0x6611) { 3483 + /* Radeon 430 and 520 */ 3484 + max_sclk = 78000; 3476 3485 } 3477 3486 } 3478 3487 ··· 7601 7600 case AMDGPU_IRQ_STATE_DISABLE: 7602 7601 cg_thermal_int = RREG32_SMC(mmCG_THERMAL_INT); 7603 7602 cg_thermal_int |= CG_THERMAL_INT__THERM_INT_MASK_HIGH_MASK; 7604 - WREG32_SMC(mmCG_THERMAL_INT, cg_thermal_int); 7603 + WREG32(mmCG_THERMAL_INT, cg_thermal_int); 7605 7604 break; 7606 7605 case AMDGPU_IRQ_STATE_ENABLE: 7607 7606 cg_thermal_int = RREG32_SMC(mmCG_THERMAL_INT); 7608 7607 cg_thermal_int &= ~CG_THERMAL_INT__THERM_INT_MASK_HIGH_MASK; 7609 - WREG32_SMC(mmCG_THERMAL_INT, cg_thermal_int); 7608 + WREG32(mmCG_THERMAL_INT, cg_thermal_int); 7610 7609 break; 7611 7610 default: 7612 7611 break; ··· 7618 7617 case AMDGPU_IRQ_STATE_DISABLE: 7619 7618 cg_thermal_int = RREG32_SMC(mmCG_THERMAL_INT); 7620 7619 cg_thermal_int |= CG_THERMAL_INT__THERM_INT_MASK_LOW_MASK; 7621 - WREG32_SMC(mmCG_THERMAL_INT, cg_thermal_int); 7620 + WREG32(mmCG_THERMAL_INT, cg_thermal_int); 7622 7621 break; 7623 7622 case AMDGPU_IRQ_STATE_ENABLE: 7624 7623 cg_thermal_int = RREG32_SMC(mmCG_THERMAL_INT); 7625 7624 cg_thermal_int &= ~CG_THERMAL_INT__THERM_INT_MASK_LOW_MASK; 7626 - WREG32_SMC(mmCG_THERMAL_INT, cg_thermal_int); 7625 + WREG32(mmCG_THERMAL_INT, cg_thermal_int); 7627 7626 break; 7628 7627 default: 7629 7628 break;
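The si_dpm hunks key both new workarounds (the powertune TDP override and the separate max_sclk cap) off a single board identity, and the existing revision-0x87 branch is amended to exclude it. That identity test, written out as a helper (the helper name is mine; the IDs come from the patch):

#include <linux/pci.h>

/* Radeon 430/520: PCI device 0x6611, revision 0x87, per the patch. */
static bool si_is_radeon_430_520(const struct pci_dev *pdev)
{
	return pdev->device == 0x6611 && pdev->revision == 0x87;
}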
+14 -6
drivers/gpu/drm/bridge/synopsys/dw-dp.c
··· 2062 2062 } 2063 2063 2064 2064 ret = drm_bridge_attach(encoder, bridge, NULL, DRM_BRIDGE_ATTACH_NO_CONNECTOR); 2065 - if (ret) 2065 + if (ret) { 2066 2066 dev_err_probe(dev, ret, "Failed to attach bridge\n"); 2067 + goto unregister_aux; 2068 + } 2067 2069 2068 2070 dw_dp_init_hw(dp); 2069 2071 2070 2072 ret = phy_init(dp->phy); 2071 2073 if (ret) { 2072 2074 dev_err_probe(dev, ret, "phy init failed\n"); 2073 - return ERR_PTR(ret); 2075 + goto unregister_aux; 2074 2076 } 2075 2077 2076 2078 ret = devm_add_action_or_reset(dev, dw_dp_phy_exit, dp); 2077 2079 if (ret) 2078 - return ERR_PTR(ret); 2080 + goto unregister_aux; 2079 2081 2080 2082 dp->irq = platform_get_irq(pdev, 0); 2081 - if (dp->irq < 0) 2082 - return ERR_PTR(ret); 2083 + if (dp->irq < 0) { 2084 + ret = dp->irq; 2085 + goto unregister_aux; 2086 + } 2083 2087 2084 2088 ret = devm_request_threaded_irq(dev, dp->irq, NULL, dw_dp_irq, 2085 2089 IRQF_ONESHOT, dev_name(dev), dp); 2086 2090 if (ret) { 2087 2091 dev_err_probe(dev, ret, "failed to request irq\n"); 2088 - return ERR_PTR(ret); 2092 + goto unregister_aux; 2089 2093 } 2090 2094 2091 2095 return dp; 2096 + 2097 + unregister_aux: 2098 + drm_dp_aux_unregister(&dp->aux); 2099 + return ERR_PTR(ret); 2092 2100 } 2093 2101 EXPORT_SYMBOL_GPL(dw_dp_bind); 2094 2102
+22 -14
drivers/gpu/drm/i915/display/intel_color_pipeline.c
··· 34 34 return ret; 35 35 36 36 list->type = colorop->base.base.id; 37 - list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", colorop->base.base.id); 38 37 39 38 /* TODO: handle failures and clean up */ 39 + prev_op = &colorop->base; 40 + 41 + colorop = intel_colorop_create(INTEL_PLANE_CB_CSC); 42 + ret = drm_plane_colorop_ctm_3x4_init(dev, &colorop->base, plane, 43 + DRM_COLOROP_FLAG_ALLOW_BYPASS); 44 + if (ret) 45 + return ret; 46 + 47 + drm_colorop_set_next_property(prev_op, &colorop->base); 40 48 prev_op = &colorop->base; 41 49 42 50 if (DISPLAY_VER(display) >= 35 && ··· 63 55 prev_op = &colorop->base; 64 56 } 65 57 66 - colorop = intel_colorop_create(INTEL_PLANE_CB_CSC); 67 - ret = drm_plane_colorop_ctm_3x4_init(dev, &colorop->base, plane, 68 - DRM_COLOROP_FLAG_ALLOW_BYPASS); 69 - if (ret) 70 - return ret; 71 - 72 - drm_colorop_set_next_property(prev_op, &colorop->base); 73 - prev_op = &colorop->base; 74 - 75 58 colorop = intel_colorop_create(INTEL_PLANE_CB_POST_CSC_LUT); 76 59 ret = drm_plane_colorop_curve_1d_lut_init(dev, &colorop->base, plane, 77 60 PLANE_GAMMA_SIZE, ··· 73 74 74 75 drm_colorop_set_next_property(prev_op, &colorop->base); 75 76 77 + list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", list->type); 78 + 76 79 return 0; 77 80 } 78 81 ··· 82 81 { 83 82 struct drm_device *dev = plane->dev; 84 83 struct intel_display *display = to_intel_display(dev); 85 - struct drm_prop_enum_list pipelines[MAX_COLOR_PIPELINES]; 84 + struct drm_prop_enum_list pipelines[MAX_COLOR_PIPELINES] = {}; 86 85 int len = 0; 87 - int ret; 86 + int ret = 0; 87 + int i; 88 88 89 89 /* Currently expose pipeline only for HDR planes */ 90 90 if (!icl_is_hdr_plane(display, to_intel_plane(plane)->id)) ··· 94 92 /* Add pipeline consisting of transfer functions */ 95 93 ret = _intel_color_pipeline_plane_init(plane, &pipelines[len], pipe); 96 94 if (ret) 97 - return ret; 95 + goto out; 98 96 len++; 99 97 100 - return drm_plane_create_color_pipeline_property(plane, pipelines, len); 98 + ret = drm_plane_create_color_pipeline_property(plane, pipelines, len); 99 + 100 + for (i = 0; i < len; i++) 101 + kfree(pipelines[i].name); 102 + 103 + out: 104 + return ret; 101 105 }
+7 -1
drivers/gpu/drm/imagination/pvr_fw_trace.c
··· 137 137 struct rogue_fwif_kccb_cmd cmd; 138 138 int idx; 139 139 int err; 140 + int slot; 140 141 141 142 if (group_mask) 142 143 fw_trace->tracebuf_ctrl->log_type = ROGUE_FWIF_LOG_TYPE_TRACE | group_mask; ··· 155 154 cmd.cmd_type = ROGUE_FWIF_KCCB_CMD_LOGTYPE_UPDATE; 156 155 cmd.kccb_flags = 0; 157 156 158 - err = pvr_kccb_send_cmd(pvr_dev, &cmd, NULL); 157 + err = pvr_kccb_send_cmd(pvr_dev, &cmd, &slot); 158 + if (err) 159 + goto err_drm_dev_exit; 159 160 161 + err = pvr_kccb_wait_for_completion(pvr_dev, slot, HZ, NULL); 162 + 163 + err_drm_dev_exit: 160 164 drm_dev_exit(idx); 161 165 162 166 err_up_read:
+1 -1
drivers/gpu/drm/mediatek/Kconfig
··· 8 8 depends on OF 9 9 depends on MTK_MMSYS 10 10 select DRM_CLIENT_SELECTION 11 - select DRM_GEM_DMA_HELPER if DRM_FBDEV_EMULATION 11 + select DRM_GEM_DMA_HELPER 12 12 select DRM_KMS_HELPER 13 13 select DRM_DISPLAY_HELPER 14 14 select DRM_BRIDGE_CONNECTOR
+9 -14
drivers/gpu/drm/mediatek/mtk_dpi.c
··· 836 836 enum drm_bridge_attach_flags flags) 837 837 { 838 838 struct mtk_dpi *dpi = bridge_to_dpi(bridge); 839 - int ret; 840 - 841 - dpi->next_bridge = devm_drm_of_get_bridge(dpi->dev, dpi->dev->of_node, 1, -1); 842 - if (IS_ERR(dpi->next_bridge)) { 843 - ret = PTR_ERR(dpi->next_bridge); 844 - if (ret == -EPROBE_DEFER) 845 - return ret; 846 - 847 - /* Old devicetree has only one endpoint */ 848 - dpi->next_bridge = devm_drm_of_get_bridge(dpi->dev, dpi->dev->of_node, 0, 0); 849 - if (IS_ERR(dpi->next_bridge)) 850 - return dev_err_probe(dpi->dev, PTR_ERR(dpi->next_bridge), 851 - "Failed to get bridge\n"); 852 - } 853 839 854 840 return drm_bridge_attach(encoder, dpi->next_bridge, 855 841 &dpi->bridge, flags); ··· 1304 1318 dpi->irq = platform_get_irq(pdev, 0); 1305 1319 if (dpi->irq < 0) 1306 1320 return dpi->irq; 1321 + 1322 + dpi->next_bridge = devm_drm_of_get_bridge(dpi->dev, dpi->dev->of_node, 1, -1); 1323 + if (IS_ERR(dpi->next_bridge) && PTR_ERR(dpi->next_bridge) == -ENODEV) { 1324 + /* Old devicetree has only one endpoint */ 1325 + dpi->next_bridge = devm_drm_of_get_bridge(dpi->dev, dpi->dev->of_node, 0, 0); 1326 + } 1327 + if (IS_ERR(dpi->next_bridge)) 1328 + return dev_err_probe(dpi->dev, PTR_ERR(dpi->next_bridge), 1329 + "Failed to get bridge\n"); 1307 1330 1308 1331 platform_set_drvdata(pdev, dpi); 1309 1332
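The mtk_dpi change moves the next-bridge lookup from `.attach` to probe time and narrows the old-devicetree fallback to the `-ENODEV` case, so `-EPROBE_DEFER` (and any other error) from the first lookup propagates instead of being masked by the second one. The lookup, condensed into a helper (the helper name is mine; the calls and port numbers are from the diff):

#include <linux/err.h>
#include <drm/drm_bridge.h>

static struct drm_bridge *mtk_dpi_lookup_bridge(struct device *dev)
{
	struct drm_bridge *bridge;

	/* Current bindings: port 1, any endpoint. */
	bridge = devm_drm_of_get_bridge(dev, dev->of_node, 1, -1);
	if (IS_ERR(bridge) && PTR_ERR(bridge) == -ENODEV) {
		/* Old devicetrees expose only endpoint 0 at port 0. */
		bridge = devm_drm_of_get_bridge(dev, dev->of_node, 0, 0);
	}

	return bridge;	/* may still be ERR_PTR, including -EPROBE_DEFER */
}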
+103 -161
drivers/gpu/drm/mediatek/mtk_gem.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 3 * Copyright (c) 2015 MediaTek Inc. 4 + * Copyright (c) 2025 Collabora Ltd. 5 + * AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> 4 6 */ 5 7 6 8 #include <linux/dma-buf.h> ··· 20 18 21 19 static int mtk_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); 22 20 23 - static const struct vm_operations_struct vm_ops = { 24 - .open = drm_gem_vm_open, 25 - .close = drm_gem_vm_close, 26 - }; 21 + static void mtk_gem_free_object(struct drm_gem_object *obj) 22 + { 23 + struct drm_gem_dma_object *dma_obj = to_drm_gem_dma_obj(obj); 24 + struct mtk_drm_private *priv = obj->dev->dev_private; 25 + 26 + if (dma_obj->sgt) 27 + drm_prime_gem_destroy(obj, dma_obj->sgt); 28 + else 29 + dma_free_wc(priv->dma_dev, dma_obj->base.size, 30 + dma_obj->vaddr, dma_obj->dma_addr); 31 + 32 + /* release file pointer to gem object. */ 33 + drm_gem_object_release(obj); 34 + 35 + kfree(dma_obj); 36 + } 37 + 38 + /* 39 + * Allocate a sg_table for this GEM object. 40 + * Note: Both the table's contents, and the sg_table itself must be freed by 41 + * the caller. 42 + * Returns a pointer to the newly allocated sg_table, or an ERR_PTR() error. 43 + */ 44 + static struct sg_table *mtk_gem_prime_get_sg_table(struct drm_gem_object *obj) 45 + { 46 + struct drm_gem_dma_object *dma_obj = to_drm_gem_dma_obj(obj); 47 + struct mtk_drm_private *priv = obj->dev->dev_private; 48 + struct sg_table *sgt; 49 + int ret; 50 + 51 + sgt = kzalloc(sizeof(*sgt), GFP_KERNEL); 52 + if (!sgt) 53 + return ERR_PTR(-ENOMEM); 54 + 55 + ret = dma_get_sgtable(priv->dma_dev, sgt, dma_obj->vaddr, 56 + dma_obj->dma_addr, obj->size); 57 + if (ret) { 58 + DRM_ERROR("failed to allocate sgt, %d\n", ret); 59 + kfree(sgt); 60 + return ERR_PTR(ret); 61 + } 62 + 63 + return sgt; 64 + } 27 65 28 66 static const struct drm_gem_object_funcs mtk_gem_object_funcs = { 29 67 .free = mtk_gem_free_object, 68 + .print_info = drm_gem_dma_object_print_info, 30 69 .get_sg_table = mtk_gem_prime_get_sg_table, 31 - .vmap = mtk_gem_prime_vmap, 32 - .vunmap = mtk_gem_prime_vunmap, 70 + .vmap = drm_gem_dma_object_vmap, 33 71 .mmap = mtk_gem_object_mmap, 34 - .vm_ops = &vm_ops, 72 + .vm_ops = &drm_gem_dma_vm_ops, 35 73 }; 36 74 37 - static struct mtk_gem_obj *mtk_gem_init(struct drm_device *dev, 38 - unsigned long size) 75 + static struct drm_gem_dma_object *mtk_gem_init(struct drm_device *dev, 76 + unsigned long size, bool private) 39 77 { 40 - struct mtk_gem_obj *mtk_gem_obj; 78 + struct drm_gem_dma_object *dma_obj; 41 79 int ret; 42 80 43 81 size = round_up(size, PAGE_SIZE); ··· 85 43 if (size == 0) 86 44 return ERR_PTR(-EINVAL); 87 45 88 - mtk_gem_obj = kzalloc(sizeof(*mtk_gem_obj), GFP_KERNEL); 89 - if (!mtk_gem_obj) 46 + dma_obj = kzalloc(sizeof(*dma_obj), GFP_KERNEL); 47 + if (!dma_obj) 90 48 return ERR_PTR(-ENOMEM); 91 49 92 - mtk_gem_obj->base.funcs = &mtk_gem_object_funcs; 50 + dma_obj->base.funcs = &mtk_gem_object_funcs; 93 51 94 - ret = drm_gem_object_init(dev, &mtk_gem_obj->base, size); 95 - if (ret < 0) { 52 + if (private) { 53 + ret = 0; 54 + drm_gem_private_object_init(dev, &dma_obj->base, size); 55 + } else { 56 + ret = drm_gem_object_init(dev, &dma_obj->base, size); 57 + } 58 + if (ret) { 96 59 DRM_ERROR("failed to initialize gem object\n"); 97 - kfree(mtk_gem_obj); 60 + kfree(dma_obj); 98 61 return ERR_PTR(ret); 99 62 } 100 63 101 - return mtk_gem_obj; 64 + return dma_obj; 102 65 } 103 66 104 - struct mtk_gem_obj *mtk_gem_create(struct drm_device *dev, 105 - size_t 
size, bool alloc_kmap) 67 + static struct drm_gem_dma_object *mtk_gem_create(struct drm_device *dev, size_t size) 106 68 { 107 69 struct mtk_drm_private *priv = dev->dev_private; 108 - struct mtk_gem_obj *mtk_gem; 70 + struct drm_gem_dma_object *dma_obj; 109 71 struct drm_gem_object *obj; 110 72 int ret; 111 73 112 - mtk_gem = mtk_gem_init(dev, size); 113 - if (IS_ERR(mtk_gem)) 114 - return ERR_CAST(mtk_gem); 74 + dma_obj = mtk_gem_init(dev, size, false); 75 + if (IS_ERR(dma_obj)) 76 + return ERR_CAST(dma_obj); 115 77 116 - obj = &mtk_gem->base; 78 + obj = &dma_obj->base; 117 79 118 - mtk_gem->dma_attrs = DMA_ATTR_WRITE_COMBINE; 119 - 120 - if (!alloc_kmap) 121 - mtk_gem->dma_attrs |= DMA_ATTR_NO_KERNEL_MAPPING; 122 - 123 - mtk_gem->cookie = dma_alloc_attrs(priv->dma_dev, obj->size, 124 - &mtk_gem->dma_addr, GFP_KERNEL, 125 - mtk_gem->dma_attrs); 126 - if (!mtk_gem->cookie) { 80 + dma_obj->vaddr = dma_alloc_wc(priv->dma_dev, obj->size, 81 + &dma_obj->dma_addr, 82 + GFP_KERNEL | __GFP_NOWARN); 83 + if (!dma_obj->vaddr) { 127 84 DRM_ERROR("failed to allocate %zx byte dma buffer", obj->size); 128 85 ret = -ENOMEM; 129 86 goto err_gem_free; 130 87 } 131 88 132 - if (alloc_kmap) 133 - mtk_gem->kvaddr = mtk_gem->cookie; 134 - 135 - DRM_DEBUG_DRIVER("cookie = %p dma_addr = %pad size = %zu\n", 136 - mtk_gem->cookie, &mtk_gem->dma_addr, 89 + DRM_DEBUG_DRIVER("vaddr = %p dma_addr = %pad size = %zu\n", 90 + dma_obj->vaddr, &dma_obj->dma_addr, 137 91 size); 138 92 139 - return mtk_gem; 93 + return dma_obj; 140 94 141 95 err_gem_free: 142 96 drm_gem_object_release(obj); 143 - kfree(mtk_gem); 97 + kfree(dma_obj); 144 98 return ERR_PTR(ret); 145 - } 146 - 147 - void mtk_gem_free_object(struct drm_gem_object *obj) 148 - { 149 - struct mtk_gem_obj *mtk_gem = to_mtk_gem_obj(obj); 150 - struct mtk_drm_private *priv = obj->dev->dev_private; 151 - 152 - if (mtk_gem->sg) 153 - drm_prime_gem_destroy(obj, mtk_gem->sg); 154 - else 155 - dma_free_attrs(priv->dma_dev, obj->size, mtk_gem->cookie, 156 - mtk_gem->dma_addr, mtk_gem->dma_attrs); 157 - 158 - /* release file pointer to gem object. */ 159 - drm_gem_object_release(obj); 160 - 161 - kfree(mtk_gem); 162 99 } 163 100 164 101 int mtk_gem_dumb_create(struct drm_file *file_priv, struct drm_device *dev, 165 102 struct drm_mode_create_dumb *args) 166 103 { 167 - struct mtk_gem_obj *mtk_gem; 104 + struct drm_gem_dma_object *dma_obj; 168 105 int ret; 169 106 170 107 args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8); ··· 156 135 args->size = args->pitch; 157 136 args->size *= args->height; 158 137 159 - mtk_gem = mtk_gem_create(dev, args->size, false); 160 - if (IS_ERR(mtk_gem)) 161 - return PTR_ERR(mtk_gem); 138 + dma_obj = mtk_gem_create(dev, args->size); 139 + if (IS_ERR(dma_obj)) 140 + return PTR_ERR(dma_obj); 162 141 163 142 /* 164 143 * allocate a id of idr table where the obj is registered 165 144 * and handle has the id what user can see. 166 145 */ 167 - ret = drm_gem_handle_create(file_priv, &mtk_gem->base, &args->handle); 146 + ret = drm_gem_handle_create(file_priv, &dma_obj->base, &args->handle); 168 147 if (ret) 169 148 goto err_handle_create; 170 149 171 150 /* drop reference from allocate - handle holds it now. 
*/ 172 - drm_gem_object_put(&mtk_gem->base); 151 + drm_gem_object_put(&dma_obj->base); 173 152 174 153 return 0; 175 154 176 155 err_handle_create: 177 - mtk_gem_free_object(&mtk_gem->base); 156 + mtk_gem_free_object(&dma_obj->base); 178 157 return ret; 179 158 } 180 159 ··· 182 161 struct vm_area_struct *vma) 183 162 184 163 { 185 - int ret; 186 - struct mtk_gem_obj *mtk_gem = to_mtk_gem_obj(obj); 164 + struct drm_gem_dma_object *dma_obj = to_drm_gem_dma_obj(obj); 187 165 struct mtk_drm_private *priv = obj->dev->dev_private; 166 + int ret; 188 167 189 168 /* 190 169 * Set vm_pgoff (used as a fake buffer offset by DRM) to 0 and map the 191 170 * whole buffer from the start. 192 171 */ 193 - vma->vm_pgoff = 0; 172 + vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node); 194 173 195 174 /* 196 175 * dma_alloc_attrs() allocated a struct page table for mtk_gem, so clear 197 176 * VM_PFNMAP flag that was set by drm_gem_mmap_obj()/drm_gem_mmap(). 198 177 */ 199 - vm_flags_set(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP); 178 + vm_flags_mod(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP, VM_PFNMAP); 179 + 200 180 vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags)); 201 181 vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot); 202 182 203 - ret = dma_mmap_attrs(priv->dma_dev, vma, mtk_gem->cookie, 204 - mtk_gem->dma_addr, obj->size, mtk_gem->dma_attrs); 183 + ret = dma_mmap_wc(priv->dma_dev, vma, dma_obj->vaddr, 184 + dma_obj->dma_addr, obj->size); 185 + if (ret) 186 + drm_gem_vm_close(vma); 205 187 206 188 return ret; 207 189 } 208 190 209 - /* 210 - * Allocate a sg_table for this GEM object. 211 - * Note: Both the table's contents, and the sg_table itself must be freed by 212 - * the caller. 213 - * Returns a pointer to the newly allocated sg_table, or an ERR_PTR() error. 
214 - */ 215 - struct sg_table *mtk_gem_prime_get_sg_table(struct drm_gem_object *obj) 216 - { 217 - struct mtk_gem_obj *mtk_gem = to_mtk_gem_obj(obj); 218 - struct mtk_drm_private *priv = obj->dev->dev_private; 219 - struct sg_table *sgt; 220 - int ret; 221 - 222 - sgt = kzalloc(sizeof(*sgt), GFP_KERNEL); 223 - if (!sgt) 224 - return ERR_PTR(-ENOMEM); 225 - 226 - ret = dma_get_sgtable_attrs(priv->dma_dev, sgt, mtk_gem->cookie, 227 - mtk_gem->dma_addr, obj->size, 228 - mtk_gem->dma_attrs); 229 - if (ret) { 230 - DRM_ERROR("failed to allocate sgt, %d\n", ret); 231 - kfree(sgt); 232 - return ERR_PTR(ret); 233 - } 234 - 235 - return sgt; 236 - } 237 - 238 191 struct drm_gem_object *mtk_gem_prime_import_sg_table(struct drm_device *dev, 239 - struct dma_buf_attachment *attach, struct sg_table *sg) 192 + struct dma_buf_attachment *attach, struct sg_table *sgt) 240 193 { 241 - struct mtk_gem_obj *mtk_gem; 194 + struct drm_gem_dma_object *dma_obj; 242 195 243 196 /* check if the entries in the sg_table are contiguous */ 244 - if (drm_prime_get_contiguous_size(sg) < attach->dmabuf->size) { 197 + if (drm_prime_get_contiguous_size(sgt) < attach->dmabuf->size) { 245 198 DRM_ERROR("sg_table is not contiguous"); 246 199 return ERR_PTR(-EINVAL); 247 200 } 248 201 249 - mtk_gem = mtk_gem_init(dev, attach->dmabuf->size); 250 - if (IS_ERR(mtk_gem)) 251 - return ERR_CAST(mtk_gem); 202 + dma_obj = mtk_gem_init(dev, attach->dmabuf->size, true); 203 + if (IS_ERR(dma_obj)) 204 + return ERR_CAST(dma_obj); 252 205 253 - mtk_gem->dma_addr = sg_dma_address(sg->sgl); 254 - mtk_gem->sg = sg; 206 + dma_obj->dma_addr = sg_dma_address(sgt->sgl); 207 + dma_obj->sgt = sgt; 255 208 256 - return &mtk_gem->base; 257 - } 258 - 259 - int mtk_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map) 260 - { 261 - struct mtk_gem_obj *mtk_gem = to_mtk_gem_obj(obj); 262 - struct sg_table *sgt = NULL; 263 - unsigned int npages; 264 - 265 - if (mtk_gem->kvaddr) 266 - goto out; 267 - 268 - sgt = mtk_gem_prime_get_sg_table(obj); 269 - if (IS_ERR(sgt)) 270 - return PTR_ERR(sgt); 271 - 272 - npages = obj->size >> PAGE_SHIFT; 273 - mtk_gem->pages = kcalloc(npages, sizeof(*mtk_gem->pages), GFP_KERNEL); 274 - if (!mtk_gem->pages) { 275 - sg_free_table(sgt); 276 - kfree(sgt); 277 - return -ENOMEM; 278 - } 279 - 280 - drm_prime_sg_to_page_array(sgt, mtk_gem->pages, npages); 281 - 282 - mtk_gem->kvaddr = vmap(mtk_gem->pages, npages, VM_MAP, 283 - pgprot_writecombine(PAGE_KERNEL)); 284 - if (!mtk_gem->kvaddr) { 285 - sg_free_table(sgt); 286 - kfree(sgt); 287 - kfree(mtk_gem->pages); 288 - return -ENOMEM; 289 - } 290 - sg_free_table(sgt); 291 - kfree(sgt); 292 - 293 - out: 294 - iosys_map_set_vaddr(map, mtk_gem->kvaddr); 295 - 296 - return 0; 297 - } 298 - 299 - void mtk_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map) 300 - { 301 - struct mtk_gem_obj *mtk_gem = to_mtk_gem_obj(obj); 302 - void *vaddr = map->vaddr; 303 - 304 - if (!mtk_gem->pages) 305 - return; 306 - 307 - vunmap(vaddr); 308 - mtk_gem->kvaddr = NULL; 309 - kfree(mtk_gem->pages); 209 + return &dma_obj->base; 310 210 }
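The mtk_gem rewrite retires the driver-private `mtk_gem_obj` in favour of `struct drm_gem_dma_object` and the `dma_alloc_wc()`/`dma_free_wc()` pair, which always provides a kernel mapping and so removes the hand-rolled vmap/vunmap and `DMA_ATTR_*` bookkeeping. The allocation core reduces to this (a sketch assuming a device with a usable DMA mask):

#include <linux/dma-mapping.h>
#include <linux/mm.h>

/* Write-combined, always-mapped buffer, as in the rewrite above. */
static void *wc_buf_alloc(struct device *dev, size_t size, dma_addr_t *dma)
{
	return dma_alloc_wc(dev, PAGE_ALIGN(size), dma,
			    GFP_KERNEL | __GFP_NOWARN);
}

static void wc_buf_free(struct device *dev, size_t size, void *vaddr,
			dma_addr_t dma)
{
	dma_free_wc(dev, PAGE_ALIGN(size), vaddr, dma);
}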
+1 -32
drivers/gpu/drm/mediatek/mtk_gem.h
··· 7 7 #define _MTK_GEM_H_ 8 8 9 9 #include <drm/drm_gem.h> 10 + #include <drm/drm_gem_dma_helper.h> 10 11 11 - /* 12 - * mtk drm buffer structure. 13 - * 14 - * @base: a gem object. 15 - * - a new handle to this gem object would be created 16 - * by drm_gem_handle_create(). 17 - * @cookie: the return value of dma_alloc_attrs(), keep it for dma_free_attrs() 18 - * @kvaddr: kernel virtual address of gem buffer. 19 - * @dma_addr: dma address of gem buffer. 20 - * @dma_attrs: dma attributes of gem buffer. 21 - * 22 - * P.S. this object would be transferred to user as kms_bo.handle so 23 - * user can access the buffer through kms_bo.handle. 24 - */ 25 - struct mtk_gem_obj { 26 - struct drm_gem_object base; 27 - void *cookie; 28 - void *kvaddr; 29 - dma_addr_t dma_addr; 30 - unsigned long dma_attrs; 31 - struct sg_table *sg; 32 - struct page **pages; 33 - }; 34 - 35 - #define to_mtk_gem_obj(x) container_of(x, struct mtk_gem_obj, base) 36 - 37 - void mtk_gem_free_object(struct drm_gem_object *gem); 38 - struct mtk_gem_obj *mtk_gem_create(struct drm_device *dev, size_t size, 39 - bool alloc_kmap); 40 12 int mtk_gem_dumb_create(struct drm_file *file_priv, struct drm_device *dev, 41 13 struct drm_mode_create_dumb *args); 42 - struct sg_table *mtk_gem_prime_get_sg_table(struct drm_gem_object *obj); 43 14 struct drm_gem_object *mtk_gem_prime_import_sg_table(struct drm_device *dev, 44 15 struct dma_buf_attachment *attach, struct sg_table *sg); 45 - int mtk_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map); 46 - void mtk_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map); 47 16 48 17 #endif
+1 -1
drivers/gpu/drm/mediatek/mtk_hdmi_common.c
··· 303 303 return dev_err_probe(dev, ret, "Failed to get clocks\n"); 304 304 305 305 hdmi->irq = platform_get_irq(pdev, 0); 306 - if (!hdmi->irq) 306 + if (hdmi->irq < 0) 307 307 return hdmi->irq; 308 308 309 309 hdmi->regs = device_node_to_regmap(dev->of_node);
+1 -1
drivers/gpu/drm/mediatek/mtk_hdmi_common.h
··· 168 168 bool audio_enable; 169 169 bool powered; 170 170 bool enabled; 171 - unsigned int irq; 171 + int irq; 172 172 enum hdmi_hpd_state hpd; 173 173 hdmi_codec_plugged_cb plugged_cb; 174 174 struct device *codec_dev;
+33 -25
drivers/gpu/drm/mediatek/mtk_hdmi_ddc_v2.c
··· 66 66 return 0;
 67 67 }
 68 68 
 69 - static int mtk_ddc_wr_one(struct mtk_hdmi_ddc *ddc, u16 addr_id,
 70 - u16 offset_id, u8 *wr_data)
 69 + static int mtk_ddcm_write_hdmi(struct mtk_hdmi_ddc *ddc, u16 addr_id,
 70 + u16 offset_id, u16 data_cnt, u8 *wr_data)
 71 71 {
 72 72 u32 val;
 73 - int ret;
 73 + int ret, i;
 74 + 
 75 + /* Don't allow transfers larger than the transfer FIFO size
 76 + * (16 bytes)
 77 + */
 78 + if (data_cnt > 16) {
 79 + dev_err(ddc->dev, "Invalid DDCM write request\n");
 80 + return -EINVAL;
 81 + }
 74 82 
 75 83 /* If down, rise bus for write operation */
 76 84 mtk_ddc_check_and_rise_low_bus(ddc);
··· 86 78 regmap_update_bits(ddc->regs, HPD_DDC_CTRL, HPD_DDC_DELAY_CNT,
 87 79 FIELD_PREP(HPD_DDC_DELAY_CNT, DDC2_DLY_CNT));
 88 80 
 81 + /* In case there is no payload data, just do a single write for the
 82 + * address only
 83 + */
 89 84 if (wr_data) {
 90 - regmap_write(ddc->regs, SI2C_CTRL,
 91 - FIELD_PREP(SI2C_ADDR, SI2C_ADDR_READ) |
 92 - FIELD_PREP(SI2C_WDATA, *wr_data) |
 93 - SI2C_WR);
 85 + /* Fill transfer fifo with payload data */
 86 + for (i = 0; i < data_cnt; i++) {
 87 + regmap_write(ddc->regs, SI2C_CTRL,
 88 + FIELD_PREP(SI2C_ADDR, SI2C_ADDR_READ) |
 89 + FIELD_PREP(SI2C_WDATA, wr_data[i]) |
 90 + SI2C_WR);
 91 + }
 94 92 }
 95 - 
 96 93 regmap_write(ddc->regs, DDC_CTRL,
 97 94 FIELD_PREP(DDC_CTRL_CMD, DDC_CMD_SEQ_WRITE) |
 98 - FIELD_PREP(DDC_CTRL_DIN_CNT, wr_data == NULL ? 0 : 1) |
 95 + FIELD_PREP(DDC_CTRL_DIN_CNT, wr_data == NULL ? 0 : data_cnt) |
 99 96 FIELD_PREP(DDC_CTRL_OFFSET, offset_id) |
 100 97 FIELD_PREP(DDC_CTRL_ADDR, addr_id));
 101 98 usleep_range(1000, 1250);
··· 109 96 !(val & DDC_I2C_IN_PROG), 500, 1000);
 110 97 if (ret) {
 111 98 dev_err(ddc->dev, "DDC I2C write timeout\n");
 99 + 
 100 + /* Abort transfer if it is still in progress */
 101 + regmap_update_bits(ddc->regs, DDC_CTRL, DDC_CTRL_CMD,
 102 + FIELD_PREP(DDC_CTRL_CMD, DDC_CMD_ABORT_XFER));
 103 + 
 112 104 return ret;
 113 105 }
 114 106 
··· 197 179 500 * (temp_length + 5));
 198 180 if (ret) {
 199 181 dev_err(ddc->dev, "Timeout waiting for DDC I2C\n");
 182 + 
 183 + /* Abort transfer if it is still in progress */
 184 + regmap_update_bits(ddc->regs, DDC_CTRL, DDC_CTRL_CMD,
 185 + FIELD_PREP(DDC_CTRL_CMD, DDC_CMD_ABORT_XFER));
 186 + 
 200 187 return ret;
 201 188 }
 202 189 
··· 273 250 static int mtk_hdmi_ddc_fg_data_write(struct mtk_hdmi_ddc *ddc, u16 b_dev,
 274 251 u8 data_addr, u16 data_cnt, u8 *pr_data)
 275 252 {
 276 - int i, ret;
 277 - 
 278 253 regmap_set_bits(ddc->regs, HDCP2X_POL_CTRL, HDCP2X_DIS_POLL_EN);
 279 - /*
 280 - * In case there is no payload data, just do a single write for the
 281 - * address only
 282 - */
 283 - if (data_cnt == 0)
 284 - return mtk_ddc_wr_one(ddc, b_dev, data_addr, NULL);
 285 254 
 286 - i = 0;
 287 - do {
 288 - ret = mtk_ddc_wr_one(ddc, b_dev, data_addr + i, pr_data + i);
 289 - if (ret)
 290 - return ret;
 291 - } while (++i < data_cnt);
 292 - 
 293 - return 0;
 255 + return mtk_ddcm_write_hdmi(ddc, b_dev, data_addr, data_cnt, pr_data);
 294 256 }
 295 257 
 296 258 static int mtk_hdmi_ddc_v2_xfer(struct i2c_adapter *adapter, struct i2c_msg *msgs, int num)
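With `mtk_ddcm_write_hdmi()` rejecting anything larger than the 16-byte transfer FIFO, a caller needing a longer write must chunk it; the removed `mtk_ddc_wr_one()` loop effectively did this one byte at a time. A sketch of FIFO-sized chunking; `hw_fifo_write()` is a hypothetical stand-in for one bounded burst, not a driver function:

#include <linux/minmax.h>
#include <linux/types.h>

#define DDC_FIFO_SIZE 16	/* transfer FIFO depth from the patch */

int hw_fifo_write(void *hw, u16 addr, u16 off, const u8 *buf, size_t len);

static int ddc_write_chunked(void *hw, u16 addr, u16 off,
			     const u8 *buf, size_t len)
{
	while (len) {
		size_t n = min_t(size_t, len, DDC_FIFO_SIZE);
		int ret = hw_fifo_write(hw, addr, off, buf, n);

		if (ret)
			return ret;
		buf += n;
		off += n;
		len -= n;
	}

	return 0;
}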
+4 -3
drivers/gpu/drm/mediatek/mtk_hdmi_v2.c
··· 1120 1120 mtk_hdmi_v2_disable(hdmi); 1121 1121 } 1122 1122 1123 - static int mtk_hdmi_v2_hdmi_tmds_char_rate_valid(const struct drm_bridge *bridge, 1124 - const struct drm_display_mode *mode, 1125 - unsigned long long tmds_rate) 1123 + static enum drm_mode_status 1124 + mtk_hdmi_v2_hdmi_tmds_char_rate_valid(const struct drm_bridge *bridge, 1125 + const struct drm_display_mode *mode, 1126 + unsigned long long tmds_rate) 1126 1127 { 1127 1128 if (mode->clock < MTK_HDMI_V2_CLOCK_MIN) 1128 1129 return MODE_CLOCK_LOW;
+4 -4
drivers/gpu/drm/mediatek/mtk_plane.c
··· 11 11 #include <drm/drm_fourcc.h> 12 12 #include <drm/drm_framebuffer.h> 13 13 #include <drm/drm_gem_atomic_helper.h> 14 + #include <drm/drm_gem_dma_helper.h> 14 15 #include <drm/drm_print.h> 15 16 #include <linux/align.h> 16 17 17 18 #include "mtk_crtc.h" 18 19 #include "mtk_ddp_comp.h" 19 20 #include "mtk_drm_drv.h" 20 - #include "mtk_gem.h" 21 21 #include "mtk_plane.h" 22 22 23 23 static const u64 modifiers[] = { ··· 114 114 struct mtk_plane_state *mtk_plane_state) 115 115 { 116 116 struct drm_framebuffer *fb = new_state->fb; 117 + struct drm_gem_dma_object *dma_obj; 117 118 struct drm_gem_object *gem; 118 - struct mtk_gem_obj *mtk_gem; 119 119 unsigned int pitch, format; 120 120 u64 modifier; 121 121 dma_addr_t addr; ··· 124 124 int offset; 125 125 126 126 gem = fb->obj[0]; 127 - mtk_gem = to_mtk_gem_obj(gem); 128 - addr = mtk_gem->dma_addr; 127 + dma_obj = to_drm_gem_dma_obj(gem); 128 + addr = dma_obj->dma_addr; 129 129 pitch = fb->pitches[0]; 130 130 format = fb->format->format; 131 131 modifier = fb->modifier;
+74 -21
drivers/gpu/drm/nouveau/include/nvkm/subdev/bios/conn.h
··· 1 1 /* SPDX-License-Identifier: MIT */ 2 2 #ifndef __NVBIOS_CONN_H__ 3 3 #define __NVBIOS_CONN_H__ 4 + 5 + /* 6 + * An enumerator representing all of the possible VBIOS connector types defined 7 + * by Nvidia at 8 + * https://nvidia.github.io/open-gpu-doc/DCB/DCB-4.x-Specification.html. 9 + * 10 + * [1] Nvidia's documentation actually claims DCB_CONNECTOR_HDMI_0 is a "3-Pin 11 + * DIN Stereo Connector". This seems very likely to be a documentation typo 12 + * or some sort of funny historical baggage, because we've treated this 13 + * connector type as HDMI for years without issue. 14 + * TODO: Check with Nvidia what's actually happening here. 15 + */ 4 16 enum dcb_connector_type { 5 - DCB_CONNECTOR_VGA = 0x00, 6 - DCB_CONNECTOR_TV_0 = 0x10, 7 - DCB_CONNECTOR_TV_1 = 0x11, 8 - DCB_CONNECTOR_TV_3 = 0x13, 9 - DCB_CONNECTOR_DVI_I = 0x30, 10 - DCB_CONNECTOR_DVI_D = 0x31, 11 - DCB_CONNECTOR_DMS59_0 = 0x38, 12 - DCB_CONNECTOR_DMS59_1 = 0x39, 13 - DCB_CONNECTOR_LVDS = 0x40, 14 - DCB_CONNECTOR_LVDS_SPWG = 0x41, 15 - DCB_CONNECTOR_DP = 0x46, 16 - DCB_CONNECTOR_eDP = 0x47, 17 - DCB_CONNECTOR_mDP = 0x48, 18 - DCB_CONNECTOR_HDMI_0 = 0x60, 19 - DCB_CONNECTOR_HDMI_1 = 0x61, 20 - DCB_CONNECTOR_HDMI_C = 0x63, 21 - DCB_CONNECTOR_DMS59_DP0 = 0x64, 22 - DCB_CONNECTOR_DMS59_DP1 = 0x65, 23 - DCB_CONNECTOR_WFD = 0x70, 24 - DCB_CONNECTOR_USB_C = 0x71, 25 - DCB_CONNECTOR_NONE = 0xff 17 + /* Analog outputs */ 18 + DCB_CONNECTOR_VGA = 0x00, // VGA 15-pin connector 19 + DCB_CONNECTOR_DVI_A = 0x01, // DVI-A 20 + DCB_CONNECTOR_POD_VGA = 0x02, // Pod - VGA 15-pin connector 21 + DCB_CONNECTOR_TV_0 = 0x10, // TV - Composite Out 22 + DCB_CONNECTOR_TV_1 = 0x11, // TV - S-Video Out 23 + DCB_CONNECTOR_TV_2 = 0x12, // TV - S-Video Breakout - Composite 24 + DCB_CONNECTOR_TV_3 = 0x13, // HDTV Component - YPrPb 25 + DCB_CONNECTOR_TV_SCART = 0x14, // TV - SCART Connector 26 + DCB_CONNECTOR_TV_SCART_D = 0x16, // TV - Composite SCART over D-connector 27 + DCB_CONNECTOR_TV_DTERM = 0x17, // HDTV - D-connector (EIAJ4120) 28 + DCB_CONNECTOR_POD_TV_3 = 0x18, // Pod - HDTV - YPrPb 29 + DCB_CONNECTOR_POD_TV_1 = 0x19, // Pod - S-Video 30 + DCB_CONNECTOR_POD_TV_0 = 0x1a, // Pod - Composite 31 + 32 + /* DVI digital outputs */ 33 + DCB_CONNECTOR_DVI_I_TV_1 = 0x20, // DVI-I-TV-S-Video 34 + DCB_CONNECTOR_DVI_I_TV_0 = 0x21, // DVI-I-TV-Composite 35 + DCB_CONNECTOR_DVI_I_TV_2 = 0x22, // DVI-I-TV-S-Video Breakout-Composite 36 + DCB_CONNECTOR_DVI_I = 0x30, // DVI-I 37 + DCB_CONNECTOR_DVI_D = 0x31, // DVI-D 38 + DCB_CONNECTOR_DVI_ADC = 0x32, // Apple Display Connector (ADC) 39 + DCB_CONNECTOR_DMS59_0 = 0x38, // LFH-DVI-I-1 40 + DCB_CONNECTOR_DMS59_1 = 0x39, // LFH-DVI-I-2 41 + DCB_CONNECTOR_BNC = 0x3c, // BNC Connector [for SDI?] 42 + 43 + /* LVDS / TMDS digital outputs */ 44 + DCB_CONNECTOR_LVDS = 0x40, // LVDS-SPWG-Attached [is this name correct?] 
45 + DCB_CONNECTOR_LVDS_SPWG = 0x41, // LVDS-OEM-Attached (non-removable) 46 + DCB_CONNECTOR_LVDS_REM = 0x42, // LVDS-SPWG-Detached [following naming above] 47 + DCB_CONNECTOR_LVDS_SPWG_REM = 0x43, // LVDS-OEM-Detached (removable) 48 + DCB_CONNECTOR_TMDS = 0x45, // TMDS-OEM-Attached (non-removable) 49 + 50 + /* DP digital outputs */ 51 + DCB_CONNECTOR_DP = 0x46, // DisplayPort External Connector 52 + DCB_CONNECTOR_eDP = 0x47, // DisplayPort Internal Connector 53 + DCB_CONNECTOR_mDP = 0x48, // DisplayPort (Mini) External Connector 54 + 55 + /* Dock outputs (not used) */ 56 + DCB_CONNECTOR_DOCK_VGA_0 = 0x50, // VGA 15-pin if not docked 57 + DCB_CONNECTOR_DOCK_VGA_1 = 0x51, // VGA 15-pin if docked 58 + DCB_CONNECTOR_DOCK_DVI_I_0 = 0x52, // DVI-I if not docked 59 + DCB_CONNECTOR_DOCK_DVI_I_1 = 0x53, // DVI-I if docked 60 + DCB_CONNECTOR_DOCK_DVI_D_0 = 0x54, // DVI-D if not docked 61 + DCB_CONNECTOR_DOCK_DVI_D_1 = 0x55, // DVI-D if docked 62 + DCB_CONNECTOR_DOCK_DP_0 = 0x56, // DisplayPort if not docked 63 + DCB_CONNECTOR_DOCK_DP_1 = 0x57, // DisplayPort if docked 64 + DCB_CONNECTOR_DOCK_mDP_0 = 0x58, // DisplayPort (Mini) if not docked 65 + DCB_CONNECTOR_DOCK_mDP_1 = 0x59, // DisplayPort (Mini) if docked 66 + 67 + /* HDMI? digital outputs */ 68 + DCB_CONNECTOR_HDMI_0 = 0x60, // HDMI? See [1] in top-level enum comment above 69 + DCB_CONNECTOR_HDMI_1 = 0x61, // HDMI-A connector 70 + DCB_CONNECTOR_SPDIF = 0x62, // Audio S/PDIF connector 71 + DCB_CONNECTOR_HDMI_C = 0x63, // HDMI-C (Mini) connector 72 + 73 + /* Misc. digital outputs */ 74 + DCB_CONNECTOR_DMS59_DP0 = 0x64, // LFH-DP-1 75 + DCB_CONNECTOR_DMS59_DP1 = 0x65, // LFH-DP-2 76 + DCB_CONNECTOR_WFD = 0x70, // Virtual connector for Wifi Display (WFD) 77 + DCB_CONNECTOR_USB_C = 0x71, // [DP over USB-C; not present in docs] 78 + DCB_CONNECTOR_NONE = 0xff // Skip Entry 26 79 }; 27 80 28 81 struct nvbios_connT {
+2
drivers/gpu/drm/nouveau/nouveau_display.c
··· 352 352 353 353 static const struct drm_mode_config_funcs nouveau_mode_config_funcs = { 354 354 .fb_create = nouveau_user_framebuffer_create, 355 + .atomic_commit = drm_atomic_helper_commit, 356 + .atomic_check = drm_atomic_helper_check, 355 357 }; 356 358 357 359
+53 -20
drivers/gpu/drm/nouveau/nvkm/engine/disp/uconn.c
··· 191 191 spin_lock(&disp->client.lock); 192 192 if (!conn->object.func) { 193 193 switch (conn->info.type) { 194 - case DCB_CONNECTOR_VGA : args->v0.type = NVIF_CONN_V0_VGA; break; 195 - case DCB_CONNECTOR_TV_0 : 196 - case DCB_CONNECTOR_TV_1 : 197 - case DCB_CONNECTOR_TV_3 : args->v0.type = NVIF_CONN_V0_TV; break; 198 - case DCB_CONNECTOR_DMS59_0 : 199 - case DCB_CONNECTOR_DMS59_1 : 200 - case DCB_CONNECTOR_DVI_I : args->v0.type = NVIF_CONN_V0_DVI_I; break; 201 - case DCB_CONNECTOR_DVI_D : args->v0.type = NVIF_CONN_V0_DVI_D; break; 202 - case DCB_CONNECTOR_LVDS : args->v0.type = NVIF_CONN_V0_LVDS; break; 203 - case DCB_CONNECTOR_LVDS_SPWG: args->v0.type = NVIF_CONN_V0_LVDS_SPWG; break; 204 - case DCB_CONNECTOR_DMS59_DP0: 205 - case DCB_CONNECTOR_DMS59_DP1: 206 - case DCB_CONNECTOR_DP : 207 - case DCB_CONNECTOR_mDP : 208 - case DCB_CONNECTOR_USB_C : args->v0.type = NVIF_CONN_V0_DP; break; 209 - case DCB_CONNECTOR_eDP : args->v0.type = NVIF_CONN_V0_EDP; break; 210 - case DCB_CONNECTOR_HDMI_0 : 211 - case DCB_CONNECTOR_HDMI_1 : 212 - case DCB_CONNECTOR_HDMI_C : args->v0.type = NVIF_CONN_V0_HDMI; break; 194 + /* VGA */ 195 + case DCB_CONNECTOR_DVI_A : 196 + case DCB_CONNECTOR_POD_VGA : 197 + case DCB_CONNECTOR_VGA : args->v0.type = NVIF_CONN_V0_VGA; break; 198 + 199 + /* TV */ 200 + case DCB_CONNECTOR_TV_0 : 201 + case DCB_CONNECTOR_TV_1 : 202 + case DCB_CONNECTOR_TV_2 : 203 + case DCB_CONNECTOR_TV_SCART : 204 + case DCB_CONNECTOR_TV_SCART_D : 205 + case DCB_CONNECTOR_TV_DTERM : 206 + case DCB_CONNECTOR_POD_TV_3 : 207 + case DCB_CONNECTOR_POD_TV_1 : 208 + case DCB_CONNECTOR_POD_TV_0 : 209 + case DCB_CONNECTOR_TV_3 : args->v0.type = NVIF_CONN_V0_TV; break; 210 + 211 + /* DVI */ 212 + case DCB_CONNECTOR_DVI_I_TV_1 : 213 + case DCB_CONNECTOR_DVI_I_TV_0 : 214 + case DCB_CONNECTOR_DVI_I_TV_2 : 215 + case DCB_CONNECTOR_DVI_ADC : 216 + case DCB_CONNECTOR_DMS59_0 : 217 + case DCB_CONNECTOR_DMS59_1 : 218 + case DCB_CONNECTOR_DVI_I : args->v0.type = NVIF_CONN_V0_DVI_I; break; 219 + case DCB_CONNECTOR_TMDS : 220 + case DCB_CONNECTOR_DVI_D : args->v0.type = NVIF_CONN_V0_DVI_D; break; 221 + 222 + /* LVDS */ 223 + case DCB_CONNECTOR_LVDS : args->v0.type = NVIF_CONN_V0_LVDS; break; 224 + case DCB_CONNECTOR_LVDS_SPWG : args->v0.type = NVIF_CONN_V0_LVDS_SPWG; break; 225 + 226 + /* DP */ 227 + case DCB_CONNECTOR_DMS59_DP0 : 228 + case DCB_CONNECTOR_DMS59_DP1 : 229 + case DCB_CONNECTOR_DP : 230 + case DCB_CONNECTOR_mDP : 231 + case DCB_CONNECTOR_USB_C : args->v0.type = NVIF_CONN_V0_DP; break; 232 + case DCB_CONNECTOR_eDP : args->v0.type = NVIF_CONN_V0_EDP; break; 233 + 234 + /* HDMI */ 235 + case DCB_CONNECTOR_HDMI_0 : 236 + case DCB_CONNECTOR_HDMI_1 : 237 + case DCB_CONNECTOR_HDMI_C : args->v0.type = NVIF_CONN_V0_HDMI; break; 238 + 239 + /* 240 + * Dock & unused outputs. 241 + * BNC, SPDIF, WFD, and detached LVDS go here. 242 + */ 213 243 default: 214 - WARN_ON(1); 244 + nvkm_warn(&disp->engine.subdev, 245 + "unimplemented connector type 0x%02x\n", 246 + conn->info.type); 247 + args->v0.type = NVIF_CONN_V0_VGA; 215 248 ret = -EINVAL; 216 249 break; 217 250 }
+8 -7
drivers/gpu/drm/vkms/vkms_colorop.c
··· 37 37 goto cleanup; 38 38 39 39 list->type = ops[i]->base.id; 40 - list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", ops[i]->base.id); 41 40 42 41 i++; 43 42 ··· 87 88 88 89 drm_colorop_set_next_property(ops[i - 1], ops[i]); 89 90 91 + list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", ops[0]->base.id); 92 + 90 93 return 0; 91 94 92 95 cleanup: ··· 104 103 105 104 int vkms_initialize_colorops(struct drm_plane *plane) 106 105 { 107 - struct drm_prop_enum_list pipeline; 108 - int ret; 106 + struct drm_prop_enum_list pipeline = {}; 107 + int ret = 0; 109 108 110 109 /* Add color pipeline */ 111 110 ret = vkms_initialize_color_pipeline(plane, &pipeline); 112 111 if (ret) 113 - return ret; 112 + goto out; 114 113 115 114 /* Create COLOR_PIPELINE property and attach */ 116 115 ret = drm_plane_create_color_pipeline_property(plane, &pipeline, 1); 117 - if (ret) 118 - return ret; 119 116 120 - return 0; 117 + kfree(pipeline.name); 118 + out: 119 + return ret; 121 120 }
+3 -2
drivers/gpu/drm/xe/Kconfig
··· 39 39 select DRM_TTM 40 40 select DRM_TTM_HELPER 41 41 select DRM_EXEC 42 - select DRM_GPUSVM if !UML && DEVICE_PRIVATE 42 + select DRM_GPUSVM if !UML 43 43 select DRM_GPUVM 44 44 select DRM_SCHED 45 45 select MMU_NOTIFIER ··· 80 80 bool "Enable CPU to GPU address mirroring" 81 81 depends on DRM_XE 82 82 depends on !UML 83 - depends on DEVICE_PRIVATE 83 + depends on ZONE_DEVICE 84 84 default y 85 + select DEVICE_PRIVATE 85 86 select DRM_GPUSVM 86 87 help 87 88 Enable this option if you want support for CPU to GPU address
+7 -2
drivers/gpu/drm/xe/xe_bo.c
··· 1055 1055 unsigned long *scanned) 1056 1056 { 1057 1057 struct xe_device *xe = ttm_to_xe_device(bo->bdev); 1058 + struct ttm_tt *tt = bo->ttm; 1058 1059 long lret; 1059 1060 1060 1061 /* Fake move to system, without copying data. */ ··· 1080 1079 .writeback = false, 1081 1080 .allow_move = false}); 1082 1081 1083 - if (lret > 0) 1082 + if (lret > 0) { 1084 1083 xe_ttm_tt_account_subtract(xe, bo->ttm); 1084 + update_global_total_pages(bo->bdev, -(long)tt->num_pages); 1085 + } 1085 1086 1086 1087 return lret; 1087 1088 } ··· 1169 1166 if (needs_rpm) 1170 1167 xe_pm_runtime_put(xe); 1171 1168 1172 - if (lret > 0) 1169 + if (lret > 0) { 1173 1170 xe_ttm_tt_account_subtract(xe, tt); 1171 + update_global_total_pages(bo->bdev, -(long)tt->num_pages); 1172 + } 1174 1173 1175 1174 out_unref: 1176 1175 xe_bo_put(xe_bo);
+57 -15
drivers/gpu/drm/xe/xe_debugfs.c
··· 256 256 return simple_read_from_buffer(ubuf, size, pos, buf, len); 257 257 } 258 258 259 + static int __wedged_mode_set_reset_policy(struct xe_gt *gt, enum xe_wedged_mode mode) 260 + { 261 + bool enable_engine_reset; 262 + int ret; 263 + 264 + enable_engine_reset = (mode != XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET); 265 + ret = xe_guc_ads_scheduler_policy_toggle_reset(&gt->uc.guc.ads, 266 + enable_engine_reset); 267 + if (ret) 268 + xe_gt_err(gt, "Failed to update GuC ADS scheduler policy (%pe)\n", ERR_PTR(ret)); 269 + 270 + return ret; 271 + } 272 + 273 + static int wedged_mode_set_reset_policy(struct xe_device *xe, enum xe_wedged_mode mode) 274 + { 275 + struct xe_gt *gt; 276 + int ret; 277 + u8 id; 278 + 279 + guard(xe_pm_runtime)(xe); 280 + for_each_gt(gt, xe, id) { 281 + ret = __wedged_mode_set_reset_policy(gt, mode); 282 + if (ret) { 283 + if (id > 0) { 284 + xe->wedged.inconsistent_reset = true; 285 + drm_err(&xe->drm, "Inconsistent reset policy state between GTs\n"); 286 + } 287 + return ret; 288 + } 289 + } 290 + 291 + xe->wedged.inconsistent_reset = false; 292 + 293 + return 0; 294 + } 295 + 296 + static bool wedged_mode_needs_policy_update(struct xe_device *xe, enum xe_wedged_mode mode) 297 + { 298 + if (xe->wedged.inconsistent_reset) 299 + return true; 300 + 301 + if (xe->wedged.mode == mode) 302 + return false; 303 + 304 + if (xe->wedged.mode == XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET || 305 + mode == XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET) 306 + return true; 307 + 308 + return false; 309 + } 310 + 259 311 static ssize_t wedged_mode_set(struct file *f, const char __user *ubuf, 260 312 size_t size, loff_t *pos) 261 313 { 262 314 struct xe_device *xe = file_inode(f)->i_private; 263 - struct xe_gt *gt; 264 315 u32 wedged_mode; 265 316 ssize_t ret; 266 - u8 id; 267 317 268 318 ret = kstrtouint_from_user(ubuf, size, 0, &wedged_mode); 269 319 if (ret) ··· 322 272 if (wedged_mode > 2) 323 273 return -EINVAL; 324 274 325 - if (xe->wedged.mode == wedged_mode) 326 - return size; 275 + if (wedged_mode_needs_policy_update(xe, wedged_mode)) { 276 + ret = wedged_mode_set_reset_policy(xe, wedged_mode); 277 + if (ret) 278 + return ret; 279 + } 327 280 328 281 xe->wedged.mode = wedged_mode; 329 - 330 - xe_pm_runtime_get(xe); 331 - for_each_gt(gt, xe, id) { 332 - ret = xe_guc_ads_scheduler_policy_toggle_reset(&gt->uc.guc.ads); 333 - if (ret) { 334 - xe_gt_err(gt, "Failed to update GuC ADS scheduler policy. GuC may still cause engine reset even with wedged_mode=2\n"); 335 - xe_pm_runtime_put(xe); 336 - return -EIO; 337 - } 338 - } 339 - xe_pm_runtime_put(xe); 340 282 341 283 return size; 342 284 }
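The rewritten xe_debugfs helper holds its runtime-PM reference with `guard(xe_pm_runtime)(xe)`, a cleanup.h scope guard that drops the reference on every return path; that is what lets the per-GT loop bail out early without an explicit put. The same construct with a stock mutex guard, illustrative names only:

#include <linux/cleanup.h>
#include <linux/mutex.h>

struct policy_state {
	struct mutex lock;
	bool ready;
};

int apply_policy(struct policy_state *s);	/* hypothetical */

static int update_policy(struct policy_state *s)
{
	guard(mutex)(&s->lock);	/* released on every return below */

	if (!s->ready)
		return -EAGAIN;	/* no explicit unlock needed */

	return apply_policy(s);
}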
+18
drivers/gpu/drm/xe/xe_device_types.h
··· 44 44 struct xe_pxp; 45 45 struct xe_vram_region; 46 46 47 + /** 48 + * enum xe_wedged_mode - possible wedged modes 49 + * @XE_WEDGED_MODE_NEVER: Device will never be declared wedged. 50 + * @XE_WEDGED_MODE_UPON_CRITICAL_ERROR: Device will be declared wedged only 51 + * when critical error occurs like GT reset failure or firmware failure. 52 + * This is the default mode. 53 + * @XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET: Device will be declared wedged on 54 + * any hang. In this mode, engine resets are disabled to avoid automatic 55 + * recovery attempts. This mode is primarily intended for debugging hangs. 56 + */ 57 + enum xe_wedged_mode { 58 + XE_WEDGED_MODE_NEVER = 0, 59 + XE_WEDGED_MODE_UPON_CRITICAL_ERROR = 1, 60 + XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET = 2, 61 + }; 62 + 47 63 #define XE_BO_INVALID_OFFSET LONG_MAX 48 64 49 65 #define GRAPHICS_VER(xe) ((xe)->info.graphics_verx100 / 100) ··· 603 587 int mode; 604 588 /** @wedged.method: Recovery method to be sent in the drm device wedged uevent */ 605 589 unsigned long method; 590 + /** @wedged.inconsistent_reset: Inconsistent reset policy state between GTs */ 591 + bool inconsistent_reset; 606 592 } wedged; 607 593 608 594 /** @bo_device: Struct to control async free of BOs */
+31 -1
drivers/gpu/drm/xe/xe_exec_queue.c
··· 328 328 * @xe: Xe device. 329 329 * @tile: tile which bind exec queue belongs to. 330 330 * @flags: exec queue creation flags 331 + * @user_vm: The user VM which this exec queue belongs to 331 332 * @extensions: exec queue creation extensions 332 333 * 333 334 * Normalize bind exec queue creation. Bind exec queue is tied to migration VM ··· 342 341 */ 343 342 struct xe_exec_queue *xe_exec_queue_create_bind(struct xe_device *xe, 344 343 struct xe_tile *tile, 344 + struct xe_vm *user_vm, 345 345 u32 flags, u64 extensions) 346 346 { 347 347 struct xe_gt *gt = tile->primary_gt; ··· 379 377 xe_exec_queue_put(q); 380 378 return ERR_PTR(err); 381 379 } 380 + 381 + if (user_vm) 382 + q->user_vm = xe_vm_get(user_vm); 382 383 } 383 384 384 385 return q; ··· 410 405 list_for_each_entry_safe(eq, next, &q->multi_gt_list, 411 406 multi_gt_link) 412 407 xe_exec_queue_put(eq); 408 + } 409 + 410 + if (q->user_vm) { 411 + xe_vm_put(q->user_vm); 412 + q->user_vm = NULL; 413 413 } 414 414 415 415 q->ops->destroy(q); ··· 752 742 XE_IOCTL_DBG(xe, eci[0].engine_instance != 0)) 753 743 return -EINVAL; 754 744 745 + vm = xe_vm_lookup(xef, args->vm_id); 746 + if (XE_IOCTL_DBG(xe, !vm)) 747 + return -ENOENT; 748 + 749 + err = down_read_interruptible(&vm->lock); 750 + if (err) { 751 + xe_vm_put(vm); 752 + return err; 753 + } 754 + 755 + if (XE_IOCTL_DBG(xe, xe_vm_is_closed_or_banned(vm))) { 756 + up_read(&vm->lock); 757 + xe_vm_put(vm); 758 + return -ENOENT; 759 + } 760 + 755 761 for_each_tile(tile, xe, id) { 756 762 struct xe_exec_queue *new; 757 763 ··· 775 749 if (id) 776 750 flags |= EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD; 777 751 778 - new = xe_exec_queue_create_bind(xe, tile, flags, 752 + new = xe_exec_queue_create_bind(xe, tile, vm, flags, 779 753 args->extensions); 780 754 if (IS_ERR(new)) { 755 + up_read(&vm->lock); 756 + xe_vm_put(vm); 781 757 err = PTR_ERR(new); 782 758 if (q) 783 759 goto put_exec_queue; ··· 791 763 list_add_tail(&new->multi_gt_list, 792 764 &q->multi_gt_link); 793 765 } 766 + up_read(&vm->lock); 767 + xe_vm_put(vm); 794 768 } else { 795 769 logical_mask = calc_validate_logical_mask(xe, eci, 796 770 args->width,
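The exec-queue ioctl now pins the VM across bind-queue creation: look it up (which takes a reference), take the VM lock interruptibly, re-check that the VM is still alive, and release both on every exit. Condensed from the hunks above, with the queue-creation work elided; all helpers appear in the diff:

static int with_live_vm(struct xe_file *xef, u32 vm_id)
{
	struct xe_vm *vm;
	int err;

	vm = xe_vm_lookup(xef, vm_id);	/* takes a reference */
	if (!vm)
		return -ENOENT;

	err = down_read_interruptible(&vm->lock);
	if (err)
		goto put_vm;

	/* The VM may have been closed between lookup and lock. */
	if (xe_vm_is_closed_or_banned(vm)) {
		err = -ENOENT;
		goto unlock;
	}

	/* ... create bind exec queues against vm ... */

unlock:
	up_read(&vm->lock);
put_vm:
	xe_vm_put(vm);
	return err;
}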
+1
drivers/gpu/drm/xe/xe_exec_queue.h
··· 28 28 u32 flags, u64 extensions); 29 29 struct xe_exec_queue *xe_exec_queue_create_bind(struct xe_device *xe, 30 30 struct xe_tile *tile, 31 + struct xe_vm *user_vm, 31 32 u32 flags, u64 extensions); 32 33 33 34 void xe_exec_queue_fini(struct xe_exec_queue *q);
+6
drivers/gpu/drm/xe/xe_exec_queue_types.h
··· 54 54 struct kref refcount; 55 55 /** @vm: VM (address space) for this exec queue */ 56 56 struct xe_vm *vm; 57 + /** 58 + * @user_vm: User VM (address space) for this exec queue (bind queues 59 + * only) 60 + */ 61 + struct xe_vm *user_vm; 62 + 57 63 /** @class: class of this exec queue */ 58 64 enum xe_engine_class class; 59 65 /**
+1 -1
drivers/gpu/drm/xe/xe_ggtt.c
··· 322 322 else 323 323 ggtt->pt_ops = &xelp_pt_ops; 324 324 325 - ggtt->wq = alloc_workqueue("xe-ggtt-wq", 0, WQ_MEM_RECLAIM); 325 + ggtt->wq = alloc_workqueue("xe-ggtt-wq", WQ_MEM_RECLAIM, 0); 326 326 if (!ggtt->wq) 327 327 return -ENOMEM; 328 328
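The xe_ggtt one-liner fixes an argument-order bug: `alloc_workqueue()` takes the name format, then `flags`, then `max_active`, so the old call passed `WQ_MEM_RECLAIM` where `max_active` belongs and created a queue with no reclaim guarantee. The corrected shape:

#include <linux/workqueue.h>

static struct workqueue_struct *make_ggtt_wq(void)
{
	/* fmt, flags, max_active; 0 selects the default concurrency. */
	return alloc_workqueue("xe-ggtt-wq", WQ_MEM_RECLAIM, 0);
}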
+2 -2
drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h
··· 41 41 }; 42 42 43 43 /** 44 - * xe_gt_sriov_vf_migration - VF migration data. 44 + * struct xe_gt_sriov_vf_migration - VF migration data. 45 45 */ 46 46 struct xe_gt_sriov_vf_migration { 47 - /** @migration: VF migration recovery worker */ 47 + /** @worker: VF migration recovery worker */ 48 48 struct work_struct worker; 49 49 /** @lock: Protects recovery_queued, teardown */ 50 50 spinlock_t lock;
+8 -6
drivers/gpu/drm/xe/xe_guc_ads.c
··· 983 983 /** 984 984 * xe_guc_ads_scheduler_policy_toggle_reset - Toggle reset policy 985 985 * @ads: Additional data structures object 986 + * @enable_engine_reset: true to enable engine resets, false otherwise 986 987 * 987 - * This function update the GuC's engine reset policy based on wedged.mode. 988 + * This function update the GuC's engine reset policy. 988 989 * 989 990 * Return: 0 on success, and negative error code otherwise. 990 991 */ 991 - int xe_guc_ads_scheduler_policy_toggle_reset(struct xe_guc_ads *ads) 992 + int xe_guc_ads_scheduler_policy_toggle_reset(struct xe_guc_ads *ads, 993 + bool enable_engine_reset) 992 994 { 993 995 struct guc_policies *policies; 994 996 struct xe_guc *guc = ads_to_guc(ads); 995 - struct xe_device *xe = ads_to_xe(ads); 996 997 CLASS(xe_guc_buf, buf)(&guc->buf, sizeof(*policies)); 997 998 998 999 if (!xe_guc_buf_is_valid(buf)) ··· 1005 1004 policies->dpc_promote_time = ads_blob_read(ads, policies.dpc_promote_time); 1006 1005 policies->max_num_work_items = ads_blob_read(ads, policies.max_num_work_items); 1007 1006 policies->is_valid = 1; 1008 - if (xe->wedged.mode == 2) 1009 - policies->global_flags |= GLOBAL_POLICY_DISABLE_ENGINE_RESET; 1010 - else 1007 + 1008 + if (enable_engine_reset) 1011 1009 policies->global_flags &= ~GLOBAL_POLICY_DISABLE_ENGINE_RESET; 1010 + else 1011 + policies->global_flags |= GLOBAL_POLICY_DISABLE_ENGINE_RESET; 1012 1012 1013 1013 return guc_ads_action_update_policies(ads, xe_guc_buf_flush(buf)); 1014 1014 }
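The ADS helper now receives the decision as an explicit `enable_engine_reset` argument instead of peeking at `xe->wedged.mode`, keeping the policy choice with the caller in xe_debugfs. The flag update in isolation (`GLOBAL_POLICY_DISABLE_ENGINE_RESET` is the GuC flag from the diff; the wrapper name is mine):

static void guc_policy_set_engine_reset(u32 *global_flags,
					bool enable_engine_reset)
{
	if (enable_engine_reset)
		*global_flags &= ~GLOBAL_POLICY_DISABLE_ENGINE_RESET;
	else
		*global_flags |= GLOBAL_POLICY_DISABLE_ENGINE_RESET;
}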
+4 -1
drivers/gpu/drm/xe/xe_guc_ads.h
··· 6 6 #ifndef _XE_GUC_ADS_H_ 7 7 #define _XE_GUC_ADS_H_ 8 8 9 + #include <linux/types.h> 10 + 9 11 struct xe_guc_ads; 10 12 11 13 int xe_guc_ads_init(struct xe_guc_ads *ads); ··· 15 13 void xe_guc_ads_populate(struct xe_guc_ads *ads); 16 14 void xe_guc_ads_populate_minimal(struct xe_guc_ads *ads); 17 15 void xe_guc_ads_populate_post_load(struct xe_guc_ads *ads); 18 - int xe_guc_ads_scheduler_policy_toggle_reset(struct xe_guc_ads *ads); 16 + int xe_guc_ads_scheduler_policy_toggle_reset(struct xe_guc_ads *ads, 17 + bool enable_engine_reset); 19 18 20 19 #endif
+3 -1
drivers/gpu/drm/xe/xe_late_bind_fw_types.h
··· 15 15 #define XE_LB_MAX_PAYLOAD_SIZE SZ_4K 16 16 17 17 /** 18 - * xe_late_bind_fw_id - enum to determine late binding fw index 18 + * enum xe_late_bind_fw_id - enum to determine late binding fw index 19 19 */ 20 20 enum xe_late_bind_fw_id { 21 + /** @XE_LB_FW_FAN_CONTROL: Fan control */ 21 22 XE_LB_FW_FAN_CONTROL = 0, 23 + /** @XE_LB_FW_MAX_ID: Number of IDs */ 22 24 XE_LB_FW_MAX_ID 23 25 }; 24 26
+3
drivers/gpu/drm/xe/xe_lrc.c
··· 1050 1050 { 1051 1051 u32 *cmd = batch; 1052 1052 1053 + if (IS_SRIOV_VF(gt_to_xe(lrc->gt))) 1054 + return 0; 1055 + 1053 1056 if (xe_gt_WARN_ON(lrc->gt, max_len < 12)) 1054 1057 return -ENOSPC; 1055 1058
+2 -2
drivers/gpu/drm/xe/xe_migrate.c
··· 2445 2445 if (is_migrate) 2446 2446 mutex_lock(&m->job_mutex); 2447 2447 else 2448 - xe_vm_assert_held(q->vm); /* User queues VM's should be locked */ 2448 + xe_vm_assert_held(q->user_vm); /* User queues VM's should be locked */ 2449 2449 } 2450 2450 2451 2451 /** ··· 2463 2463 if (is_migrate) 2464 2464 mutex_unlock(&m->job_mutex); 2465 2465 else 2466 - xe_vm_assert_held(q->vm); /* User queues VM's should be locked */ 2466 + xe_vm_assert_held(q->user_vm); /* User queues VM's should be locked */ 2467 2467 } 2468 2468 2469 2469 #if IS_ENABLED(CONFIG_PROVE_LOCKING)
+1 -1
drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
··· 346 346 flags = EXEC_QUEUE_FLAG_KERNEL | 347 347 EXEC_QUEUE_FLAG_PERMANENT | 348 348 EXEC_QUEUE_FLAG_MIGRATE; 349 - q = xe_exec_queue_create_bind(xe, tile, flags, 0); 349 + q = xe_exec_queue_create_bind(xe, tile, NULL, flags, 0); 350 350 if (IS_ERR(q)) { 351 351 err = PTR_ERR(q); 352 352 goto err_ret;
+6 -1
drivers/gpu/drm/xe/xe_vm.c
··· 1617 1617 if (!vm->pt_root[id]) 1618 1618 continue; 1619 1619 1620 - q = xe_exec_queue_create_bind(xe, tile, create_flags, 0); 1620 + q = xe_exec_queue_create_bind(xe, tile, vm, create_flags, 0); 1621 1621 if (IS_ERR(q)) { 1622 1622 err = PTR_ERR(q); 1623 1623 goto err_close; ··· 3576 3576 err = -EINVAL; 3577 3577 goto put_exec_queue; 3578 3578 } 3579 + } 3580 + 3581 + if (XE_IOCTL_DBG(xe, q && vm != q->user_vm)) { 3582 + err = -EINVAL; 3583 + goto put_exec_queue; 3579 3584 } 3580 3585 3581 3586 /* Ensure all UNMAPs visible */
+1 -1
drivers/gpu/drm/xe/xe_vm.h
··· 379 379 } 380 380 381 381 /** 382 - * xe_vm_set_validation_exec() - Accessor to read the drm_exec object 382 + * xe_vm_validation_exec() - Accessor to read the drm_exec object 383 383 * @vm: The vm we want to register a drm_exec object with. 384 384 * 385 385 * Return: The drm_exec object used to lock the vm's resv. The value
+7 -5
drivers/hv/hv_common.c
··· 195 195 196 196 /* 197 197 * Write dump contents to the page. No need to synchronize; panic should 198 - * be single-threaded. 198 + * be single-threaded. Ignore failures from kmsg_dump_get_buffer() since 199 + * panic notification should be done even if there is no message data. 200 + * Don't assume bytes_written is set in case of failure, so initialize it. 199 201 */ 200 202 kmsg_dump_rewind(&iter); 201 - kmsg_dump_get_buffer(&iter, false, hv_panic_page, HV_HYP_PAGE_SIZE, 203 + bytes_written = 0; 204 + (void)kmsg_dump_get_buffer(&iter, false, hv_panic_page, HV_HYP_PAGE_SIZE, 202 205 &bytes_written); 203 - if (!bytes_written) 204 - return; 206 + 205 207 /* 206 208 * P3 to contain the physical address of the panic page & P4 to 207 209 * contain the size of the panic data in that page. Rest of the ··· 212 210 hv_set_msr(HV_MSR_CRASH_P0, 0); 213 211 hv_set_msr(HV_MSR_CRASH_P1, 0); 214 212 hv_set_msr(HV_MSR_CRASH_P2, 0); 215 - hv_set_msr(HV_MSR_CRASH_P3, virt_to_phys(hv_panic_page)); 213 + hv_set_msr(HV_MSR_CRASH_P3, bytes_written ? virt_to_phys(hv_panic_page) : 0); 216 214 hv_set_msr(HV_MSR_CRASH_P4, bytes_written); 217 215 218 216 /*
+1 -1
drivers/hv/hyperv_vmbus.h
··· 375 375 return; 376 376 377 377 /* 378 - * The cmxchg() above does an implicit memory barrier to 378 + * The cmpxchg() above does an implicit memory barrier to 379 379 * ensure the write to MessageType (ie set to 380 380 * HVMSG_NONE) happens before we read the 381 381 * MessagePending and EOMing. Otherwise, the EOMing
+1 -1
drivers/hv/mshv_eventfd.c
··· 388 388 { 389 389 struct eventfd_ctx *eventfd = NULL, *resamplefd = NULL; 390 390 struct mshv_irqfd *irqfd, *tmp; 391 - unsigned int events; 391 + __poll_t events; 392 392 int ret; 393 393 int idx; 394 394
+62 -31
drivers/hv/mshv_regions.c
··· 20 20 #define MSHV_MAP_FAULT_IN_PAGES PTRS_PER_PMD 21 21 22 22 /** 23 + * mshv_chunk_stride - Compute stride for mapping guest memory 24 + * @page : The page to check for huge page backing 25 + * @gfn : Guest frame number for the mapping 26 + * @page_count: Total number of pages in the mapping 27 + * 28 + * Determines the appropriate stride (in pages) for mapping guest memory. 29 + * Uses huge page stride if the backing page is huge and the guest mapping 30 + * is properly aligned; otherwise falls back to single page stride. 31 + * 32 + * Return: Stride in pages, or -EINVAL if page order is unsupported. 33 + */ 34 + static int mshv_chunk_stride(struct page *page, 35 + u64 gfn, u64 page_count) 36 + { 37 + unsigned int page_order; 38 + 39 + /* 40 + * Use single page stride by default. For huge page stride, the 41 + * page must be compound and point to the head of the compound 42 + * page, and both gfn and page_count must be huge-page aligned. 43 + */ 44 + if (!PageCompound(page) || !PageHead(page) || 45 + !IS_ALIGNED(gfn, PTRS_PER_PMD) || 46 + !IS_ALIGNED(page_count, PTRS_PER_PMD)) 47 + return 1; 48 + 49 + page_order = folio_order(page_folio(page)); 50 + /* The hypervisor only supports 2M huge page */ 51 + if (page_order != PMD_ORDER) 52 + return -EINVAL; 53 + 54 + return 1 << page_order; 55 + } 56 + 57 + /** 23 58 * mshv_region_process_chunk - Processes a contiguous chunk of memory pages 24 59 * in a region. 25 60 * @region : Pointer to the memory region structure. ··· 80 45 int (*handler)(struct mshv_mem_region *region, 81 46 u32 flags, 82 47 u64 page_offset, 83 - u64 page_count)) 48 + u64 page_count, 49 + bool huge_page)) 84 50 { 85 - u64 count, stride; 86 - unsigned int page_order; 51 + u64 gfn = region->start_gfn + page_offset; 52 + u64 count; 87 53 struct page *page; 88 - int ret; 54 + int stride, ret; 89 55 90 56 page = region->pages[page_offset]; 91 57 if (!page) 92 58 return -EINVAL; 93 59 94 - page_order = folio_order(page_folio(page)); 95 - /* The hypervisor only supports 4K and 2M page sizes */ 96 - if (page_order && page_order != PMD_ORDER) 97 - return -EINVAL; 60 + stride = mshv_chunk_stride(page, gfn, page_count); 61 + if (stride < 0) 62 + return stride; 98 63 99 - stride = 1 << page_order; 100 - 101 - /* Start at stride since the first page is validated */ 64 + /* Start at stride since the first stride is validated */ 102 65 for (count = stride; count < page_count; count += stride) { 103 66 page = region->pages[page_offset + count]; 104 67 ··· 104 71 if (!page) 105 72 break; 106 73 107 - /* Break if page size changes */ 108 - if (page_order != folio_order(page_folio(page))) 74 + /* Break if stride size changes */ 75 + if (stride != mshv_chunk_stride(page, gfn + count, 76 + page_count - count)) 109 77 break; 110 78 } 111 79 112 - ret = handler(region, flags, page_offset, count); 80 + ret = handler(region, flags, page_offset, count, stride > 1); 113 81 if (ret) 114 82 return ret; 115 83 ··· 142 108 int (*handler)(struct mshv_mem_region *region, 143 109 u32 flags, 144 110 u64 page_offset, 145 - u64 page_count)) 111 + u64 page_count, 112 + bool huge_page)) 146 113 { 147 114 long ret; 148 115 ··· 197 162 198 163 static int mshv_region_chunk_share(struct mshv_mem_region *region, 199 164 u32 flags, 200 - u64 page_offset, u64 page_count) 165 + u64 page_offset, u64 page_count, 166 + bool huge_page) 201 167 { 202 - struct page *page = region->pages[page_offset]; 203 - 204 - if (PageHuge(page) || PageTransCompound(page)) 168 + if (huge_page) 205 169 flags |= HV_MODIFY_SPA_PAGE_HOST_ACCESS_LARGE_PAGE; 206 170 207 171 return hv_call_modify_spa_host_access(region->partition->pt_id, ··· 222 188 223 189 static int mshv_region_chunk_unshare(struct mshv_mem_region *region, 224 190 u32 flags, 225 - u64 page_offset, u64 page_count) 191 + u64 page_offset, u64 page_count, 192 + bool huge_page) 226 193 { 227 - struct page *page = region->pages[page_offset]; 228 - 229 - if (PageHuge(page) || PageTransCompound(page)) 194 + if (huge_page) 230 195 flags |= HV_MODIFY_SPA_PAGE_HOST_ACCESS_LARGE_PAGE; 231 196 232 197 return hv_call_modify_spa_host_access(region->partition->pt_id, ··· 245 212 246 213 static int mshv_region_chunk_remap(struct mshv_mem_region *region, 247 214 u32 flags, 248 - u64 page_offset, u64 page_count) 215 + u64 page_offset, u64 page_count, 216 + bool huge_page) 249 217 { 250 - struct page *page = region->pages[page_offset]; 251 - 252 - if (PageHuge(page) || PageTransCompound(page)) 218 + if (huge_page) 253 219 flags |= HV_MAP_GPA_LARGE_PAGE; 254 220 255 221 return hv_call_map_gpa_pages(region->partition->pt_id, ··· 327 295 328 296 static int mshv_region_chunk_unmap(struct mshv_mem_region *region, 329 297 u32 flags, 330 - u64 page_offset, u64 page_count) 298 + u64 page_offset, u64 page_count, 299 + bool huge_page) 331 298 { 332 - struct page *page = region->pages[page_offset]; 333 - 334 - if (PageHuge(page) || PageTransCompound(page)) 301 + if (huge_page) 335 302 flags |= HV_UNMAP_GPA_LARGE_PAGE; 336 303 337 304 return hv_call_unmap_gpa_pages(region->partition->pt_id,
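The alignment rules encoded in mshv_chunk_stride() can be sanity-checked in userspace. A minimal sketch, assuming 4 KiB base pages where a PMD maps 512 entries (PTRS_PER_PMD below is a stand-in for the kernel constant, and the compound/head-page test is reduced to a flag):

	#include <stdio.h>
	#include <stdint.h>

	#define PTRS_PER_PMD 512	/* assumed: x86-64, 4 KiB pages */

	/* Model of the rule: a huge-page stride is only legal when the
	 * backing is huge AND both the guest frame number and remaining
	 * page count are multiples of the huge-page size. */
	static int stride_for(uint64_t gfn, uint64_t page_count, int backed_by_huge)
	{
		if (!backed_by_huge ||
		    (gfn % PTRS_PER_PMD) || (page_count % PTRS_PER_PMD))
			return 1;
		return PTRS_PER_PMD;
	}

	int main(void)
	{
		printf("%d\n", stride_for(512 * 7, 1024, 1));     /* 512: aligned */
		printf("%d\n", stride_for(512 * 7 + 3, 1024, 1)); /* 1: gfn misaligned */
		printf("%d\n", stride_for(512 * 7, 1000, 1));     /* 1: count misaligned */
		return 0;
	}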
+9 -8
drivers/hv/mshv_root_main.c
··· 611 611 return NULL; 612 612 } 613 613 614 - #ifdef CONFIG_X86_64 615 614 static struct mshv_mem_region * 616 615 mshv_partition_region_by_gfn_get(struct mshv_partition *p, u64 gfn) 617 616 { ··· 642 643 { 643 644 struct mshv_partition *p = vp->vp_partition; 644 645 struct mshv_mem_region *region; 645 - struct hv_x64_memory_intercept_message *msg; 646 646 bool ret; 647 647 u64 gfn; 648 - 649 - msg = (struct hv_x64_memory_intercept_message *) 648 + #if defined(CONFIG_X86_64) 649 + struct hv_x64_memory_intercept_message *msg = 650 + (struct hv_x64_memory_intercept_message *) 650 651 vp->vp_intercept_msg_page->u.payload; 652 + #elif defined(CONFIG_ARM64) 653 + struct hv_arm64_memory_intercept_message *msg = 654 + (struct hv_arm64_memory_intercept_message *) 655 + vp->vp_intercept_msg_page->u.payload; 656 + #endif 651 657 652 658 gfn = HVPFN_DOWN(msg->guest_physical_address); 653 659 ··· 670 666 671 667 return ret; 672 668 } 673 - #else /* CONFIG_X86_64 */ 674 - static bool mshv_handle_gpa_intercept(struct mshv_vp *vp) { return false; } 675 - #endif /* CONFIG_X86_64 */ 676 669 677 670 static bool mshv_vp_handle_intercept(struct mshv_vp *vp) 678 671 { ··· 1281 1280 long ret; 1282 1281 1283 1282 if (mem.flags & BIT(MSHV_SET_MEM_BIT_UNMAP) || 1284 - !access_ok((const void *)mem.userspace_addr, mem.size)) 1283 + !access_ok((const void __user *)mem.userspace_addr, mem.size)) 1285 1284 return -EINVAL; 1286 1285 1287 1286 mmap_read_lock(current->mm);
+19 -6
drivers/hwtracing/intel_th/core.c
··· 810 810 int err; 811 811 812 812 dev = bus_find_device_by_devt(&intel_th_bus, inode->i_rdev); 813 - if (!dev || !dev->driver) { 813 + if (!dev) 814 + return -ENODEV; 815 + 816 + if (!dev->driver) { 814 817 err = -ENODEV; 815 - goto out_no_device; 818 + goto err_put_dev; 816 819 } 817 820 818 821 thdrv = to_intel_th_driver(dev->driver); 819 822 fops = fops_get(thdrv->fops); 820 823 if (!fops) { 821 824 err = -ENODEV; 822 - goto out_put_device; 825 + goto err_put_dev; 823 826 } 824 827 825 828 replace_fops(file, fops); ··· 832 829 if (file->f_op->open) { 833 830 err = file->f_op->open(inode, file); 834 831 if (err) 835 - goto out_put_device; 832 + goto err_put_dev; 836 833 } 837 834 838 835 return 0; 839 836 840 - out_put_device: 837 + err_put_dev: 841 838 put_device(dev); 842 - out_no_device: 839 + 843 840 return err; 841 + } 842 + 843 + static int intel_th_output_release(struct inode *inode, struct file *file) 844 + { 845 + struct intel_th_device *thdev = file->private_data; 846 + 847 + put_device(&thdev->dev); 848 + 849 + return 0; 844 850 } 845 851 846 852 static const struct file_operations intel_th_output_fops = { 847 853 .open = intel_th_output_open, 854 + .release = intel_th_output_release, 848 855 .llseek = noop_llseek, 849 856 }; 850 857
+1 -1
drivers/i2c/busses/i2c-k1.c
··· 566 566 return dev_err_probe(dev, i2c->irq, "failed to get irq resource"); 567 567 568 568 ret = devm_request_irq(i2c->dev, i2c->irq, spacemit_i2c_irq_handler, 569 - IRQF_NO_SUSPEND | IRQF_ONESHOT, dev_name(i2c->dev), i2c); 569 + IRQF_NO_SUSPEND, dev_name(i2c->dev), i2c); 570 570 if (ret) 571 571 return dev_err_probe(dev, ret, "failed to request irq"); 572 572
+3 -3
drivers/iio/accel/adxl380.c
··· 1784 1784 st->int_map[1] = ADXL380_INT0_MAP1_REG; 1785 1785 } else { 1786 1786 st->irq = fwnode_irq_get_byname(dev_fwnode(st->dev), "INT1"); 1787 - if (st->irq > 0) 1788 - return dev_err_probe(st->dev, -ENODEV, 1789 - "no interrupt name specified"); 1787 + if (st->irq < 0) 1788 + return dev_err_probe(st->dev, st->irq, 1789 + "no interrupt name specified\n"); 1790 1790 st->int_map[0] = ADXL380_INT1_MAP0_REG; 1791 1791 st->int_map[1] = ADXL380_INT1_MAP1_REG; 1792 1792 }
+71 -1
drivers/iio/accel/st_accel_core.c
··· 517 517 .wai_addr = ST_SENSORS_DEFAULT_WAI_ADDRESS, 518 518 .sensors_supported = { 519 519 [0] = H3LIS331DL_ACCEL_DEV_NAME, 520 - [1] = IIS328DQ_ACCEL_DEV_NAME, 521 520 }, 522 521 .ch = (struct iio_chan_spec *)st_accel_12bit_channels, 523 522 .odr = { ··· 557 558 .num = ST_ACCEL_FS_AVL_400G, 558 559 .value = 0x03, 559 560 .gain = IIO_G_TO_M_S_2(195000), 561 + }, 562 + }, 563 + }, 564 + .bdu = { 565 + .addr = 0x23, 566 + .mask = 0x80, 567 + }, 568 + .drdy_irq = { 569 + .int1 = { 570 + .addr = 0x22, 571 + .mask = 0x02, 572 + }, 573 + .int2 = { 574 + .addr = 0x22, 575 + .mask = 0x10, 576 + }, 577 + .addr_ihl = 0x22, 578 + .mask_ihl = 0x80, 579 + }, 580 + .sim = { 581 + .addr = 0x23, 582 + .value = BIT(0), 583 + }, 584 + .multi_read_bit = true, 585 + .bootime = 2, 586 + }, 587 + { 588 + .wai = 0x32, 589 + .wai_addr = ST_SENSORS_DEFAULT_WAI_ADDRESS, 590 + .sensors_supported = { 591 + [0] = IIS328DQ_ACCEL_DEV_NAME, 592 + }, 593 + .ch = (struct iio_chan_spec *)st_accel_12bit_channels, 594 + .odr = { 595 + .addr = 0x20, 596 + .mask = 0x18, 597 + .odr_avl = { 598 + { .hz = 50, .value = 0x00, }, 599 + { .hz = 100, .value = 0x01, }, 600 + { .hz = 400, .value = 0x02, }, 601 + { .hz = 1000, .value = 0x03, }, 602 + }, 603 + }, 604 + .pw = { 605 + .addr = 0x20, 606 + .mask = 0x20, 607 + .value_on = ST_SENSORS_DEFAULT_POWER_ON_VALUE, 608 + .value_off = ST_SENSORS_DEFAULT_POWER_OFF_VALUE, 609 + }, 610 + .enable_axis = { 611 + .addr = ST_SENSORS_DEFAULT_AXIS_ADDR, 612 + .mask = ST_SENSORS_DEFAULT_AXIS_MASK, 613 + }, 614 + .fs = { 615 + .addr = 0x23, 616 + .mask = 0x30, 617 + .fs_avl = { 618 + [0] = { 619 + .num = ST_ACCEL_FS_AVL_100G, 620 + .value = 0x00, 621 + .gain = IIO_G_TO_M_S_2(980), 622 + }, 623 + [1] = { 624 + .num = ST_ACCEL_FS_AVL_200G, 625 + .value = 0x01, 626 + .gain = IIO_G_TO_M_S_2(1950), 627 + }, 628 + [2] = { 629 + .num = ST_ACCEL_FS_AVL_400G, 630 + .value = 0x03, 631 + .gain = IIO_G_TO_M_S_2(3910), 560 632 }, 561 633 }, 562 634 },
+3 -1
drivers/iio/adc/ad7280a.c
··· 1024 1024 1025 1025 st->spi->max_speed_hz = AD7280A_MAX_SPI_CLK_HZ; 1026 1026 st->spi->mode = SPI_MODE_1; 1027 - spi_setup(st->spi); 1027 + ret = spi_setup(st->spi); 1028 + if (ret < 0) 1029 + return ret; 1028 1030 1029 1031 st->ctrl_lb = FIELD_PREP(AD7280A_CTRL_LB_ACQ_TIME_MSK, st->acquisition_time) | 1030 1032 FIELD_PREP(AD7280A_CTRL_LB_THERMISTOR_MSK, st->thermistor_term_en);
+2 -1
drivers/iio/adc/ad7606_par.c
··· 43 43 struct iio_dev *indio_dev) 44 44 { 45 45 struct ad7606_state *st = iio_priv(indio_dev); 46 - unsigned int ret, c; 46 + unsigned int c; 47 + int ret; 47 48 struct iio_backend_data_fmt data = { 48 49 .sign_extend = true, 49 50 .enable = true,
+1 -1
drivers/iio/adc/ad9467.c
··· 95 95 96 96 #define CHIPID_AD9434 0x6A 97 97 #define AD9434_DEF_OUTPUT_MODE 0x00 98 - #define AD9434_REG_VREF_MASK 0xC0 98 + #define AD9434_REG_VREF_MASK GENMASK(4, 0) 99 99 100 100 /* 101 101 * Analog Devices AD9467 16-Bit, 200/250 MSPS ADC
+1
drivers/iio/adc/at91-sama5d2_adc.c
··· 2481 2481 struct at91_adc_state *st = iio_priv(indio_dev); 2482 2482 2483 2483 iio_device_unregister(indio_dev); 2484 + cancel_work_sync(&st->touch_st.workq); 2484 2485 2485 2486 at91_adc_dma_disable(st); 2486 2487
+2 -13
drivers/iio/adc/exynos_adc.c
··· 540 540 ADC_CHANNEL(9, "adc9"), 541 541 }; 542 542 543 - static int exynos_adc_remove_devices(struct device *dev, void *c) 544 - { 545 - struct platform_device *pdev = to_platform_device(dev); 546 - 547 - platform_device_unregister(pdev); 548 - 549 - return 0; 550 - } 551 - 552 543 static int exynos_adc_probe(struct platform_device *pdev) 553 544 { 554 545 struct exynos_adc *info = NULL; ··· 651 660 return 0; 652 661 653 662 err_of_populate: 654 - device_for_each_child(&indio_dev->dev, NULL, 655 - exynos_adc_remove_devices); 663 + of_platform_depopulate(&indio_dev->dev); 656 664 iio_device_unregister(indio_dev); 657 665 err_irq: 658 666 free_irq(info->irq, info); ··· 671 681 struct iio_dev *indio_dev = platform_get_drvdata(pdev); 672 682 struct exynos_adc *info = iio_priv(indio_dev); 673 683 674 - device_for_each_child(&indio_dev->dev, NULL, 675 - exynos_adc_remove_devices); 684 + of_platform_depopulate(&indio_dev->dev); 676 685 iio_device_unregister(indio_dev); 677 686 free_irq(info->irq, info); 678 687 if (info->data->exit_hw)
+3 -3
drivers/iio/adc/pac1934.c
··· 665 665 /* add the power_acc field */ 666 666 curr_energy += inc; 667 667 668 - clamp(curr_energy, PAC_193X_MIN_POWER_ACC, PAC_193X_MAX_POWER_ACC); 669 - 670 - reg_data->energy_sec_acc[cnt] = curr_energy; 668 + reg_data->energy_sec_acc[cnt] = clamp(curr_energy, 669 + PAC_193X_MIN_POWER_ACC, 670 + PAC_193X_MAX_POWER_ACC); 671 671 } 672 672 673 673 offset_reg_data_p += PAC1934_VPOWER_ACC_REG_LEN;
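The underlying bug: clamp() returns the bounded value instead of modifying its argument, so calling it without using the result was a no-op. A userspace sketch with a simplified stand-in for the kernel macro:

	#include <stdio.h>

	/* Simplified stand-in for the kernel's clamp(): returns the
	 * bounded value, never touches its argument. */
	#define clamp(val, lo, hi) \
		((val) < (lo) ? (lo) : ((val) > (hi) ? (hi) : (val)))

	int main(void)
	{
		long long energy = 250;

		(void)clamp(energy, -100LL, 100LL); /* result discarded: no-op */
		printf("discarded: %lld\n", energy); /* still 250 */

		energy = clamp(energy, -100LL, 100LL); /* correct usage */
		printf("assigned:  %lld\n", energy);   /* now 100 */
		return 0;
	}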
+3 -3
drivers/iio/chemical/scd4x.c
··· 584 584 .sign = 'u', 585 585 .realbits = 16, 586 586 .storagebits = 16, 587 - .endianness = IIO_BE, 587 + .endianness = IIO_CPU, 588 588 }, 589 589 }, 590 590 { ··· 599 599 .sign = 'u', 600 600 .realbits = 16, 601 601 .storagebits = 16, 602 - .endianness = IIO_BE, 602 + .endianness = IIO_CPU, 603 603 }, 604 604 }, 605 605 { ··· 612 612 .sign = 'u', 613 613 .realbits = 16, 614 614 .storagebits = 16, 615 - .endianness = IIO_BE, 615 + .endianness = IIO_CPU, 616 616 }, 617 617 }, 618 618 };
+4 -1
drivers/iio/dac/ad3552r-hs.c
··· 549 549 550 550 guard(mutex)(&st->lock); 551 551 552 + if (count >= sizeof(buf)) 553 + return -ENOSPC; 554 + 552 555 ret = simple_write_to_buffer(buf, sizeof(buf) - 1, ppos, userbuf, 553 556 count); 554 557 if (ret < 0) 555 558 return ret; 556 559 557 - buf[count] = '\0'; 560 + buf[ret] = '\0'; 558 561 559 562 ret = match_string(dbgfs_attr_source, ARRAY_SIZE(dbgfs_attr_source), 560 563 buf);
+6
drivers/iio/dac/ad5686.c
··· 434 434 .num_channels = 4, 435 435 .regmap_type = AD5686_REGMAP, 436 436 }, 437 + [ID_AD5695R] = { 438 + .channels = ad5685r_channels, 439 + .int_vref_mv = 2500, 440 + .num_channels = 4, 441 + .regmap_type = AD5686_REGMAP, 442 + }, 437 443 [ID_AD5696] = { 438 444 .channels = ad5686_channels, 439 445 .num_channels = 4,
+5 -4
drivers/iio/imu/inv_icm45600/inv_icm45600_core.c
··· 960 960 return IIO_VAL_INT; 961 961 /* 962 962 * T°C = (temp / 128) + 25 963 - * Tm°C = 1000 * ((temp * 100 / 12800) + 25) 964 - * scale: 100000 / 13248 = 7.8125 965 - * offset: 25000 963 + * Tm°C = ((temp + 25 * 128) / 128) * 1000 964 + * Tm°C = (temp + 3200) * (1000 / 128) 965 + * scale: 1000 / 128 = 7.8125 966 + * offset: 3200 966 967 */ 967 968 case IIO_CHAN_INFO_SCALE: 968 969 *val = 7; 969 970 *val2 = 812500; 970 971 return IIO_VAL_INT_PLUS_MICRO; 971 972 case IIO_CHAN_INFO_OFFSET: 972 - *val = 25000; 973 + *val = 3200; 973 974 return IIO_VAL_INT; 974 975 default: 975 976 return -EINVAL;
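The corrected conversion is easy to verify numerically. A small userspace check of the formula from the comment, milli-degrees = (raw + 25 * 128) * 1000 / 128:

	#include <stdio.h>

	/* Raw temperature is in 1/128 deg C steps with a +25 deg C bias. */
	static long temp_milli_c(long raw)
	{
		return (raw + 3200) * 1000 / 128;
	}

	int main(void)
	{
		printf("%ld\n", temp_milli_c(0));    /* 25000 -> 25 deg C */
		printf("%ld\n", temp_milli_c(128));  /* 26000 -> 26 deg C */
		printf("%ld\n", temp_milli_c(-640)); /* 20000 -> 20 deg C */
		return 0;
	}

The scale 1000 / 128 = 7.8125 matches the reported *val = 7, *val2 = 812500, and raw = 0 now correctly maps through the 3200-step offset to 25 °C.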
+11 -4
drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
··· 101 101 IIO_CHAN_SOFT_TIMESTAMP(3), 102 102 }; 103 103 104 + static const struct iio_chan_spec st_lsm6ds0_acc_channels[] = { 105 + ST_LSM6DSX_CHANNEL(IIO_ACCEL, 0x28, IIO_MOD_X, 0), 106 + ST_LSM6DSX_CHANNEL(IIO_ACCEL, 0x2a, IIO_MOD_Y, 1), 107 + ST_LSM6DSX_CHANNEL(IIO_ACCEL, 0x2c, IIO_MOD_Z, 2), 108 + IIO_CHAN_SOFT_TIMESTAMP(3), 109 + }; 110 + 104 111 static const struct iio_chan_spec st_lsm6dsx_gyro_channels[] = { 105 112 ST_LSM6DSX_CHANNEL(IIO_ANGL_VEL, 0x22, IIO_MOD_X, 0), 106 113 ST_LSM6DSX_CHANNEL(IIO_ANGL_VEL, 0x24, IIO_MOD_Y, 1), ··· 149 142 }, 150 143 .channels = { 151 144 [ST_LSM6DSX_ID_ACC] = { 152 - .chan = st_lsm6dsx_acc_channels, 153 - .len = ARRAY_SIZE(st_lsm6dsx_acc_channels), 145 + .chan = st_lsm6ds0_acc_channels, 146 + .len = ARRAY_SIZE(st_lsm6ds0_acc_channels), 154 147 }, 155 148 [ST_LSM6DSX_ID_GYRO] = { 156 149 .chan = st_lsm6ds0_gyro_channels, ··· 1456 1449 }, 1457 1450 .channels = { 1458 1451 [ST_LSM6DSX_ID_ACC] = { 1459 - .chan = st_lsm6dsx_acc_channels, 1460 - .len = ARRAY_SIZE(st_lsm6dsx_acc_channels), 1452 + .chan = st_lsm6ds0_acc_channels, 1453 + .len = ARRAY_SIZE(st_lsm6ds0_acc_channels), 1461 1454 }, 1462 1455 [ST_LSM6DSX_ID_GYRO] = { 1463 1456 .chan = st_lsm6dsx_gyro_channels,
+3 -1
drivers/iio/industrialio-core.c
··· 1657 1657 mutex_destroy(&iio_dev_opaque->info_exist_lock); 1658 1658 mutex_destroy(&iio_dev_opaque->mlock); 1659 1659 1660 + lockdep_unregister_key(&iio_dev_opaque->info_exist_key); 1660 1661 lockdep_unregister_key(&iio_dev_opaque->mlock_key); 1661 1662 1662 1663 ida_free(&iio_ida, iio_dev_opaque->id); ··· 1718 1717 INIT_LIST_HEAD(&iio_dev_opaque->ioctl_handlers); 1719 1718 1720 1719 lockdep_register_key(&iio_dev_opaque->mlock_key); 1720 + lockdep_register_key(&iio_dev_opaque->info_exist_key); 1721 1721 1722 1722 mutex_init_with_key(&iio_dev_opaque->mlock, &iio_dev_opaque->mlock_key); 1723 - mutex_init(&iio_dev_opaque->info_exist_lock); 1723 + mutex_init_with_key(&iio_dev_opaque->info_exist_lock, &iio_dev_opaque->info_exist_key); 1724 1724 1725 1725 indio_dev->dev.parent = parent; 1726 1726 indio_dev->dev.type = &iio_device_type;
+18
drivers/input/serio/i8042-acpipnpio.h
··· 116 116 .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_NEVER) 117 117 }, 118 118 { 119 + /* 120 + * ASUS Zenbook UX425QA_UM425QA 121 + * Some Zenbooks report "Zenbook" with a lowercase b. 122 + */ 123 + .matches = { 124 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 125 + DMI_MATCH(DMI_PRODUCT_NAME, "Zenbook UX425QA_UM425QA"), 126 + }, 127 + .driver_data = (void *)(SERIO_QUIRK_PROBE_DEFER | SERIO_QUIRK_RESET_NEVER) 128 + }, 129 + { 119 130 /* ASUS ZenBook UX425UA/QA */ 120 131 .matches = { 121 132 DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), ··· 1183 1172 { 1184 1173 .matches = { 1185 1174 DMI_MATCH(DMI_BOARD_NAME, "X5KK45xS_X5SP45xS"), 1175 + }, 1176 + .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1177 + SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1178 + }, 1179 + { 1180 + .matches = { 1181 + DMI_MATCH(DMI_BOARD_NAME, "WUJIE Series-X5SP4NAG"), 1186 1182 }, 1187 1183 .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1188 1184 SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+5
drivers/interconnect/debugfs-client.c
··· 150 150 return ret; 151 151 } 152 152 153 + src_node = devm_kstrdup(&pdev->dev, "", GFP_KERNEL); 154 + dst_node = devm_kstrdup(&pdev->dev, "", GFP_KERNEL); 155 + if (!src_node || !dst_node) 156 + return -ENOMEM; 157 + 153 158 client_dir = debugfs_create_dir("test_client", icc_dir); 154 159 155 160 debugfs_create_str("src_node", 0600, client_dir, &src_node);
+1 -2
drivers/iommu/amd/iommu.c
··· 2450 2450 goto out_err; 2451 2451 } 2452 2452 2453 - out_err: 2454 - 2455 2453 iommu_completion_wait(iommu); 2456 2454 2457 2455 if (FEATURE_NUM_INT_REMAP_SUP_2K(amd_iommu_efr2)) ··· 2460 2462 if (dev_is_pci(dev)) 2461 2463 pci_prepare_ats(to_pci_dev(dev), PAGE_SHIFT); 2462 2464 2465 + out_err: 2463 2466 return iommu_dev; 2464 2467 } 2465 2468
+1 -1
drivers/iommu/generic_pt/iommu_pt.h
··· 645 645 struct pt_iommu_map_args *map = arg; 646 646 647 647 pts.type = pt_load_single_entry(&pts); 648 - if (level == 0) { 648 + if (pts.level == 0) { 649 649 if (pts.type != PT_ENTRY_EMPTY) 650 650 return -EADDRINUSE; 651 651 pt_install_leaf_entry(&pts, map->oa, PAGE_SHIFT,
+1 -1
drivers/iommu/io-pgtable-arm.c
··· 637 637 pte = READ_ONCE(*ptep); 638 638 if (!pte) { 639 639 WARN_ON(!(data->iop.cfg.quirks & IO_PGTABLE_QUIRK_NO_WARN)); 640 - return -ENOENT; 640 + return 0; 641 641 } 642 642 643 643 /* If the size matches this level, we're in the right place */
+4 -4
drivers/irqchip/irq-gic-v3-its.c
··· 709 709 struct its_cmd_block *cmd, 710 710 struct its_cmd_desc *desc) 711 711 { 712 - unsigned long itt_addr; 712 + phys_addr_t itt_addr; 713 713 u8 size = ilog2(desc->its_mapd_cmd.dev->nr_ites); 714 714 715 715 itt_addr = virt_to_phys(desc->its_mapd_cmd.dev->itt); ··· 879 879 struct its_cmd_desc *desc) 880 880 { 881 881 struct its_vpe *vpe = valid_vpe(its, desc->its_vmapp_cmd.vpe); 882 - unsigned long vpt_addr, vconf_addr; 882 + phys_addr_t vpt_addr, vconf_addr; 883 883 u64 target; 884 884 bool alloc; 885 885 ··· 2477 2477 baser->psz = psz; 2478 2478 tmp = indirect ? GITS_LVL1_ENTRY_SIZE : esz; 2479 2479 2480 - pr_info("ITS@%pa: allocated %d %s @%lx (%s, esz %d, psz %dK, shr %d)\n", 2480 + pr_info("ITS@%pa: allocated %d %s @%llx (%s, esz %d, psz %dK, shr %d)\n", 2481 2481 &its->phys_base, (int)(PAGE_ORDER_TO_SIZE(order) / (int)tmp), 2482 2482 its_base_type_string[type], 2483 - (unsigned long)virt_to_phys(base), 2483 + (u64)virt_to_phys(base), 2484 2484 indirect ? "indirect" : "flat", (int)esz, 2485 2485 psz / SZ_1K, (int)shr >> GITS_BASER_SHAREABILITY_SHIFT); 2486 2486
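phys_addr_t matters here because on 32-bit kernels with extended physical addressing (for example ARM LPAE) physical addresses are 64 bits wide while unsigned long is only 32. A userspace illustration of the truncation the old type risked:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t phys = 0x100040000ULL;       /* just above 4 GiB */
		uint32_t as_long32 = (uint32_t)phys;  /* what a 32-bit long keeps */

		printf("full address: 0x%llx\n", (unsigned long long)phys);
		printf("truncated:    0x%lx\n", (unsigned long)as_long32);
		return 0;
	}

The truncated value, 0x40000, would be silently handed to the ITS as the table or VPT address.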
+8 -1
drivers/irqchip/irq-renesas-rzv2h.c
··· 328 328 u32 titsr, titsr_k, titsel_n, tien; 329 329 struct rzv2h_icu_priv *priv; 330 330 u32 tssr, tssr_k, tssel_n; 331 + u32 titsr_cur, tssr_cur; 331 332 unsigned int hwirq; 332 333 u32 tint, sense; 333 334 int tint_nr; ··· 377 376 guard(raw_spinlock)(&priv->lock); 378 377 379 378 tssr = readl_relaxed(priv->base + priv->info->t_offs + ICU_TSSR(tssr_k)); 379 + titsr = readl_relaxed(priv->base + priv->info->t_offs + ICU_TITSR(titsr_k)); 380 + 381 + tssr_cur = field_get(ICU_TSSR_TSSEL_MASK(tssel_n, priv->info->field_width), tssr); 382 + titsr_cur = field_get(ICU_TITSR_TITSEL_MASK(titsel_n), titsr); 383 + if (tssr_cur == tint && titsr_cur == sense) 384 + return 0; 385 + 380 386 tssr &= ~(ICU_TSSR_TSSEL_MASK(tssel_n, priv->info->field_width) | tien); 381 387 tssr |= ICU_TSSR_TSSEL_PREP(tint, tssel_n, priv->info->field_width); 382 388 383 389 writel_relaxed(tssr, priv->base + priv->info->t_offs + ICU_TSSR(tssr_k)); 384 390 385 - titsr = readl_relaxed(priv->base + priv->info->t_offs + ICU_TITSR(titsr_k)); 386 391 titsr &= ~ICU_TITSR_TITSEL_MASK(titsel_n); 387 392 titsr |= ICU_TITSR_TITSEL_PREP(sense, titsel_n); 388 393
+8 -5
drivers/isdn/mISDN/timerdev.c
··· 109 109 spin_unlock_irq(&dev->lock); 110 110 if (filep->f_flags & O_NONBLOCK) 111 111 return -EAGAIN; 112 - wait_event_interruptible(dev->wait, (dev->work || 112 + wait_event_interruptible(dev->wait, (READ_ONCE(dev->work) || 113 113 !list_empty(list))); 114 114 if (signal_pending(current)) 115 115 return -ERESTARTSYS; 116 116 spin_lock_irq(&dev->lock); 117 117 } 118 118 if (dev->work) 119 - dev->work = 0; 119 + WRITE_ONCE(dev->work, 0); 120 120 if (!list_empty(list)) { 121 121 timer = list_first_entry(list, struct mISDNtimer, list); 122 122 list_del(&timer->list); ··· 141 141 if (*debug & DEBUG_TIMER) 142 142 printk(KERN_DEBUG "%s(%p, %p)\n", __func__, filep, wait); 143 143 if (dev) { 144 + u32 work; 145 + 144 146 poll_wait(filep, &dev->wait, wait); 145 147 mask = 0; 146 - if (dev->work || !list_empty(&dev->expired)) 148 + work = READ_ONCE(dev->work); 149 + if (work || !list_empty(&dev->expired)) 147 150 mask |= (EPOLLIN | EPOLLRDNORM); 148 151 if (*debug & DEBUG_TIMER) 149 152 printk(KERN_DEBUG "%s work(%d) empty(%d)\n", __func__, 150 - dev->work, list_empty(&dev->expired)); 153 + work, list_empty(&dev->expired)); 151 154 } 152 155 return mask; 153 156 } ··· 175 172 struct mISDNtimer *timer; 176 173 177 174 if (!timeout) { 178 - dev->work = 1; 175 + WRITE_ONCE(dev->work, 1); 179 176 wake_up_interruptible(&dev->wait); 180 177 id = 0; 181 178 } else {
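READ_ONCE()/WRITE_ONCE() mark dev->work as concurrently accessed, forcing the compiler to emit exactly one untorn load or store per annotation. A simplified userspace model of the two macros (the kernel versions additionally cope with types the compiler could tear):

	#include <stdio.h>

	/* Simplified models: a volatile access cannot be elided,
	 * duplicated, or split by the compiler. */
	#define READ_ONCE(x)     (*(const volatile __typeof__(x) *)&(x))
	#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

	static unsigned int work;

	int main(void)
	{
		WRITE_ONCE(work, 1);	/* timer-expiry side */
		if (READ_ONCE(work))	/* poll/read side */
			printf("pending work: %u\n", READ_ONCE(work));
		return 0;
	}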
+5 -5
drivers/leds/led-class.c
··· 560 560 #ifdef CONFIG_LEDS_BRIGHTNESS_HW_CHANGED 561 561 led_cdev->brightness_hw_changed = -1; 562 562 #endif 563 - /* add to the list of leds */ 564 - down_write(&leds_list_lock); 565 - list_add_tail(&led_cdev->node, &leds_list); 566 - up_write(&leds_list_lock); 567 - 568 563 if (!led_cdev->max_brightness) 569 564 led_cdev->max_brightness = LED_FULL; 570 565 ··· 568 573 led_cdev->wq = leds_wq; 569 574 570 575 led_init_core(led_cdev); 576 + 577 + /* add to the list of leds */ 578 + down_write(&leds_list_lock); 579 + list_add_tail(&led_cdev->node, &leds_list); 580 + up_write(&leds_list_lock); 571 581 572 582 #ifdef CONFIG_LEDS_TRIGGERS 573 583 led_trigger_set_default(led_cdev);
+9
drivers/md/bcache/bcache.h
··· 273 273 274 274 struct bio_set bio_split; 275 275 276 + struct bio_set bio_detached; 277 + 276 278 unsigned int data_csum:1; 277 279 278 280 int (*cache_miss)(struct btree *b, struct search *s, ··· 753 751 */ 754 752 }; 755 753 struct bio bio; 754 + }; 755 + 756 + struct detached_dev_io_private { 757 + struct bcache_device *d; 758 + unsigned long start_time; 759 + struct bio *orig_bio; 760 + struct bio bio; 756 761 }; 757 762 758 763 #define BTREE_PRIO USHRT_MAX
+36 -45
drivers/md/bcache/request.c
··· 1077 1077 continue_at(cl, cached_dev_bio_complete, NULL); 1078 1078 } 1079 1079 1080 - struct detached_dev_io_private { 1081 - struct bcache_device *d; 1082 - unsigned long start_time; 1083 - bio_end_io_t *bi_end_io; 1084 - void *bi_private; 1085 - struct block_device *orig_bdev; 1086 - }; 1087 - 1088 1080 static void detached_dev_end_io(struct bio *bio) 1089 1081 { 1090 - struct detached_dev_io_private *ddip; 1091 - 1092 - ddip = bio->bi_private; 1093 - bio->bi_end_io = ddip->bi_end_io; 1094 - bio->bi_private = ddip->bi_private; 1082 + struct detached_dev_io_private *ddip = 1083 + container_of(bio, struct detached_dev_io_private, bio); 1084 + struct bio *orig_bio = ddip->orig_bio; 1095 1085 1096 1086 /* Count on the bcache device */ 1097 - bio_end_io_acct_remapped(bio, ddip->start_time, ddip->orig_bdev); 1087 + bio_end_io_acct(orig_bio, ddip->start_time); 1098 1088 1099 1089 if (bio->bi_status) { 1100 - struct cached_dev *dc = container_of(ddip->d, 1101 - struct cached_dev, disk); 1090 + struct cached_dev *dc = bio->bi_private; 1091 + 1102 1092 /* should count I/O error for backing device here */ 1103 1093 bch_count_backing_io_errors(dc, bio); 1094 + orig_bio->bi_status = bio->bi_status; 1104 1095 } 1105 1096 1106 - kfree(ddip); 1107 - bio_endio(bio); 1097 + bio_put(bio); 1098 + bio_endio(orig_bio); 1108 1099 } 1109 1100 1110 - static void detached_dev_do_request(struct bcache_device *d, struct bio *bio, 1111 - struct block_device *orig_bdev, unsigned long start_time) 1101 + static void detached_dev_do_request(struct bcache_device *d, 1102 + struct bio *orig_bio, unsigned long start_time) 1112 1103 { 1113 1104 struct detached_dev_io_private *ddip; 1114 1105 struct cached_dev *dc = container_of(d, struct cached_dev, disk); 1106 + struct bio *clone_bio; 1115 1107 1116 - /* 1117 - * no need to call closure_get(&dc->disk.cl), 1118 - * because upper layer had already opened bcache device, 1119 - * which would call closure_get(&dc->disk.cl) 1120 - */ 1121 - ddip = kzalloc(sizeof(struct detached_dev_io_private), GFP_NOIO); 1122 - if (!ddip) { 1123 - bio->bi_status = BLK_STS_RESOURCE; 1124 - bio_endio(bio); 1108 + if (bio_op(orig_bio) == REQ_OP_DISCARD && 1109 + !bdev_max_discard_sectors(dc->bdev)) { 1110 + bio_endio(orig_bio); 1125 1111 return; 1126 1112 } 1127 1113 1128 - ddip->d = d; 1129 - /* Count on the bcache device */ 1130 - ddip->orig_bdev = orig_bdev; 1131 - ddip->start_time = start_time; 1132 - ddip->bi_end_io = bio->bi_end_io; 1133 - ddip->bi_private = bio->bi_private; 1134 - bio->bi_end_io = detached_dev_end_io; 1135 - bio->bi_private = ddip; 1114 + clone_bio = bio_alloc_clone(dc->bdev, orig_bio, GFP_NOIO, 1115 + &d->bio_detached); 1116 + if (!clone_bio) { 1117 + orig_bio->bi_status = BLK_STS_RESOURCE; 1118 + bio_endio(orig_bio); 1119 + return; 1120 + } 1136 1121 1137 - if ((bio_op(bio) == REQ_OP_DISCARD) && 1138 - !bdev_max_discard_sectors(dc->bdev)) 1139 - detached_dev_end_io(bio); 1140 - else 1141 - submit_bio_noacct(bio); 1122 + ddip = container_of(clone_bio, struct detached_dev_io_private, bio); 1123 + /* Count on the bcache device */ 1124 + ddip->d = d; 1125 + ddip->start_time = start_time; 1126 + ddip->orig_bio = orig_bio; 1127 + 1128 + clone_bio->bi_end_io = detached_dev_end_io; 1129 + clone_bio->bi_private = dc; 1130 + 1131 + submit_bio_noacct(clone_bio); 1142 1132 } 1143 1133 1144 1134 static void quit_max_writeback_rate(struct cache_set *c, ··· 1204 1214 1205 1215 start_time = bio_start_io_acct(bio); 1206 1216 1207 - bio_set_dev(bio, dc->bdev); 1208 1217 bio->bi_iter.bi_sector += dc->sb.data_offset; 1209 1218 1210 1219 if (cached_dev_get(dc)) { 1220 + bio_set_dev(bio, dc->bdev); 1211 1221 s = search_alloc(bio, d, orig_bdev, start_time); 1212 1222 trace_bcache_request_start(s->d, bio); ··· 1227 1237 else 1228 1238 cached_dev_read(dc, s); 1229 1239 } 1230 - } else 1240 + } else { 1231 1241 /* I/O request sent to backing device */ 1232 - detached_dev_do_request(d, bio, orig_bdev, start_time); 1242 + detached_dev_do_request(d, bio, start_time); 1243 + } 1233 1244 } 1234 1245 1235 1246 static int cached_dev_ioctl(struct bcache_device *d, blk_mode_t mode,
+10 -2
drivers/md/bcache/super.c
··· 887 887 } 888 888 889 889 bioset_exit(&d->bio_split); 890 + bioset_exit(&d->bio_detached); 890 891 kvfree(d->full_dirty_stripes); 891 892 kvfree(d->stripe_sectors_dirty); 892 893 ··· 950 949 BIOSET_NEED_BVECS|BIOSET_NEED_RESCUER)) 951 950 goto out_ida_remove; 952 951 952 + if (bioset_init(&d->bio_detached, 4, 953 + offsetof(struct detached_dev_io_private, bio), 954 + BIOSET_NEED_BVECS|BIOSET_NEED_RESCUER)) 955 + goto out_bioset_split_exit; 956 + 953 957 if (lim.logical_block_size > PAGE_SIZE && cached_bdev) { 954 958 /* 955 959 * This should only happen with BCACHE_SB_VERSION_BDEV. ··· 970 964 971 965 d->disk = blk_alloc_disk(&lim, NUMA_NO_NODE); 972 966 if (IS_ERR(d->disk)) 973 - goto out_bioset_exit; 967 + goto out_bioset_detach_exit; 974 968 975 969 set_capacity(d->disk, sectors); 976 970 snprintf(d->disk->disk_name, DISK_NAME_LEN, "bcache%i", idx); ··· 982 976 d->disk->private_data = d; 983 977 return 0; 984 978 985 - out_bioset_exit: 979 + out_bioset_detach_exit: 980 + bioset_exit(&d->bio_detached); 981 + out_bioset_split_exit: 986 982 bioset_exit(&d->bio_split); 987 983 out_ida_remove: 988 984 ida_free(&bcache_device_idx, idx);
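The new bio_detached pool relies on the embed-and-recover idiom: bioset_init() is given offsetof(struct detached_dev_io_private, bio) as the front pad, so every bio allocated from it sits at the tail of a private wrapper, and the completion handler gets back to the wrapper with container_of(). A self-contained userspace model of that layout:

	#include <stdio.h>
	#include <stddef.h>
	#include <stdlib.h>

	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	struct bio { int status; };	/* stand-in for the kernel struct */

	/* Private fields in front, the bio embedded last, exactly how the
	 * front-padded bioset lays out its allocations. */
	struct wrapper {
		unsigned long start_time;
		struct bio bio;
	};

	static void end_io(struct bio *bio)
	{
		struct wrapper *w = container_of(bio, struct wrapper, bio);

		printf("start_time=%lu status=%d\n", w->start_time, bio->status);
	}

	int main(void)
	{
		struct wrapper *w = malloc(sizeof(*w));

		w->start_time = 42;
		w->bio.status = 0;
		end_io(&w->bio);	/* completion only ever sees the bio */
		free(w);
		return 0;
	}

This is why the per-request kzalloc() and the bi_private/bi_end_io save-restore dance could be dropped: the private data travels with the bio itself.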
+9 -9
drivers/misc/mei/mei-trace.h
··· 21 21 TP_ARGS(dev, reg, offs, val), 22 22 TP_STRUCT__entry( 23 23 __string(dev, dev_name(dev)) 24 - __field(const char *, reg) 24 + __string(reg, reg) 25 25 __field(u32, offs) 26 26 __field(u32, val) 27 27 ), 28 28 TP_fast_assign( 29 29 __assign_str(dev); 30 - __entry->reg = reg; 30 + __assign_str(reg); 31 31 __entry->offs = offs; 32 32 __entry->val = val; 33 33 ), 34 34 TP_printk("[%s] read %s:[%#x] = %#x", 35 - __get_str(dev), __entry->reg, __entry->offs, __entry->val) 35 + __get_str(dev), __get_str(reg), __entry->offs, __entry->val) 36 36 ); 37 37 38 38 TRACE_EVENT(mei_reg_write, ··· 40 40 TP_ARGS(dev, reg, offs, val), 41 41 TP_STRUCT__entry( 42 42 __string(dev, dev_name(dev)) 43 - __field(const char *, reg) 43 + __string(reg, reg) 44 44 __field(u32, offs) 45 45 __field(u32, val) 46 46 ), 47 47 TP_fast_assign( 48 48 __assign_str(dev); 49 - __entry->reg = reg; 49 + __assign_str(reg); 50 50 __entry->offs = offs; 51 51 __entry->val = val; 52 52 ), 53 53 TP_printk("[%s] write %s[%#x] = %#x", 54 - __get_str(dev), __entry->reg, __entry->offs, __entry->val) 54 + __get_str(dev), __get_str(reg), __entry->offs, __entry->val) 55 55 ); 56 56 57 57 TRACE_EVENT(mei_pci_cfg_read, ··· 59 59 TP_ARGS(dev, reg, offs, val), 60 60 TP_STRUCT__entry( 61 61 __string(dev, dev_name(dev)) 62 - __field(const char *, reg) 62 + __string(reg, reg) 63 63 __field(u32, offs) 64 64 __field(u32, val) 65 65 ), 66 66 TP_fast_assign( 67 67 __assign_str(dev); 68 - __entry->reg = reg; 68 + __assign_str(reg); 69 69 __entry->offs = offs; 70 70 __entry->val = val; 71 71 ), 72 72 TP_printk("[%s] pci cfg read %s:[%#x] = %#x", 73 - __get_str(dev), __entry->reg, __entry->offs, __entry->val) 73 + __get_str(dev), __get_str(reg), __entry->offs, __entry->val) 74 74 ); 75 75 76 76 #endif /* _MEI_TRACE_H_ */
+40 -8
drivers/misc/uacce/uacce.c
··· 40 40 return 0; 41 41 } 42 42 43 - static int uacce_put_queue(struct uacce_queue *q) 43 + static int uacce_stop_queue(struct uacce_queue *q) 44 44 { 45 45 struct uacce_device *uacce = q->uacce; 46 46 47 - if ((q->state == UACCE_Q_STARTED) && uacce->ops->stop_queue) 47 + if (q->state != UACCE_Q_STARTED) 48 + return 0; 49 + 50 + if (uacce->ops->stop_queue) 48 51 uacce->ops->stop_queue(q); 49 52 50 - if ((q->state == UACCE_Q_INIT || q->state == UACCE_Q_STARTED) && 51 - uacce->ops->put_queue) 53 + q->state = UACCE_Q_INIT; 54 + 55 + return 0; 56 + } 57 + 58 + static void uacce_put_queue(struct uacce_queue *q) 59 + { 60 + struct uacce_device *uacce = q->uacce; 61 + 62 + uacce_stop_queue(q); 63 + 64 + if (q->state != UACCE_Q_INIT) 65 + return; 66 + 67 + if (uacce->ops->put_queue) 52 68 uacce->ops->put_queue(q); 53 69 54 70 q->state = UACCE_Q_ZOMBIE; 55 - 56 - return 0; 57 71 } 58 72 59 73 static long uacce_fops_unl_ioctl(struct file *filep, ··· 94 80 ret = uacce_start_queue(q); 95 81 break; 96 82 case UACCE_CMD_PUT_Q: 97 - ret = uacce_put_queue(q); 83 + ret = uacce_stop_queue(q); 98 84 break; 99 85 default: 100 86 if (uacce->ops->ioctl) ··· 228 214 } 229 215 } 230 216 217 + static int uacce_vma_mremap(struct vm_area_struct *area) 218 + { 219 + return -EPERM; 220 + } 221 + 231 222 static const struct vm_operations_struct uacce_vm_ops = { 232 223 .close = uacce_vma_close, 224 + .mremap = uacce_vma_mremap, 233 225 }; 234 226 235 227 static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma) ··· 402 382 struct uacce_device *uacce = to_uacce_device(dev); 403 383 u32 val; 404 384 385 + if (!uacce->ops->isolate_err_threshold_read) 386 + return -ENOENT; 387 + 405 388 val = uacce->ops->isolate_err_threshold_read(uacce); 406 389 407 390 return sysfs_emit(buf, "%u\n", val); ··· 416 393 struct uacce_device *uacce = to_uacce_device(dev); 417 394 unsigned long val; 418 395 int ret; 396 + 397 + if (!uacce->ops->isolate_err_threshold_write) 398 + return -ENOENT; 419 399 420 400 if (kstrtoul(buf, 0, &val) < 0) 421 401 return -EINVAL; ··· 545 519 */ 546 520 int uacce_register(struct uacce_device *uacce) 547 521 { 522 + int ret; 523 + 548 524 if (!uacce) 549 525 return -ENODEV; 550 526 ··· 557 529 uacce->cdev->ops = &uacce_fops; 558 530 uacce->cdev->owner = THIS_MODULE; 559 531 560 - return cdev_device_add(uacce->cdev, &uacce->dev); 532 + ret = cdev_device_add(uacce->cdev, &uacce->dev); 533 + if (ret) 534 + uacce->cdev = NULL; 535 + 536 + return ret; 561 537 } 562 538 EXPORT_SYMBOL_GPL(uacce_register); 563 539
+41
drivers/mmc/host/rtsx_pci_sdmmc.c
··· 1306 1306 return err; 1307 1307 } 1308 1308 1309 + static int sdmmc_card_busy(struct mmc_host *mmc) 1310 + { 1311 + struct realtek_pci_sdmmc *host = mmc_priv(mmc); 1312 + struct rtsx_pcr *pcr = host->pcr; 1313 + int err; 1314 + u8 stat; 1315 + u8 mask = SD_DAT3_STATUS | SD_DAT2_STATUS | SD_DAT1_STATUS 1316 + | SD_DAT0_STATUS; 1317 + 1318 + mutex_lock(&pcr->pcr_mutex); 1319 + 1320 + rtsx_pci_start_run(pcr); 1321 + 1322 + err = rtsx_pci_write_register(pcr, SD_BUS_STAT, 1323 + SD_CLK_TOGGLE_EN | SD_CLK_FORCE_STOP, 1324 + SD_CLK_TOGGLE_EN); 1325 + if (err) 1326 + goto out; 1327 + 1328 + mdelay(1); 1329 + 1330 + err = rtsx_pci_read_register(pcr, SD_BUS_STAT, &stat); 1331 + if (err) 1332 + goto out; 1333 + 1334 + err = rtsx_pci_write_register(pcr, SD_BUS_STAT, 1335 + SD_CLK_TOGGLE_EN | SD_CLK_FORCE_STOP, 0); 1336 + out: 1337 + mutex_unlock(&pcr->pcr_mutex); 1338 + 1339 + if (err) 1340 + return err; 1341 + 1342 + /* check if any pin between dat[0:3] is low */ 1343 + if ((stat & mask) != mask) 1344 + return 1; 1345 + else 1346 + return 0; 1347 + } 1348 + 1309 1349 static int sdmmc_execute_tuning(struct mmc_host *mmc, u32 opcode) 1310 1350 { 1311 1351 struct realtek_pci_sdmmc *host = mmc_priv(mmc); ··· 1458 1418 .get_ro = sdmmc_get_ro, 1459 1419 .get_cd = sdmmc_get_cd, 1460 1420 .start_signal_voltage_switch = sdmmc_switch_voltage, 1421 + .card_busy = sdmmc_card_busy, 1461 1422 .execute_tuning = sdmmc_execute_tuning, 1462 1423 .init_sd_express = sdmmc_init_sd_express, 1463 1424 };
+14
drivers/mmc/host/sdhci-of-dwcmshc.c
··· 739 739 sdhci_writel(host, extra, reg); 740 740 741 741 if (clock <= 52000000) { 742 + if (host->mmc->ios.timing == MMC_TIMING_MMC_HS200 || 743 + host->mmc->ios.timing == MMC_TIMING_MMC_HS400) { 744 + dev_err(mmc_dev(host->mmc), 745 + "Can't reduce the clock below 52MHz in HS200/HS400 mode"); 746 + return; 747 + } 748 + 742 749 /* 743 750 * Disable DLL and reset both of sample and drive clock. 744 751 * The bypass bit and start bit need to be set if DLL is not locked. ··· 1595 1588 { 1596 1589 u32 emmc_caps = MMC_CAP2_NO_SD | MMC_CAP2_NO_SDIO; 1597 1590 unsigned int val, hsp_int_status, hsp_pwr_ctrl; 1591 + static const char * const clk_ids[] = {"axi"}; 1598 1592 struct of_phandle_args args; 1599 1593 struct eic7700_priv *priv; 1600 1594 struct regmap *hsp_regmap; ··· 1612 1604 dev_err(dev, "failed to reset\n"); 1613 1605 return ret; 1614 1606 } 1607 + 1608 + ret = dwcmshc_get_enable_other_clks(mmc_dev(host->mmc), dwc_priv, 1609 + ARRAY_SIZE(clk_ids), clk_ids); 1610 + if (ret) 1611 + return ret; 1615 1612 1616 1613 ret = of_parse_phandle_with_fixed_args(dev->of_node, "eswin,hsp-sp-csr", 2, 0, &args); 1617 1614 if (ret) { ··· 1739 1726 .set_uhs_signaling = sdhci_eic7700_set_uhs_wrapper, 1740 1727 .set_power = sdhci_set_power_and_bus_voltage, 1741 1728 .irq = dwcmshc_cqe_irq_handler, 1729 + .adma_write_desc = dwcmshc_adma_write_desc, 1742 1730 .platform_execute_tuning = sdhci_eic7700_executing_tuning, 1743 1731 }; 1744 1732
+4 -4
drivers/mux/mmio.c
··· 101 101 mux_mmio = mux_chip_priv(mux_chip); 102 102 103 103 mux_mmio->fields = devm_kmalloc(dev, num_fields * sizeof(*mux_mmio->fields), GFP_KERNEL); 104 - if (IS_ERR(mux_mmio->fields)) 105 - return PTR_ERR(mux_mmio->fields); 104 + if (!mux_mmio->fields) 105 + return -ENOMEM; 106 106 107 107 mux_mmio->hardware_states = devm_kmalloc(dev, num_fields * 108 108 sizeof(*mux_mmio->hardware_states), GFP_KERNEL); 109 - if (IS_ERR(mux_mmio->hardware_states)) 110 - return PTR_ERR(mux_mmio->hardware_states); 109 + if (!mux_mmio->hardware_states) 110 + return -ENOMEM; 111 111 112 112 for (i = 0; i < num_fields; i++) { 113 113 struct mux_control *mux = &mux_chip->mux[i];
+9 -2
drivers/net/bonding/bond_main.c
··· 1862 1862 */ 1863 1863 if (!bond_has_slaves(bond)) { 1864 1864 if (bond_dev->type != slave_dev->type) { 1865 + if (slave_dev->type != ARPHRD_ETHER && 1866 + BOND_MODE(bond) == BOND_MODE_8023AD) { 1867 + SLAVE_NL_ERR(bond_dev, slave_dev, extack, 1868 + "8023AD mode requires Ethernet devices"); 1869 + return -EINVAL; 1870 + } 1865 1871 slave_dbg(bond_dev, slave_dev, "change device type from %d to %d\n", 1866 1872 bond_dev->type, slave_dev->type); 1867 1873 ··· 4096 4090 case BOND_XMIT_POLICY_ENCAP23: 4097 4091 case BOND_XMIT_POLICY_ENCAP34: 4098 4092 memset(fk, 0, sizeof(*fk)); 4099 - return __skb_flow_dissect(NULL, skb, &flow_keys_bonding, 4100 - fk, data, l2_proto, nhoff, hlen, 0); 4093 + return __skb_flow_dissect(dev_net(bond->dev), skb, 4094 + &flow_keys_bonding, fk, data, 4095 + l2_proto, nhoff, hlen, 0); 4101 4096 default: 4102 4097 break; 4103 4098 }
+1
drivers/net/can/dev/dev.c
··· 332 332 333 333 can_ml = (void *)priv + ALIGN(sizeof_priv, NETDEV_ALIGN); 334 334 can_set_ml_priv(dev, can_ml); 335 + can_set_cap(dev, CAN_CAP_CC); 335 336 336 337 if (echo_skb_max) { 337 338 priv->echo_skb_max = echo_skb_max;
+7 -1
drivers/net/can/usb/ems_usb.c
··· 486 486 urb->transfer_buffer, RX_BUFFER_SIZE, 487 487 ems_usb_read_bulk_callback, dev); 488 488 489 + usb_anchor_urb(urb, &dev->rx_submitted); 490 + 489 491 retval = usb_submit_urb(urb, GFP_ATOMIC); 492 + if (!retval) 493 + return; 494 + 495 + usb_unanchor_urb(urb); 490 496 491 497 if (retval == -ENODEV) 492 498 netif_device_detach(netdev); 493 - else if (retval) 499 + else 494 500 netdev_err(netdev, 495 501 "failed resubmitting read bulk urb: %d\n", retval); 496 502 }
+8 -1
drivers/net/can/usb/esd_usb.c
··· 541 541 urb->transfer_buffer, ESD_USB_RX_BUFFER_SIZE, 542 542 esd_usb_read_bulk_callback, dev); 543 543 544 + usb_anchor_urb(urb, &dev->rx_submitted); 545 + 544 546 err = usb_submit_urb(urb, GFP_ATOMIC); 547 + if (!err) 548 + return; 549 + 550 + usb_unanchor_urb(urb); 551 + 545 552 if (err == -ENODEV) { 546 553 for (i = 0; i < dev->net_count; i++) { 547 554 if (dev->nets[i]) 548 555 netif_device_detach(dev->nets[i]->netdev); 549 556 } 550 - } else if (err) { 557 + } else { 551 558 dev_err(dev->udev->dev.parent, 552 559 "failed resubmitting read bulk urb: %pe\n", ERR_PTR(err)); 553 560 }
+7
drivers/net/can/usb/gs_usb.c
··· 754 754 usb_anchor_urb(urb, &parent->rx_submitted); 755 755 756 756 rc = usb_submit_urb(urb, GFP_ATOMIC); 757 + if (!rc) 758 + return; 759 + 760 + usb_unanchor_urb(urb); 757 761 758 762 /* USB failure take down all interfaces */ 759 763 if (rc == -ENODEV) { ··· 766 762 if (parent->canch[rc]) 767 763 netif_device_detach(parent->canch[rc]->netdev); 768 764 } 765 + } else if (rc != -ESHUTDOWN && net_ratelimit()) { 766 + netdev_info(netdev, "failed to re-submit IN URB: %pe\n", 767 + ERR_PTR(urb->status)); 769 768 } 770 769 } 771 770
+8 -1
drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
··· 361 361 urb->transfer_buffer, KVASER_USB_RX_BUFFER_SIZE, 362 362 kvaser_usb_read_bulk_callback, dev); 363 363 364 + usb_anchor_urb(urb, &dev->rx_submitted); 365 + 364 366 err = usb_submit_urb(urb, GFP_ATOMIC); 367 + if (!err) 368 + return; 369 + 370 + usb_unanchor_urb(urb); 371 + 365 372 if (err == -ENODEV) { 366 373 for (i = 0; i < dev->nchannels; i++) { 367 374 struct kvaser_usb_net_priv *priv; ··· 379 372 380 373 netif_device_detach(priv->netdev); 381 374 } 382 - } else if (err) { 375 + } else { 383 376 dev_err(&dev->intf->dev, 384 377 "Failed resubmitting read bulk urb: %d\n", err); 385 378 }
+7 -1
drivers/net/can/usb/mcba_usb.c
··· 608 608 urb->transfer_buffer, MCBA_USB_RX_BUFF_SIZE, 609 609 mcba_usb_read_bulk_callback, priv); 610 610 611 + usb_anchor_urb(urb, &priv->rx_submitted); 612 + 611 613 retval = usb_submit_urb(urb, GFP_ATOMIC); 614 + if (!retval) 615 + return; 616 + 617 + usb_unanchor_urb(urb); 612 618 613 619 if (retval == -ENODEV) 614 620 netif_device_detach(netdev); 615 - else if (retval) 621 + else 616 622 netdev_err(netdev, "failed resubmitting read bulk urb: %d\n", 617 623 retval); 618 624 }
+7 -1
drivers/net/can/usb/usb_8dev.c
··· 541 541 urb->transfer_buffer, RX_BUFFER_SIZE, 542 542 usb_8dev_read_bulk_callback, priv); 543 543 544 + usb_anchor_urb(urb, &priv->rx_submitted); 545 + 544 546 retval = usb_submit_urb(urb, GFP_ATOMIC); 547 + if (!retval) 548 + return; 549 + 550 + usb_unanchor_urb(urb); 545 551 546 552 if (retval == -ENODEV) 547 553 netif_device_detach(netdev); 548 - else if (retval) 554 + else 549 555 netdev_err(netdev, 550 556 "failed resubmitting read bulk urb: %d\n", retval); 551 557 }
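All six CAN USB drivers above receive the same ordering fix, so the shared shape is worth stating once. A kernel-style sketch of the pattern (an outline, not a complete callback; names like dev->rx_submitted follow the drivers above): anchor before submitting so teardown's usb_kill_anchored_urbs() can always find the URB, and unanchor on any submission failure so a dead URB is never left on the anchor:

	usb_anchor_urb(urb, &dev->rx_submitted);

	err = usb_submit_urb(urb, GFP_ATOMIC);
	if (!err)
		return;		/* success: stays anchored until completion */

	usb_unanchor_urb(urb);	/* failure: take it back off the anchor */

	if (err == -ENODEV)
		netif_device_detach(netdev);
	else
		netdev_err(netdev, "failed resubmitting read bulk urb: %d\n", err);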
+1 -4
drivers/net/ethernet/amd/xgbe/xgbe-drv.c
··· 1837 1837 s->multicast = pstats->rxmulticastframes_g; 1838 1838 s->rx_length_errors = pstats->rxlengtherror; 1839 1839 s->rx_crc_errors = pstats->rxcrcerror; 1840 - s->rx_fifo_errors = pstats->rxfifooverflow; 1840 + s->rx_over_errors = pstats->rxfifooverflow; 1841 1841 1842 1842 s->tx_packets = pstats->txframecount_gb; 1843 1843 s->tx_bytes = pstats->txoctetcount_gb; ··· 2292 2292 goto read_again; 2293 2293 2294 2294 if (error || packet->errors) { 2295 - if (packet->errors) 2296 - netif_err(pdata, rx_err, netdev, 2297 - "error in received packet\n"); 2298 2295 dev_kfree_skb(skb); 2299 2296 goto next_packet; 2300 2297 }
+3 -2
drivers/net/ethernet/broadcom/asp2/bcmasp.c
··· 156 156 ASP_RX_FILTER_NET_OFFSET_L4(32), 157 157 ASP_RX_FILTER_NET_OFFSET(nfilt->hw_index + 1)); 158 158 159 - rx_filter_core_wl(priv, ASP_RX_FILTER_NET_CFG_CH(nfilt->port + 8) | 159 + rx_filter_core_wl(priv, ASP_RX_FILTER_NET_CFG_CH(nfilt->ch) | 160 160 ASP_RX_FILTER_NET_CFG_EN | 161 161 ASP_RX_FILTER_NET_CFG_L2_EN | 162 162 ASP_RX_FILTER_NET_CFG_L3_EN | ··· 166 166 ASP_RX_FILTER_NET_CFG_UMC(nfilt->port), 167 167 ASP_RX_FILTER_NET_CFG(nfilt->hw_index)); 168 168 169 - rx_filter_core_wl(priv, ASP_RX_FILTER_NET_CFG_CH(nfilt->port + 8) | 169 + rx_filter_core_wl(priv, ASP_RX_FILTER_NET_CFG_CH(nfilt->ch) | 170 170 ASP_RX_FILTER_NET_CFG_EN | 171 171 ASP_RX_FILTER_NET_CFG_L2_EN | 172 172 ASP_RX_FILTER_NET_CFG_L3_EN | ··· 714 714 nfilter = &priv->net_filters[open_index]; 715 715 nfilter->claimed = true; 716 716 nfilter->port = intf->port; 717 + nfilter->ch = intf->channel + priv->tx_chan_offset; 717 718 nfilter->hw_index = open_index; 718 719 } 719 720
+1
drivers/net/ethernet/broadcom/asp2/bcmasp.h
··· 348 348 bool wake_filter; 349 349 350 350 int port; 351 + int ch; 351 352 unsigned int hw_index; 352 353 }; 353 354
+2 -1
drivers/net/ethernet/emulex/benet/be_cmds.c
··· 3801 3801 { 3802 3802 int status; 3803 3803 bool pmac_valid = false; 3804 + u32 pmac_id; 3804 3805 3805 3806 eth_zero_addr(mac); 3806 3807 ··· 3814 3813 adapter->if_handle, 0); 3815 3814 } else { 3816 3815 status = be_cmd_get_mac_from_list(adapter, mac, &pmac_valid, 3817 - NULL, adapter->if_handle, 0); 3816 + &pmac_id, adapter->if_handle, 0); 3818 3817 } 3819 3818 3820 3819 return status;
+5 -3
drivers/net/ethernet/emulex/benet/be_main.c
··· 2141 2141 struct be_aic_obj *aic; 2142 2142 struct be_rx_obj *rxo; 2143 2143 struct be_tx_obj *txo; 2144 - u64 rx_pkts = 0, tx_pkts = 0; 2144 + u64 rx_pkts = 0, tx_pkts = 0, pkts; 2145 2145 ulong now; 2146 2146 u32 pps, delta; 2147 2147 int i; ··· 2157 2157 for_all_rx_queues_on_eq(adapter, eqo, rxo, i) { 2158 2158 do { 2159 2159 start = u64_stats_fetch_begin(&rxo->stats.sync); 2160 - rx_pkts += rxo->stats.rx_pkts; 2160 + pkts = rxo->stats.rx_pkts; 2161 2161 } while (u64_stats_fetch_retry(&rxo->stats.sync, start)); 2162 + rx_pkts += pkts; 2162 2163 } 2163 2164 2164 2165 for_all_tx_queues_on_eq(adapter, eqo, txo, i) { 2165 2166 do { 2166 2167 start = u64_stats_fetch_begin(&txo->stats.sync); 2167 - tx_pkts += txo->stats.tx_reqs; 2168 + pkts = txo->stats.tx_reqs; 2168 2169 } while (u64_stats_fetch_retry(&txo->stats.sync, start)); 2170 + tx_pkts += pkts; 2169 2171 } 2170 2172 2171 2173 /* Skip, if wrapped around or first calculation */
+7 -6
drivers/net/ethernet/freescale/fec_main.c
··· 1150 1150 u32 rcntl = FEC_RCR_MII; 1151 1151 1152 1152 if (OPT_ARCH_HAS_MAX_FL) 1153 - rcntl |= (fep->netdev->mtu + ETH_HLEN + ETH_FCS_LEN) << 16; 1153 + rcntl |= (fep->netdev->mtu + VLAN_ETH_HLEN + ETH_FCS_LEN) << 16; 1154 1154 1155 1155 if (fep->bufdesc_ex) 1156 1156 fec_ptp_save_state(fep); ··· 1285 1285 1286 1286 /* When Jumbo Frame is enabled, the FIFO may not be large enough 1287 1287 * to hold an entire frame. In such cases, if the MTU exceeds 1288 - * (PKT_MAXBUF_SIZE - ETH_HLEN - ETH_FCS_LEN), configure the interface 1289 - * to operate in cut-through mode, triggered by the FIFO threshold. 1288 + * (PKT_MAXBUF_SIZE - VLAN_ETH_HLEN - ETH_FCS_LEN), configure 1289 + * the interface to operate in cut-through mode, triggered by 1290 + * the FIFO threshold. 1290 1291 * Otherwise, enable the ENET store-and-forward mode. 1291 1292 */ 1292 1293 if ((fep->quirks & FEC_QUIRK_JUMBO_FRAME) && 1293 - (ndev->mtu > (PKT_MAXBUF_SIZE - ETH_HLEN - ETH_FCS_LEN))) 1294 + (ndev->mtu > (PKT_MAXBUF_SIZE - VLAN_ETH_HLEN - ETH_FCS_LEN))) 1294 1295 writel(0xF, fep->hwp + FEC_X_WMRK); 1295 1296 else 1296 1297 writel(FEC_TXWMRK_STRFWD, fep->hwp + FEC_X_WMRK); ··· 4038 4037 if (netif_running(ndev)) 4039 4038 return -EBUSY; 4040 4039 4041 - order = get_order(new_mtu + ETH_HLEN + ETH_FCS_LEN 4040 + order = get_order(new_mtu + VLAN_ETH_HLEN + ETH_FCS_LEN 4042 4041 + FEC_DRV_RESERVE_SPACE); 4043 4042 fep->rx_frame_size = (PAGE_SIZE << order) - FEC_DRV_RESERVE_SPACE; 4044 4043 fep->pagepool_order = order; ··· 4589 4588 else 4590 4589 fep->max_buf_size = PKT_MAXBUF_SIZE; 4591 4590 4592 - ndev->max_mtu = fep->max_buf_size - ETH_HLEN - ETH_FCS_LEN; 4591 + ndev->max_mtu = fep->max_buf_size - VLAN_ETH_HLEN - ETH_FCS_LEN; 4593 4592 4594 4593 ret = register_netdev(ndev); 4595 4594 if (ret)
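The arithmetic behind the change, assuming the driver's PKT_MAXBUF_SIZE of 1522 bytes: budgeting only ETH_HLEN (14) left the advertised maximum MTU four bytes too high once a VLAN tag is present, while VLAN_ETH_HLEN (18) lands exactly on the standard 1500-byte MTU. A quick userspace check:

	#include <stdio.h>

	/* Assumed constants: PKT_MAXBUF_SIZE is 1522 in this driver;
	 * the header/FCS sizes match the uapi ethernet definitions. */
	#define PKT_MAXBUF_SIZE 1522
	#define ETH_HLEN        14
	#define VLAN_ETH_HLEN   18
	#define ETH_FCS_LEN     4

	int main(void)
	{
		printf("max_mtu with ETH_HLEN:      %d\n",
		       PKT_MAXBUF_SIZE - ETH_HLEN - ETH_FCS_LEN);      /* 1504 */
		printf("max_mtu with VLAN_ETH_HLEN: %d\n",
		       PKT_MAXBUF_SIZE - VLAN_ETH_HLEN - ETH_FCS_LEN); /* 1500 */
		return 0;
	}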
+3 -1
drivers/net/ethernet/freescale/ucc_geth.c
··· 1602 1602 pr_warn("TBI mode requires that the device tree specify a tbi-handle\n"); 1603 1603 1604 1604 tbiphy = of_phy_find_device(ug_info->tbi_node); 1605 - if (!tbiphy) 1605 + if (!tbiphy) { 1606 1606 pr_warn("Could not get TBI device\n"); 1607 + return; 1608 + } 1607 1609 1608 1610 value = phy_read(tbiphy, ENET_TBI_MII_CR); 1609 1611 value &= ~0x1000; /* Turn off autonegotiation */
+36 -33
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
··· 2529 2529 static void hns3_fetch_stats(struct rtnl_link_stats64 *stats, 2530 2530 struct hns3_enet_ring *ring, bool is_tx) 2531 2531 { 2532 + struct ring_stats ring_stats; 2532 2533 unsigned int start; 2533 2534 2534 2535 do { 2535 2536 start = u64_stats_fetch_begin(&ring->syncp); 2536 - if (is_tx) { 2537 - stats->tx_bytes += ring->stats.tx_bytes; 2538 - stats->tx_packets += ring->stats.tx_pkts; 2539 - stats->tx_dropped += ring->stats.sw_err_cnt; 2540 - stats->tx_dropped += ring->stats.tx_vlan_err; 2541 - stats->tx_dropped += ring->stats.tx_l4_proto_err; 2542 - stats->tx_dropped += ring->stats.tx_l2l3l4_err; 2543 - stats->tx_dropped += ring->stats.tx_tso_err; 2544 - stats->tx_dropped += ring->stats.over_max_recursion; 2545 - stats->tx_dropped += ring->stats.hw_limitation; 2546 - stats->tx_dropped += ring->stats.copy_bits_err; 2547 - stats->tx_dropped += ring->stats.skb2sgl_err; 2548 - stats->tx_dropped += ring->stats.map_sg_err; 2549 - stats->tx_errors += ring->stats.sw_err_cnt; 2550 - stats->tx_errors += ring->stats.tx_vlan_err; 2551 - stats->tx_errors += ring->stats.tx_l4_proto_err; 2552 - stats->tx_errors += ring->stats.tx_l2l3l4_err; 2553 - stats->tx_errors += ring->stats.tx_tso_err; 2554 - stats->tx_errors += ring->stats.over_max_recursion; 2555 - stats->tx_errors += ring->stats.hw_limitation; 2556 - stats->tx_errors += ring->stats.copy_bits_err; 2557 - stats->tx_errors += ring->stats.skb2sgl_err; 2558 - stats->tx_errors += ring->stats.map_sg_err; 2559 - } else { 2560 - stats->rx_bytes += ring->stats.rx_bytes; 2561 - stats->rx_packets += ring->stats.rx_pkts; 2562 - stats->rx_dropped += ring->stats.l2_err; 2563 - stats->rx_errors += ring->stats.l2_err; 2564 - stats->rx_errors += ring->stats.l3l4_csum_err; 2565 - stats->rx_crc_errors += ring->stats.l2_err; 2566 - stats->multicast += ring->stats.rx_multicast; 2567 - stats->rx_length_errors += ring->stats.err_pkt_len; 2568 - } 2537 + ring_stats = ring->stats; 2569 2538 } while (u64_stats_fetch_retry(&ring->syncp, start)); 2539 + 2540 + if (is_tx) { 2541 + stats->tx_bytes += ring_stats.tx_bytes; 2542 + stats->tx_packets += ring_stats.tx_pkts; 2543 + stats->tx_dropped += ring_stats.sw_err_cnt; 2544 + stats->tx_dropped += ring_stats.tx_vlan_err; 2545 + stats->tx_dropped += ring_stats.tx_l4_proto_err; 2546 + stats->tx_dropped += ring_stats.tx_l2l3l4_err; 2547 + stats->tx_dropped += ring_stats.tx_tso_err; 2548 + stats->tx_dropped += ring_stats.over_max_recursion; 2549 + stats->tx_dropped += ring_stats.hw_limitation; 2550 + stats->tx_dropped += ring_stats.copy_bits_err; 2551 + stats->tx_dropped += ring_stats.skb2sgl_err; 2552 + stats->tx_dropped += ring_stats.map_sg_err; 2553 + stats->tx_errors += ring_stats.sw_err_cnt; 2554 + stats->tx_errors += ring_stats.tx_vlan_err; 2555 + stats->tx_errors += ring_stats.tx_l4_proto_err; 2556 + stats->tx_errors += ring_stats.tx_l2l3l4_err; 2557 + stats->tx_errors += ring_stats.tx_tso_err; 2558 + stats->tx_errors += ring_stats.over_max_recursion; 2559 + stats->tx_errors += ring_stats.hw_limitation; 2560 + stats->tx_errors += ring_stats.copy_bits_err; 2561 + stats->tx_errors += ring_stats.skb2sgl_err; 2562 + stats->tx_errors += ring_stats.map_sg_err; 2563 + } else { 2564 + stats->rx_bytes += ring_stats.rx_bytes; 2565 + stats->rx_packets += ring_stats.rx_pkts; 2566 + stats->rx_dropped += ring_stats.l2_err; 2567 + stats->rx_errors += ring_stats.l2_err; 2568 + stats->rx_errors += ring_stats.l3l4_csum_err; 2569 + stats->rx_crc_errors += ring_stats.l2_err; 2570 + stats->multicast += ring_stats.rx_multicast; 2571 + stats->rx_length_errors += ring_stats.err_pkt_len; 2572 + } 2570 2573 } 2571 2574 2572 2575 static void hns3_nic_get_stats64(struct net_device *netdev,
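The restructured reader narrows the u64_stats retry section to a single structure copy; every sum is then computed from a snapshot that cannot change underneath it. The pattern in isolation, as a kernel-style sketch:

	struct ring_stats snap;
	unsigned int start;

	do {
		start = u64_stats_fetch_begin(&ring->syncp);
		snap = ring->stats;	/* only the copy runs under the seqcount */
	} while (u64_stats_fetch_retry(&ring->syncp, start));

	stats->rx_bytes += snap.rx_bytes;	/* arithmetic on the stable copy */
	stats->rx_packets += snap.rx_pkts;

Previously the accumulation itself sat inside the retry loop, so a retry re-added every counter to the running totals; the same fix appears in be_main.c above, where the per-queue value is snapshotted inside the loop and added outside it.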
+1 -1
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
··· 731 731 #define HCLGE_FD_AD_QID_M GENMASK(11, 2) 732 732 #define HCLGE_FD_AD_USE_COUNTER_B 12 733 733 #define HCLGE_FD_AD_COUNTER_NUM_S 13 734 - #define HCLGE_FD_AD_COUNTER_NUM_M GENMASK(20, 13) 734 + #define HCLGE_FD_AD_COUNTER_NUM_M GENMASK(19, 13) 735 735 #define HCLGE_FD_AD_NXT_STEP_B 20 736 736 #define HCLGE_FD_AD_NXT_KEY_S 21 737 737 #define HCLGE_FD_AD_NXT_KEY_M GENMASK(25, 21)
+1 -1
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 5690 5690 HCLGE_FD_AD_COUNTER_NUM_S, action->counter_id); 5691 5691 hnae3_set_bit(ad_data, HCLGE_FD_AD_NXT_STEP_B, action->use_next_stage); 5692 5692 hnae3_set_field(ad_data, HCLGE_FD_AD_NXT_KEY_M, HCLGE_FD_AD_NXT_KEY_S, 5693 - action->counter_id); 5693 + action->next_input_key); 5694 5694 5695 5695 req->ad_data = cpu_to_le64(ad_data); 5696 5696 ret = hclge_cmd_send(&hdev->hw, &desc, 1);
+13 -9
drivers/net/ethernet/huawei/hinic3/hinic3_irq.c
··· 43 43 struct hinic3_nic_dev *nic_dev = netdev_priv(irq_cfg->netdev); 44 44 45 45 netif_napi_add(nic_dev->netdev, &irq_cfg->napi, hinic3_poll); 46 - netif_queue_set_napi(irq_cfg->netdev, irq_cfg->irq_id, 47 - NETDEV_QUEUE_TYPE_RX, &irq_cfg->napi); 48 - netif_queue_set_napi(irq_cfg->netdev, irq_cfg->irq_id, 49 - NETDEV_QUEUE_TYPE_TX, &irq_cfg->napi); 50 46 napi_enable(&irq_cfg->napi); 51 47 } 52 48 53 49 static void qp_del_napi(struct hinic3_irq_cfg *irq_cfg) 54 50 { 55 51 napi_disable(&irq_cfg->napi); 56 - netif_queue_set_napi(irq_cfg->netdev, irq_cfg->irq_id, 57 - NETDEV_QUEUE_TYPE_RX, NULL); 58 - netif_queue_set_napi(irq_cfg->netdev, irq_cfg->irq_id, 59 - NETDEV_QUEUE_TYPE_TX, NULL); 60 - netif_stop_subqueue(irq_cfg->netdev, irq_cfg->irq_id); 61 52 netif_napi_del(&irq_cfg->napi); 62 53 } 63 54 ··· 141 150 goto err_release_irqs; 142 151 } 143 152 153 + netif_queue_set_napi(irq_cfg->netdev, q_id, 154 + NETDEV_QUEUE_TYPE_RX, &irq_cfg->napi); 155 + netif_queue_set_napi(irq_cfg->netdev, q_id, 156 + NETDEV_QUEUE_TYPE_TX, &irq_cfg->napi); 157 + 144 158 hinic3_set_msix_auto_mask_state(nic_dev->hwdev, 145 159 irq_cfg->msix_entry_idx, 146 160 HINIC3_SET_MSIX_AUTO_MASK); ··· 160 164 q_id--; 161 165 irq_cfg = &nic_dev->q_params.irq_cfg[q_id]; 162 166 qp_del_napi(irq_cfg); 167 + netif_queue_set_napi(irq_cfg->netdev, q_id, 168 + NETDEV_QUEUE_TYPE_RX, NULL); 169 + netif_queue_set_napi(irq_cfg->netdev, q_id, 170 + NETDEV_QUEUE_TYPE_TX, NULL); 163 171 hinic3_set_msix_state(nic_dev->hwdev, irq_cfg->msix_entry_idx, 164 172 HINIC3_MSIX_DISABLE); 165 173 hinic3_set_msix_auto_mask_state(nic_dev->hwdev, ··· 184 184 for (q_id = 0; q_id < nic_dev->q_params.num_qps; q_id++) { 185 185 irq_cfg = &nic_dev->q_params.irq_cfg[q_id]; 186 186 qp_del_napi(irq_cfg); 187 + netif_queue_set_napi(irq_cfg->netdev, q_id, 188 + NETDEV_QUEUE_TYPE_RX, NULL); 189 + netif_queue_set_napi(irq_cfg->netdev, q_id, 190 + NETDEV_QUEUE_TYPE_TX, NULL); 187 191 hinic3_set_msix_state(nic_dev->hwdev, irq_cfg->msix_entry_idx, 188 192 HINIC3_MSIX_DISABLE); 189 193 hinic3_set_msix_auto_mask_state(nic_dev->hwdev,
+1
drivers/net/ethernet/intel/ice/devlink/devlink.c
··· 460 460 ice_vsi_decfg(ice_get_main_vsi(pf)); 461 461 rtnl_unlock(); 462 462 ice_deinit_pf(pf); 463 + ice_deinit_hw(&pf->hw); 463 464 ice_deinit_dev(pf); 464 465 } 465 466
+1
drivers/net/ethernet/intel/ice/ice.h
··· 979 979 int 980 980 ice_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, 981 981 u32 flags); 982 + int ice_get_rss(struct ice_vsi *vsi, u8 *seed, u8 *lut, u16 lut_size); 982 983 int ice_set_rss_lut(struct ice_vsi *vsi, u8 *lut, u16 lut_size); 983 984 int ice_get_rss_lut(struct ice_vsi *vsi, u8 *lut, u16 lut_size); 984 985 int ice_set_rss_key(struct ice_vsi *vsi, u8 *seed);
+1 -1
drivers/net/ethernet/intel/ice/ice_common.c
··· 2251 2251 /* there are some rare cases when trying to release the resource 2252 2252 * results in an admin queue timeout, so handle them correctly 2253 2253 */ 2254 - timeout = jiffies + 10 * ICE_CTL_Q_SQ_CMD_TIMEOUT; 2254 + timeout = jiffies + 10 * usecs_to_jiffies(ICE_CTL_Q_SQ_CMD_TIMEOUT); 2255 2255 do { 2256 2256 status = ice_aq_release_res(hw, res, 0, NULL); 2257 2257 if (status != -EIO)
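The ice_common.c fix is a units bug: ICE_CTL_Q_SQ_CMD_TIMEOUT is a microsecond value, and adding it straight to jiffies (a tick counter) produced a deadline that was off by orders of magnitude; usecs_to_jiffies() converts it first. The general deadline idiom, sketched with a hypothetical demo_done() condition and timeout value:

#include <linux/jiffies.h>
#include <linux/delay.h>
#include <linux/errno.h>

#define DEMO_TIMEOUT_US 100000  /* hypothetical per-command timeout */

static int demo_wait(bool (*demo_done)(void))
{
        unsigned long timeout = jiffies + 10 * usecs_to_jiffies(DEMO_TIMEOUT_US);

        do {
                if (demo_done())
                        return 0;
                usleep_range(100, 200);
        } while (time_before(jiffies, timeout));

        return -ETIMEDOUT;
}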
+1 -5
drivers/net/ethernet/intel/ice/ice_ethtool.c
··· 3626 3626 if (!lut) 3627 3627 return -ENOMEM; 3628 3628 3629 - err = ice_get_rss_key(vsi, rxfh->key); 3630 - if (err) 3631 - goto out; 3632 - 3633 - err = ice_get_rss_lut(vsi, lut, vsi->rss_table_size); 3629 + err = ice_get_rss(vsi, rxfh->key, lut, vsi->rss_table_size); 3634 3630 if (err) 3635 3631 goto out; 3636 3632
+21 -8
drivers/net/ethernet/intel/ice/ice_lib.c
··· 398 398 if (!ring_stats) 399 399 goto err_out; 400 400 401 + u64_stats_init(&ring_stats->syncp); 402 + 401 403 WRITE_ONCE(tx_ring_stats[i], ring_stats); 402 404 } 403 405 ··· 418 416 ring_stats = kzalloc(sizeof(*ring_stats), GFP_KERNEL); 419 417 if (!ring_stats) 420 418 goto err_out; 419 + 420 + u64_stats_init(&ring_stats->syncp); 421 421 422 422 WRITE_ONCE(rx_ring_stats[i], ring_stats); 423 423 } ··· 3809 3805 int ice_vsi_del_vlan_zero(struct ice_vsi *vsi) 3810 3806 { 3811 3807 struct ice_vsi_vlan_ops *vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); 3808 + struct ice_pf *pf = vsi->back; 3812 3809 struct ice_vlan vlan; 3813 3810 int err; 3814 3811 3815 - vlan = ICE_VLAN(0, 0, 0); 3816 - err = vlan_ops->del_vlan(vsi, &vlan); 3817 - if (err && err != -EEXIST) 3818 - return err; 3812 + if (pf->lag && pf->lag->primary) { 3813 + dev_dbg(ice_pf_to_dev(pf), "Interface is primary in aggregate - not deleting prune list\n"); 3814 + } else { 3815 + vlan = ICE_VLAN(0, 0, 0); 3816 + err = vlan_ops->del_vlan(vsi, &vlan); 3817 + if (err && err != -EEXIST) 3818 + return err; 3819 + } 3819 3820 3820 3821 /* in SVM both VLAN 0 filters are identical */ 3821 3822 if (!ice_is_dvm_ena(&vsi->back->hw)) 3822 3823 return 0; 3823 3824 3824 - vlan = ICE_VLAN(ETH_P_8021Q, 0, 0); 3825 - err = vlan_ops->del_vlan(vsi, &vlan); 3826 - if (err && err != -EEXIST) 3827 - return err; 3825 + if (pf->lag && pf->lag->primary) { 3826 + dev_dbg(ice_pf_to_dev(pf), "Interface is primary in aggregate - not deleting QinQ prune list\n"); 3827 + } else { 3828 + vlan = ICE_VLAN(ETH_P_8021Q, 0, 0); 3829 + err = vlan_ops->del_vlan(vsi, &vlan); 3830 + if (err && err != -EEXIST) 3831 + return err; 3832 + } 3828 3833 3829 3834 /* when deleting the last VLAN filter, make sure to disable the VLAN 3830 3835 * promisc mode so the filter isn't left by accident
+29 -2
drivers/net/ethernet/intel/ice/ice_main.c
··· 4836 4836 ice_dpll_deinit(pf); 4837 4837 if (pf->eswitch_mode == DEVLINK_ESWITCH_MODE_SWITCHDEV) 4838 4838 xa_destroy(&pf->eswitch.reprs); 4839 + ice_hwmon_exit(pf); 4839 4840 } 4840 4841 4841 4842 static void ice_init_wakeup(struct ice_pf *pf) ··· 5437 5436 set_bit(ICE_VF_RESETS_DISABLED, pf->state); 5438 5437 ice_free_vfs(pf); 5439 5438 } 5440 - 5441 - ice_hwmon_exit(pf); 5442 5439 5443 5440 if (!ice_is_safe_mode(pf)) 5444 5441 ice_remove_arfs(pf); ··· 7985 7986 status, libie_aq_str(hw->adminq.sq_last_status)); 7986 7987 7987 7988 return status; 7989 + } 7990 + 7991 + /** 7992 + * ice_get_rss - Get RSS LUT and/or key 7993 + * @vsi: Pointer to VSI structure 7994 + * @seed: Buffer to store the key in 7995 + * @lut: Buffer to store the lookup table entries 7996 + * @lut_size: Size of buffer to store the lookup table entries 7997 + * 7998 + * Return: 0 on success, negative on failure 7999 + */ 8000 + int ice_get_rss(struct ice_vsi *vsi, u8 *seed, u8 *lut, u16 lut_size) 8001 + { 8002 + int err; 8003 + 8004 + if (seed) { 8005 + err = ice_get_rss_key(vsi, seed); 8006 + if (err) 8007 + return err; 8008 + } 8009 + 8010 + if (lut) { 8011 + err = ice_get_rss_lut(vsi, lut, lut_size); 8012 + if (err) 8013 + return err; 8014 + } 8015 + 8016 + return 0; 7988 8017 } 7989 8018 7990 8019 /**
+1 -1
drivers/net/ethernet/intel/idpf/idpf_ptp.c
··· 108 108 ptp_read_system_prets(sts); 109 109 110 110 idpf_ptp_enable_shtime(adapter); 111 + lo = readl(ptp->dev_clk_regs.dev_clk_ns_l); 111 112 112 113 /* Read the system timestamp post PHC read */ 113 114 ptp_read_system_postts(sts); 114 115 115 - lo = readl(ptp->dev_clk_regs.dev_clk_ns_l); 116 116 hi = readl(ptp->dev_clk_regs.dev_clk_ns_h); 117 117 118 118 spin_unlock(&ptp->read_dev_clk_lock);
+11 -5
drivers/net/ethernet/intel/idpf/idpf_txrx.c
··· 3941 3941 static void idpf_net_dim(struct idpf_q_vector *q_vector) 3942 3942 { 3943 3943 struct dim_sample dim_sample = { }; 3944 - u64 packets, bytes; 3944 + u64 packets, bytes, pkts, bts; 3945 3945 u32 i; 3946 3946 3947 3947 if (!IDPF_ITR_IS_DYNAMIC(q_vector->tx_intr_mode)) ··· 3953 3953 3954 3954 do { 3955 3955 start = u64_stats_fetch_begin(&txq->stats_sync); 3956 - packets += u64_stats_read(&txq->q_stats.packets); 3957 - bytes += u64_stats_read(&txq->q_stats.bytes); 3956 + pkts = u64_stats_read(&txq->q_stats.packets); 3957 + bts = u64_stats_read(&txq->q_stats.bytes); 3958 3958 } while (u64_stats_fetch_retry(&txq->stats_sync, start)); 3959 + 3960 + packets += pkts; 3961 + bytes += bts; 3959 3962 } 3960 3963 3961 3964 idpf_update_dim_sample(q_vector, &dim_sample, &q_vector->tx_dim, ··· 3975 3972 3976 3973 do { 3977 3974 start = u64_stats_fetch_begin(&rxq->stats_sync); 3978 - packets += u64_stats_read(&rxq->q_stats.packets); 3979 - bytes += u64_stats_read(&rxq->q_stats.bytes); 3975 + pkts = u64_stats_read(&rxq->q_stats.packets); 3976 + bts = u64_stats_read(&rxq->q_stats.bytes); 3980 3977 } while (u64_stats_fetch_retry(&rxq->stats_sync, start)); 3978 + 3979 + packets += pkts; 3980 + bytes += bts; 3981 3981 } 3982 3982 3983 3983 idpf_update_dim_sample(q_vector, &dim_sample, &q_vector->rx_dim,
+3 -2
drivers/net/ethernet/intel/igc/igc_defines.h
··· 443 443 #define IGC_TXPBSIZE_DEFAULT ( \ 444 444 IGC_TXPB0SIZE(20) | IGC_TXPB1SIZE(0) | IGC_TXPB2SIZE(0) | \ 445 445 IGC_TXPB3SIZE(0) | IGC_OS2BMCPBSIZE(4)) 446 + /* TSN value following I225/I226 SW User Manual Section 7.5.4 */ 446 447 #define IGC_TXPBSIZE_TSN ( \ 447 - IGC_TXPB0SIZE(7) | IGC_TXPB1SIZE(7) | IGC_TXPB2SIZE(7) | \ 448 - IGC_TXPB3SIZE(7) | IGC_OS2BMCPBSIZE(4)) 448 + IGC_TXPB0SIZE(5) | IGC_TXPB1SIZE(5) | IGC_TXPB2SIZE(5) | \ 449 + IGC_TXPB3SIZE(5) | IGC_OS2BMCPBSIZE(4)) 449 450 450 451 #define IGC_DTXMXPKTSZ_TSN 0x19 /* 1600 bytes of max TX DMA packet size */ 451 452 #define IGC_DTXMXPKTSZ_DEFAULT 0x98 /* 9728-byte Jumbo frames */
+2 -2
drivers/net/ethernet/intel/igc/igc_ethtool.c
··· 1565 1565 if (ch->other_count != NON_Q_VECTORS) 1566 1566 return -EINVAL; 1567 1567 1568 - /* Do not allow channel reconfiguration when mqprio is enabled */ 1569 - if (adapter->strict_priority_enable) 1568 + /* Do not allow channel reconfiguration when any TSN qdisc is enabled */ 1569 + if (adapter->flags & IGC_FLAG_TSN_ANY_ENABLED) 1570 1570 return -EINVAL; 1571 1571 1572 1572 /* Verify the number of channels doesn't exceed hw limits */
+5
drivers/net/ethernet/intel/igc/igc_main.c
··· 7759 7759 if (netif_running(netdev)) 7760 7760 err = igc_open(netdev); 7761 7761 7762 + if (!err) { 7763 + /* Restore default IEEE 802.1Qbv schedule after queue reinit */ 7764 + igc_tsn_clear_schedule(adapter); 7765 + } 7766 + 7762 7767 return err; 7763 7768 } 7764 7769
+25 -18
drivers/net/ethernet/intel/igc/igc_ptp.c
··· 774 774 static void igc_ptp_tx_hwtstamp(struct igc_adapter *adapter) 775 775 { 776 776 struct igc_hw *hw = &adapter->hw; 777 + u32 txstmpl_old; 777 778 u64 regval; 778 779 u32 mask; 779 780 int i; 781 + 782 + /* Establish baseline of TXSTMPL_0 before checking TXTT_0. 783 + * This baseline is used to detect if a new timestamp arrives in 784 + * register 0 during the hardware bug workaround below. 785 + */ 786 + txstmpl_old = rd32(IGC_TXSTMPL); 780 787 781 788 mask = rd32(IGC_TSYNCTXCTL) & IGC_TSYNCTXCTL_TXTT_ANY; 782 789 if (mask & IGC_TSYNCTXCTL_TXTT_0) { 783 790 regval = rd32(IGC_TXSTMPL); 784 791 regval |= (u64)rd32(IGC_TXSTMPH) << 32; 785 792 } else { 786 - /* There's a bug in the hardware that could cause 787 - * missing interrupts for TX timestamping. The issue 788 - * is that for new interrupts to be triggered, the 789 - * IGC_TXSTMPH_0 register must be read. 793 + /* TXTT_0 not set - register 0 has no new timestamp initially. 790 794 * 791 - * To avoid discarding a valid timestamp that just 792 - * happened at the "wrong" time, we need to confirm 793 - * that there was no timestamp captured, we do that by 794 - * assuming that no two timestamps in sequence have 795 - * the same nanosecond value. 795 + * Hardware bug: Future timestamp interrupts won't fire unless 796 + * TXSTMPH_0 is read, even if the timestamp was captured in 797 + * registers 1-3. 796 798 * 797 - * So, we read the "low" register, read the "high" 798 - * register (to latch a new timestamp) and read the 799 - * "low" register again, if "old" and "new" versions 800 - * of the "low" register are different, a valid 801 - * timestamp was captured, we can read the "high" 802 - * register again. 799 + * Workaround: Read TXSTMPH_0 here to enable future interrupts. 800 + * However, this read clears TXTT_0. If a timestamp arrives in 801 + * register 0 after checking TXTT_0 but before this read, it 802 + * would be lost. 803 + * 804 + * To detect this race: We saved a baseline read of TXSTMPL_0 805 + * before TXTT_0 check. After performing the workaround read of 806 + * TXSTMPH_0, we read TXSTMPL_0 again. Since consecutive 807 + * timestamps never share the same nanosecond value, a change 808 + * between the baseline and new TXSTMPL_0 indicates a timestamp 809 + * arrived during the race window. If so, read the complete 810 + * timestamp. 803 811 */ 804 - u32 txstmpl_old, txstmpl_new; 812 + u32 txstmpl_new; 805 813 806 - txstmpl_old = rd32(IGC_TXSTMPL); 807 814 rd32(IGC_TXSTMPH); 808 815 txstmpl_new = rd32(IGC_TXSTMPL); 809 816 ··· 825 818 826 819 done: 827 820 /* Now that the problematic first register was handled, we can 828 - * use retrieve the timestamps from the other registers 821 + * retrieve the timestamps from the other registers 829 822 * (starting from '1') with less complications. 830 823 */ 831 824 for (i = 1; i < IGC_MAX_TX_TSTAMP_REGS; i++) {
+64 -22
drivers/net/ethernet/marvell/octeontx2/af/rvu.c
··· 1551 1551 return -ENODEV; 1552 1552 } 1553 1553 1554 - static void rvu_attach_block(struct rvu *rvu, int pcifunc, int blktype, 1555 - int num_lfs, struct rsrc_attach *attach) 1554 + static int rvu_attach_block(struct rvu *rvu, int pcifunc, int blktype, 1555 + int num_lfs, struct rsrc_attach *attach) 1556 1556 { 1557 1557 struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc); 1558 1558 struct rvu_hwinfo *hw = rvu->hw; ··· 1562 1562 u64 cfg; 1563 1563 1564 1564 if (!num_lfs) 1565 - return; 1565 + return -EINVAL; 1566 1566 1567 1567 blkaddr = rvu_get_attach_blkaddr(rvu, blktype, pcifunc, attach); 1568 1568 if (blkaddr < 0) 1569 - return; 1569 + return -EFAULT; 1570 1570 1571 1571 block = &hw->block[blkaddr]; 1572 1572 if (!block->lf.bmap) 1573 - return; 1573 + return -ESRCH; 1574 1574 1575 1575 for (slot = 0; slot < num_lfs; slot++) { 1576 1576 /* Allocate the resource */ 1577 1577 lf = rvu_alloc_rsrc(&block->lf); 1578 1578 if (lf < 0) 1579 - return; 1579 + return -EFAULT; 1580 1580 1581 1581 cfg = (1ULL << 63) | (pcifunc << 8) | slot; 1582 1582 rvu_write64(rvu, blkaddr, block->lfcfg_reg | ··· 1587 1587 /* Set start MSIX vector for this LF within this PF/VF */ 1588 1588 rvu_set_msix_offset(rvu, pfvf, block, lf); 1589 1589 } 1590 + 1591 + return 0; 1590 1592 } 1591 1593 1592 1594 static int rvu_check_rsrc_availability(struct rvu *rvu, ··· 1726 1724 int err; 1727 1725 1728 1726 /* If first request, detach all existing attached resources */ 1729 - if (!attach->modify) 1730 - rvu_detach_rsrcs(rvu, NULL, pcifunc); 1727 + if (!attach->modify) { 1728 + err = rvu_detach_rsrcs(rvu, NULL, pcifunc); 1729 + if (err) 1730 + return err; 1731 + } 1731 1732 1732 1733 mutex_lock(&rvu->rsrc_lock); 1733 1734 1734 1735 /* Check if the request can be accommodated */ 1735 1736 err = rvu_check_rsrc_availability(rvu, attach, pcifunc); 1736 1737 if (err) 1737 - goto exit; 1738 + goto fail1; 1738 1739 1739 1740 /* Now attach the requested resources */ 1740 - if (attach->npalf) 1741 - rvu_attach_block(rvu, pcifunc, BLKTYPE_NPA, 1, attach); 1741 + if (attach->npalf) { 1742 + err = rvu_attach_block(rvu, pcifunc, BLKTYPE_NPA, 1, attach); 1743 + if (err) 1744 + goto fail1; 1745 + } 1742 1746 1743 - if (attach->nixlf) 1744 - rvu_attach_block(rvu, pcifunc, BLKTYPE_NIX, 1, attach); 1747 + if (attach->nixlf) { 1748 + err = rvu_attach_block(rvu, pcifunc, BLKTYPE_NIX, 1, attach); 1749 + if (err) 1750 + goto fail2; 1751 + } 1745 1752 1746 1753 if (attach->sso) { 1747 1754 /* RVU func doesn't know which exact LF or slot is attached ··· 1760 1749 */ 1761 1750 if (attach->modify) 1762 1751 rvu_detach_block(rvu, pcifunc, BLKTYPE_SSO); 1763 - rvu_attach_block(rvu, pcifunc, BLKTYPE_SSO, 1764 - attach->sso, attach); 1752 + err = rvu_attach_block(rvu, pcifunc, BLKTYPE_SSO, 1753 + attach->sso, attach); 1754 + if (err) 1755 + goto fail3; 1765 1756 } 1766 1757 1767 1758 if (attach->ssow) { 1768 1759 if (attach->modify) 1769 1760 rvu_detach_block(rvu, pcifunc, BLKTYPE_SSOW); 1770 - rvu_attach_block(rvu, pcifunc, BLKTYPE_SSOW, 1771 - attach->ssow, attach); 1761 + err = rvu_attach_block(rvu, pcifunc, BLKTYPE_SSOW, 1762 + attach->ssow, attach); 1763 + if (err) 1764 + goto fail4; 1772 1765 } 1773 1766 1774 1767 if (attach->timlfs) { 1775 1768 if (attach->modify) 1776 1769 rvu_detach_block(rvu, pcifunc, BLKTYPE_TIM); 1777 - rvu_attach_block(rvu, pcifunc, BLKTYPE_TIM, 1778 - attach->timlfs, attach); 1770 + err = rvu_attach_block(rvu, pcifunc, BLKTYPE_TIM, 1771 + attach->timlfs, attach); 1772 + if (err) 1773 + goto fail5; 1779 1774 } 1780 1775 
1781 1776 if (attach->cptlfs) { 1782 1777 if (attach->modify && 1783 1778 rvu_attach_from_same_block(rvu, BLKTYPE_CPT, attach)) 1784 1779 rvu_detach_block(rvu, pcifunc, BLKTYPE_CPT); 1785 - rvu_attach_block(rvu, pcifunc, BLKTYPE_CPT, 1786 - attach->cptlfs, attach); 1780 + err = rvu_attach_block(rvu, pcifunc, BLKTYPE_CPT, 1781 + attach->cptlfs, attach); 1782 + if (err) 1783 + goto fail6; 1787 1784 } 1788 1785 1789 - exit: 1786 + mutex_unlock(&rvu->rsrc_lock); 1787 + return 0; 1788 + 1789 + fail6: 1790 + if (attach->timlfs) 1791 + rvu_detach_block(rvu, pcifunc, BLKTYPE_TIM); 1792 + 1793 + fail5: 1794 + if (attach->ssow) 1795 + rvu_detach_block(rvu, pcifunc, BLKTYPE_SSOW); 1796 + 1797 + fail4: 1798 + if (attach->sso) 1799 + rvu_detach_block(rvu, pcifunc, BLKTYPE_SSO); 1800 + 1801 + fail3: 1802 + if (attach->nixlf) 1803 + rvu_detach_block(rvu, pcifunc, BLKTYPE_NIX); 1804 + 1805 + fail2: 1806 + if (attach->npalf) 1807 + rvu_detach_block(rvu, pcifunc, BLKTYPE_NPA); 1808 + 1809 + fail1: 1790 1810 mutex_unlock(&rvu->rsrc_lock); 1791 1811 return err; 1792 1812 }
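The rvu.c rework turns fire-and-forget attaches into checked ones, and the new fail1..fail6 labels undo previously attached block types in reverse order before dropping the lock. This cascading-goto unwind is the idiomatic kernel shape for multi-step setup; reduced to its skeleton (demo_* names hypothetical):

static int demo_setup_a(void);
static int demo_setup_b(void);
static int demo_setup_c(void);
static void demo_teardown_a(void);
static void demo_teardown_b(void);

static int demo_setup_all(void)
{
        int err;

        err = demo_setup_a();
        if (err)
                return err;     /* nothing to undo yet */

        err = demo_setup_b();
        if (err)
                goto undo_a;

        err = demo_setup_c();
        if (err)
                goto undo_b;

        return 0;

undo_b:                         /* labels undo in reverse order of setup */
        demo_teardown_b();
undo_a:
        demo_teardown_a();
        return err;
}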
+3
drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
··· 1222 1222 u8 cgx_idx, lmac; 1223 1223 void *cgxd; 1224 1224 1225 + if (!rvu->fwdata) 1226 + return LMAC_AF_ERR_FIRMWARE_DATA_NOT_MAPPED; 1227 + 1225 1228 if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc)) 1226 1229 return -EPERM; 1227 1230
+1 -1
drivers/net/ethernet/marvell/octeontx2/af/rvu_sdp.c
··· 56 56 struct rvu_pfvf *pfvf; 57 57 u32 i = 0; 58 58 59 - if (rvu->fwdata->channel_data.valid) { 59 + if (rvu->fwdata && rvu->fwdata->channel_data.valid) { 60 60 sdp_pf_num[0] = 0; 61 61 pfvf = &rvu->pf[sdp_pf_num[0]]; 62 62 pfvf->sdp_info = &rvu->fwdata->channel_data.info;
+1 -1
drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c
··· 328 328 329 329 req->data[0] = FIELD_PREP(MCS_TCAM0_MAC_DA_MASK, mac_da); 330 330 req->mask[0] = ~0ULL; 331 - req->mask[0] = ~MCS_TCAM0_MAC_DA_MASK; 331 + req->mask[0] &= ~MCS_TCAM0_MAC_DA_MASK; 332 332 333 333 req->data[1] = FIELD_PREP(MCS_TCAM1_ETYPE_MASK, ETH_P_MACSEC); 334 334 req->mask[1] = ~0ULL;
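The cn10k_macsec fix is a single character with real consequences: the second statement previously used `=` and silently discarded the `~0ULL` written on the line before, so every TCAM mask bit outside the DA field ended up clear; `&=` removes only the DA bits. Isolated for clarity (DEMO_DA_MASK is a hypothetical field):

#include <linux/bits.h>

#define DEMO_DA_MASK    GENMASK_ULL(47, 0)

static u64 demo_build_mask(void)
{
        u64 mask = ~0ULL;       /* start with every bit significant */

        mask &= ~DEMO_DA_MASK;  /* clear only the DA field; a plain '='
                                 * here would discard the line above */
        return mask;
}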
+1 -6
drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
··· 940 940 size_t offset, size_t size, 941 941 enum dma_data_direction dir) 942 942 { 943 - dma_addr_t iova; 944 - 945 - iova = dma_map_page_attrs(pfvf->dev, page, 943 + return dma_map_page_attrs(pfvf->dev, page, 946 944 offset, size, dir, DMA_ATTR_SKIP_CPU_SYNC); 947 - if (unlikely(dma_mapping_error(pfvf->dev, iova))) 948 - return (dma_addr_t)NULL; 949 - return iova; 950 945 } 951 946 952 947 static inline void otx2_dma_unmap_page(struct otx2_nic *pfvf,
+3 -1
drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
··· 3249 3249 netdev->watchdog_timeo = OTX2_TX_TIMEOUT; 3250 3250 3251 3251 netdev->netdev_ops = &otx2_netdev_ops; 3252 - netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT; 3252 + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | 3253 + NETDEV_XDP_ACT_NDO_XMIT | 3254 + NETDEV_XDP_ACT_XSK_ZEROCOPY; 3253 3255 3254 3256 netdev->min_mtu = OTX2_MIN_MTU; 3255 3257 netdev->max_mtu = otx2_get_max_mtu(pf);
+9 -3
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 4359 4359 unsigned int first_entry, tx_packets; 4360 4360 struct stmmac_txq_stats *txq_stats; 4361 4361 struct stmmac_tx_queue *tx_q; 4362 + bool set_ic, is_last_segment; 4362 4363 u32 pay_len, mss, queue; 4363 4364 int i, first_tx, nfrags; 4364 4365 u8 proto_hdr_len, hdr; 4365 4366 dma_addr_t des; 4366 - bool set_ic; 4367 4367 4368 4368 /* Always insert VLAN tag to SKB payload for TSO frames. 4369 4369 * ··· 4551 4551 stmmac_enable_tx_timestamp(priv, first); 4552 4552 } 4553 4553 4554 + /* If we only have one entry used, then the first entry is the last 4555 + * segment. 4556 + */ 4557 + is_last_segment = ((tx_q->cur_tx - first_entry) & 4558 + (priv->dma_conf.dma_tx_size - 1)) == 1; 4559 + 4554 4560 /* Complete the first descriptor before granting the DMA */ 4555 4561 stmmac_prepare_tso_tx_desc(priv, first, 1, proto_hdr_len, 0, 1, 4556 - tx_q->tx_skbuff_dma[first_entry].last_segment, 4557 - hdr / 4, (skb->len - proto_hdr_len)); 4562 + is_last_segment, hdr / 4, 4563 + skb->len - proto_hdr_len); 4558 4564 4559 4565 /* If context desc is used to change MSS */ 4560 4566 if (mss_desc) {
+2 -2
drivers/net/ethernet/wangxun/txgbe/txgbe_aml.c
··· 70 70 buffer.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED; 71 71 72 72 return wx_host_interface_command(wx, (u32 *)&buffer, sizeof(buffer), 73 - WX_HI_COMMAND_TIMEOUT, true); 73 + WX_HI_COMMAND_TIMEOUT, false); 74 74 } 75 75 76 76 int txgbe_read_eeprom_hostif(struct wx *wx, ··· 148 148 buffer.duplex = duplex; 149 149 150 150 return wx_host_interface_command(wx, (u32 *)&buffer, sizeof(buffer), 151 - WX_HI_COMMAND_TIMEOUT, true); 151 + WX_HI_COMMAND_TIMEOUT, false); 152 152 } 153 153 154 154 static void txgbe_get_link_capabilities(struct wx *wx, int *speed,
+1 -1
drivers/net/ipvlan/ipvlan.h
··· 69 69 DECLARE_BITMAP(mac_filters, IPVLAN_MAC_FILTER_SIZE); 70 70 netdev_features_t sfeatures; 71 71 u32 msg_enable; 72 - spinlock_t addrs_lock; 73 72 }; 74 73 75 74 struct ipvl_addr { ··· 89 90 struct net_device *dev; 90 91 possible_net_t pnet; 91 92 struct hlist_head hlhead[IPVLAN_HASH_SIZE]; 93 + spinlock_t addrs_lock; /* guards hash-table and addrs */ 92 94 struct list_head ipvlans; 93 95 u16 mode; 94 96 u16 flags;
+7 -9
drivers/net/ipvlan/ipvlan_core.c
··· 107 107 struct ipvl_addr *ipvlan_find_addr(const struct ipvl_dev *ipvlan, 108 108 const void *iaddr, bool is_v6) 109 109 { 110 - struct ipvl_addr *addr, *ret = NULL; 110 + struct ipvl_addr *addr; 111 111 112 - rcu_read_lock(); 113 - list_for_each_entry_rcu(addr, &ipvlan->addrs, anode) { 114 - if (addr_equal(is_v6, addr, iaddr)) { 115 - ret = addr; 116 - break; 117 - } 112 + assert_spin_locked(&ipvlan->port->addrs_lock); 113 + 114 + list_for_each_entry(addr, &ipvlan->addrs, anode) { 115 + if (addr_equal(is_v6, addr, iaddr)) 116 + return addr; 118 117 } 119 - rcu_read_unlock(); 120 - return ret; 118 + return NULL; 121 119 } 122 120 123 121 bool ipvlan_addr_busy(struct ipvl_port *port, void *iaddr, bool is_v6)
+29 -20
drivers/net/ipvlan/ipvlan_main.c
··· 75 75 for (idx = 0; idx < IPVLAN_HASH_SIZE; idx++) 76 76 INIT_HLIST_HEAD(&port->hlhead[idx]); 77 77 78 + spin_lock_init(&port->addrs_lock); 78 79 skb_queue_head_init(&port->backlog); 79 80 INIT_WORK(&port->wq, ipvlan_process_multicast); 80 81 ida_init(&port->ida); ··· 182 181 static int ipvlan_open(struct net_device *dev) 183 182 { 184 183 struct ipvl_dev *ipvlan = netdev_priv(dev); 184 + struct ipvl_port *port = ipvlan->port; 185 185 struct ipvl_addr *addr; 186 186 187 187 if (ipvlan->port->mode == IPVLAN_MODE_L3 || ··· 191 189 else 192 190 dev->flags &= ~IFF_NOARP; 193 191 194 - rcu_read_lock(); 195 - list_for_each_entry_rcu(addr, &ipvlan->addrs, anode) 192 + spin_lock_bh(&port->addrs_lock); 193 + list_for_each_entry(addr, &ipvlan->addrs, anode) 196 194 ipvlan_ht_addr_add(ipvlan, addr); 197 - rcu_read_unlock(); 195 + spin_unlock_bh(&port->addrs_lock); 198 196 199 197 return 0; 200 198 } ··· 208 206 dev_uc_unsync(phy_dev, dev); 209 207 dev_mc_unsync(phy_dev, dev); 210 208 211 - rcu_read_lock(); 212 - list_for_each_entry_rcu(addr, &ipvlan->addrs, anode) 209 + spin_lock_bh(&ipvlan->port->addrs_lock); 210 + list_for_each_entry(addr, &ipvlan->addrs, anode) 213 211 ipvlan_ht_addr_del(addr); 214 - rcu_read_unlock(); 212 + spin_unlock_bh(&ipvlan->port->addrs_lock); 215 213 216 214 return 0; 217 215 } ··· 581 579 if (!tb[IFLA_MTU]) 582 580 ipvlan_adjust_mtu(ipvlan, phy_dev); 583 581 INIT_LIST_HEAD(&ipvlan->addrs); 584 - spin_lock_init(&ipvlan->addrs_lock); 585 582 586 583 /* TODO Probably put random address here to be presented to the 587 584 * world but keep using the physical-dev address for the outgoing ··· 658 657 struct ipvl_dev *ipvlan = netdev_priv(dev); 659 658 struct ipvl_addr *addr, *next; 660 659 661 - spin_lock_bh(&ipvlan->addrs_lock); 660 + spin_lock_bh(&ipvlan->port->addrs_lock); 662 661 list_for_each_entry_safe(addr, next, &ipvlan->addrs, anode) { 663 662 ipvlan_ht_addr_del(addr); 664 663 list_del_rcu(&addr->anode); 665 664 kfree_rcu(addr, rcu); 666 665 } 667 - spin_unlock_bh(&ipvlan->addrs_lock); 666 + spin_unlock_bh(&ipvlan->port->addrs_lock); 668 667 669 668 ida_free(&ipvlan->port->ida, dev->dev_id); 670 669 list_del_rcu(&ipvlan->pnode); ··· 818 817 { 819 818 struct ipvl_addr *addr; 820 819 820 + assert_spin_locked(&ipvlan->port->addrs_lock); 821 + 821 822 addr = kzalloc(sizeof(struct ipvl_addr), GFP_ATOMIC); 822 823 if (!addr) 823 824 return -ENOMEM; ··· 850 847 { 851 848 struct ipvl_addr *addr; 852 849 853 - spin_lock_bh(&ipvlan->addrs_lock); 850 + spin_lock_bh(&ipvlan->port->addrs_lock); 854 851 addr = ipvlan_find_addr(ipvlan, iaddr, is_v6); 855 852 if (!addr) { 856 - spin_unlock_bh(&ipvlan->addrs_lock); 853 + spin_unlock_bh(&ipvlan->port->addrs_lock); 857 854 return; 858 855 } 859 856 860 857 ipvlan_ht_addr_del(addr); 861 858 list_del_rcu(&addr->anode); 862 - spin_unlock_bh(&ipvlan->addrs_lock); 859 + spin_unlock_bh(&ipvlan->port->addrs_lock); 863 860 kfree_rcu(addr, rcu); 864 861 } 865 862 ··· 881 878 { 882 879 int ret = -EINVAL; 883 880 884 - spin_lock_bh(&ipvlan->addrs_lock); 881 + spin_lock_bh(&ipvlan->port->addrs_lock); 885 882 if (ipvlan_addr_busy(ipvlan->port, ip6_addr, true)) 886 883 netif_err(ipvlan, ifup, ipvlan->dev, 887 884 "Failed to add IPv6=%pI6c addr for %s intf\n", 888 885 ip6_addr, ipvlan->dev->name); 889 886 else 890 887 ret = ipvlan_add_addr(ipvlan, ip6_addr, true); 891 - spin_unlock_bh(&ipvlan->addrs_lock); 888 + spin_unlock_bh(&ipvlan->port->addrs_lock); 892 889 return ret; 893 890 } 894 891 ··· 927 924 struct in6_validator_info *i6vi = (struct 
in6_validator_info *)ptr; 928 925 struct net_device *dev = (struct net_device *)i6vi->i6vi_dev->dev; 929 926 struct ipvl_dev *ipvlan = netdev_priv(dev); 927 + int ret = NOTIFY_OK; 930 928 931 929 if (!ipvlan_is_valid_dev(dev)) 932 930 return NOTIFY_DONE; 933 931 934 932 switch (event) { 935 933 case NETDEV_UP: 934 + spin_lock_bh(&ipvlan->port->addrs_lock); 936 935 if (ipvlan_addr_busy(ipvlan->port, &i6vi->i6vi_addr, true)) { 937 936 NL_SET_ERR_MSG(i6vi->extack, 938 937 "Address already assigned to an ipvlan device"); 939 - return notifier_from_errno(-EADDRINUSE); 938 + ret = notifier_from_errno(-EADDRINUSE); 940 939 } 940 + spin_unlock_bh(&ipvlan->port->addrs_lock); 941 941 break; 942 942 } 943 943 944 - return NOTIFY_OK; 944 + return ret; 945 945 } 946 946 #endif 947 947 ··· 952 946 { 953 947 int ret = -EINVAL; 954 948 955 - spin_lock_bh(&ipvlan->addrs_lock); 949 + spin_lock_bh(&ipvlan->port->addrs_lock); 956 950 if (ipvlan_addr_busy(ipvlan->port, ip4_addr, false)) 957 951 netif_err(ipvlan, ifup, ipvlan->dev, 958 952 "Failed to add IPv4=%pI4 on %s intf.\n", 959 953 ip4_addr, ipvlan->dev->name); 960 954 else 961 955 ret = ipvlan_add_addr(ipvlan, ip4_addr, false); 962 - spin_unlock_bh(&ipvlan->addrs_lock); 956 + spin_unlock_bh(&ipvlan->port->addrs_lock); 963 957 return ret; 964 958 } 965 959 ··· 1001 995 struct in_validator_info *ivi = (struct in_validator_info *)ptr; 1002 996 struct net_device *dev = (struct net_device *)ivi->ivi_dev->dev; 1003 997 struct ipvl_dev *ipvlan = netdev_priv(dev); 998 + int ret = NOTIFY_OK; 1004 999 1005 1000 if (!ipvlan_is_valid_dev(dev)) 1006 1001 return NOTIFY_DONE; 1007 1002 1008 1003 switch (event) { 1009 1004 case NETDEV_UP: 1005 + spin_lock_bh(&ipvlan->port->addrs_lock); 1010 1006 if (ipvlan_addr_busy(ipvlan->port, &ivi->ivi_addr, false)) { 1011 1007 NL_SET_ERR_MSG(ivi->extack, 1012 1008 "Address already assigned to an ipvlan device"); 1013 - return notifier_from_errno(-EADDRINUSE); 1009 + ret = notifier_from_errno(-EADDRINUSE); 1014 1010 } 1011 + spin_unlock_bh(&ipvlan->port->addrs_lock); 1015 1012 break; 1016 1013 } 1017 1014 1018 - return NOTIFY_OK; 1015 + return ret; 1019 1016 } 1020 1017 1021 1018 static struct notifier_block ipvlan_addr4_notifier_block __read_mostly = {
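Taken together, the three ipvlan diffs move the address lock from each slave (struct ipvl_dev) into the shared struct ipvl_port, so a single lock now guards both the port-wide hash table and every slave's address list, and lookups that previously walked the list under RCU now run under the lock, with assert_spin_locked() documenting the requirement on the lookup helper. A reduced sketch of that helper shape; demo_* types are hypothetical:

#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/types.h>

struct demo_port {
        spinlock_t addrs_lock;  /* guards every slave's address list */
};

struct demo_addr {
        struct list_head anode;
        u32 key;
};

/* Caller must hold port->addrs_lock. */
static struct demo_addr *demo_find_addr(struct demo_port *port,
                                        struct list_head *addrs, u32 key)
{
        struct demo_addr *addr;

        assert_spin_locked(&port->addrs_lock);

        list_for_each_entry(addr, addrs, anode)
                if (addr->key == key)
                        return addr;
        return NULL;
}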
+6
drivers/net/netdevsim/bpf.c
··· 244 244 &state->state, &nsim_bpf_string_fops); 245 245 debugfs_create_bool("loaded", 0400, state->ddir, &state->is_loaded); 246 246 247 + mutex_lock(&nsim_dev->progs_list_lock); 247 248 list_add_tail(&state->l, &nsim_dev->bpf_bound_progs); 249 + mutex_unlock(&nsim_dev->progs_list_lock); 248 250 249 251 prog->aux->offload->dev_priv = state; 250 252 ··· 275 273 static void nsim_bpf_destroy_prog(struct bpf_prog *prog) 276 274 { 277 275 struct nsim_bpf_bound_prog *state; 276 + struct nsim_dev *nsim_dev; 278 277 279 278 state = prog->aux->offload->dev_priv; 279 + nsim_dev = state->nsim_dev; 280 280 WARN(state->is_loaded, 281 281 "offload state destroyed while program still bound"); 282 282 debugfs_remove_recursive(state->ddir); 283 + mutex_lock(&nsim_dev->progs_list_lock); 283 284 list_del(&state->l); 285 + mutex_unlock(&nsim_dev->progs_list_lock); 284 286 kfree(state); 285 287 } 286 288
+2
drivers/net/netdevsim/dev.c
··· 1647 1647 nsim_dev->test1 = NSIM_DEV_TEST1_DEFAULT; 1648 1648 nsim_dev->test2 = NSIM_DEV_TEST2_DEFAULT; 1649 1649 spin_lock_init(&nsim_dev->fa_cookie_lock); 1650 + mutex_init(&nsim_dev->progs_list_lock); 1650 1651 1651 1652 dev_set_drvdata(&nsim_bus_dev->dev, nsim_dev); 1652 1653 ··· 1786 1785 devl_unregister(devlink); 1787 1786 kfree(nsim_dev->vfconfigs); 1788 1787 kfree(nsim_dev->fa_cookie); 1788 + mutex_destroy(&nsim_dev->progs_list_lock); 1789 1789 devl_unlock(devlink); 1790 1790 devlink_free(devlink); 1791 1791 dev_set_drvdata(&nsim_bus_dev->dev, NULL);
+1
drivers/net/netdevsim/netdevsim.h
··· 324 324 u32 prog_id_gen; 325 325 struct list_head bpf_bound_progs; 326 326 struct list_head bpf_bound_maps; 327 + struct mutex progs_list_lock; 327 328 struct netdev_phys_item_id switch_id; 328 329 struct list_head port_list; 329 330 bool fw_update_status;
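The netdevsim change spans three files: netdevsim.h adds progs_list_lock to struct nsim_dev, dev.c initializes and destroys it with the device, and bpf.c takes it around the only two places that mutate bpf_bound_progs, so concurrent program bind and destroy can no longer corrupt the list. The pattern boiled down, with hypothetical demo_* names:

#include <linux/mutex.h>
#include <linux/list.h>
#include <linux/slab.h>

struct demo_dev {
        struct mutex progs_lock;        /* guards bound_progs */
        struct list_head bound_progs;
};

struct demo_prog {
        struct list_head l;
};

static void demo_bind(struct demo_dev *dev, struct demo_prog *prog)
{
        mutex_lock(&dev->progs_lock);
        list_add_tail(&prog->l, &dev->bound_progs);
        mutex_unlock(&dev->progs_lock);
}

static void demo_destroy(struct demo_dev *dev, struct demo_prog *prog)
{
        mutex_lock(&dev->progs_lock);
        list_del(&prog->l);
        mutex_unlock(&dev->progs_lock);
        kfree(prog);
}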
+1 -3
drivers/net/pcs/pcs-mtk-lynxi.c
··· 93 93 { 94 94 switch (interface) { 95 95 case PHY_INTERFACE_MODE_1000BASEX: 96 + case PHY_INTERFACE_MODE_2500BASEX: 96 97 case PHY_INTERFACE_MODE_SGMII: 97 98 return LINK_INBAND_DISABLE | LINK_INBAND_ENABLE; 98 - 99 - case PHY_INTERFACE_MODE_2500BASEX: 100 - return LINK_INBAND_DISABLE; 101 99 102 100 default: 103 101 return 0;
+5 -2
drivers/net/phy/intel-xway.c
··· 277 277 278 278 static int xway_gphy_config_init(struct phy_device *phydev) 279 279 { 280 - struct device_node *np = phydev->mdio.dev.of_node; 280 + struct device_node *np; 281 281 int err; 282 282 283 283 /* Mask all interrupts */ ··· 286 286 return err; 287 287 288 288 /* Use default LED configuration if 'leds' node isn't defined */ 289 - if (!of_get_child_by_name(np, "leds")) 289 + np = of_get_child_by_name(phydev->mdio.dev.of_node, "leds"); 290 + if (np) 291 + of_node_put(np); 292 + else 290 293 xway_gphy_init_leds(phydev); 291 294 292 295 /* Clear all pending interrupts */
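The intel-xway fix is a reference leak: of_get_child_by_name() returns the child node with its refcount raised, and the old code threw the pointer away without of_node_put(), leaking a reference whenever a 'leds' node existed. Every successful get needs a matching put; a sketch with a hypothetical fallback:

#include <linux/of.h>
#include <linux/printk.h>

static void demo_check_leds(struct device_node *parent)
{
        struct device_node *np;

        np = of_get_child_by_name(parent, "leds");
        if (np)
                of_node_put(np);        /* drop the reference we were given */
        else
                pr_debug("no leds node, using default configuration\n");
}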
+2
drivers/net/phy/sfp.c
··· 519 519 520 520 SFP_QUIRK_F("HALNy", "HL-GSFP", sfp_fixup_halny_gsfp), 521 521 522 + SFP_QUIRK_F("H-COM", "SPP425H-GAB4", sfp_fixup_potron), 523 + 522 524 // HG MXPD-483II-F 2.5G supports 2500Base-X, but incorrectly reports 523 525 // 2600MBd in their EERPOM 524 526 SFP_QUIRK_S("HG GENUINE", "MXPD-483II", sfp_quirk_2500basex),
-4
drivers/net/usb/dm9601.c
··· 604 604 .driver_info = (unsigned long)&dm9601_info, 605 605 }, 606 606 { 607 - USB_DEVICE(0x0fe6, 0x9700), /* DM9601 USB to Fast Ethernet Adapter */ 608 - .driver_info = (unsigned long)&dm9601_info, 609 - }, 610 - { 611 607 USB_DEVICE(0x0a46, 0x9000), /* DM9000E */ 612 608 .driver_info = (unsigned long)&dm9601_info, 613 609 },
+7 -3
drivers/net/usb/usbnet.c
··· 1821 1821 if ((dev->driver_info->flags & FLAG_NOARP) != 0) 1822 1822 net->flags |= IFF_NOARP; 1823 1823 1824 - /* maybe the remote can't receive an Ethernet MTU */ 1825 - if (net->mtu > (dev->hard_mtu - net->hard_header_len)) 1826 - net->mtu = dev->hard_mtu - net->hard_header_len; 1824 + if (net->max_mtu > (dev->hard_mtu - net->hard_header_len)) 1825 + net->max_mtu = dev->hard_mtu - net->hard_header_len; 1826 + 1827 + if (net->mtu > net->max_mtu) 1828 + net->mtu = net->max_mtu; 1829 + 1827 1830 } else if (!info->in || !info->out) 1828 1831 status = usbnet_get_endpoints(dev, udev); 1829 1832 else { ··· 1987 1984 } else { 1988 1985 netif_trans_update(dev->net); 1989 1986 __skb_queue_tail(&dev->txq, skb); 1987 + netdev_sent_queue(dev->net, skb->len); 1990 1988 } 1991 1989 } 1992 1990
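The usbnet hunks do two related things: clamp max_mtu (not merely the current mtu) to what hard_mtu allows, so userspace cannot later raise the MTU past the link's real limit, and account queued bytes with netdev_sent_queue() to feed byte queue limits (BQL). BQL only works when the TX-completion path reports the same bytes back; the completion side is not visible in this hunk, so the pairing below is an assumed skeleton with hypothetical demo_* names:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void demo_queue_tx(struct net_device *net, struct sk_buff *skb)
{
        /* ... hand the skb to the hardware/USB layer ... */
        netdev_sent_queue(net, skb->len);       /* bytes queued */
}

static void demo_tx_complete(struct net_device *net, unsigned int pkts,
                             unsigned int bytes)
{
        netdev_completed_queue(net, pkts, bytes);       /* bytes drained */
}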
+6 -2
drivers/net/veth.c
··· 228 228 const struct veth_rq_stats *rq_stats = &rcv_priv->rq[i].stats; 229 229 const void *base = (void *)&rq_stats->vs; 230 230 unsigned int start, tx_idx = idx; 231 + u64 buf[VETH_TQ_STATS_LEN]; 231 232 size_t offset; 232 233 233 - tx_idx += (i % dev->real_num_tx_queues) * VETH_TQ_STATS_LEN; 234 234 do { 235 235 start = u64_stats_fetch_begin(&rq_stats->syncp); 236 236 for (j = 0; j < VETH_TQ_STATS_LEN; j++) { 237 237 offset = veth_tq_stats_desc[j].offset; 238 - data[tx_idx + j] += *(u64 *)(base + offset); 238 + buf[j] = *(u64 *)(base + offset); 239 239 } 240 240 } while (u64_stats_fetch_retry(&rq_stats->syncp, start)); 241 + 242 + tx_idx += (i % dev->real_num_tx_queues) * VETH_TQ_STATS_LEN; 243 + for (j = 0; j < VETH_TQ_STATS_LEN; j++) 244 + data[tx_idx + j] += buf[j]; 241 245 } 242 246 pp_idx = idx + dev->real_num_tx_queues * VETH_TQ_STATS_LEN; 243 247
+8 -8
drivers/net/wireless/ath/ath10k/ce.c
··· 1727 1727 (ce_state->src_ring->nentries * 1728 1728 sizeof(struct ce_desc) + 1729 1729 CE_DESC_RING_ALIGN), 1730 - ce_state->src_ring->base_addr_owner_space, 1731 - ce_state->src_ring->base_addr_ce_space); 1730 + ce_state->src_ring->base_addr_owner_space_unaligned, 1731 + ce_state->src_ring->base_addr_ce_space_unaligned); 1732 1732 kfree(ce_state->src_ring); 1733 1733 } 1734 1734 ··· 1737 1737 (ce_state->dest_ring->nentries * 1738 1738 sizeof(struct ce_desc) + 1739 1739 CE_DESC_RING_ALIGN), 1740 - ce_state->dest_ring->base_addr_owner_space, 1741 - ce_state->dest_ring->base_addr_ce_space); 1740 + ce_state->dest_ring->base_addr_owner_space_unaligned, 1741 + ce_state->dest_ring->base_addr_ce_space_unaligned); 1742 1742 kfree(ce_state->dest_ring); 1743 1743 } 1744 1744 ··· 1758 1758 (ce_state->src_ring->nentries * 1759 1759 sizeof(struct ce_desc_64) + 1760 1760 CE_DESC_RING_ALIGN), 1761 - ce_state->src_ring->base_addr_owner_space, 1762 - ce_state->src_ring->base_addr_ce_space); 1761 + ce_state->src_ring->base_addr_owner_space_unaligned, 1762 + ce_state->src_ring->base_addr_ce_space_unaligned); 1763 1763 kfree(ce_state->src_ring); 1764 1764 } 1765 1765 ··· 1768 1768 (ce_state->dest_ring->nentries * 1769 1769 sizeof(struct ce_desc_64) + 1770 1770 CE_DESC_RING_ALIGN), 1771 - ce_state->dest_ring->base_addr_owner_space, 1772 - ce_state->dest_ring->base_addr_ce_space); 1771 + ce_state->dest_ring->base_addr_owner_space_unaligned, 1772 + ce_state->dest_ring->base_addr_ce_space_unaligned); 1773 1773 kfree(ce_state->dest_ring); 1774 1774 } 1775 1775
+6 -6
drivers/net/wireless/ath/ath12k/ce.c
··· 984 984 dma_free_coherent(ab->dev, 985 985 pipe->src_ring->nentries * desc_sz + 986 986 CE_DESC_RING_ALIGN, 987 - pipe->src_ring->base_addr_owner_space, 988 - pipe->src_ring->base_addr_ce_space); 987 + pipe->src_ring->base_addr_owner_space_unaligned, 988 + pipe->src_ring->base_addr_ce_space_unaligned); 989 989 kfree(pipe->src_ring); 990 990 pipe->src_ring = NULL; 991 991 } ··· 995 995 dma_free_coherent(ab->dev, 996 996 pipe->dest_ring->nentries * desc_sz + 997 997 CE_DESC_RING_ALIGN, 998 - pipe->dest_ring->base_addr_owner_space, 999 - pipe->dest_ring->base_addr_ce_space); 998 + pipe->dest_ring->base_addr_owner_space_unaligned, 999 + pipe->dest_ring->base_addr_ce_space_unaligned); 1000 1000 kfree(pipe->dest_ring); 1001 1001 pipe->dest_ring = NULL; 1002 1002 } ··· 1007 1007 dma_free_coherent(ab->dev, 1008 1008 pipe->status_ring->nentries * desc_sz + 1009 1009 CE_DESC_RING_ALIGN, 1010 - pipe->status_ring->base_addr_owner_space, 1011 - pipe->status_ring->base_addr_ce_space); 1010 + pipe->status_ring->base_addr_owner_space_unaligned, 1011 + pipe->status_ring->base_addr_ce_space_unaligned); 1012 1012 kfree(pipe->status_ring); 1013 1013 pipe->status_ring = NULL; 1014 1014 }
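In both the ath10k and ath12k hunks, the ring base handed to the hardware is an aligned derivative of what dma_alloc_coherent() returned; dma_free_coherent() must receive exactly the CPU address and DMA handle the allocation produced, so freeing has to use the *_unaligned originals. The alloc/align/free shape, sketched (struct demo_ring is hypothetical):

#include <linux/dma-mapping.h>
#include <linux/align.h>

struct demo_ring {
        void *base_unaligned;           /* exactly as allocated */
        dma_addr_t paddr_unaligned;
        void *base;                     /* aligned view given to hardware */
        dma_addr_t paddr;
};

static int demo_ring_alloc(struct device *dev, struct demo_ring *r,
                           size_t sz, size_t align)
{
        r->base_unaligned = dma_alloc_coherent(dev, sz + align,
                                               &r->paddr_unaligned, GFP_KERNEL);
        if (!r->base_unaligned)
                return -ENOMEM;

        r->base = PTR_ALIGN(r->base_unaligned, align);
        r->paddr = r->paddr_unaligned + (r->base - r->base_unaligned);
        return 0;
}

static void demo_ring_free(struct device *dev, struct demo_ring *r,
                           size_t sz, size_t align)
{
        /* Pass the unaligned originals, never the aligned copies. */
        dma_free_coherent(dev, sz + align, r->base_unaligned,
                          r->paddr_unaligned);
}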
+10 -6
drivers/net/wireless/ath/ath12k/mac.c
··· 5495 5495 5496 5496 for_each_set_bit(link_id, &links_map, ATH12K_NUM_MAX_LINKS) { 5497 5497 arvif = wiphy_dereference(hw->wiphy, ahvif->link[link_id]); 5498 - if (!arvif || arvif->is_started) 5498 + if (!arvif || !arvif->is_created || 5499 + arvif->ar->scan.arvif != arvif) 5499 5500 continue; 5500 5501 5501 5502 ar = arvif->ar; ··· 9173 9172 return; 9174 9173 } 9175 9174 } else { 9176 - link_id = 0; 9175 + if (vif->type == NL80211_IFTYPE_P2P_DEVICE) 9176 + link_id = ATH12K_FIRST_SCAN_LINK; 9177 + else 9178 + link_id = 0; 9177 9179 } 9178 9180 9179 9181 arvif = rcu_dereference(ahvif->link[link_id]); ··· 12146 12142 if (drop) 12147 12143 return; 12148 12144 12145 + for_each_ar(ah, ar, i) 12146 + wiphy_work_flush(hw->wiphy, &ar->wmi_mgmt_tx_work); 12147 + 12149 12148 /* vif can be NULL when flush() is considered for hw */ 12150 12149 if (!vif) { 12151 12150 for_each_ar(ah, ar, i) 12152 12151 ath12k_mac_flush(ar); 12153 12152 return; 12154 12153 } 12155 - 12156 - for_each_ar(ah, ar, i) 12157 - wiphy_work_flush(hw->wiphy, &ar->wmi_mgmt_tx_work); 12158 12154 12159 12155 ahvif = ath12k_vif_to_ahvif(vif); 12160 12156 links = ahvif->links_map; ··· 13347 13343 ath12k_scan_abort(ar); 13348 13344 13349 13345 cancel_delayed_work_sync(&ar->scan.timeout); 13350 - wiphy_work_cancel(hw->wiphy, &ar->scan.vdev_clean_wk); 13346 + wiphy_work_flush(hw->wiphy, &ar->scan.vdev_clean_wk); 13351 13347 13352 13348 return 0; 13353 13349 }
+1 -8
drivers/net/wireless/ath/ath12k/wmi.c
··· 6575 6575 if (!sband) 6576 6576 continue; 6577 6577 6578 - for (ch = 0; ch < sband->n_channels; ch++, idx++) { 6579 - if (sband->channels[ch].center_freq < 6580 - KHZ_TO_MHZ(ar->freq_range.start_freq) || 6581 - sband->channels[ch].center_freq > 6582 - KHZ_TO_MHZ(ar->freq_range.end_freq)) 6583 - continue; 6584 - 6578 + for (ch = 0; ch < sband->n_channels; ch++, idx++) 6585 6579 if (sband->channels[ch].center_freq == freq) 6586 6580 goto exit; 6587 - } 6588 6581 } 6589 6582 6590 6583 exit:
+3 -3
drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c
··· 825 825 static void mwifiex_update_ampdu_rxwinsize(struct mwifiex_adapter *adapter, 826 826 bool coex_flag) 827 827 { 828 - u8 i; 828 + u8 i, j; 829 829 u32 rx_win_size; 830 830 struct mwifiex_private *priv; 831 831 ··· 863 863 if (rx_win_size != priv->add_ba_param.rx_win_size) { 864 864 if (!priv->media_connected) 865 865 continue; 866 - for (i = 0; i < MAX_NUM_TID; i++) 867 - mwifiex_11n_delba(priv, i); 866 + for (j = 0; j < MAX_NUM_TID; j++) 867 + mwifiex_11n_delba(priv, j); 868 868 } 869 869 } 870 870 }
+1
drivers/net/wireless/rsi/rsi_91x_mac80211.c
··· 2035 2035 2036 2036 hw->queues = MAX_HW_QUEUES; 2037 2037 hw->extra_tx_headroom = RSI_NEEDED_HEADROOM; 2038 + hw->vif_data_size = sizeof(struct vif_priv); 2038 2039 2039 2040 hw->max_rates = 1; 2040 2041 hw->max_rate_tries = MAX_RETRIES;
+8 -9
drivers/net/wwan/mhi_wwan_mbim.c
··· 78 78 79 79 struct mbim_tx_hdr { 80 80 struct usb_cdc_ncm_nth16 nth16; 81 - 82 - /* Must be last as it ends in a flexible-array member. */ 83 81 struct usb_cdc_ncm_ndp16 ndp16; 82 + struct usb_cdc_ncm_dpe16 dpe16[2]; 84 83 } __packed; 85 84 86 85 static struct mhi_mbim_link *mhi_mbim_get_link_rcu(struct mhi_mbim_context *mbim, ··· 107 108 static struct sk_buff *mbim_tx_fixup(struct sk_buff *skb, unsigned int session, 108 109 u16 tx_seq) 109 110 { 110 - DEFINE_RAW_FLEX(struct mbim_tx_hdr, mbim_hdr, ndp16.dpe16, 2); 111 111 unsigned int dgram_size = skb->len; 112 112 struct usb_cdc_ncm_nth16 *nth16; 113 113 struct usb_cdc_ncm_ndp16 *ndp16; 114 + struct mbim_tx_hdr *mbim_hdr; 114 115 115 116 /* Only one NDP is sent, containing the IP packet (no aggregation) */ 116 117 117 118 /* Ensure we have enough headroom for crafting MBIM header */ 118 - if (skb_cow_head(skb, __struct_size(mbim_hdr))) { 119 + if (skb_cow_head(skb, sizeof(struct mbim_tx_hdr))) { 119 120 dev_kfree_skb_any(skb); 120 121 return NULL; 121 122 } 122 123 123 - mbim_hdr = skb_push(skb, __struct_size(mbim_hdr)); 124 + mbim_hdr = skb_push(skb, sizeof(struct mbim_tx_hdr)); 124 125 125 126 /* Fill NTB header */ 126 127 nth16 = &mbim_hdr->nth16; ··· 133 134 /* Fill the unique NDP */ 134 135 ndp16 = &mbim_hdr->ndp16; 135 136 ndp16->dwSignature = cpu_to_le32(USB_CDC_MBIM_NDP16_IPS_SIGN | (session << 24)); 136 - ndp16->wLength = cpu_to_le16(struct_size(ndp16, dpe16, 2)); 137 + ndp16->wLength = cpu_to_le16(sizeof(struct usb_cdc_ncm_ndp16) 138 + + sizeof(struct usb_cdc_ncm_dpe16) * 2); 137 139 ndp16->wNextNdpIndex = 0; 138 140 139 141 /* Datagram follows the mbim header */ 140 - ndp16->dpe16[0].wDatagramIndex = cpu_to_le16(__struct_size(mbim_hdr)); 142 + ndp16->dpe16[0].wDatagramIndex = cpu_to_le16(sizeof(struct mbim_tx_hdr)); 141 143 ndp16->dpe16[0].wDatagramLength = cpu_to_le16(dgram_size); 142 144 143 145 /* null termination */ ··· 584 584 { 585 585 ndev->header_ops = NULL; /* No header */ 586 586 ndev->type = ARPHRD_RAWIP; 587 - ndev->needed_headroom = 588 - struct_size_t(struct mbim_tx_hdr, ndp16.dpe16, 2); 587 + ndev->needed_headroom = sizeof(struct mbim_tx_hdr); 589 588 ndev->hard_header_len = 0; 590 589 ndev->addr_len = 0; 591 590 ndev->flags = IFF_POINTOPOINT | IFF_NOARP;
-4
drivers/nfc/virtual_ncidev.c
··· 125 125 kfree_skb(skb); 126 126 return -EFAULT; 127 127 } 128 - if (strnlen(skb->data, count) != count) { 129 - kfree_skb(skb); 130 - return -EINVAL; 131 - } 132 128 133 129 nci_recv_frame(vdev->ndev, skb); 134 130 return count;
+1
drivers/ntb/ntb_transport.c
··· 1394 1394 goto err2; 1395 1395 } 1396 1396 1397 + mutex_init(&nt->link_event_lock); 1397 1398 INIT_DELAYED_WORK(&nt->link_work, ntb_transport_link_work); 1398 1399 INIT_WORK(&nt->link_cleanup, ntb_transport_link_cleanup_work); 1399 1400
+6 -2
drivers/of/base.c
··· 1942 1942 end--; 1943 1943 len = end - start; 1944 1944 1945 - if (kstrtoint(end, 10, &id) < 0) 1945 + if (kstrtoint(end, 10, &id) < 0) { 1946 + of_node_put(np); 1946 1947 continue; 1948 + } 1947 1949 1948 1950 /* Allocate an alias_prop with enough space for the stem */ 1949 1951 ap = dt_alloc(sizeof(*ap) + len + 1, __alignof__(*ap)); 1950 - if (!ap) 1952 + if (!ap) { 1953 + of_node_put(np); 1951 1954 continue; 1955 + } 1952 1956 memset(ap, 0, sizeof(*ap) + len + 1); 1953 1957 ap->alias = start; 1954 1958 of_alias_add(ap, np, id, start, len);
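Same refcounting rule as the intel-xway fix earlier in this batch, in early-exit form: the of/base.c change adds of_node_put() before each early `continue`, because the node reference taken earlier in the loop iteration would otherwise leak on every malformed alias. A sketch of the rule, with hypothetical demo names:

#include <linux/of.h>

/* A reference acquired inside the iteration must be dropped on every
 * early 'continue', not only on the normal path.
 */
static void demo_scan(const char * const *paths, int n)
{
        struct device_node *np;
        int i;

        for (i = 0; i < n; i++) {
                np = of_find_node_by_path(paths[i]);
                if (!np)
                        continue;       /* nothing was acquired */

                if (!of_device_is_available(np)) {
                        of_node_put(np);        /* acquired: must put */
                        continue;
                }

                /* ... use np ... */
                of_node_put(np);
        }
}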
+1 -1
drivers/of/platform.c
··· 569 569 570 570 node = of_find_node_by_path("/firmware"); 571 571 if (node) { 572 - of_platform_populate(node, NULL, NULL, NULL); 572 + of_platform_default_populate(node, NULL, NULL); 573 573 of_node_put(node); 574 574 } 575 575
+1 -17
drivers/pci/rebar.c
··· 295 295 int exclude_bars) 296 296 { 297 297 struct pci_host_bridge *host; 298 - int old, ret; 299 298 300 299 /* Check if we must preserve the firmware's resource assignment */ 301 300 host = pci_find_host_bridge(dev->bus); ··· 307 308 if (!pci_rebar_size_supported(dev, resno, size)) 308 309 return -EINVAL; 309 310 310 - old = pci_rebar_get_current_size(dev, resno); 311 - if (old < 0) 312 - return old; 313 - 314 - ret = pci_rebar_set_size(dev, resno, size); 315 - if (ret) 316 - return ret; 317 - 318 - ret = pci_do_resource_release_and_resize(dev, resno, size, exclude_bars); 319 - if (ret) 320 - goto error_resize; 321 - return 0; 322 - 323 - error_resize: 324 - pci_rebar_set_size(dev, resno, old); 325 - return ret; 311 + return pci_do_resource_release_and_resize(dev, resno, size, exclude_bars); 326 312 } 327 313 EXPORT_SYMBOL(pci_resize_resource);
+19 -4
drivers/pci/setup-bus.c
··· 2504 2504 struct resource *b_win, *r; 2505 2505 LIST_HEAD(saved); 2506 2506 unsigned int i; 2507 - int ret = 0; 2507 + int old, ret; 2508 2508 2509 2509 b_win = pbus_select_window(bus, res); 2510 2510 if (!b_win) 2511 2511 return -EINVAL; 2512 + 2513 + old = pci_rebar_get_current_size(pdev, resno); 2514 + if (old < 0) 2515 + return old; 2516 + 2517 + ret = pci_rebar_set_size(pdev, resno, size); 2518 + if (ret) 2519 + return ret; 2512 2520 2513 2521 pci_dev_for_each_resource(pdev, r, i) { 2514 2522 if (i >= PCI_BRIDGE_RESOURCES) ··· 2550 2542 return ret; 2551 2543 2552 2544 restore: 2553 - /* Revert to the old configuration */ 2545 + /* 2546 + * Revert to the old configuration. 2547 + * 2548 + * BAR Size must be restored first because it affects the read-only 2549 + * bits in BAR (the old address might not be restorable otherwise 2550 + * due to low address bits). 2551 + */ 2552 + pci_rebar_set_size(pdev, resno, old); 2553 + 2554 2554 list_for_each_entry(dev_res, &saved, list) { 2555 2555 struct resource *res = dev_res->res; 2556 2556 struct pci_dev *dev = dev_res->dev; ··· 2572 2556 2573 2557 restore_dev_resource(dev_res); 2574 2558 2575 - ret = pci_claim_resource(dev, i); 2576 - if (ret) 2559 + if (pci_claim_resource(dev, i)) 2577 2560 continue; 2578 2561 2579 2562 if (i < PCI_BRIDGE_RESOURCES) {
+1 -1
drivers/platform/mellanox/mlx-platform.c
··· 7381 7381 mlxplat_hotplug = &mlxplat_mlxcpld_ng800_hi171_data; 7382 7382 mlxplat_hotplug->deferred_nr = 7383 7383 mlxplat_msn21xx_channels[MLXPLAT_CPLD_GRP_CHNL_NUM - 1]; 7384 - mlxplat_led = &mlxplat_default_ng_led_data; 7384 + mlxplat_led = &mlxplat_xdr_led_data; 7385 7385 mlxplat_regs_io = &mlxplat_default_ng_regs_io_data; 7386 7386 mlxplat_fan = &mlxplat_xdr_fan_data; 7387 7387
+10 -3
drivers/platform/x86/acer-wmi.c
··· 455 455 .mailled = 1, 456 456 }; 457 457 458 + static struct quirk_entry quirk_acer_nitro_an515_58 = { 459 + .predator_v4 = 1, 460 + .pwm = 1, 461 + }; 462 + 458 463 static struct quirk_entry quirk_acer_predator_ph315_53 = { 459 464 .turbo = 1, 460 465 .cpu_fans = 1, ··· 660 655 DMI_MATCH(DMI_SYS_VENDOR, "Acer"), 661 656 DMI_MATCH(DMI_PRODUCT_NAME, "Nitro AN515-58"), 662 657 }, 663 - .driver_data = &quirk_acer_predator_v4, 658 + .driver_data = &quirk_acer_nitro_an515_58, 664 659 }, 665 660 { 666 661 .callback = dmi_matched, ··· 2070 2065 WMID_gaming_set_u64(0x1, ACER_CAP_TURBO_LED); 2071 2066 2072 2067 /* Set FAN mode to auto */ 2073 - WMID_gaming_set_fan_mode(ACER_WMID_FAN_MODE_AUTO); 2068 + if (has_cap(ACER_CAP_TURBO_FAN)) 2069 + WMID_gaming_set_fan_mode(ACER_WMID_FAN_MODE_AUTO); 2074 2070 2075 2071 /* Set OC to normal */ 2076 2072 if (has_cap(ACER_CAP_TURBO_OC)) { ··· 2085 2079 WMID_gaming_set_u64(0x10001, ACER_CAP_TURBO_LED); 2086 2080 2087 2081 /* Set FAN mode to turbo */ 2088 - WMID_gaming_set_fan_mode(ACER_WMID_FAN_MODE_TURBO); 2082 + if (has_cap(ACER_CAP_TURBO_FAN)) 2083 + WMID_gaming_set_fan_mode(ACER_WMID_FAN_MODE_TURBO); 2089 2084 2090 2085 /* Set OC to turbo mode */ 2091 2086 if (has_cap(ACER_CAP_TURBO_OC)) {
+3 -1
drivers/platform/x86/amd/wbrf.c
··· 104 104 obj = acpi_evaluate_dsm(adev->handle, &wifi_acpi_dsm_guid, 105 105 WBRF_REVISION, WBRF_RECORD, &argv4); 106 106 107 - if (!obj) 107 + if (!obj) { 108 + kfree(tmp); 108 109 return -EINVAL; 110 + } 109 111 110 112 if (obj->type != ACPI_TYPE_INTEGER) { 111 113 ret = -EINVAL;
+221 -3
drivers/platform/x86/asus-armoury.h
··· 348 348 static const struct dmi_system_id power_limits[] = { 349 349 { 350 350 .matches = { 351 + DMI_MATCH(DMI_BOARD_NAME, "FA401UV"), 352 + }, 353 + .driver_data = &(struct power_data) { 354 + .ac_data = &(struct power_limits) { 355 + .ppt_pl1_spl_min = 15, 356 + .ppt_pl1_spl_max = 80, 357 + .ppt_pl2_sppt_min = 35, 358 + .ppt_pl2_sppt_max = 80, 359 + .ppt_pl3_fppt_min = 35, 360 + .ppt_pl3_fppt_max = 80, 361 + .nv_dynamic_boost_min = 5, 362 + .nv_dynamic_boost_max = 25, 363 + .nv_temp_target_min = 75, 364 + .nv_temp_target_max = 87, 365 + .nv_tgp_min = 55, 366 + .nv_tgp_max = 75, 367 + }, 368 + .dc_data = &(struct power_limits) { 369 + .ppt_pl1_spl_min = 25, 370 + .ppt_pl1_spl_max = 35, 371 + .ppt_pl2_sppt_min = 31, 372 + .ppt_pl2_sppt_max = 44, 373 + .ppt_pl3_fppt_min = 45, 374 + .ppt_pl3_fppt_max = 65, 375 + .nv_temp_target_min = 75, 376 + .nv_temp_target_max = 87, 377 + }, 378 + }, 379 + }, 380 + { 381 + .matches = { 351 382 DMI_MATCH(DMI_BOARD_NAME, "FA401W"), 352 383 }, 353 384 .driver_data = &(struct power_data) { ··· 611 580 .ppt_pl2_sppt_def = 54, 612 581 .ppt_pl2_sppt_max = 90, 613 582 .ppt_pl3_fppt_min = 35, 614 - .ppt_pl3_fppt_def = 90, 615 - .ppt_pl3_fppt_max = 65, 583 + .ppt_pl3_fppt_def = 65, 584 + .ppt_pl3_fppt_max = 90, 616 585 .nv_dynamic_boost_min = 10, 617 586 .nv_dynamic_boost_max = 15, 618 587 .nv_temp_target_min = 75, ··· 729 698 .ppt_platform_sppt_max = 100, 730 699 .nv_temp_target_min = 75, 731 700 .nv_temp_target_max = 87, 701 + }, 702 + }, 703 + }, 704 + { 705 + .matches = { 706 + DMI_MATCH(DMI_BOARD_NAME, "FA617XT"), 707 + }, 708 + .driver_data = &(struct power_data) { 709 + .ac_data = &(struct power_limits) { 710 + .ppt_apu_sppt_min = 15, 711 + .ppt_apu_sppt_max = 80, 712 + .ppt_platform_sppt_min = 30, 713 + .ppt_platform_sppt_max = 145, 714 + }, 715 + .dc_data = &(struct power_limits) { 716 + .ppt_apu_sppt_min = 25, 717 + .ppt_apu_sppt_max = 35, 718 + .ppt_platform_sppt_min = 45, 719 + .ppt_platform_sppt_max = 100, 732 720 }, 733 721 }, 734 722 }, ··· 893 843 }, 894 844 { 895 845 .matches = { 896 - DMI_MATCH(DMI_BOARD_NAME, "GA403U"), 846 + DMI_MATCH(DMI_BOARD_NAME, "GA403UI"), 897 847 }, 898 848 .driver_data = &(struct power_data) { 899 849 .ac_data = &(struct power_limits) { ··· 925 875 }, 926 876 { 927 877 .matches = { 878 + DMI_MATCH(DMI_BOARD_NAME, "GA403UV"), 879 + }, 880 + .driver_data = &(struct power_data) { 881 + .ac_data = &(struct power_limits) { 882 + .ppt_pl1_spl_min = 15, 883 + .ppt_pl1_spl_max = 80, 884 + .ppt_pl2_sppt_min = 25, 885 + .ppt_pl2_sppt_max = 80, 886 + .ppt_pl3_fppt_min = 35, 887 + .ppt_pl3_fppt_max = 80, 888 + .nv_dynamic_boost_min = 5, 889 + .nv_dynamic_boost_max = 25, 890 + .nv_temp_target_min = 75, 891 + .nv_temp_target_max = 87, 892 + .nv_tgp_min = 55, 893 + .nv_tgp_max = 65, 894 + }, 895 + .dc_data = &(struct power_limits) { 896 + .ppt_pl1_spl_min = 15, 897 + .ppt_pl1_spl_max = 35, 898 + .ppt_pl2_sppt_min = 25, 899 + .ppt_pl2_sppt_max = 35, 900 + .ppt_pl3_fppt_min = 35, 901 + .ppt_pl3_fppt_max = 65, 902 + .nv_temp_target_min = 75, 903 + .nv_temp_target_max = 87, 904 + }, 905 + .requires_fan_curve = true, 906 + }, 907 + }, 908 + { 909 + .matches = { 910 + DMI_MATCH(DMI_BOARD_NAME, "GA403WM"), 911 + }, 912 + .driver_data = &(struct power_data) { 913 + .ac_data = &(struct power_limits) { 914 + .ppt_pl1_spl_min = 15, 915 + .ppt_pl1_spl_max = 80, 916 + .ppt_pl2_sppt_min = 25, 917 + .ppt_pl2_sppt_max = 80, 918 + .ppt_pl3_fppt_min = 35, 919 + .ppt_pl3_fppt_max = 80, 920 + .nv_dynamic_boost_min = 0, 921 + .nv_dynamic_boost_max = 
15, 922 + .nv_temp_target_min = 75, 923 + .nv_temp_target_max = 87, 924 + .nv_tgp_min = 55, 925 + .nv_tgp_max = 85, 926 + }, 927 + .dc_data = &(struct power_limits) { 928 + .ppt_pl1_spl_min = 15, 929 + .ppt_pl1_spl_max = 35, 930 + .ppt_pl2_sppt_min = 25, 931 + .ppt_pl2_sppt_max = 35, 932 + .ppt_pl3_fppt_min = 35, 933 + .ppt_pl3_fppt_max = 65, 934 + .nv_temp_target_min = 75, 935 + .nv_temp_target_max = 87, 936 + }, 937 + .requires_fan_curve = true, 938 + }, 939 + }, 940 + { 941 + .matches = { 928 942 DMI_MATCH(DMI_BOARD_NAME, "GA403WR"), 943 + }, 944 + .driver_data = &(struct power_data) { 945 + .ac_data = &(struct power_limits) { 946 + .ppt_pl1_spl_min = 15, 947 + .ppt_pl1_spl_max = 80, 948 + .ppt_pl2_sppt_min = 25, 949 + .ppt_pl2_sppt_max = 80, 950 + .ppt_pl3_fppt_min = 35, 951 + .ppt_pl3_fppt_max = 80, 952 + .nv_dynamic_boost_min = 0, 953 + .nv_dynamic_boost_max = 25, 954 + .nv_temp_target_min = 75, 955 + .nv_temp_target_max = 87, 956 + .nv_tgp_min = 80, 957 + .nv_tgp_max = 95, 958 + }, 959 + .dc_data = &(struct power_limits) { 960 + .ppt_pl1_spl_min = 15, 961 + .ppt_pl1_spl_max = 35, 962 + .ppt_pl2_sppt_min = 25, 963 + .ppt_pl2_sppt_max = 35, 964 + .ppt_pl3_fppt_min = 35, 965 + .ppt_pl3_fppt_max = 65, 966 + .nv_temp_target_min = 75, 967 + .nv_temp_target_max = 87, 968 + }, 969 + .requires_fan_curve = true, 970 + }, 971 + }, 972 + { 973 + .matches = { 974 + DMI_MATCH(DMI_BOARD_NAME, "GA403WW"), 929 975 }, 930 976 .driver_data = &(struct power_data) { 931 977 .ac_data = &(struct power_limits) { ··· 1335 1189 }, 1336 1190 { 1337 1191 .matches = { 1192 + DMI_MATCH(DMI_BOARD_NAME, "GV302XV"), 1193 + }, 1194 + .driver_data = &(struct power_data) { 1195 + .ac_data = &(struct power_limits) { 1196 + .ppt_pl1_spl_min = 15, 1197 + .ppt_pl1_spl_max = 55, 1198 + .ppt_pl2_sppt_min = 25, 1199 + .ppt_pl2_sppt_max = 60, 1200 + .ppt_pl3_fppt_min = 35, 1201 + .ppt_pl3_fppt_max = 65, 1202 + .nv_temp_target_min = 75, 1203 + .nv_temp_target_max = 87, 1204 + }, 1205 + .dc_data = &(struct power_limits) { 1206 + .ppt_pl1_spl_min = 15, 1207 + .ppt_pl1_spl_max = 35, 1208 + .ppt_pl2_sppt_min = 25, 1209 + .ppt_pl2_sppt_max = 35, 1210 + .ppt_pl3_fppt_min = 35, 1211 + .ppt_pl3_fppt_max = 65, 1212 + .nv_temp_target_min = 75, 1213 + .nv_temp_target_max = 87, 1214 + }, 1215 + }, 1216 + }, 1217 + { 1218 + .matches = { 1338 1219 DMI_MATCH(DMI_BOARD_NAME, "GV601R"), 1339 1220 }, 1340 1221 .driver_data = &(struct power_data) { ··· 1484 1311 .ppt_pl1_spl_max = 100, 1485 1312 .ppt_pl2_sppt_min = 15, 1486 1313 .ppt_pl2_sppt_max = 190, 1314 + }, 1315 + .dc_data = NULL, 1316 + .requires_fan_curve = true, 1317 + }, 1318 + }, 1319 + { 1320 + .matches = { 1321 + DMI_MATCH(DMI_BOARD_NAME, "G513QY"), 1322 + }, 1323 + .driver_data = &(struct power_data) { 1324 + .ac_data = &(struct power_limits) { 1325 + /* Advantage Edition Laptop, no PL1 or PL2 limits */ 1326 + .ppt_apu_sppt_min = 15, 1327 + .ppt_apu_sppt_max = 100, 1328 + .ppt_platform_sppt_min = 70, 1329 + .ppt_platform_sppt_max = 190, 1487 1330 }, 1488 1331 .dc_data = NULL, 1489 1332 .requires_fan_curve = true, ··· 1744 1555 .nv_dynamic_boost_max = 25, 1745 1556 .nv_temp_target_min = 75, 1746 1557 .nv_temp_target_max = 87, 1558 + }, 1559 + .dc_data = &(struct power_limits) { 1560 + .ppt_pl1_spl_min = 25, 1561 + .ppt_pl1_spl_max = 55, 1562 + .ppt_pl2_sppt_min = 25, 1563 + .ppt_pl2_sppt_max = 70, 1564 + .nv_temp_target_min = 75, 1565 + .nv_temp_target_max = 87, 1566 + }, 1567 + .requires_fan_curve = true, 1568 + }, 1569 + }, 1570 + { 1571 + .matches = { 1572 + 
DMI_MATCH(DMI_BOARD_NAME, "G835LR"), 1573 + }, 1574 + .driver_data = &(struct power_data) { 1575 + .ac_data = &(struct power_limits) { 1576 + .ppt_pl1_spl_min = 28, 1577 + .ppt_pl1_spl_def = 140, 1578 + .ppt_pl1_spl_max = 175, 1579 + .ppt_pl2_sppt_min = 28, 1580 + .ppt_pl2_sppt_max = 175, 1581 + .nv_dynamic_boost_min = 5, 1582 + .nv_dynamic_boost_max = 25, 1583 + .nv_temp_target_min = 75, 1584 + .nv_temp_target_max = 87, 1585 + .nv_tgp_min = 65, 1586 + .nv_tgp_max = 115, 1747 1587 }, 1748 1588 .dc_data = &(struct power_limits) { 1749 1589 .ppt_pl1_spl_min = 25,
+2 -1
drivers/platform/x86/asus-wmi.c
··· 4889 4889 asus->egpu_enable_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_EGPU); 4890 4890 asus->dgpu_disable_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_DGPU); 4891 4891 asus->kbd_rgb_state_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_TUF_RGB_STATE); 4892 - asus->oobe_state_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_OOBE); 4893 4892 4894 4893 if (asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_MINI_LED_MODE)) 4895 4894 asus->mini_led_dev_id = ASUS_WMI_DEVID_MINI_LED_MODE; ··· 4900 4901 else if (asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_GPU_MUX_VIVO)) 4901 4902 asus->gpu_mux_dev = ASUS_WMI_DEVID_GPU_MUX_VIVO; 4902 4903 #endif /* IS_ENABLED(CONFIG_ASUS_WMI_DEPRECATED_ATTRS) */ 4904 + 4905 + asus->oobe_state_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_OOBE); 4903 4906 4904 4907 if (asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_THROTTLE_THERMAL_POLICY)) 4905 4908 asus->throttle_thermal_policy_dev = ASUS_WMI_DEVID_THROTTLE_THERMAL_POLICY;
+8
drivers/platform/x86/hp/hp-bioscfg/bioscfg.c
··· 10 10 #include <linux/fs.h> 11 11 #include <linux/module.h> 12 12 #include <linux/kernel.h> 13 + #include <linux/printk.h> 14 + #include <linux/string.h> 13 15 #include <linux/wmi.h> 14 16 #include "bioscfg.h" 15 17 #include "../../firmware_attributes_class.h" ··· 782 780 783 781 if (ret < 0) 784 782 goto buff_attr_exit; 783 + 784 + if (strlen(str) == 0) { 785 + pr_debug("Ignoring attribute with empty name\n"); 786 + ret = 0; 787 + goto buff_attr_exit; 788 + } 785 789 786 790 if (attr_type == HPWMI_PASSWORD_TYPE || 787 791 attr_type == HPWMI_SECURE_PLATFORM_TYPE)
+7 -5
drivers/platform/x86/hp/hp-bioscfg/bioscfg.h
··· 10 10 11 11 #include <linux/wmi.h> 12 12 #include <linux/types.h> 13 + #include <linux/string.h> 13 14 #include <linux/device.h> 14 15 #include <linux/module.h> 15 16 #include <linux/kernel.h> ··· 57 56 58 57 #define PASSWD_MECHANISM_TYPES "password" 59 58 60 - #define HP_WMI_BIOS_GUID "5FB7F034-2C63-45e9-BE91-3D44E2C707E4" 59 + #define HP_WMI_BIOS_GUID "5FB7F034-2C63-45E9-BE91-3D44E2C707E4" 61 60 62 - #define HP_WMI_BIOS_STRING_GUID "988D08E3-68F4-4c35-AF3E-6A1B8106F83C" 61 + #define HP_WMI_BIOS_STRING_GUID "988D08E3-68F4-4C35-AF3E-6A1B8106F83C" 63 62 #define HP_WMI_BIOS_INTEGER_GUID "8232DE3D-663D-4327-A8F4-E293ADB9BF05" 64 63 #define HP_WMI_BIOS_ENUMERATION_GUID "2D114B49-2DFB-4130-B8FE-4A3C09E75133" 65 64 #define HP_WMI_BIOS_ORDERED_LIST_GUID "14EA9746-CE1F-4098-A0E0-7045CB4DA745" 66 65 #define HP_WMI_BIOS_PASSWORD_GUID "322F2028-0F84-4901-988E-015176049E2D" 67 - #define HP_WMI_SET_BIOS_SETTING_GUID "1F4C91EB-DC5C-460b-951D-C7CB9B4B8D5E" 66 + #define HP_WMI_SET_BIOS_SETTING_GUID "1F4C91EB-DC5C-460B-951D-C7CB9B4B8D5E" 68 67 69 68 enum hp_wmi_spm_commandtype { 70 69 HPWMI_SECUREPLATFORM_GET_STATE = 0x10, ··· 286 285 { \ 287 286 int i; \ 288 287 \ 289 - for (i = 0; i <= bioscfg_drv.type##_instances_count; i++) { \ 290 - if (!strcmp(kobj->name, bioscfg_drv.type##_data[i].attr_name_kobj->name)) \ 288 + for (i = 0; i < bioscfg_drv.type##_instances_count; i++) { \ 289 + if (bioscfg_drv.type##_data[i].attr_name_kobj && \ 290 + !strcmp(kobj->name, bioscfg_drv.type##_data[i].attr_name_kobj->name)) \ 291 291 return i; \ 292 292 } \ 293 293 return -EIO; \
+7 -4
drivers/pmdomain/imx/imx8m-blk-ctrl.c
··· 846 846 return NOTIFY_OK; 847 847 } 848 848 849 + /* 850 + * For i.MX8MQ, the ADB in the VPUMIX domain has no separate reset and clock 851 + * enable bits, but is ungated and reset together with the VPUs. 852 + * Resetting G1 or G2 separately may lead to a system hang. 853 + * Remove the rst_mask and clk_mask from the domain data of G1 and G2, 854 + * and let imx8mq_vpu_power_notifier() do the real VPU reset. 855 + */ 849 856 static const struct imx8m_blk_ctrl_domain_data imx8mq_vpu_blk_ctl_domain_data[] = { 850 857 [IMX8MQ_VPUBLK_PD_G1] = { 851 858 .name = "vpublk-g1", 852 859 .clk_names = (const char *[]){ "g1", }, 853 860 .num_clks = 1, 854 861 .gpc_name = "g1", 855 - .rst_mask = BIT(1), 856 - .clk_mask = BIT(1), 857 862 }, 858 863 [IMX8MQ_VPUBLK_PD_G2] = { 859 864 .name = "vpublk-g2", 860 865 .clk_names = (const char *[]){ "g2", }, 861 866 .num_clks = 1, 862 867 .gpc_name = "g2", 863 - .rst_mask = BIT(0), 864 - .clk_mask = BIT(0), 865 868 }, 866 869 }; 867 870
+4
drivers/pmdomain/qcom/rpmhpd.c
··· 246 246 [SC8280XP_MMCX_AO] = &mmcx_ao, 247 247 [SC8280XP_MX] = &mx, 248 248 [SC8280XP_MX_AO] = &mx_ao, 249 + [SC8280XP_MXC] = &mxc, 250 + [SC8280XP_MXC_AO] = &mxc_ao, 249 251 [SC8280XP_NSP] = &nsp, 250 252 }; 251 253 ··· 702 700 [SC8280XP_MMCX_AO] = &mmcx_ao, 703 701 [SC8280XP_MX] = &mx, 704 702 [SC8280XP_MX_AO] = &mx_ao, 703 + [SC8280XP_MXC] = &mxc, 704 + [SC8280XP_MXC_AO] = &mxc_ao, 705 705 [SC8280XP_NSP] = &nsp, 706 706 [SC8280XP_QPHY] = &qphy, 707 707 };
+10
drivers/pmdomain/rockchip/pm-domains.c
··· 879 879 pd->genpd.name = pd->info->name;
 880 880 else
 881 881 pd->genpd.name = kbasename(node->full_name);
 882 + 
 883 + /*
 884 + * Power domains needing a regulator should default to off, since
 885 + * the regulator state is unknown at probe time. The regulator state
 886 + * also cannot be checked, since that usually requires an IP block
 887 + * which itself needs (a different) power domain.
 888 + */
 889 + if (pd->info->need_regulator)
 890 + rockchip_pd_power(pd, false);
 891 + 
 882 892 pd->genpd.power_off = rockchip_pd_power_off;
 883 893 pd->genpd.power_on = rockchip_pd_power_on;
 884 894 pd->genpd.attach_dev = rockchip_pd_attach_dev;
+6 -4
drivers/pwm/core.c
··· 2295 2295 .duty_offset_ns = wf.duty_offset_ns, 2296 2296 }; 2297 2297 2298 - return copy_to_user((struct pwmchip_waveform __user *)arg, 2299 - &cwf, sizeof(cwf)); 2298 + ret = copy_to_user((struct pwmchip_waveform __user *)arg, 2299 + &cwf, sizeof(cwf)); 2300 + return ret ? -EFAULT : 0; 2300 2301 } 2301 2302 2302 2303 case PWM_IOCTL_GETWF: ··· 2330 2329 .duty_offset_ns = wf.duty_offset_ns, 2331 2330 }; 2332 2331 2333 - return copy_to_user((struct pwmchip_waveform __user *)arg, 2334 - &cwf, sizeof(cwf)); 2332 + ret = copy_to_user((struct pwmchip_waveform __user *)arg, 2333 + &cwf, sizeof(cwf)); 2334 + return ret ? -EFAULT : 0; 2335 2335 } 2336 2336 2337 2337 case PWM_IOCTL_SETROUNDEDWF:
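
The fix above relies on copy_to_user()'s contract: it returns the number of bytes left uncopied, not an errno, so returning its result directly would leak a positive byte count to user space as the ioctl return value. A minimal sketch of the idiom, using hypothetical names:

    #include <linux/uaccess.h>

    /* Hypothetical reply type, for illustration only. */
    struct foo_reply { int value; };

    static long foo_ioctl_get(int value, void __user *argp)
    {
    	struct foo_reply reply = { .value = value };

    	/* copy_to_user() returns the number of bytes NOT copied... */
    	if (copy_to_user(argp, &reply, sizeof(reply)))
    		return -EFAULT;	/* ...so any nonzero result maps to -EFAULT */

    	return 0;
    }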
+1
drivers/pwm/pwm-max7360.c
··· 153 153 } 154 154 155 155 static const struct pwm_ops max7360_pwm_ops = { 156 + .sizeof_wfhw = sizeof(struct max7360_pwm_waveform), 156 157 .request = max7360_pwm_request, 157 158 .round_waveform_tohw = max7360_pwm_round_waveform_tohw, 158 159 .round_waveform_fromhw = max7360_pwm_round_waveform_fromhw,
+3
drivers/regulator/fp9931.c
··· 439 439 int i; 440 440 441 441 data = devm_kzalloc(&client->dev, sizeof(*data), GFP_KERNEL); 442 + if (!data) 443 + return -ENOMEM; 444 + 442 445 data->regmap = devm_regmap_init_i2c(client, &regmap_config); 443 446 if (IS_ERR(data->regmap)) 444 447 return dev_err_probe(&client->dev, PTR_ERR(data->regmap),
+1 -1
drivers/s390/crypto/ap_card.c
··· 43 43 { 44 44 struct ap_card *ac = to_ap_card(dev); 45 45 46 - return sysfs_emit(buf, "%d\n", ac->hwinfo.qd); 46 + return sysfs_emit(buf, "%d\n", ac->hwinfo.qd + 1); 47 47 } 48 48 49 49 static DEVICE_ATTR_RO(depth);
+1 -1
drivers/s390/crypto/ap_queue.c
··· 285 285 list_move_tail(&ap_msg->list, &aq->pendingq); 286 286 aq->requestq_count--; 287 287 aq->pendingq_count++; 288 - if (aq->queue_count < aq->card->hwinfo.qd) { 288 + if (aq->queue_count < aq->card->hwinfo.qd + 1) { 289 289 aq->sm_state = AP_SM_STATE_WORKING; 290 290 return AP_SM_WAIT_AGAIN; 291 291 }
+7
drivers/scsi/qla2xxx/qla_isr.c
··· 878 878 payload_size = sizeof(purex->els_frame_payload); 879 879 } 880 880 881 + if (total_bytes > sizeof(item->iocb.iocb)) 882 + total_bytes = sizeof(item->iocb.iocb); 883 + 881 884 pending_bytes = total_bytes; 882 885 no_bytes = (pending_bytes > payload_size) ? payload_size : 883 886 pending_bytes; ··· 1166 1163 1167 1164 total_bytes = (le16_to_cpu(purex->frame_size) & 0x0FFF) 1168 1165 - PURX_ELS_HEADER_SIZE; 1166 + 1167 + if (total_bytes > sizeof(item->iocb.iocb)) 1168 + total_bytes = sizeof(item->iocb.iocb); 1169 + 1169 1170 pending_bytes = total_bytes; 1170 1171 entry_count = entry_count_remaining = purex->entry_count; 1171 1172 no_bytes = (pending_bytes > sizeof(purex->els_frame_payload)) ?
+10 -1
drivers/scsi/scsi_error.c
··· 282 282 {
 283 283 struct scsi_cmnd *scmd = container_of(head, typeof(*scmd), rcu);
 284 284 struct Scsi_Host *shost = scmd->device->host;
 285 - unsigned int busy = scsi_host_busy(shost);
 285 + unsigned int busy;
 286 286 unsigned long flags;
 287 287 
 288 288 spin_lock_irqsave(shost->host_lock, flags);
 289 289 shost->host_failed++;
 290 + spin_unlock_irqrestore(shost->host_lock, flags);
 291 + /*
 292 + * Count busy requests only after host_failed has been incremented,
 293 + * or at least after taking the lock used for that increment, to
 294 + * avoid racing with host unbusy and missing an eh wakeup.
 295 + */
 296 + busy = scsi_host_busy(shost);
 297 + 
 298 + spin_lock_irqsave(shost->host_lock, flags);
 290 299 scsi_eh_wakeup(shost, busy);
 291 300 spin_unlock_irqrestore(shost->host_lock, flags);
 292 301 }
+8
drivers/scsi/scsi_lib.c
··· 376 376 rcu_read_lock(); 377 377 __clear_bit(SCMD_STATE_INFLIGHT, &cmd->state); 378 378 if (unlikely(scsi_host_in_recovery(shost))) { 379 + /* 380 + * Ensure the clear of SCMD_STATE_INFLIGHT is visible to 381 + * other CPUs before counting busy requests. Otherwise, 382 + * reordering can cause CPUs to race and miss an eh wakeup 383 + * when no CPU sees all busy requests as done or timed out. 384 + */ 385 + smp_mb(); 386 + 379 387 unsigned int busy = scsi_host_busy(shost); 380 388 381 389 spin_lock_irqsave(shost->host_lock, flags);
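
The barrier pairs the completion path above with the error-handler path in scsi_error.c; roughly, as a simplified sketch of the ordering rather than the exact call sites:

    /*
     * CPU0 (completion)                    CPU1 (error handler)
     * clear SCMD_STATE_INFLIGHT            host_failed++ (under host_lock)
     * smp_mb();                            busy = scsi_host_busy(shost);
     * busy = scsi_host_busy(shost);        wake EH when failed == busy
     *
     * Without the full barrier, CPU0's busy count could be ordered before
     * the INFLIGHT clear, so neither CPU observes all failed commands as
     * accounted for, and the eh wakeup is missed.
     */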
+2 -1
drivers/scsi/storvsc_drv.c
··· 1144 1144 * The current SCSI handling on the host side does
 1145 1145 * not correctly handle:
 1146 1146 * INQUIRY command with page code parameter set to 0x80
 1147 - * MODE_SENSE command with cmd[2] == 0x1c
 1147 + * MODE_SENSE and MODE_SENSE_10 commands with cmd[2] == 0x1c
 1148 1148 * MAINTENANCE_IN is not supported by HyperV FC passthrough
 1149 1149 *
 1150 1150 * Setup srb and scsi status so this won't be fatal.
 ··· 1154 1154 
 1155 1155 if ((stor_pkt->vm_srb.cdb[0] == INQUIRY) ||
 1156 1156 (stor_pkt->vm_srb.cdb[0] == MODE_SENSE) ||
 1157 + (stor_pkt->vm_srb.cdb[0] == MODE_SENSE_10) ||
 1157 1158 (stor_pkt->vm_srb.cdb[0] == MAINTENANCE_IN &&
 1158 1159 hv_dev_is_fc(device))) {
 1159 1160 vstor_packet->vm_srb.scsi_status = 0;
+29 -25
drivers/slimbus/core.c
··· 146 146 { 147 147 struct slim_device *sbdev = to_slim_device(dev); 148 148 149 + of_node_put(sbdev->dev.of_node); 149 150 kfree(sbdev); 150 151 } 151 152 ··· 281 280 /* slim_remove_device: Remove the effect of slim_add_device() */ 282 281 static void slim_remove_device(struct slim_device *sbdev) 283 282 { 284 - of_node_put(sbdev->dev.of_node); 285 283 device_unregister(&sbdev->dev); 286 284 } 287 285 ··· 366 366 * @ctrl: Controller on which this device will be added/queried 367 367 * @e_addr: Enumeration address of the device to be queried 368 368 * 369 + * Takes a reference to the embedded struct device which needs to be dropped 370 + * after use. 371 + * 369 372 * Return: pointer to a device if it has already reported. Creates a new 370 373 * device and returns pointer to it if the device has not yet enumerated. 371 374 */ ··· 382 379 sbdev = slim_alloc_device(ctrl, e_addr, NULL); 383 380 if (!sbdev) 384 381 return ERR_PTR(-ENOMEM); 382 + 383 + get_device(&sbdev->dev); 385 384 } 386 385 387 386 return sbdev; 388 387 } 389 388 EXPORT_SYMBOL_GPL(slim_get_device); 390 389 391 - static struct slim_device *of_find_slim_device(struct slim_controller *ctrl, 392 - struct device_node *np) 390 + /** 391 + * of_slim_get_device() - get handle to a device using dt node. 392 + * 393 + * @ctrl: Controller on which this device will be queried 394 + * @np: node pointer to device 395 + * 396 + * Takes a reference to the embedded struct device which needs to be dropped 397 + * after use. 398 + * 399 + * Return: pointer to a device if it has been registered, otherwise NULL. 400 + */ 401 + struct slim_device *of_slim_get_device(struct slim_controller *ctrl, 402 + struct device_node *np) 393 403 { 394 404 struct slim_device *sbdev; 395 405 struct device *dev; ··· 414 398 } 415 399 416 400 return NULL; 417 - } 418 - 419 - /** 420 - * of_slim_get_device() - get handle to a device using dt node. 421 - * 422 - * @ctrl: Controller on which this device will be added/queried 423 - * @np: node pointer to device 424 - * 425 - * Return: pointer to a device if it has already reported. Creates a new 426 - * device and returns pointer to it if the device has not yet enumerated. 427 - */ 428 - struct slim_device *of_slim_get_device(struct slim_controller *ctrl, 429 - struct device_node *np) 430 - { 431 - return of_find_slim_device(ctrl, np); 432 401 } 433 402 EXPORT_SYMBOL_GPL(of_slim_get_device); 434 403 ··· 490 489 if (ctrl->sched.clk_state != SLIM_CLK_ACTIVE) { 491 490 dev_err(ctrl->dev, "slim ctrl not active,state:%d, ret:%d\n", 492 491 ctrl->sched.clk_state, ret); 493 - goto slimbus_not_active; 492 + goto out_put_rpm; 494 493 } 495 494 496 495 sbdev = slim_get_device(ctrl, e_addr); 497 - if (IS_ERR(sbdev)) 498 - return -ENODEV; 496 + if (IS_ERR(sbdev)) { 497 + ret = -ENODEV; 498 + goto out_put_rpm; 499 + } 499 500 500 501 if (sbdev->is_laddr_valid) { 501 502 *laddr = sbdev->laddr; 502 - return 0; 503 + ret = 0; 504 + } else { 505 + ret = slim_device_alloc_laddr(sbdev, true); 503 506 } 504 507 505 - ret = slim_device_alloc_laddr(sbdev, true); 506 - 507 - slimbus_not_active: 508 + put_device(&sbdev->dev); 509 + out_put_rpm: 508 510 pm_runtime_mark_last_busy(ctrl->dev); 509 511 pm_runtime_put_autosuspend(ctrl->dev); 510 512 return ret;
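
With slim_get_device() and of_slim_get_device() now taking a reference on the embedded struct device, callers are expected to drop it when done, as the updated kernel-doc states. A minimal usage sketch:

    struct slim_device *sbdev;

    sbdev = of_slim_get_device(ctrl, np);
    if (sbdev) {
    	/* ... use sbdev ... */
    	put_device(&sbdev->dev);	/* drop the reference taken by the lookup */
    }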
+1
drivers/soc/renesas/Kconfig
··· 445 445 depends on RISCV_SBI 446 446 select ARCH_RZG2L 447 447 select AX45MP_L2_CACHE 448 + select CACHEMAINT_FOR_DMA 448 449 select DMA_GLOBAL_POOL 449 450 select ERRATA_ANDES 450 451 select ERRATA_ANDES_CMO
+128 -6
drivers/spi/spi-aspeed-smc.c
··· 48 48 /* CEx Address Decoding Range Register */ 49 49 #define CE0_SEGMENT_ADDR_REG 0x30 50 50 51 + #define FULL_DUPLEX_RX_DATA 0x1e4 52 + 51 53 /* CEx Read timing compensation register */ 52 54 #define CE0_TIMING_COMPENSATION_REG 0x94 53 55 ··· 83 81 u32 hclk_mask; 84 82 u32 hdiv_max; 85 83 u32 min_window_size; 84 + bool full_duplex; 86 85 87 86 phys_addr_t (*segment_start)(struct aspeed_spi *aspi, u32 reg); 88 87 phys_addr_t (*segment_end)(struct aspeed_spi *aspi, u32 reg); ··· 108 105 109 106 struct clk *clk; 110 107 u32 clk_freq; 108 + u8 cs_change; 111 109 112 110 struct aspeed_spi_chip chips[ASPEED_SPI_MAX_NUM_CS]; 113 111 }; ··· 284 280 } 285 281 286 282 /* support for 1-1-1, 1-1-2 or 1-1-4 */ 287 - static bool aspeed_spi_supports_op(struct spi_mem *mem, const struct spi_mem_op *op) 283 + static bool aspeed_spi_supports_mem_op(struct spi_mem *mem, 284 + const struct spi_mem_op *op) 288 285 { 289 286 if (op->cmd.buswidth > 1) 290 287 return false; ··· 310 305 311 306 static const struct aspeed_spi_data ast2400_spi_data; 312 307 313 - static int do_aspeed_spi_exec_op(struct spi_mem *mem, const struct spi_mem_op *op) 308 + static int do_aspeed_spi_exec_mem_op(struct spi_mem *mem, 309 + const struct spi_mem_op *op) 314 310 { 315 311 struct aspeed_spi *aspi = spi_controller_get_devdata(mem->spi->controller); 316 312 struct aspeed_spi_chip *chip = &aspi->chips[spi_get_chipselect(mem->spi, 0)]; ··· 373 367 return ret; 374 368 } 375 369 376 - static int aspeed_spi_exec_op(struct spi_mem *mem, const struct spi_mem_op *op) 370 + static int aspeed_spi_exec_mem_op(struct spi_mem *mem, 371 + const struct spi_mem_op *op) 377 372 { 378 373 int ret; 379 374 380 - ret = do_aspeed_spi_exec_op(mem, op); 375 + ret = do_aspeed_spi_exec_mem_op(mem, op); 381 376 if (ret) 382 377 dev_err(&mem->spi->dev, "operation failed: %d\n", ret); 383 378 return ret; ··· 780 773 } 781 774 782 775 static const struct spi_controller_mem_ops aspeed_spi_mem_ops = { 783 - .supports_op = aspeed_spi_supports_op, 784 - .exec_op = aspeed_spi_exec_op, 776 + .supports_op = aspeed_spi_supports_mem_op, 777 + .exec_op = aspeed_spi_exec_mem_op, 785 778 .get_name = aspeed_spi_get_name, 786 779 .dirmap_create = aspeed_spi_dirmap_create, 787 780 .dirmap_read = aspeed_spi_dirmap_read, ··· 850 843 aspeed_spi_chip_enable(aspi, cs, enable); 851 844 } 852 845 846 + static int aspeed_spi_user_prepare_msg(struct spi_controller *ctlr, 847 + struct spi_message *msg) 848 + { 849 + struct aspeed_spi *aspi = 850 + (struct aspeed_spi *)spi_controller_get_devdata(ctlr); 851 + const struct aspeed_spi_data *data = aspi->data; 852 + struct spi_device *spi = msg->spi; 853 + u32 cs = spi_get_chipselect(spi, 0); 854 + struct aspeed_spi_chip *chip = &aspi->chips[cs]; 855 + u32 ctrl_val; 856 + u32 clk_div = data->get_clk_div(chip, spi->max_speed_hz); 857 + 858 + ctrl_val = chip->ctl_val[ASPEED_SPI_BASE]; 859 + ctrl_val &= ~CTRL_IO_MODE_MASK & data->hclk_mask; 860 + ctrl_val |= clk_div; 861 + chip->ctl_val[ASPEED_SPI_BASE] = ctrl_val; 862 + 863 + if (aspi->cs_change == 0) 864 + aspeed_spi_start_user(chip); 865 + 866 + return 0; 867 + } 868 + 869 + static int aspeed_spi_user_unprepare_msg(struct spi_controller *ctlr, 870 + struct spi_message *msg) 871 + { 872 + struct aspeed_spi *aspi = 873 + (struct aspeed_spi *)spi_controller_get_devdata(ctlr); 874 + struct spi_device *spi = msg->spi; 875 + u32 cs = spi_get_chipselect(spi, 0); 876 + struct aspeed_spi_chip *chip = &aspi->chips[cs]; 877 + 878 + if (aspi->cs_change == 0) 879 + aspeed_spi_stop_user(chip); 880 
+ 881 + return 0; 882 + } 883 + 884 + static void aspeed_spi_user_transfer_tx(struct aspeed_spi *aspi, 885 + struct spi_device *spi, 886 + const u8 *tx_buf, u8 *rx_buf, 887 + void *dst, u32 len) 888 + { 889 + const struct aspeed_spi_data *data = aspi->data; 890 + bool full_duplex_transfer = data->full_duplex && tx_buf == rx_buf; 891 + u32 i; 892 + 893 + if (full_duplex_transfer && 894 + !!(spi->mode & (SPI_TX_DUAL | SPI_TX_QUAD | 895 + SPI_RX_DUAL | SPI_RX_QUAD))) { 896 + dev_err(aspi->dev, 897 + "full duplex is only supported for single IO mode\n"); 898 + return; 899 + } 900 + 901 + for (i = 0; i < len; i++) { 902 + writeb(tx_buf[i], dst); 903 + if (full_duplex_transfer) 904 + rx_buf[i] = readb(aspi->regs + FULL_DUPLEX_RX_DATA); 905 + } 906 + } 907 + 908 + static int aspeed_spi_user_transfer(struct spi_controller *ctlr, 909 + struct spi_device *spi, 910 + struct spi_transfer *xfer) 911 + { 912 + struct aspeed_spi *aspi = 913 + (struct aspeed_spi *)spi_controller_get_devdata(ctlr); 914 + u32 cs = spi_get_chipselect(spi, 0); 915 + struct aspeed_spi_chip *chip = &aspi->chips[cs]; 916 + void __iomem *ahb_base = aspi->chips[cs].ahb_base; 917 + const u8 *tx_buf = xfer->tx_buf; 918 + u8 *rx_buf = xfer->rx_buf; 919 + 920 + dev_dbg(aspi->dev, 921 + "[cs%d] xfer: width %d, len %u, tx %p, rx %p\n", 922 + cs, xfer->bits_per_word, xfer->len, 923 + tx_buf, rx_buf); 924 + 925 + if (tx_buf) { 926 + if (spi->mode & SPI_TX_DUAL) 927 + aspeed_spi_set_io_mode(chip, CTRL_IO_DUAL_DATA); 928 + else if (spi->mode & SPI_TX_QUAD) 929 + aspeed_spi_set_io_mode(chip, CTRL_IO_QUAD_DATA); 930 + 931 + aspeed_spi_user_transfer_tx(aspi, spi, tx_buf, rx_buf, 932 + (void *)ahb_base, xfer->len); 933 + } 934 + 935 + if (rx_buf && rx_buf != tx_buf) { 936 + if (spi->mode & SPI_RX_DUAL) 937 + aspeed_spi_set_io_mode(chip, CTRL_IO_DUAL_DATA); 938 + else if (spi->mode & SPI_RX_QUAD) 939 + aspeed_spi_set_io_mode(chip, CTRL_IO_QUAD_DATA); 940 + 941 + ioread8_rep(ahb_base, rx_buf, xfer->len); 942 + } 943 + 944 + xfer->error = 0; 945 + aspi->cs_change = xfer->cs_change; 946 + 947 + return 0; 948 + } 949 + 853 950 static int aspeed_spi_probe(struct platform_device *pdev) 854 951 { 855 952 struct device *dev = &pdev->dev; ··· 1009 898 ctlr->setup = aspeed_spi_setup; 1010 899 ctlr->cleanup = aspeed_spi_cleanup; 1011 900 ctlr->num_chipselect = of_get_available_child_count(dev->of_node); 901 + ctlr->prepare_message = aspeed_spi_user_prepare_msg; 902 + ctlr->unprepare_message = aspeed_spi_user_unprepare_msg; 903 + ctlr->transfer_one = aspeed_spi_user_transfer; 1012 904 1013 905 aspi->num_cs = ctlr->num_chipselect; 1014 906 ··· 1568 1454 .hclk_mask = 0xfffff0ff, 1569 1455 .hdiv_max = 1, 1570 1456 .min_window_size = 0x800000, 1457 + .full_duplex = false, 1571 1458 .calibrate = aspeed_spi_calibrate, 1572 1459 .get_clk_div = aspeed_get_clk_div_ast2400, 1573 1460 .segment_start = aspeed_spi_segment_start, ··· 1585 1470 .timing = 0x14, 1586 1471 .hclk_mask = 0xfffff0ff, 1587 1472 .hdiv_max = 1, 1473 + .full_duplex = false, 1588 1474 .get_clk_div = aspeed_get_clk_div_ast2400, 1589 1475 .calibrate = aspeed_spi_calibrate, 1590 1476 /* No segment registers */ ··· 1600 1484 .hclk_mask = 0xffffd0ff, 1601 1485 .hdiv_max = 1, 1602 1486 .min_window_size = 0x800000, 1487 + .full_duplex = false, 1603 1488 .get_clk_div = aspeed_get_clk_div_ast2500, 1604 1489 .calibrate = aspeed_spi_calibrate, 1605 1490 .segment_start = aspeed_spi_segment_start, ··· 1618 1501 .hclk_mask = 0xffffd0ff, 1619 1502 .hdiv_max = 1, 1620 1503 .min_window_size = 0x800000, 1504 + 
.full_duplex = false, 1621 1505 .get_clk_div = aspeed_get_clk_div_ast2500, 1622 1506 .calibrate = aspeed_spi_calibrate, 1623 1507 .segment_start = aspeed_spi_segment_start, ··· 1637 1519 .hclk_mask = 0xf0fff0ff, 1638 1520 .hdiv_max = 2, 1639 1521 .min_window_size = 0x200000, 1522 + .full_duplex = false, 1640 1523 .get_clk_div = aspeed_get_clk_div_ast2600, 1641 1524 .calibrate = aspeed_spi_ast2600_calibrate, 1642 1525 .segment_start = aspeed_spi_segment_ast2600_start, ··· 1656 1537 .hclk_mask = 0xf0fff0ff, 1657 1538 .hdiv_max = 2, 1658 1539 .min_window_size = 0x200000, 1540 + .full_duplex = false, 1659 1541 .get_clk_div = aspeed_get_clk_div_ast2600, 1660 1542 .calibrate = aspeed_spi_ast2600_calibrate, 1661 1543 .segment_start = aspeed_spi_segment_ast2600_start, ··· 1675 1555 .hclk_mask = 0xf0fff0ff, 1676 1556 .hdiv_max = 2, 1677 1557 .min_window_size = 0x10000, 1558 + .full_duplex = true, 1678 1559 .get_clk_div = aspeed_get_clk_div_ast2600, 1679 1560 .calibrate = aspeed_spi_ast2600_calibrate, 1680 1561 .segment_start = aspeed_spi_segment_ast2700_start, ··· 1693 1572 .hclk_mask = 0xf0fff0ff, 1694 1573 .hdiv_max = 2, 1695 1574 .min_window_size = 0x10000, 1575 + .full_duplex = true, 1696 1576 .get_clk_div = aspeed_get_clk_div_ast2600, 1697 1577 .calibrate = aspeed_spi_ast2600_calibrate, 1698 1578 .segment_start = aspeed_spi_segment_ast2700_start,
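
As the new transfer_one path shows, the driver keys full-duplex operation off tx_buf == rx_buf, and only in single-IO mode on parts with full_duplex set (AST2700). A hypothetical client-side sketch of such a transfer:

    /* Hypothetical client code: exchange bytes in place on a full-duplex bus. */
    u8 buf[4] = { 0x9f, 0x00, 0x00, 0x00 };
    struct spi_transfer xfer = {
    	.tx_buf = buf,		/* same buffer for tx and rx ... */
    	.rx_buf = buf,		/* ... selects the full-duplex path */
    	.len = sizeof(buf),
    };
    struct spi_message msg;
    int ret;

    spi_message_init_with_transfers(&msg, &xfer, 1);
    ret = spi_sync(spi, &msg);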
+1
drivers/spi/spi-cadence.c
··· 728 728 ctlr->unprepare_transfer_hardware = cdns_unprepare_transfer_hardware; 729 729 ctlr->mode_bits = SPI_CPOL | SPI_CPHA; 730 730 ctlr->bits_per_word_mask = SPI_BPW_MASK(8); 731 + ctlr->flags = SPI_CONTROLLER_MUST_TX; 731 732 732 733 if (of_device_is_compatible(pdev->dev.of_node, "cix,sky1-spi-r1p6")) 733 734 ctlr->bits_per_word_mask |= SPI_BPW_MASK(16) | SPI_BPW_MASK(32);
+1 -3
drivers/spi/spi-hisi-kunpeng.c
··· 161 161 static int hisi_spi_debugfs_init(struct hisi_spi *hs) 162 162 { 163 163 char name[32]; 164 + struct spi_controller *host = dev_get_drvdata(hs->dev); 164 165 165 - struct spi_controller *host; 166 - 167 - host = container_of(hs->dev, struct spi_controller, dev); 168 166 snprintf(name, 32, "hisi_spi%d", host->bus_num); 169 167 hs->debugfs = debugfs_create_dir(name, NULL); 170 168 if (IS_ERR(hs->debugfs))
+1
drivers/spi/spi-intel-pci.c
··· 81 81 { PCI_VDEVICE(INTEL, 0x54a4), (unsigned long)&cnl_info }, 82 82 { PCI_VDEVICE(INTEL, 0x5794), (unsigned long)&cnl_info }, 83 83 { PCI_VDEVICE(INTEL, 0x5825), (unsigned long)&cnl_info }, 84 + { PCI_VDEVICE(INTEL, 0x6e24), (unsigned long)&cnl_info }, 84 85 { PCI_VDEVICE(INTEL, 0x7723), (unsigned long)&cnl_info }, 85 86 { PCI_VDEVICE(INTEL, 0x7a24), (unsigned long)&cnl_info }, 86 87 { PCI_VDEVICE(INTEL, 0x7aa4), (unsigned long)&cnl_info },
+10 -1
drivers/spi/spi-mem.c
··· 719 719 720 720 desc->mem = mem; 721 721 desc->info = *info; 722 - if (ctlr->mem_ops && ctlr->mem_ops->dirmap_create) 722 + if (ctlr->mem_ops && ctlr->mem_ops->dirmap_create) { 723 + ret = spi_mem_access_start(mem); 724 + if (ret) { 725 + kfree(desc); 726 + return ERR_PTR(ret); 727 + } 728 + 723 729 ret = ctlr->mem_ops->dirmap_create(desc); 730 + 731 + spi_mem_access_end(mem); 732 + } 724 733 725 734 if (ret) { 726 735 desc->nodirmap = true;
+10 -23
drivers/spi/spi-sprd-adi.c
··· 528 528 pdev->id = of_alias_get_id(np, "spi"); 529 529 num_chipselect = of_get_child_count(np); 530 530 531 - ctlr = spi_alloc_host(&pdev->dev, sizeof(struct sprd_adi)); 531 + ctlr = devm_spi_alloc_host(&pdev->dev, sizeof(struct sprd_adi)); 532 532 if (!ctlr) 533 533 return -ENOMEM; 534 534 ··· 536 536 sadi = spi_controller_get_devdata(ctlr); 537 537 538 538 sadi->base = devm_platform_get_and_ioremap_resource(pdev, 0, &res); 539 - if (IS_ERR(sadi->base)) { 540 - ret = PTR_ERR(sadi->base); 541 - goto put_ctlr; 542 - } 539 + if (IS_ERR(sadi->base)) 540 + return PTR_ERR(sadi->base); 543 541 544 542 sadi->slave_vbase = (unsigned long)sadi->base + 545 543 data->slave_offset; ··· 549 551 if (ret > 0 || (IS_ENABLED(CONFIG_HWSPINLOCK) && ret == 0)) { 550 552 sadi->hwlock = 551 553 devm_hwspin_lock_request_specific(&pdev->dev, ret); 552 - if (!sadi->hwlock) { 553 - ret = -ENXIO; 554 - goto put_ctlr; 555 - } 554 + if (!sadi->hwlock) 555 + return -ENXIO; 556 556 } else { 557 557 switch (ret) { 558 558 case -ENOENT: 559 559 dev_info(&pdev->dev, "no hardware spinlock supplied\n"); 560 560 break; 561 561 default: 562 - dev_err_probe(&pdev->dev, ret, "failed to find hwlock id\n"); 563 - goto put_ctlr; 562 + return dev_err_probe(&pdev->dev, ret, "failed to find hwlock id\n"); 564 563 } 565 564 } 566 565 ··· 573 578 ctlr->transfer_one = sprd_adi_transfer_one; 574 579 575 580 ret = devm_spi_register_controller(&pdev->dev, ctlr); 576 - if (ret) { 577 - dev_err(&pdev->dev, "failed to register SPI controller\n"); 578 - goto put_ctlr; 579 - } 581 + if (ret) 582 + return dev_err_probe(&pdev->dev, ret, "failed to register SPI controller\n"); 580 583 581 584 if (sadi->data->restart) { 582 585 ret = devm_register_restart_handler(&pdev->dev, 583 586 sadi->data->restart, 584 587 sadi); 585 - if (ret) { 586 - dev_err(&pdev->dev, "can not register restart handler\n"); 587 - goto put_ctlr; 588 - } 588 + if (ret) 589 + return dev_err_probe(&pdev->dev, ret, "can not register restart handler\n"); 589 590 } 590 591 591 592 return 0; 592 - 593 - put_ctlr: 594 - spi_controller_put(ctlr); 595 - return ret; 596 593 } 597 594 598 595 static struct sprd_adi_data sc9860_data = {
+8 -2
drivers/target/iscsi/iscsi_target_util.c
··· 741 741 spin_lock_bh(&sess->session_usage_lock); 742 742 sess->session_usage_count--; 743 743 744 - if (!sess->session_usage_count && sess->session_waiting_on_uc) 744 + if (!sess->session_usage_count && sess->session_waiting_on_uc) { 745 + spin_unlock_bh(&sess->session_usage_lock); 745 746 complete(&sess->session_waiting_on_uc_comp); 747 + return; 748 + } 746 749 747 750 spin_unlock_bh(&sess->session_usage_lock); 748 751 } ··· 813 810 spin_lock_bh(&conn->conn_usage_lock); 814 811 conn->conn_usage_count--; 815 812 816 - if (!conn->conn_usage_count && conn->conn_waiting_on_uc) 813 + if (!conn->conn_usage_count && conn->conn_waiting_on_uc) { 814 + spin_unlock_bh(&conn->conn_usage_lock); 817 815 complete(&conn->conn_waiting_on_uc_comp); 816 + return; 817 + } 818 818 819 819 spin_unlock_bh(&conn->conn_usage_lock); 820 820 }
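
The reordering matters because the waiter may free the containing object the moment complete() fires, so the signaler must not touch the embedded lock afterwards. A generic sketch of the hazard, with hypothetical names:

    /* Waiter: may free 'obj' as soon as the completion is signalled. */
    wait_for_completion(&obj->done);
    kfree(obj);

    /* Signaler: drop the lock first, then never touch 'obj' again. */
    spin_lock_bh(&obj->lock);
    if (--obj->usage_count == 0 && obj->waiting) {
    	spin_unlock_bh(&obj->lock);
    	complete(&obj->done);	/* 'obj' may already be gone after this */
    	return;
    }
    spin_unlock_bh(&obj->lock);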
+1 -1
drivers/tty/serial/8250/8250_pci.c
··· 1658 1658 } 1659 1659 1660 1660 static const struct serial_rs485 pci_fintek_rs485_supported = { 1661 - .flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND, 1661 + .flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND | SER_RS485_RTS_AFTER_SEND, 1662 1662 /* F81504/508/512 does not support RTS delay before or after send */ 1663 1663 }; 1664 1664
+6 -7
drivers/tty/serial/qcom_geni_serial.c
··· 1888 1888 if (ret) 1889 1889 goto error; 1890 1890 1891 - devm_pm_runtime_enable(port->se.dev); 1892 - 1893 - ret = uart_add_one_port(drv, uport); 1894 - if (ret) 1895 - goto error; 1896 - 1897 1891 if (port->wakeup_irq > 0) { 1898 1892 device_init_wakeup(&pdev->dev, true); 1899 1893 ret = dev_pm_set_dedicated_wake_irq(&pdev->dev, ··· 1895 1901 if (ret) { 1896 1902 device_init_wakeup(&pdev->dev, false); 1897 1903 ida_free(&port_ida, uport->line); 1898 - uart_remove_one_port(drv, uport); 1899 1904 goto error; 1900 1905 } 1901 1906 } 1907 + 1908 + devm_pm_runtime_enable(port->se.dev); 1909 + 1910 + ret = uart_add_one_port(drv, uport); 1911 + if (ret) 1912 + goto error; 1902 1913 1903 1914 return 0; 1904 1915
+6
drivers/tty/serial/serial_core.c
··· 3074 3074 if (uport->cons && uport->dev) 3075 3075 of_console_check(uport->dev->of_node, uport->cons->name, uport->line); 3076 3076 3077 + /* 3078 + * TTY port has to be linked with the driver before register_console() 3079 + * in uart_configure_port(), because user-space could open the console 3080 + * immediately after. 3081 + */ 3082 + tty_port_link_device(port, drv->tty_driver, uport->line); 3077 3083 uart_configure_port(drv, state, uport); 3078 3084 3079 3085 port->console = uart_console(uport);
+2 -2
drivers/uio/uio_pci_generic_sva.c
··· 29 29 struct uio_pci_sva_dev *udev = info->priv; 30 30 struct iommu_domain *domain; 31 31 32 - if (!udev && !udev->pdev) 32 + if (!udev || !udev->pdev) 33 33 return -ENODEV; 34 34 35 35 domain = iommu_get_domain_for_dev(&udev->pdev->dev); ··· 51 51 { 52 52 struct uio_pci_sva_dev *udev = info->priv; 53 53 54 - if (!udev && !udev->pdev) 54 + if (!udev || !udev->pdev) 55 55 return -ENODEV; 56 56 57 57 iommu_sva_unbind_device(udev->sva_handle);
+19 -41
drivers/w1/slaves/w1_therm.c
··· 1836 1836 struct w1_slave *sl = dev_to_w1_slave(device); 1837 1837 struct therm_info info; 1838 1838 u8 new_config_register[3]; /* array of data to be written */ 1839 - int temp, ret; 1840 - char *token = NULL; 1839 + long long temp; 1840 + int ret = 0; 1841 1841 s8 tl, th; /* 1 byte per value + temp ring order */ 1842 - char *p_args, *orig; 1842 + const char *p = buf; 1843 + char *endp; 1843 1844 1844 - p_args = orig = kmalloc(size, GFP_KERNEL); 1845 - /* Safe string copys as buf is const */ 1846 - if (!p_args) { 1847 - dev_warn(device, 1848 - "%s: error unable to allocate memory %d\n", 1849 - __func__, -ENOMEM); 1850 - return size; 1851 - } 1852 - strcpy(p_args, buf); 1853 - 1854 - /* Split string using space char */ 1855 - token = strsep(&p_args, " "); 1856 - 1857 - if (!token) { 1858 - dev_info(device, 1859 - "%s: error parsing args %d\n", __func__, -EINVAL); 1860 - goto free_m; 1861 - } 1862 - 1863 - /* Convert 1st entry to int */ 1864 - ret = kstrtoint (token, 10, &temp); 1845 + temp = simple_strtoll(p, &endp, 10); 1846 + if (p == endp || *endp != ' ') 1847 + ret = -EINVAL; 1848 + else if (temp < INT_MIN || temp > INT_MAX) 1849 + ret = -ERANGE; 1865 1850 if (ret) { 1866 1851 dev_info(device, 1867 1852 "%s: error parsing args %d\n", __func__, ret); 1868 - goto free_m; 1853 + return size; 1869 1854 } 1870 1855 1871 1856 tl = int_to_short(temp); 1872 1857 1873 - /* Split string using space char */ 1874 - token = strsep(&p_args, " "); 1875 - if (!token) { 1876 - dev_info(device, 1877 - "%s: error parsing args %d\n", __func__, -EINVAL); 1878 - goto free_m; 1879 - } 1880 - /* Convert 2nd entry to int */ 1881 - ret = kstrtoint (token, 10, &temp); 1858 + p = endp + 1; 1859 + temp = simple_strtoll(p, &endp, 10); 1860 + if (p == endp) 1861 + ret = -EINVAL; 1862 + else if (temp < INT_MIN || temp > INT_MAX) 1863 + ret = -ERANGE; 1882 1864 if (ret) { 1883 1865 dev_info(device, 1884 1866 "%s: error parsing args %d\n", __func__, ret); 1885 - goto free_m; 1867 + return size; 1886 1868 } 1887 1869 1888 1870 /* Prepare to cast to short by eliminating out of range values */ ··· 1887 1905 dev_info(device, 1888 1906 "%s: error reading from the slave device %d\n", 1889 1907 __func__, ret); 1890 - goto free_m; 1908 + return size; 1891 1909 } 1892 1910 1893 1911 /* Write data in the device RAM */ ··· 1895 1913 dev_info(device, 1896 1914 "%s: Device not supported by the driver %d\n", 1897 1915 __func__, -ENODEV); 1898 - goto free_m; 1916 + return size; 1899 1917 } 1900 1918 1901 1919 ret = SLAVE_SPECIFIC_FUNC(sl)->write_data(sl, new_config_register); ··· 1903 1921 dev_info(device, 1904 1922 "%s: error writing to the slave device %d\n", 1905 1923 __func__, ret); 1906 - 1907 - free_m: 1908 - /* free allocated memory */ 1909 - kfree(orig); 1910 1924 1911 1925 return size; 1912 1926 }
-2
drivers/w1/w1.c
··· 758 758 if (err < 0) { 759 759 dev_err(&dev->dev, "%s: Attaching %s failed.\n", __func__, 760 760 sl->name); 761 - dev->slave_count--; 762 - w1_family_put(sl->family); 763 761 atomic_dec(&sl->master->refcnt); 764 762 kfree(sl); 765 763 return err;
+1
drivers/xen/xen-scsiback.c
··· 1262 1262 gnttab_page_cache_shrink(&info->free_pages, 0); 1263 1263 1264 1264 dev_set_drvdata(&dev->dev, NULL); 1265 + kfree(info); 1265 1266 } 1266 1267 1267 1268 static int scsiback_probe(struct xenbus_device *dev,
+18 -1
fs/btrfs/disk-io.c
··· 1661 1661 btrfs_set_backup_chunk_root_level(root_backup, 1662 1662 btrfs_header_level(info->chunk_root->node)); 1663 1663 1664 - if (!btrfs_fs_compat_ro(info, BLOCK_GROUP_TREE)) { 1664 + if (!btrfs_fs_incompat(info, EXTENT_TREE_V2)) { 1665 1665 struct btrfs_root *extent_root = btrfs_extent_root(info, 0); 1666 1666 struct btrfs_root *csum_root = btrfs_csum_root(info, 0); 1667 1667 ··· 3255 3255 return 0; 3256 3256 } 3257 3257 3258 + static bool fs_is_full_ro(const struct btrfs_fs_info *fs_info) 3259 + { 3260 + if (!sb_rdonly(fs_info->sb)) 3261 + return false; 3262 + if (unlikely(fs_info->mount_opt & BTRFS_MOUNT_FULL_RO_MASK)) 3263 + return true; 3264 + return false; 3265 + } 3266 + 3258 3267 int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_devices) 3259 3268 { 3260 3269 u32 sectorsize; ··· 3371 3362 /* check FS state, whether FS is broken. */ 3372 3363 if (btrfs_super_flags(disk_super) & BTRFS_SUPER_FLAG_ERROR) 3373 3364 WRITE_ONCE(fs_info->fs_error, -EUCLEAN); 3365 + 3366 + /* If the fs has any rescue options, no transaction is allowed. */ 3367 + if (fs_is_full_ro(fs_info)) 3368 + WRITE_ONCE(fs_info->fs_error, -EROFS); 3374 3369 3375 3370 /* Set up fs_info before parsing mount options */ 3376 3371 nodesize = btrfs_super_nodesize(disk_super); ··· 3502 3489 fs_info->generation == btrfs_super_uuid_tree_generation(disk_super)) 3503 3490 set_bit(BTRFS_FS_UPDATE_UUID_TREE_GEN, &fs_info->flags); 3504 3491 3492 + if (unlikely(btrfs_verify_dev_items(fs_info))) { 3493 + ret = -EUCLEAN; 3494 + goto fail_block_groups; 3495 + } 3505 3496 ret = btrfs_verify_dev_extents(fs_info); 3506 3497 if (ret) { 3507 3498 btrfs_err(fs_info,
+8
fs/btrfs/fs.h
··· 264 264 BTRFS_MOUNT_REF_TRACKER = (1ULL << 33),
 265 265 };
 266 266 
 267 + /* These mount options require a fully read-only fs; no new transactions are allowed. */
 268 + #define BTRFS_MOUNT_FULL_RO_MASK \
 269 + (BTRFS_MOUNT_NOLOGREPLAY | \
 270 + BTRFS_MOUNT_IGNOREBADROOTS | \
 271 + BTRFS_MOUNT_IGNOREDATACSUMS | \
 272 + BTRFS_MOUNT_IGNOREMETACSUMS | \
 273 + BTRFS_MOUNT_IGNORESUPERFLAGS)
 274 + 
 267 275 /*
 268 276 * Compat flags that we support. If any incompat flags are set other than the
 269 277 * ones specified below then we will fail to mount
+1 -1
fs/btrfs/tree-log.c
··· 2798 2798 2799 2799 nritems = btrfs_header_nritems(eb); 2800 2800 for (wc->log_slot = 0; wc->log_slot < nritems; wc->log_slot++) { 2801 - struct btrfs_inode_item *inode_item; 2801 + struct btrfs_inode_item *inode_item = NULL; 2802 2802 2803 2803 btrfs_item_key_to_cpu(eb, &wc->log_key, wc->log_slot); 2804 2804
+42
fs/btrfs/volumes.c
··· 1364 1364 (bytenr + BTRFS_SUPER_INFO_SIZE) >> PAGE_SHIFT); 1365 1365 } 1366 1366 1367 + filemap_invalidate_lock(mapping); 1367 1368 page = read_cache_page_gfp(mapping, bytenr >> PAGE_SHIFT, GFP_NOFS); 1369 + filemap_invalidate_unlock(mapping); 1368 1370 if (IS_ERR(page)) 1369 1371 return ERR_CAST(page); 1370 1372 ··· 7259 7257 return -EINVAL; 7260 7258 } 7261 7259 } 7260 + set_bit(BTRFS_DEV_STATE_ITEM_FOUND, &device->dev_state); 7262 7261 set_bit(BTRFS_DEV_STATE_IN_FS_METADATA, &device->dev_state); 7263 7262 if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state) && 7264 7263 !test_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state)) { ··· 8083 8080 8084 8081 /* Ensure all chunks have corresponding dev extents */ 8085 8082 return verify_chunk_dev_extent_mapping(fs_info); 8083 + } 8084 + 8085 + /* 8086 + * Ensure that all devices registered in the fs have their device items in the 8087 + * chunk tree. 8088 + * 8089 + * Return true if unexpected device is found. 8090 + * Return false otherwise. 8091 + */ 8092 + bool btrfs_verify_dev_items(const struct btrfs_fs_info *fs_info) 8093 + { 8094 + struct btrfs_fs_devices *seed_devs; 8095 + struct btrfs_device *dev; 8096 + bool ret = false; 8097 + 8098 + mutex_lock(&uuid_mutex); 8099 + list_for_each_entry(dev, &fs_info->fs_devices->devices, dev_list) { 8100 + if (!test_bit(BTRFS_DEV_STATE_ITEM_FOUND, &dev->dev_state)) { 8101 + btrfs_err(fs_info, 8102 + "devid %llu path %s is registered but not found in chunk tree", 8103 + dev->devid, btrfs_dev_name(dev)); 8104 + ret = true; 8105 + } 8106 + } 8107 + list_for_each_entry(seed_devs, &fs_info->fs_devices->seed_list, seed_list) { 8108 + list_for_each_entry(dev, &seed_devs->devices, dev_list) { 8109 + if (!test_bit(BTRFS_DEV_STATE_ITEM_FOUND, &dev->dev_state)) { 8110 + btrfs_err(fs_info, 8111 + "devid %llu path %s is registered but not found in chunk tree", 8112 + dev->devid, btrfs_dev_name(dev)); 8113 + ret = true; 8114 + } 8115 + } 8116 + } 8117 + mutex_unlock(&uuid_mutex); 8118 + if (ret) 8119 + btrfs_err(fs_info, 8120 + "remove the above devices or use 'btrfs device scan --forget <dev>' to unregister them before mount"); 8121 + return ret; 8086 8122 } 8087 8123 8088 8124 /*
+4
fs/btrfs/volumes.h
··· 100 100 #define BTRFS_DEV_STATE_FLUSH_SENT (4)
 101 101 #define BTRFS_DEV_STATE_NO_READA (5)
 102 102 
 103 + /* Set when the device item is found in the chunk tree; used to catch unexpectedly registered devices. */
 104 + #define BTRFS_DEV_STATE_ITEM_FOUND (7)
 105 + 
 103 106 /* Special value encoding failure to write primary super block. */
 104 107 #define BTRFS_SUPER_PRIMARY_WRITE_ERROR (INT_MAX / 2)
 105 108 
 ··· 896 893 int btrfs_bg_type_to_factor(u64 flags);
 897 894 const char *btrfs_bg_type_to_raid_name(u64 flags);
 898 895 int btrfs_verify_dev_extents(struct btrfs_fs_info *fs_info);
 896 + bool btrfs_verify_dev_items(const struct btrfs_fs_info *fs_info);
 899 897 bool btrfs_repair_one_zone(struct btrfs_fs_info *fs_info, u64 logical);
 900 898 
 901 899 bool btrfs_pinned_by_swapfile(struct btrfs_fs_info *fs_info, void *ptr);
+6 -1
fs/fs-writeback.c
··· 2750 2750 * The mapping can appear untagged while still on-list since we 2751 2751 * do not have the mapping lock. Skip it here, wb completion 2752 2752 * will remove it. 2753 + * 2754 + * If the mapping does not have data integrity semantics, 2755 + * there's no need to wait for the writeout to complete, as the 2756 + * mapping cannot guarantee that data is persistently stored. 2753 2757 */ 2754 - if (!mapping_tagged(mapping, PAGECACHE_TAG_WRITEBACK)) 2758 + if (!mapping_tagged(mapping, PAGECACHE_TAG_WRITEBACK) || 2759 + mapping_no_data_integrity(mapping)) 2755 2760 continue; 2756 2761 2757 2762 spin_unlock_irq(&sb->s_inode_wblist_lock);
+3 -1
fs/fuse/file.c
··· 3200 3200 3201 3201 inode->i_fop = &fuse_file_operations; 3202 3202 inode->i_data.a_ops = &fuse_file_aops; 3203 - if (fc->writeback_cache) 3203 + if (fc->writeback_cache) { 3204 3204 mapping_set_writeback_may_deadlock_on_reclaim(&inode->i_data); 3205 + mapping_set_no_data_integrity(&inode->i_data); 3206 + } 3205 3207 3206 3208 INIT_LIST_HEAD(&fi->write_files); 3207 3209 INIT_LIST_HEAD(&fi->queued_writes);
+8 -8
fs/smb/server/transport_rdma.c
··· 1353 1353 1354 1354 static int get_mapped_sg_list(struct ib_device *device, void *buf, int size, 1355 1355 struct scatterlist *sg_list, int nentries, 1356 - enum dma_data_direction dir) 1356 + enum dma_data_direction dir, int *npages) 1357 1357 { 1358 - int npages; 1359 - 1360 - npages = get_sg_list(buf, size, sg_list, nentries); 1361 - if (npages < 0) 1358 + *npages = get_sg_list(buf, size, sg_list, nentries); 1359 + if (*npages < 0) 1362 1360 return -EINVAL; 1363 - return ib_dma_map_sg(device, sg_list, npages, dir); 1361 + return ib_dma_map_sg(device, sg_list, *npages, dir); 1364 1362 } 1365 1363 1366 1364 static int post_sendmsg(struct smbdirect_socket *sc, ··· 1429 1431 for (i = 0; i < niov; i++) { 1430 1432 struct ib_sge *sge; 1431 1433 int sg_cnt; 1434 + int npages; 1432 1435 1433 1436 sg_init_table(sg, SMBDIRECT_SEND_IO_MAX_SGE - 1); 1434 1437 sg_cnt = get_mapped_sg_list(sc->ib.dev, 1435 1438 iov[i].iov_base, iov[i].iov_len, 1436 1439 sg, SMBDIRECT_SEND_IO_MAX_SGE - 1, 1437 - DMA_TO_DEVICE); 1440 + DMA_TO_DEVICE, &npages); 1438 1441 if (sg_cnt <= 0) { 1439 1442 pr_err("failed to map buffer\n"); 1440 1443 ret = -ENOMEM; ··· 1443 1444 } else if (sg_cnt + msg->num_sge > SMBDIRECT_SEND_IO_MAX_SGE) { 1444 1445 pr_err("buffer not fitted into sges\n"); 1445 1446 ret = -E2BIG; 1446 - ib_dma_unmap_sg(sc->ib.dev, sg, sg_cnt, 1447 + ib_dma_unmap_sg(sc->ib.dev, sg, npages, 1447 1448 DMA_TO_DEVICE); 1448 1449 goto err; 1449 1450 } ··· 2707 2708 { 2708 2709 int ret; 2709 2710 2711 + smb_direct_port = SMB_DIRECT_PORT_INFINIBAND; 2710 2712 smb_direct_listener.cm_id = NULL; 2711 2713 2712 2714 ret = ib_register_client(&smb_direct_ib_client);
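
The extra out-parameter exists because the DMA API requires unmapping with the same nents that were passed to ib_dma_map_sg(), not the (possibly smaller) mapped count it returned. A generic sketch, where fill_sg_list() is a hypothetical stand-in for get_sg_list():

    int npages, sg_cnt;

    npages = fill_sg_list(buf, len, sg, max_entries);	/* hypothetical helper */
    sg_cnt = ib_dma_map_sg(dev, sg, npages, DMA_TO_DEVICE);
    if (sg_cnt <= 0)
    	return -ENOMEM;

    /* ... on a later error path, unmap with the original nents, not sg_cnt: */
    ib_dma_unmap_sg(dev, sg, npages, DMA_TO_DEVICE);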
+1 -1
fs/smb/server/vfs.c
··· 1227 1227 } 1228 1228 1229 1229 /** 1230 - * ksmbd_vfs_kern_path_start_remove() - lookup a file and get path info prior to removal 1230 + * ksmbd_vfs_kern_path_start_removing() - lookup a file and get path info prior to removal 1231 1231 * @work: work 1232 1232 * @filepath: file path that is relative to share 1233 1233 * @flags: lookup flags
+75 -2
include/asm-generic/tlb.h
··· 46 46 * 47 47 * The mmu_gather API consists of: 48 48 * 49 - * - tlb_gather_mmu() / tlb_gather_mmu_fullmm() / tlb_finish_mmu() 49 + * - tlb_gather_mmu() / tlb_gather_mmu_fullmm() / tlb_gather_mmu_vma() / 50 + * tlb_finish_mmu() 50 51 * 51 52 * start and finish a mmu_gather 52 53 * ··· 365 364 unsigned int vma_huge : 1; 366 365 unsigned int vma_pfn : 1; 367 366 367 + /* 368 + * Did we unshare (unmap) any shared page tables? For now only 369 + * used for hugetlb PMD table sharing. 370 + */ 371 + unsigned int unshared_tables : 1; 372 + 373 + /* 374 + * Did we unshare any page tables such that they are now exclusive 375 + * and could get reused+modified by the new owner? When setting this 376 + * flag, "unshared_tables" will be set as well. For now only used 377 + * for hugetlb PMD table sharing. 378 + */ 379 + unsigned int fully_unshared_tables : 1; 380 + 368 381 unsigned int batch_count; 369 382 370 383 #ifndef CONFIG_MMU_GATHER_NO_GATHER ··· 415 400 tlb->cleared_pmds = 0; 416 401 tlb->cleared_puds = 0; 417 402 tlb->cleared_p4ds = 0; 403 + tlb->unshared_tables = 0; 418 404 /* 419 405 * Do not reset mmu_gather::vma_* fields here, we do not 420 406 * call into tlb_start_vma() again to set them if there is an ··· 500 484 * these bits. 501 485 */ 502 486 if (!(tlb->freed_tables || tlb->cleared_ptes || tlb->cleared_pmds || 503 - tlb->cleared_puds || tlb->cleared_p4ds)) 487 + tlb->cleared_puds || tlb->cleared_p4ds || tlb->unshared_tables)) 504 488 return; 505 489 506 490 tlb_flush(tlb); ··· 788 772 return true; 789 773 } 790 774 #endif 775 + 776 + #ifdef CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING 777 + static inline void tlb_unshare_pmd_ptdesc(struct mmu_gather *tlb, struct ptdesc *pt, 778 + unsigned long addr) 779 + { 780 + /* 781 + * The caller must make sure that concurrent unsharing + exclusive 782 + * reuse is impossible until tlb_flush_unshared_tables() was called. 783 + */ 784 + VM_WARN_ON_ONCE(!ptdesc_pmd_is_shared(pt)); 785 + ptdesc_pmd_pts_dec(pt); 786 + 787 + /* Clearing a PUD pointing at a PMD table with PMD leaves. */ 788 + tlb_flush_pmd_range(tlb, addr & PUD_MASK, PUD_SIZE); 789 + 790 + /* 791 + * If the page table is now exclusively owned, we fully unshared 792 + * a page table. 793 + */ 794 + if (!ptdesc_pmd_is_shared(pt)) 795 + tlb->fully_unshared_tables = true; 796 + tlb->unshared_tables = true; 797 + } 798 + 799 + static inline void tlb_flush_unshared_tables(struct mmu_gather *tlb) 800 + { 801 + /* 802 + * As soon as the caller drops locks to allow for reuse of 803 + * previously-shared tables, these tables could get modified and 804 + * even reused outside of hugetlb context, so we have to make sure that 805 + * any page table walkers (incl. TLB, GUP-fast) are aware of that 806 + * change. 807 + * 808 + * Even if we are not fully unsharing a PMD table, we must 809 + * flush the TLB for the unsharer now. 810 + */ 811 + if (tlb->unshared_tables) 812 + tlb_flush_mmu_tlbonly(tlb); 813 + 814 + /* 815 + * Similarly, we must make sure that concurrent GUP-fast will not 816 + * walk previously-shared page tables that are getting modified+reused 817 + * elsewhere. So broadcast an IPI to wait for any concurrent GUP-fast. 818 + * 819 + * We only perform this when we are the last sharer of a page table, 820 + * as the IPI will reach all CPUs: any GUP-fast. 821 + * 822 + * Note that on configs where tlb_remove_table_sync_one() is a NOP, 823 + * the expectation is that the tlb_flush_mmu_tlbonly() would have issued 824 + * required IPIs already for us. 
825 + */ 826 + if (tlb->fully_unshared_tables) { 827 + tlb_remove_table_sync_one(); 828 + tlb->fully_unshared_tables = false; 829 + } 830 + } 831 + #endif /* CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING */ 791 832 792 833 #endif /* CONFIG_MMU */ 793 834
+17 -2
include/drm/drm_pagemap.h
··· 209 209 struct dma_fence *pre_migrate_fence); 210 210 }; 211 211 212 + #if IS_ENABLED(CONFIG_ZONE_DEVICE) 213 + 214 + struct drm_pagemap *drm_pagemap_page_to_dpagemap(struct page *page); 215 + 216 + #else 217 + 218 + static inline struct drm_pagemap *drm_pagemap_page_to_dpagemap(struct page *page) 219 + { 220 + return NULL; 221 + } 222 + 223 + #endif /* IS_ENABLED(CONFIG_ZONE_DEVICE) */ 224 + 212 225 /** 213 226 * struct drm_pagemap_devmem - Structure representing a GPU SVM device memory allocation 214 227 * ··· 246 233 struct dma_fence *pre_migrate_fence; 247 234 }; 248 235 236 + #if IS_ENABLED(CONFIG_ZONE_DEVICE) 237 + 249 238 int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation, 250 239 struct mm_struct *mm, 251 240 unsigned long start, unsigned long end, ··· 257 242 int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation); 258 243 259 244 const struct dev_pagemap_ops *drm_pagemap_pagemap_ops_get(void); 260 - 261 - struct drm_pagemap *drm_pagemap_page_to_dpagemap(struct page *page); 262 245 263 246 void drm_pagemap_devmem_init(struct drm_pagemap_devmem *devmem_allocation, 264 247 struct device *dev, struct mm_struct *mm, ··· 268 255 unsigned long start, unsigned long end, 269 256 struct mm_struct *mm, 270 257 unsigned long timeslice_ms); 258 + 259 + #endif /* IS_ENABLED(CONFIG_ZONE_DEVICE) */ 271 260 272 261 #endif
+1
include/dt-bindings/power/qcom,rpmhpd.h
··· 264 264 #define SC8280XP_NSP 13 265 265 #define SC8280XP_QPHY 14 266 266 #define SC8280XP_XO 15 267 + #define SC8280XP_MXC_AO 16 267 268 268 269 #endif
+47
include/hyperv/hvhdk.h
··· 800 800 u8 instruction_bytes[16]; 801 801 } __packed; 802 802 803 + #if IS_ENABLED(CONFIG_ARM64) 804 + union hv_arm64_vp_execution_state { 805 + u16 as_uint16; 806 + struct { 807 + u16 cpl:2; /* Exception Level (EL) */ 808 + u16 debug_active:1; 809 + u16 interruption_pending:1; 810 + u16 vtl:4; 811 + u16 virtualization_fault_active:1; 812 + u16 reserved:7; 813 + } __packed; 814 + }; 815 + 816 + struct hv_arm64_intercept_message_header { 817 + u32 vp_index; 818 + u8 instruction_length; 819 + u8 intercept_access_type; 820 + union hv_arm64_vp_execution_state execution_state; 821 + u64 pc; 822 + u64 cpsr; 823 + } __packed; 824 + 825 + union hv_arm64_memory_access_info { 826 + u8 as_uint8; 827 + struct { 828 + u8 gva_valid:1; 829 + u8 gva_gpa_valid:1; 830 + u8 hypercall_output_pending:1; 831 + u8 reserved:5; 832 + } __packed; 833 + }; 834 + 835 + struct hv_arm64_memory_intercept_message { 836 + struct hv_arm64_intercept_message_header header; 837 + u32 cache_type; /* enum hv_cache_type */ 838 + u8 instruction_byte_count; 839 + union hv_arm64_memory_access_info memory_access_info; 840 + u16 reserved1; 841 + u8 instruction_bytes[4]; 842 + u32 reserved2; 843 + u64 guest_virtual_address; 844 + u64 guest_physical_address; 845 + u64 syndrome; 846 + } __packed; 847 + 848 + #endif /* CONFIG_ARM64 */ 849 + 803 850 /* 804 851 * Dispatch state for the VP communicated by the hypervisor to the 805 852 * VP-dispatching thread in the root on return from HVCALL_DISPATCH_VP.
+9
include/linux/device/driver.h
··· 85 85 * uevent.
 86 86 * @p: Driver core's private data, no one other than the driver
 87 87 * core can touch this.
 88 + * @p_cb: Callbacks private to the driver core; no one other than the
 89 + * driver core is allowed to touch this.
 88 90 *
 89 91 * The device driver-model tracks all of the drivers known to the system.
 90 92 * The main reason for this tracking is to enable the driver core to match
 ··· 121 119 void (*coredump) (struct device *dev);
 122 120 
 123 121 struct driver_private *p;
 122 + struct {
 123 + /*
 124 + * Called after remove() and after all devres entries have been
 125 + * processed. This is a Rust-only callback.
 126 + */
 127 + void (*post_unbind_rust)(struct device *dev);
 128 + } p_cb;
 124 129 };
 125 130 
 126 131
+11 -6
include/linux/hugetlb.h
··· 240 240 pte_t *huge_pte_offset(struct mm_struct *mm, 241 241 unsigned long addr, unsigned long sz); 242 242 unsigned long hugetlb_mask_last_page(struct hstate *h); 243 - int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma, 244 - unsigned long addr, pte_t *ptep); 243 + int huge_pmd_unshare(struct mmu_gather *tlb, struct vm_area_struct *vma, 244 + unsigned long addr, pte_t *ptep); 245 + void huge_pmd_unshare_flush(struct mmu_gather *tlb, struct vm_area_struct *vma); 245 246 void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma, 246 247 unsigned long *start, unsigned long *end); 247 248 ··· 301 300 return NULL; 302 301 } 303 302 304 - static inline int huge_pmd_unshare(struct mm_struct *mm, 305 - struct vm_area_struct *vma, 306 - unsigned long addr, pte_t *ptep) 303 + static inline int huge_pmd_unshare(struct mmu_gather *tlb, 304 + struct vm_area_struct *vma, unsigned long addr, pte_t *ptep) 307 305 { 308 306 return 0; 307 + } 308 + 309 + static inline void huge_pmd_unshare_flush(struct mmu_gather *tlb, 310 + struct vm_area_struct *vma) 311 + { 309 312 } 310 313 311 314 static inline void adjust_range_if_pmd_sharing_possible( ··· 1331 1326 #ifdef CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING 1332 1327 static inline bool hugetlb_pmd_shared(pte_t *pte) 1333 1328 { 1334 - return page_count(virt_to_page(pte)) > 1; 1329 + return ptdesc_pmd_is_shared(virt_to_ptdesc(pte)); 1335 1330 } 1336 1331 #else 1337 1332 static inline bool hugetlb_pmd_shared(pte_t *pte)
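
Inferring from the new signatures, callers now thread an mmu_gather through unsharing and must flush before releasing the locks that keep the previously shared table from being reused. A rough sketch of the expected pattern (not a real call site; start, end and sz are assumed from context):

    struct mmu_gather tlb;
    unsigned long addr;
    pte_t *ptep;

    tlb_gather_mmu_vma(&tlb, vma);
    i_mmap_lock_write(vma->vm_file->f_mapping);
    for (addr = start; addr < end; addr += sz) {
    	ptep = huge_pte_offset(vma->vm_mm, addr, sz);
    	if (ptep)
    		huge_pmd_unshare(&tlb, vma, addr, ptep);
    }
    /* Flush before dropping the lock that prevents reuse of the shared table. */
    huge_pmd_unshare_flush(&tlb, vma);
    i_mmap_unlock_write(vma->vm_file->f_mapping);
    tlb_finish_mmu(&tlb);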
+2
include/linux/iio/iio-opaque.h
··· 14 14 * @mlock: lock used to prevent simultaneous device state changes 15 15 * @mlock_key: lockdep class for iio_dev lock 16 16 * @info_exist_lock: lock to prevent use during removal 17 + * @info_exist_key: lockdep class for info_exist lock 17 18 * @trig_readonly: mark the current trigger immutable 18 19 * @event_interface: event chrdevs associated with interrupt lines 19 20 * @attached_buffers: array of buffers statically attached by the driver ··· 48 47 struct mutex mlock; 49 48 struct lock_class_key mlock_key; 50 49 struct mutex info_exist_lock; 50 + struct lock_class_key info_exist_key; 51 51 bool trig_readonly; 52 52 struct iio_event_interface *event_interface; 53 53 struct iio_buffer **attached_buffers;
+5 -1
include/linux/mm.h
··· 608 608 /*
 609 609 * Flags which should result in page tables being copied on fork. These are
 610 610 * flags which indicate that the VMA maps page tables which cannot be
 611 - * reconstituted upon page fault, so necessitate page table copying upon
 611 + * reconstituted upon page fault, so necessitate page table copying upon fork.
 612 + *
 613 + * Note that these flags should be compared with the DESTINATION VMA, not the
 614 + * source, as VM_UFFD_WP may not be propagated to the destination, while all
 615 + * other flags will be.
 612 616 *
 613 617 * VM_PFNMAP / VM_MIXEDMAP - These contain kernel-mapped data which cannot be
 614 618 * reasonably reconstructed on page fault.
+14 -5
include/linux/mm_types.h
··· 1329 1329 * The mm_cpumask needs to be at the end of mm_struct, because it
 1330 1330 * is dynamically sized based on nr_cpu_ids.
 1331 1331 */
 1332 - unsigned long cpu_bitmap[];
 1332 + char flexible_array[] __aligned(__alignof__(unsigned long));
 1333 1333 };
 1334 1334 
 1335 1335 /* Copy value to the first system word of mm flags, non-atomically. */
 ··· 1366 1366 MT_FLAGS_USE_RCU)
 1367 1367 extern struct mm_struct init_mm;
 1368 1368 
 1369 + #define MM_STRUCT_FLEXIBLE_ARRAY_INIT \
 1370 + { \
 1371 + [0 ... sizeof(cpumask_t) + MM_CID_STATIC_SIZE - 1] = 0 \
 1372 + }
 1373 + 
 1369 1374 /* Pointer magic because the dynamic array size confuses some compilers. */
 1370 1375 static inline void mm_init_cpumask(struct mm_struct *mm)
 1371 1376 {
 1372 1377 unsigned long cpu_bitmap = (unsigned long)mm;
 1373 1378 
 1374 - cpu_bitmap += offsetof(struct mm_struct, cpu_bitmap);
 1379 + cpu_bitmap += offsetof(struct mm_struct, flexible_array);
 1375 1380 cpumask_clear((struct cpumask *)cpu_bitmap);
 1376 1381 }
 1377 1382 
 1378 1383 /* Future-safe accessor for struct mm_struct's cpu_vm_mask. */
 1379 1384 static inline cpumask_t *mm_cpumask(struct mm_struct *mm)
 1380 1385 {
 1381 - return (struct cpumask *)&mm->cpu_bitmap;
 1386 + return (struct cpumask *)&mm->flexible_array;
 1382 1387 }
 1383 1388 
 1384 1389 #ifdef CONFIG_LRU_GEN
 ··· 1474 1469 {
 1475 1470 unsigned long bitmap = (unsigned long)mm;
 1476 1471 
 1477 - bitmap += offsetof(struct mm_struct, cpu_bitmap);
 1472 + bitmap += offsetof(struct mm_struct, flexible_array);
 1478 1473 /* Skip cpu_bitmap */
 1479 1474 bitmap += cpumask_size();
 1480 1475 return (struct cpumask *)bitmap;
 ··· 1500 1495 mm_init_cid(mm, p);
 1501 1496 return 0;
 1502 1497 }
 1503 - #define mm_alloc_cid(...) alloc_hooks(mm_alloc_cid_noprof(__VA_ARGS__))
 1498 + # define mm_alloc_cid(...) alloc_hooks(mm_alloc_cid_noprof(__VA_ARGS__))
 1504 1499 
 1505 1500 static inline void mm_destroy_cid(struct mm_struct *mm)
 1506 1501 {
 ··· 1514 1509 return cpumask_size() + bitmap_size(num_possible_cpus());
 1515 1510 }
 1516 1511 
 1512 + /* Use 2 * NR_CPUS as worst case for static allocation. */
 1513 + # define MM_CID_STATIC_SIZE (2 * sizeof(cpumask_t))
 1517 1514 #else /* CONFIG_SCHED_MM_CID */
 1518 1515 static inline void mm_init_cid(struct mm_struct *mm, struct task_struct *p) { }
 1519 1516 static inline int mm_alloc_cid(struct mm_struct *mm, struct task_struct *p) { return 0; }
 ··· 1524 1517 {
 1525 1518 return 0;
 1526 1519 }
 1520 + # define MM_CID_STATIC_SIZE 0
 1527 1521 #endif /* CONFIG_SCHED_MM_CID */
 1528 1522 
 1529 1523 struct mmu_gather;
 1530 1524 extern void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm);
 1531 1525 extern void tlb_gather_mmu_fullmm(struct mmu_gather *tlb, struct mm_struct *mm);
 1526 + void tlb_gather_mmu_vma(struct mmu_gather *tlb, struct vm_area_struct *vma);
 1532 1527 extern void tlb_finish_mmu(struct mmu_gather *tlb);
 1533 1528 
 1534 1529 struct vm_fault;
+5 -4
include/linux/mmzone.h
··· 1648 1648 return is_highmem_idx(zone_idx(zone)); 1649 1649 } 1650 1650 1651 - #ifdef CONFIG_ZONE_DMA 1652 - bool has_managed_dma(void); 1653 - #else 1651 + bool has_managed_zone(enum zone_type zone); 1654 1652 static inline bool has_managed_dma(void) 1655 1653 { 1654 + #ifdef CONFIG_ZONE_DMA 1655 + return has_managed_zone(ZONE_DMA); 1656 + #else 1656 1657 return false; 1657 - } 1658 1658 #endif 1659 + } 1659 1660 1660 1661 1661 1662 #ifndef CONFIG_NUMA
+11
include/linux/pagemap.h
··· 210 210 AS_WRITEBACK_MAY_DEADLOCK_ON_RECLAIM = 9, 211 211 AS_KERNEL_FILE = 10, /* mapping for a fake kernel file that shouldn't 212 212 account usage to user cgroups */ 213 + AS_NO_DATA_INTEGRITY = 11, /* no data integrity guarantees */ 213 214 /* Bits 16-25 are used for FOLIO_ORDER */ 214 215 AS_FOLIO_ORDER_BITS = 5, 215 216 AS_FOLIO_ORDER_MIN = 16, ··· 344 343 static inline bool mapping_writeback_may_deadlock_on_reclaim(const struct address_space *mapping) 345 344 { 346 345 return test_bit(AS_WRITEBACK_MAY_DEADLOCK_ON_RECLAIM, &mapping->flags); 346 + } 347 + 348 + static inline void mapping_set_no_data_integrity(struct address_space *mapping) 349 + { 350 + set_bit(AS_NO_DATA_INTEGRITY, &mapping->flags); 351 + } 352 + 353 + static inline bool mapping_no_data_integrity(const struct address_space *mapping) 354 + { 355 + return test_bit(AS_NO_DATA_INTEGRITY, &mapping->flags); 347 356 } 348 357 349 358 static inline gfp_t mapping_gfp_mask(const struct address_space *mapping)
-3
include/net/cfg80211.h
··· 3221 3221 * if this is %NULL for a link, that link is not requested 3222 3222 * @elems: extra elements for the per-STA profile for this link 3223 3223 * @elems_len: length of the elements 3224 - * @disabled: If set this link should be included during association etc. but it 3225 - * should not be used until enabled by the AP MLD. 3226 3224 * @error: per-link error code, must be <= 0. If there is an error, then the 3227 3225 * operation as a whole must fail. 3228 3226 */ ··· 3228 3230 struct cfg80211_bss *bss; 3229 3231 const u8 *elems; 3230 3232 size_t elems_len; 3231 - bool disabled; 3232 3233 int error; 3233 3234 }; 3234 3235
+4
include/trace/events/rxrpc.h
··· 322 322 EM(rxrpc_call_put_kernel, "PUT kernel ") \ 323 323 EM(rxrpc_call_put_poke, "PUT poke ") \ 324 324 EM(rxrpc_call_put_recvmsg, "PUT recvmsg ") \ 325 + EM(rxrpc_call_put_recvmsg_peek_nowait, "PUT peek-nwt") \ 325 326 EM(rxrpc_call_put_release_recvmsg_q, "PUT rls-rcmq") \ 326 327 EM(rxrpc_call_put_release_sock, "PUT rls-sock") \ 327 328 EM(rxrpc_call_put_release_sock_tba, "PUT rls-sk-a") \ ··· 341 340 EM(rxrpc_call_see_input, "SEE input ") \ 342 341 EM(rxrpc_call_see_notify_released, "SEE nfy-rlsd") \ 343 342 EM(rxrpc_call_see_recvmsg, "SEE recvmsg ") \ 343 + EM(rxrpc_call_see_recvmsg_requeue, "SEE recv-rqu") \ 344 + EM(rxrpc_call_see_recvmsg_requeue_first, "SEE recv-rqF") \ 345 + EM(rxrpc_call_see_recvmsg_requeue_move, "SEE recv-rqM") \ 344 346 EM(rxrpc_call_see_release, "SEE release ") \ 345 347 EM(rxrpc_call_see_userid_exists, "SEE u-exists") \ 346 348 EM(rxrpc_call_see_waiting_call, "SEE q-conn ") \
+4 -2
include/uapi/linux/blkzoned.h
··· 81 81 BLK_ZONE_COND_FULL = 0xE, 82 82 BLK_ZONE_COND_OFFLINE = 0xF, 83 83 84 - BLK_ZONE_COND_ACTIVE = 0xFF, 84 + BLK_ZONE_COND_ACTIVE = 0xFF, /* added in Linux 6.19 */ 85 + #define BLK_ZONE_COND_ACTIVE BLK_ZONE_COND_ACTIVE 85 86 }; 86 87 87 88 /** ··· 101 100 BLK_ZONE_REP_CAPACITY = (1U << 0), 102 101 103 102 /* Input flags */ 104 - BLK_ZONE_REP_CACHED = (1U << 31), 103 + BLK_ZONE_REP_CACHED = (1U << 31), /* added in Linux 6.19 */ 104 + #define BLK_ZONE_REP_CACHED BLK_ZONE_REP_CACHED 105 105 }; 106 106 107 107 /**
+1 -1
include/uapi/linux/comedi.h
··· 640 640 641 641 /** 642 642 * struct comedi_rangeinfo - used to retrieve the range table for a channel 643 - * @range_type: Encodes subdevice index (bits 27:24), channel index 643 + * @range_type: Encodes subdevice index (bits 31:24), channel index 644 644 * (bits 23:16) and range table length (bits 15:0). 645 645 * @range_ptr: Pointer to array of @struct comedi_krange to be filled 646 646 * in with the range table for the channel or subdevice.
+3 -2
include/uapi/linux/nl80211.h
··· 2880 2880 * index. If the userspace includes more RNR elements than number of 2881 2881 * MBSSID elements then these will be added in every EMA beacon. 2882 2882 * 2883 - * @NL80211_ATTR_MLO_LINK_DISABLED: Flag attribute indicating that the link is 2884 - * disabled. 2883 + * @NL80211_ATTR_MLO_LINK_DISABLED: Unused. It was used to indicate that a link 2884 + * is disabled during association. However, the AP will send the 2885 + * information by including a TTLM in the association response. 2885 2886 * 2886 2887 * @NL80211_ATTR_BSS_DUMP_INCLUDE_USE_DATA: Include BSS usage data, i.e. 2887 2888 * include BSSes that can only be used in restricted scenarios and/or
+1 -1
io_uring/io-wq.c
··· 598 598 __releases(&acct->lock) 599 599 { 600 600 struct io_wq *wq = worker->wq; 601 - bool do_kill = test_bit(IO_WQ_BIT_EXIT, &wq->state); 602 601 603 602 do { 603 + bool do_kill = test_bit(IO_WQ_BIT_EXIT, &wq->state); 604 604 struct io_wq_work *work; 605 605 606 606 /*
+11 -4
io_uring/rw.c
··· 144 144 return 0; 145 145 } 146 146 147 - static void io_rw_recycle(struct io_kiocb *req, unsigned int issue_flags) 147 + static bool io_rw_recycle(struct io_kiocb *req, unsigned int issue_flags) 148 148 { 149 149 struct io_async_rw *rw = req->async_data; 150 150 151 151 if (unlikely(issue_flags & IO_URING_F_UNLOCKED)) 152 - return; 152 + return false; 153 153 154 154 io_alloc_cache_vec_kasan(&rw->vec); 155 155 if (rw->vec.nr > IO_VEC_CACHE_SOFT_CAP) 156 156 io_vec_free(&rw->vec); 157 157 158 - if (io_alloc_cache_put(&req->ctx->rw_cache, rw)) 158 + if (io_alloc_cache_put(&req->ctx->rw_cache, rw)) { 159 159 io_req_async_data_clear(req, 0); 160 + return true; 161 + } 162 + return false; 160 163 } 161 164 162 165 static void io_req_rw_cleanup(struct io_kiocb *req, unsigned int issue_flags) ··· 193 190 */ 194 191 if (!(req->flags & (REQ_F_REISSUE | REQ_F_REFCOUNT))) { 195 192 req->flags &= ~REQ_F_NEED_CLEANUP; 196 - io_rw_recycle(req, issue_flags); 193 + if (!io_rw_recycle(req, issue_flags)) { 194 + struct io_async_rw *rw = req->async_data; 195 + 196 + io_vec_free(&rw->vec); 197 + } 197 198 } 198 199 } 199 200
+3 -3
io_uring/waitid.c
··· 114 114 struct io_waitid *iw = io_kiocb_to_cmd(req, struct io_waitid); 115 115 struct wait_queue_head *head; 116 116 117 - head = READ_ONCE(iw->head); 117 + head = smp_load_acquire(&iw->head); 118 118 if (head) { 119 119 struct io_waitid_async *iwa = req->async_data; 120 120 121 - iw->head = NULL; 121 + smp_store_release(&iw->head, NULL); 122 122 spin_lock_irq(&head->lock); 123 123 list_del_init(&iwa->wo.child_wait.entry); 124 124 spin_unlock_irq(&head->lock); ··· 246 246 return 0; 247 247 248 248 list_del_init(&wait->entry); 249 - iw->head = NULL; 249 + smp_store_release(&iw->head, NULL); 250 250 251 251 /* cancel is in progress */ 252 252 if (atomic_fetch_inc(&iw->refs) & IO_WAITID_REF_MASK)
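Replacing the plain load/store on iw->head with smp_load_acquire()/smp_store_release() gives the pointer publish/consume semantics: an observer that sees a non-NULL head is guaranteed to also see everything written before the matching release store, and the NULL store cannot be reordered ahead of the list manipulation it concludes. A userspace analogue of the same pairing, using C11 atomics in place of the kernel helpers:

    #include <stdatomic.h>
    #include <stddef.h>

    struct queue;                   /* opaque, stands in for the waitqueue */

    struct waiter {
        _Atomic(struct queue *) head;
        int payload;                /* data published alongside the pointer */
    };

    /* Publisher: fill in the payload, then release-store the pointer. */
    static void publish(struct waiter *w, struct queue *q)
    {
        w->payload = 42;
        atomic_store_explicit(&w->head, q, memory_order_release);
    }

    /* Consumer: acquire-load; a non-NULL result makes the payload visible. */
    static int consume(struct waiter *w)
    {
        struct queue *q = atomic_load_explicit(&w->head,
                                               memory_order_acquire);
        return q ? w->payload : -1;
    }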
+18 -9
kernel/dma/pool.c
··· 184 184 return pool; 185 185 } 186 186 187 + #ifdef CONFIG_ZONE_DMA32 188 + #define has_managed_dma32 has_managed_zone(ZONE_DMA32) 189 + #else 190 + #define has_managed_dma32 false 191 + #endif 192 + 187 193 static int __init dma_atomic_pool_init(void) 188 194 { 189 195 int ret = 0; ··· 205 199 } 206 200 INIT_WORK(&atomic_pool_work, atomic_pool_work_fn); 207 201 208 - atomic_pool_kernel = __dma_atomic_pool_init(atomic_pool_size, 202 + /* All memory might be in the DMA zone(s) to begin with */ 203 + if (has_managed_zone(ZONE_NORMAL)) { 204 + atomic_pool_kernel = __dma_atomic_pool_init(atomic_pool_size, 209 205 GFP_KERNEL); 210 - if (!atomic_pool_kernel) 211 - ret = -ENOMEM; 206 + if (!atomic_pool_kernel) 207 + ret = -ENOMEM; 208 + } 212 209 if (has_managed_dma()) { 213 210 atomic_pool_dma = __dma_atomic_pool_init(atomic_pool_size, 214 211 GFP_KERNEL | GFP_DMA); 215 212 if (!atomic_pool_dma) 216 213 ret = -ENOMEM; 217 214 } 218 - if (IS_ENABLED(CONFIG_ZONE_DMA32)) { 215 + if (has_managed_dma32) { 219 216 atomic_pool_dma32 = __dma_atomic_pool_init(atomic_pool_size, 220 217 GFP_KERNEL | GFP_DMA32); 221 218 if (!atomic_pool_dma32) ··· 233 224 static inline struct gen_pool *dma_guess_pool(struct gen_pool *prev, gfp_t gfp) 234 225 { 235 226 if (prev == NULL) { 236 - if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp & GFP_DMA32)) 237 - return atomic_pool_dma32; 238 - if (atomic_pool_dma && (gfp & GFP_DMA)) 239 - return atomic_pool_dma; 240 - return atomic_pool_kernel; 227 + if (gfp & GFP_DMA) 228 + return atomic_pool_dma ?: atomic_pool_dma32 ?: atomic_pool_kernel; 229 + if (gfp & GFP_DMA32) 230 + return atomic_pool_dma32 ?: atomic_pool_dma ?: atomic_pool_kernel; 231 + return atomic_pool_kernel ?: atomic_pool_dma32 ?: atomic_pool_dma; 241 232 } 242 233 if (prev == atomic_pool_kernel) 243 234 return atomic_pool_dma32 ? atomic_pool_dma32 : atomic_pool_dma;
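Now that any of the three pools may legitimately be absent (its zone may have no managed pages), dma_guess_pool() uses the GNU `?:` extension to express an ordered fallback. The construct evaluates its left operand once and yields it when non-zero, otherwise the right operand, so a chain reads as "first non-NULL wins":

    struct gen_pool;    /* opaque; matches the type used above */

    static struct gen_pool *first_available(struct gen_pool *a,
                                            struct gen_pool *b,
                                            struct gen_pool *c)
    {
        /* Equivalent to a ? a : (b ? b : c), without re-evaluating
         * the operands.  Requires GNU C (gcc/clang, -std=gnu11). */
        return a ?: b ?: c;
    }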
+9
kernel/events/core.c
··· 6997 6997 if (data_page_nr(event->rb) != nr_pages) 6998 6998 return -EINVAL; 6999 6999 7000 + /* 7001 + * If this event doesn't have mmap_count, we're attempting to 7002 + * create an alias of another event's mmap(); this would mean 7003 + * both events will end up scribbling the same user_page; 7004 + * which makes no sense. 7005 + */ 7006 + if (!refcount_read(&event->mmap_count)) 7007 + return -EBUSY; 7008 + 7000 7009 if (refcount_inc_not_zero(&event->rb->mmap_count)) { 7001 7010 /* 7002 7011 * Success -- managed to mmap() the same buffer
+2 -2
kernel/panic.c
··· 131 131 static int sysctl_panic_print_handler(const struct ctl_table *table, int write, 132 132 void *buffer, size_t *lenp, loff_t *ppos) 133 133 { 134 - panic_print_deprecated(); 134 + if (write) 135 + panic_print_deprecated(); 135 136 return proc_doulongvec_minmax(table, write, buffer, lenp, ppos); 136 137 } 137 138 ··· 1015 1014 1016 1015 static int panic_print_get(char *val, const struct kernel_param *kp) 1017 1016 { 1018 - panic_print_deprecated(); 1019 1017 return param_get_ulong(val, kp); 1020 1018 } 1021 1019
-16
kernel/sched/fair.c
··· 8828 8828 if ((wake_flags & WF_FORK) || pse->sched_delayed) 8829 8829 return; 8830 8830 8831 - /* 8832 - * If @p potentially is completing work required by current then 8833 - * consider preemption. 8834 - * 8835 - * Reschedule if waker is no longer eligible. */ 8836 - if (in_task() && !entity_eligible(cfs_rq, se)) { 8837 - preempt_action = PREEMPT_WAKEUP_RESCHED; 8838 - goto preempt; 8839 - } 8840 - 8841 8831 /* Prefer picking wakee soon if appropriate. */ 8842 8832 if (sched_feat(NEXT_BUDDY) && 8843 8833 set_preempt_buddy(cfs_rq, wake_flags, pse, se)) { ··· 8984 8994 if (new_tasks > 0) 8985 8995 goto again; 8986 8996 } 8987 - 8988 - /* 8989 - * rq is about to be idle, check if we need to update the 8990 - * lost_idle_time of clock_pelt 8991 - */ 8992 - update_idle_rq_clock_pelt(rq); 8993 8997 8994 8998 return NULL; 8995 8999 }
+1 -1
kernel/sched/features.h
··· 29 29 * wakeup-preemption), since its likely going to consume data we 30 30 * touched, increases cache locality. 31 31 */ 32 - SCHED_FEAT(NEXT_BUDDY, true) 32 + SCHED_FEAT(NEXT_BUDDY, false) 33 33 34 34 /* 35 35 * Allow completely ignoring cfs_rq->next; which can be set from various
+6
kernel/sched/idle.c
··· 468 468 scx_update_idle(rq, true, true); 469 469 schedstat_inc(rq->sched_goidle); 470 470 next->se.exec_start = rq_clock_task(rq); 471 + 472 + /* 473 + * rq is about to be idle, check if we need to update the 474 + * lost_idle_time of clock_pelt 475 + */ 476 + update_idle_rq_clock_pelt(rq); 471 477 } 472 478 473 479 struct task_struct *pick_task_idle(struct rq *rq, struct rq_flags *rf)
+1 -1
kernel/time/clocksource.c
··· 252 252 253 253 static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow, u64 *wdnow) 254 254 { 255 - int64_t md = 2 * watchdog->uncertainty_margin; 255 + int64_t md = watchdog->uncertainty_margin; 256 256 unsigned int nretries, max_retries; 257 257 int64_t wd_delay, wd_seq_delay; 258 258 u64 wd_end, wd_end2;
+1 -1
kernel/time/timekeeping.c
··· 2735 2735 timekeeping_update_from_shadow(tkd, TK_CLOCK_WAS_SET); 2736 2736 result->clock_set = true; 2737 2737 } else { 2738 - tk_update_leap_state_all(&tk_core); 2738 + tk_update_leap_state_all(tkd); 2739 2739 } 2740 2740 2741 2741 /* Update the multiplier immediately if frequency was set directly */
+4 -4
kernel/trace/trace.c
··· 6115 6115 unsigned long addr = (unsigned long)key; 6116 6116 const struct trace_mod_entry *ent = pivot; 6117 6117 6118 - if (addr >= ent[0].mod_addr && addr < ent[1].mod_addr) 6119 - return 0; 6120 - else 6121 - return addr - ent->mod_addr; 6118 + if (addr < ent[0].mod_addr) 6119 + return -1; 6120 + 6121 + return addr >= ent[1].mod_addr; 6122 6122 } 6123 6123 6124 6124 /**
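The rewritten comparator now honours the bsearch() contract for range lookups: negative when the key lies below the entry, zero when it falls inside [ent[0].mod_addr, ent[1].mod_addr), and positive otherwise. The old `addr - ent->mod_addr` result could truncate or change sign when narrowed to int, sending the search the wrong way. A runnable userspace analogue of the pattern, assuming a sentinel-terminated table so that ent[1] is always dereferenceable:

    #include <stdio.h>
    #include <stdlib.h>

    struct entry { unsigned long start; };

    /* Find the entry whose [start, next->start) range contains the key. */
    static int cmp_range(const void *key, const void *pivot)
    {
        unsigned long addr = (unsigned long)key;
        const struct entry *ent = pivot;

        if (addr < ent[0].start)
            return -1;
        return addr >= ent[1].start;    /* 0: inside range, 1: above it */
    }

    int main(void)
    {
        struct entry table[] = { {100}, {200}, {300}, {~0UL} /* sentinel */ };
        struct entry *hit = bsearch((void *)250UL, table, 3,
                                    sizeof(*table), cmp_range);

        printf("250 falls in the range starting at %lu\n", hit->start);
        return 0;
    }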
+9
kernel/trace/trace_events_hist.c
··· 2057 2057 hist_field->fn_num = HIST_FIELD_FN_RELDYNSTRING; 2058 2058 else 2059 2059 hist_field->fn_num = HIST_FIELD_FN_PSTRING; 2060 + } else if (field->filter_type == FILTER_STACKTRACE) { 2061 + flags |= HIST_FIELD_FL_STACKTRACE; 2062 + 2063 + hist_field->size = MAX_FILTER_STR_VAL; 2064 + hist_field->type = kstrdup_const(field->type, GFP_KERNEL); 2065 + if (!hist_field->type) 2066 + goto free; 2067 + 2068 + hist_field->fn_num = HIST_FIELD_FN_STACK; 2060 2069 } else { 2061 2070 hist_field->size = field->size; 2062 2071 hist_field->is_signed = field->is_signed;
+7 -1
kernel/trace/trace_events_synth.c
··· 130 130 struct synth_event *event = call->data; 131 131 unsigned int i, size, n_u64; 132 132 char *name, *type; 133 + int filter_type; 133 134 bool is_signed; 135 + bool is_stack; 134 136 int ret = 0; 135 137 136 138 for (i = 0, n_u64 = 0; i < event->n_fields; i++) { ··· 140 138 is_signed = event->fields[i]->is_signed; 141 139 type = event->fields[i]->type; 142 140 name = event->fields[i]->name; 141 + is_stack = event->fields[i]->is_stack; 142 + 143 + filter_type = is_stack ? FILTER_STACKTRACE : FILTER_OTHER; 144 + 143 145 ret = trace_define_field(call, type, name, offset, size, 144 - is_signed, FILTER_OTHER); 146 + is_signed, filter_type); 145 147 if (ret) 146 148 break; 147 149
+1 -1
kernel/trace/trace_functions_graph.c
··· 901 901 trace_seq_printf(s, "%ps", func); 902 902 903 903 if (args_size >= FTRACE_REGS_MAX_ARGS * sizeof(long)) { 904 - print_function_args(s, entry->args, (unsigned long)func); 904 + print_function_args(s, FGRAPH_ENTRY_ARGS(entry), (unsigned long)func); 905 905 trace_seq_putc(s, ';'); 906 906 } else 907 907 trace_seq_puts(s, "();");
+72 -59
mm/hugetlb.c
··· 5112 5112 unsigned long last_addr_mask; 5113 5113 pte_t *src_pte, *dst_pte; 5114 5114 struct mmu_notifier_range range; 5115 - bool shared_pmd = false; 5115 + struct mmu_gather tlb; 5116 5116 5117 5117 mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, old_addr, 5118 5118 old_end); ··· 5122 5122 * range. 5123 5123 */ 5124 5124 flush_cache_range(vma, range.start, range.end); 5125 + tlb_gather_mmu_vma(&tlb, vma); 5125 5126 5126 5127 mmu_notifier_invalidate_range_start(&range); 5127 5128 last_addr_mask = hugetlb_mask_last_page(h); ··· 5139 5138 if (huge_pte_none(huge_ptep_get(mm, old_addr, src_pte))) 5140 5139 continue; 5141 5140 5142 - if (huge_pmd_unshare(mm, vma, old_addr, src_pte)) { 5143 - shared_pmd = true; 5141 + if (huge_pmd_unshare(&tlb, vma, old_addr, src_pte)) { 5144 5142 old_addr |= last_addr_mask; 5145 5143 new_addr |= last_addr_mask; 5146 5144 continue; ··· 5150 5150 break; 5151 5151 5152 5152 move_huge_pte(vma, old_addr, new_addr, src_pte, dst_pte, sz); 5153 + tlb_remove_huge_tlb_entry(h, &tlb, src_pte, old_addr); 5153 5154 } 5154 5155 5155 - if (shared_pmd) 5156 - flush_hugetlb_tlb_range(vma, range.start, range.end); 5157 - else 5158 - flush_hugetlb_tlb_range(vma, old_end - len, old_end); 5156 + tlb_flush_mmu_tlbonly(&tlb); 5157 + huge_pmd_unshare_flush(&tlb, vma); 5158 + 5159 5159 mmu_notifier_invalidate_range_end(&range); 5160 5160 i_mmap_unlock_write(mapping); 5161 5161 hugetlb_vma_unlock_write(vma); 5162 + tlb_finish_mmu(&tlb); 5162 5163 5163 5164 return len + old_addr - old_end; 5164 5165 } ··· 5178 5177 unsigned long sz = huge_page_size(h); 5179 5178 bool adjust_reservation; 5180 5179 unsigned long last_addr_mask; 5181 - bool force_flush = false; 5182 5180 5183 5181 WARN_ON(!is_vm_hugetlb_page(vma)); 5184 5182 BUG_ON(start & ~huge_page_mask(h)); ··· 5200 5200 } 5201 5201 5202 5202 ptl = huge_pte_lock(h, mm, ptep); 5203 - if (huge_pmd_unshare(mm, vma, address, ptep)) { 5203 + if (huge_pmd_unshare(tlb, vma, address, ptep)) { 5204 5204 spin_unlock(ptl); 5205 - tlb_flush_pmd_range(tlb, address & PUD_MASK, PUD_SIZE); 5206 - force_flush = true; 5207 5205 address |= last_addr_mask; 5208 5206 continue; 5209 5207 } ··· 5317 5319 } 5318 5320 tlb_end_vma(tlb, vma); 5319 5321 5320 - /* 5321 - * If we unshared PMDs, the TLB flush was not recorded in mmu_gather. We 5322 - * could defer the flush until now, since by holding i_mmap_rwsem we 5323 - * guaranteed that the last reference would not be dropped. But we must 5324 - * do the flushing before we return, as otherwise i_mmap_rwsem will be 5325 - * dropped and the last reference to the shared PMDs page might be 5326 - * dropped as well. 5327 - * 5328 - * In theory we could defer the freeing of the PMD pages as well, but 5329 - * huge_pmd_unshare() relies on the exact page_count for the PMD page to 5330 - * detect sharing, so we cannot defer the release of the page either. 5331 - * Instead, do flush now. 
5332 - */ 5333 - if (force_flush) 5334 - tlb_flush_mmu_tlbonly(tlb); 5322 + huge_pmd_unshare_flush(tlb, vma); 5335 5323 } 5336 5324 5337 5325 void __hugetlb_zap_begin(struct vm_area_struct *vma, ··· 6416 6432 pte_t pte; 6417 6433 struct hstate *h = hstate_vma(vma); 6418 6434 long pages = 0, psize = huge_page_size(h); 6419 - bool shared_pmd = false; 6420 6435 struct mmu_notifier_range range; 6421 6436 unsigned long last_addr_mask; 6422 6437 bool uffd_wp = cp_flags & MM_CP_UFFD_WP; 6423 6438 bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE; 6439 + struct mmu_gather tlb; 6424 6440 6425 6441 /* 6426 6442 * In the case of shared PMDs, the area to flush could be beyond ··· 6433 6449 6434 6450 BUG_ON(address >= end); 6435 6451 flush_cache_range(vma, range.start, range.end); 6452 + tlb_gather_mmu_vma(&tlb, vma); 6436 6453 6437 6454 mmu_notifier_invalidate_range_start(&range); 6438 6455 hugetlb_vma_lock_write(vma); ··· 6460 6475 } 6461 6476 } 6462 6477 ptl = huge_pte_lock(h, mm, ptep); 6463 - if (huge_pmd_unshare(mm, vma, address, ptep)) { 6478 + if (huge_pmd_unshare(&tlb, vma, address, ptep)) { 6464 6479 /* 6465 6480 * When uffd-wp is enabled on the vma, unshare 6466 6481 * shouldn't happen at all. Warn about it if it ··· 6469 6484 WARN_ON_ONCE(uffd_wp || uffd_wp_resolve); 6470 6485 pages++; 6471 6486 spin_unlock(ptl); 6472 - shared_pmd = true; 6473 6487 address |= last_addr_mask; 6474 6488 continue; 6475 6489 } ··· 6529 6545 pte = huge_pte_clear_uffd_wp(pte); 6530 6546 huge_ptep_modify_prot_commit(vma, address, ptep, old_pte, pte); 6531 6547 pages++; 6548 + tlb_remove_huge_tlb_entry(h, &tlb, ptep, address); 6532 6549 } 6533 6550 6534 6551 next: 6535 6552 spin_unlock(ptl); 6536 6553 cond_resched(); 6537 6554 } 6538 - /* 6539 - * Must flush TLB before releasing i_mmap_rwsem: x86's huge_pmd_unshare 6540 - * may have cleared our pud entry and done put_page on the page table: 6541 - * once we release i_mmap_rwsem, another task can do the final put_page 6542 - * and that page table be reused and filled with junk. If we actually 6543 - * did unshare a page of pmds, flush the range corresponding to the pud. 6544 - */ 6545 - if (shared_pmd) 6546 - flush_hugetlb_tlb_range(vma, range.start, range.end); 6547 - else 6548 - flush_hugetlb_tlb_range(vma, start, end); 6555 + 6556 + tlb_flush_mmu_tlbonly(&tlb); 6557 + huge_pmd_unshare_flush(&tlb, vma); 6549 6558 /* 6550 6559 * No need to call mmu_notifier_arch_invalidate_secondary_tlbs() we are 6551 6560 * downgrading page table protection not changing it to point to a new ··· 6549 6572 i_mmap_unlock_write(vma->vm_file->f_mapping); 6550 6573 hugetlb_vma_unlock_write(vma); 6551 6574 mmu_notifier_invalidate_range_end(&range); 6575 + tlb_finish_mmu(&tlb); 6552 6576 6553 6577 return pages > 0 ? (pages << h->order) : pages; 6554 6578 } ··· 6906 6928 return pte; 6907 6929 } 6908 6930 6909 - /* 6910 - * unmap huge page backed by shared pte. 6931 + /** 6932 + * huge_pmd_unshare - Unmap a pmd table if it is shared by multiple users 6933 + * @tlb: the current mmu_gather. 6934 + * @vma: the vma covering the pmd table. 6935 + * @addr: the address we are trying to unshare. 6936 + * @ptep: pointer into the (pmd) page table. 6911 6937 * 6912 - * Called with page table lock held. 6938 + * Called with the page table lock held, the i_mmap_rwsem held in write mode 6939 + * and the hugetlb vma lock held in write mode. 
6913 6940 * 6914 - * returns: 1 successfully unmapped a shared pte page 6915 - * 0 the underlying pte page is not shared, or it is the last user 6941 + * Note: The caller must call huge_pmd_unshare_flush() before dropping the 6942 + * i_mmap_rwsem. 6943 + * 6944 + * Returns: 1 if it was a shared PMD table and it got unmapped, or 0 if it 6945 + * was not a shared PMD table. 6916 6946 */ 6917 - int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma, 6918 - unsigned long addr, pte_t *ptep) 6947 + int huge_pmd_unshare(struct mmu_gather *tlb, struct vm_area_struct *vma, 6948 + unsigned long addr, pte_t *ptep) 6919 6949 { 6920 6950 unsigned long sz = huge_page_size(hstate_vma(vma)); 6951 + struct mm_struct *mm = vma->vm_mm; 6921 6952 pgd_t *pgd = pgd_offset(mm, addr); 6922 6953 p4d_t *p4d = p4d_offset(pgd, addr); 6923 6954 pud_t *pud = pud_offset(p4d, addr); ··· 6938 6951 i_mmap_assert_write_locked(vma->vm_file->f_mapping); 6939 6952 hugetlb_vma_assert_locked(vma); 6940 6953 pud_clear(pud); 6941 - /* 6942 - * Once our caller drops the rmap lock, some other process might be 6943 - * using this page table as a normal, non-hugetlb page table. 6944 - * Wait for pending gup_fast() in other threads to finish before letting 6945 - * that happen. 6946 - */ 6947 - tlb_remove_table_sync_one(); 6948 - ptdesc_pmd_pts_dec(virt_to_ptdesc(ptep)); 6954 + 6955 + tlb_unshare_pmd_ptdesc(tlb, virt_to_ptdesc(ptep), addr); 6956 + 6949 6957 mm_dec_nr_pmds(mm); 6950 6958 return 1; 6959 + } 6960 + 6961 + /* 6962 + * huge_pmd_unshare_flush - Complete a sequence of huge_pmd_unshare() calls 6963 + * @tlb: the current mmu_gather. 6964 + * @vma: the vma covering the pmd table. 6965 + * 6966 + * Perform necessary TLB flushes or IPI broadcasts to synchronize PMD table 6967 + * unsharing with concurrent page table walkers. 6968 + * 6969 + * This function must be called after a sequence of huge_pmd_unshare() 6970 + * calls while still holding the i_mmap_rwsem. 6971 + */ 6972 + void huge_pmd_unshare_flush(struct mmu_gather *tlb, struct vm_area_struct *vma) 6973 + { 6974 + /* 6975 + * We must synchronize page table unsharing such that nobody will 6976 + * try reusing a previously-shared page table while it might still 6977 + * be in use by previous sharers (TLB, GUP_fast). 6978 + */ 6979 + i_mmap_assert_write_locked(vma->vm_file->f_mapping); 6980 + 6981 + tlb_flush_unshared_tables(tlb); 6951 6982 } 6952 6983 6953 6984 #else /* !CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING */ ··· 6976 6971 return NULL; 6977 6972 } 6978 6973 6979 - int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma, 6980 - unsigned long addr, pte_t *ptep) 6974 + int huge_pmd_unshare(struct mmu_gather *tlb, struct vm_area_struct *vma, 6975 + unsigned long addr, pte_t *ptep) 6981 6976 { 6982 6977 return 0; 6978 + } 6979 + 6980 + void huge_pmd_unshare_flush(struct mmu_gather *tlb, struct vm_area_struct *vma) 6981 + { 6983 6982 } 6984 6983 6985 6984 void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma, ··· 7252 7243 unsigned long sz = huge_page_size(h); 7253 7244 struct mm_struct *mm = vma->vm_mm; 7254 7245 struct mmu_notifier_range range; 7246 + struct mmu_gather tlb; 7255 7247 unsigned long address; 7256 7248 spinlock_t *ptl; 7257 7249 pte_t *ptep; ··· 7264 7254 return; 7265 7255 7266 7256 flush_cache_range(vma, start, end); 7257 + tlb_gather_mmu_vma(&tlb, vma); 7258 + 7267 7259 /* 7268 7260 * No need to call adjust_range_if_pmd_sharing_possible(), because 7269 7261 * we have already done the PUD_SIZE alignment. 
··· 7284 7272 if (!ptep) 7285 7273 continue; 7286 7274 ptl = huge_pte_lock(h, mm, ptep); 7287 - huge_pmd_unshare(mm, vma, address, ptep); 7275 + huge_pmd_unshare(&tlb, vma, address, ptep); 7288 7276 spin_unlock(ptl); 7289 7277 } 7290 - flush_hugetlb_tlb_range(vma, start, end); 7278 + huge_pmd_unshare_flush(&tlb, vma); 7291 7279 if (take_locks) { 7292 7280 i_mmap_unlock_write(vma->vm_file->f_mapping); 7293 7281 hugetlb_vma_unlock_write(vma); ··· 7297 7285 * Documentation/mm/mmu_notifier.rst. 7298 7286 */ 7299 7287 mmu_notifier_invalidate_range_end(&range); 7288 + tlb_finish_mmu(&tlb); 7300 7289 } 7301 7290 7302 7291 /*
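Taken together, the hunks above replace the old ad-hoc flush_tlb_range()/force_flush bookkeeping with a fixed discipline around an mmu_gather. A call-order sketch distilled from the code above, not a standalone compilable unit (error handling and most locking omitted):

    struct mmu_gather tlb;

    tlb_gather_mmu_vma(&tlb, vma);      /* per-VMA gather, hugetlb page size */

    /* for each relevant address, under the page-table lock: */
    ptl = huge_pte_lock(h, mm, ptep);
    if (huge_pmd_unshare(&tlb, vma, address, ptep)) {
        /* shared PMD table unmapped; the folio is unmapped with it */
    }
    spin_unlock(ptl);

    huge_pmd_unshare_flush(&tlb, vma);  /* before dropping i_mmap_rwsem */
    i_mmap_unlock_write(vma->vm_file->f_mapping);
    tlb_finish_mmu(&tlb);               /* warns if the flush was skipped */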
+4 -1
mm/init-mm.c
··· 44 44 .mm_lock_seq = SEQCNT_ZERO(init_mm.mm_lock_seq), 45 45 #endif 46 46 .user_ns = &init_user_ns, 47 - .cpu_bitmap = CPU_BITS_NONE, 47 + #ifdef CONFIG_SCHED_MM_CID 48 + .mm_cid.lock = __RAW_SPIN_LOCK_UNLOCKED(init_mm.mm_cid.lock), 49 + #endif 50 + .flexible_array = MM_STRUCT_FLEXIBLE_ARRAY_INIT, 48 51 INIT_MM_CONTEXT(init_mm) 49 52 }; 50 53
-8
mm/internal.h
··· 538 538 bool folio_isolate_lru(struct folio *folio); 539 539 void folio_putback_lru(struct folio *folio); 540 540 extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason); 541 - #ifdef CONFIG_NUMA 542 541 int user_proactive_reclaim(char *buf, 543 542 struct mem_cgroup *memcg, pg_data_t *pgdat); 544 - #else 545 - static inline int user_proactive_reclaim(char *buf, 546 - struct mem_cgroup *memcg, pg_data_t *pgdat) 547 - { 548 - return 0; 549 - } 550 - #endif 551 543 552 544 /* 553 545 * in mm/rmap.c:
+12 -5
mm/kfence/core.c
··· 823 823 static struct delayed_work kfence_timer; 824 824 825 825 #ifdef CONFIG_KFENCE_STATIC_KEYS 826 + /* Wait queue to wake up allocation-gate timer task. */ 827 + static DECLARE_WAIT_QUEUE_HEAD(allocation_wait); 828 + 826 829 static int kfence_reboot_callback(struct notifier_block *nb, 827 830 unsigned long action, void *data) 828 831 { ··· 835 832 */ 836 833 WRITE_ONCE(kfence_enabled, false); 837 834 /* Cancel any pending timer work */ 838 - cancel_delayed_work_sync(&kfence_timer); 835 + cancel_delayed_work(&kfence_timer); 836 + /* 837 + * Wake up any blocked toggle_allocation_gate() so it can complete 838 + * early while the system is still able to handle IPIs. 839 + */ 840 + wake_up(&allocation_wait); 839 841 840 842 return NOTIFY_OK; 841 843 } ··· 849 841 .notifier_call = kfence_reboot_callback, 850 842 .priority = INT_MAX, /* Run early to stop timers ASAP */ 851 843 }; 852 - 853 - /* Wait queue to wake up allocation-gate timer task. */ 854 - static DECLARE_WAIT_QUEUE_HEAD(allocation_wait); 855 844 856 845 static void wake_up_kfence_timer(struct irq_work *work) 857 846 { ··· 878 873 /* Enable static key, and await allocation to happen. */ 879 874 static_branch_enable(&kfence_allocation_key); 880 875 881 - wait_event_idle(allocation_wait, atomic_read(&kfence_allocation_gate) > 0); 876 + wait_event_idle(allocation_wait, 877 + atomic_read(&kfence_allocation_gate) > 0 || 878 + !READ_ONCE(kfence_enabled)); 882 879 883 880 /* Disable static key and reset timer. */ 884 881 static_branch_disable(&kfence_allocation_key);
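The reboot path can no longer block forever: cancel_delayed_work() replaces the _sync variant (no waiting on a work item that may itself be waiting), the waiter's condition gains `|| !READ_ONCE(kfence_enabled)`, and clearing the flag is followed by a wake_up(). All three pieces are needed. A runnable pthread analogue of the wake-on-shutdown pattern, simplified to a condition variable where the kernel uses its own wait_event machinery:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static bool enabled = true;
    static int gate;

    static void *waiter(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        /* The condition must include the shutdown flag, or a shutdown
         * racing with this wait leaves the thread asleep forever. */
        while (!(gate > 0 || !enabled))
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    static void shut_down(void)
    {
        pthread_mutex_lock(&lock);
        enabled = false;                /* clear the flag ...         */
        pthread_cond_broadcast(&cond);  /* ... then wake every waiter */
        pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
        pthread_t t;

        pthread_create(&t, NULL, waiter, NULL);
        shut_down();
        pthread_join(&t, NULL);
        puts("waiter exited cleanly");
        return 0;
    }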
+7 -4
mm/memory.c
··· 1465 1465 static bool 1466 1466 vma_needs_copy(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma) 1467 1467 { 1468 - if (src_vma->vm_flags & VM_COPY_ON_FORK) 1468 + /* 1469 + * We check against dst_vma as while sane VMA flags will have been 1470 + * copied, VM_UFFD_WP may be set only on dst_vma. 1471 + */ 1472 + if (dst_vma->vm_flags & VM_COPY_ON_FORK) 1469 1473 return true; 1470 1474 /* 1471 1475 * The presence of an anon_vma indicates an anonymous VMA has page ··· 1967 1963 do { 1968 1964 next = pud_addr_end(addr, end); 1969 1965 if (pud_trans_huge(*pud)) { 1970 - if (next - addr != HPAGE_PUD_SIZE) { 1971 - mmap_assert_locked(tlb->mm); 1966 + if (next - addr != HPAGE_PUD_SIZE) 1972 1967 split_huge_pud(vma, pud, addr); 1973 - } else if (zap_huge_pud(tlb, vma, pud, addr)) 1968 + else if (zap_huge_pud(tlb, vma, pud, addr)) 1974 1969 goto next; 1975 1970 /* fall through */ 1976 1971 }
+6 -6
mm/migrate.c
··· 1458 1458 int page_was_mapped = 0; 1459 1459 struct anon_vma *anon_vma = NULL; 1460 1460 struct address_space *mapping = NULL; 1461 + enum ttu_flags ttu = 0; 1461 1462 1462 1463 if (folio_ref_count(src) == 1) { 1463 1464 /* page was freed from under us. So we are done. */ ··· 1499 1498 goto put_anon; 1500 1499 1501 1500 if (folio_mapped(src)) { 1502 - enum ttu_flags ttu = 0; 1503 - 1504 1501 if (!folio_test_anon(src)) { 1505 1502 /* 1506 1503 * In shared mappings, try_to_unmap could potentially ··· 1515 1516 1516 1517 try_to_migrate(src, ttu); 1517 1518 page_was_mapped = 1; 1518 - 1519 - if (ttu & TTU_RMAP_LOCKED) 1520 - i_mmap_unlock_write(mapping); 1521 1519 } 1522 1520 1523 1521 if (!folio_mapped(src)) 1524 1522 rc = move_to_new_folio(dst, src, mode); 1525 1523 1526 1524 if (page_was_mapped) 1527 - remove_migration_ptes(src, !rc ? dst : src, 0); 1525 + remove_migration_ptes(src, !rc ? dst : src, 1526 + ttu ? RMP_LOCKED : 0); 1527 + 1528 + if (ttu & TTU_RMAP_LOCKED) 1529 + i_mmap_unlock_write(mapping); 1528 1530 1529 1531 unlock_put_anon: 1530 1532 folio_unlock(dst);
+33
mm/mmu_gather.c
··· 10 10 #include <linux/swap.h> 11 11 #include <linux/rmap.h> 12 12 #include <linux/pgalloc.h> 13 + #include <linux/hugetlb.h> 13 14 14 15 #include <asm/tlb.h> 15 16 ··· 427 426 #endif 428 427 tlb->vma_pfn = 0; 429 428 429 + tlb->fully_unshared_tables = 0; 430 430 __tlb_reset_range(tlb); 431 431 inc_tlb_flush_pending(tlb->mm); 432 432 } ··· 462 460 } 463 461 464 462 /** 463 + * tlb_gather_mmu_vma - initialize an mmu_gather structure for operating on a 464 + * single VMA 465 + * @tlb: the mmu_gather structure to initialize 466 + * @vma: the vm_area_struct 467 + * 468 + * Called to initialize an (on-stack) mmu_gather structure for operating on 469 + * a single VMA. In contrast to tlb_gather_mmu(), calling this function will 470 + * not require another call to tlb_start_vma(). In contrast to tlb_start_vma(), 471 + * this function will *not* call flush_cache_range(). 472 + * 473 + * For hugetlb VMAs, this function will also initialize the mmu_gather 474 + * page_size accordingly, not requiring a separate call to 475 + * tlb_change_page_size(). 476 + * 477 + */ 478 + void tlb_gather_mmu_vma(struct mmu_gather *tlb, struct vm_area_struct *vma) 479 + { 480 + tlb_gather_mmu(tlb, vma->vm_mm); 481 + tlb_update_vma_flags(tlb, vma); 482 + if (is_vm_hugetlb_page(vma)) 483 + /* All entries have the same size. */ 484 + tlb_change_page_size(tlb, huge_page_size(hstate_vma(vma))); 485 + } 486 + 487 + /** 465 488 * tlb_finish_mmu - finish an mmu_gather structure 466 489 * @tlb: the mmu_gather structure to finish 467 490 * ··· 495 468 */ 496 469 void tlb_finish_mmu(struct mmu_gather *tlb) 497 470 { 471 + /* 472 + * We expect an earlier huge_pmd_unshare_flush() call to sort this out, 473 + * due to complicated locking requirements with page table unsharing. 474 + */ 475 + VM_WARN_ON_ONCE(tlb->fully_unshared_tables); 476 + 498 477 /* 499 478 * If there are parallel threads are doing PTE changes on same range 500 479 * under non-exclusive lock (e.g., mmap_lock read-side) but defer TLB
+2 -6
mm/page_alloc.c
··· 7457 7457 } 7458 7458 #endif 7459 7459 7460 - #ifdef CONFIG_ZONE_DMA 7461 - bool has_managed_dma(void) 7460 + bool has_managed_zone(enum zone_type zone) 7462 7461 { 7463 7462 struct pglist_data *pgdat; 7464 7463 7465 7464 for_each_online_pgdat(pgdat) { 7466 - struct zone *zone = &pgdat->node_zones[ZONE_DMA]; 7467 - 7468 - if (managed_zone(zone)) 7465 + if (managed_zone(&pgdat->node_zones[zone])) 7469 7466 return true; 7470 7467 } 7471 7468 return false; 7472 7469 } 7473 - #endif /* CONFIG_ZONE_DMA */ 7474 7470 7475 7471 #ifdef CONFIG_UNACCEPTED_MEMORY 7476 7472
+21 -24
mm/rmap.c
··· 76 76 #include <linux/mm_inline.h> 77 77 #include <linux/oom.h> 78 78 79 - #include <asm/tlbflush.h> 79 + #include <asm/tlb.h> 80 80 81 81 #define CREATE_TRACE_POINTS 82 82 #include <trace/events/migrate.h> ··· 2008 2008 * if unsuccessful. 2009 2009 */ 2010 2010 if (!anon) { 2011 + struct mmu_gather tlb; 2012 + 2011 2013 VM_BUG_ON(!(flags & TTU_RMAP_LOCKED)); 2012 2014 if (!hugetlb_vma_trylock_write(vma)) 2013 2015 goto walk_abort; 2014 - if (huge_pmd_unshare(mm, vma, address, pvmw.pte)) { 2016 + 2017 + tlb_gather_mmu_vma(&tlb, vma); 2018 + if (huge_pmd_unshare(&tlb, vma, address, pvmw.pte)) { 2015 2019 hugetlb_vma_unlock_write(vma); 2016 - flush_tlb_range(vma, 2017 - range.start, range.end); 2020 + huge_pmd_unshare_flush(&tlb, vma); 2021 + tlb_finish_mmu(&tlb); 2018 2022 /* 2019 - * The ref count of the PMD page was 2020 - * dropped which is part of the way map 2021 - * counting is done for shared PMDs. 2022 - * Return 'true' here. When there is 2023 - * no other sharing, huge_pmd_unshare 2024 - * returns false and we will unmap the 2025 - * actual page and drop map count 2026 - * to zero. 2023 + * The PMD table was unmapped, 2024 + * consequently unmapping the folio. 2027 2025 */ 2028 2026 goto walk_done; 2029 2027 } 2030 2028 hugetlb_vma_unlock_write(vma); 2029 + tlb_finish_mmu(&tlb); 2031 2030 } 2032 2031 pteval = huge_ptep_clear_flush(vma, address, pvmw.pte); 2033 2032 if (pte_dirty(pteval)) ··· 2403 2404 * fail if unsuccessful. 2404 2405 */ 2405 2406 if (!anon) { 2407 + struct mmu_gather tlb; 2408 + 2406 2409 VM_BUG_ON(!(flags & TTU_RMAP_LOCKED)); 2407 2410 if (!hugetlb_vma_trylock_write(vma)) { 2408 2411 page_vma_mapped_walk_done(&pvmw); 2409 2412 ret = false; 2410 2413 break; 2411 2414 } 2412 - if (huge_pmd_unshare(mm, vma, address, pvmw.pte)) { 2413 - hugetlb_vma_unlock_write(vma); 2414 - flush_tlb_range(vma, 2415 - range.start, range.end); 2416 2415 2416 + tlb_gather_mmu_vma(&tlb, vma); 2417 + if (huge_pmd_unshare(&tlb, vma, address, pvmw.pte)) { 2418 + hugetlb_vma_unlock_write(vma); 2419 + huge_pmd_unshare_flush(&tlb, vma); 2420 + tlb_finish_mmu(&tlb); 2417 2421 /* 2418 - * The ref count of the PMD page was 2419 - * dropped which is part of the way map 2420 - * counting is done for shared PMDs. 2421 - * Return 'true' here. When there is 2422 - * no other sharing, huge_pmd_unshare 2423 - * returns false and we will unmap the 2424 - * actual page and drop map count 2425 - * to zero. 2422 + * The PMD table was unmapped, 2423 + * consequently unmapping the folio. 2426 2424 */ 2427 2425 page_vma_mapped_walk_done(&pvmw); 2428 2426 break; 2429 2427 } 2430 2428 hugetlb_vma_unlock_write(vma); 2429 + tlb_finish_mmu(&tlb); 2431 2430 } 2432 2431 /* Nuke the hugetlb page table entry */ 2433 2432 pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
+6 -2
mm/slub.c
··· 5694 5694 if (unlikely(!size)) 5695 5695 return ZERO_SIZE_PTR; 5696 5696 5697 - if (IS_ENABLED(CONFIG_PREEMPT_RT) && (in_nmi() || in_hardirq())) 5698 - /* kmalloc_nolock() in PREEMPT_RT is not supported from irq */ 5697 + if (IS_ENABLED(CONFIG_PREEMPT_RT) && !preemptible()) 5698 + /* 5699 + * kmalloc_nolock() in PREEMPT_RT is not supported from 5700 + * non-preemptible context because local_lock becomes a 5701 + * sleeping lock on RT. 5702 + */ 5699 5703 return NULL; 5700 5704 retry: 5701 5705 if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
+9 -2
mm/vma.c
··· 37 37 bool check_ksm_early :1; 38 38 /* If we map new, hold the file rmap lock on mapping. */ 39 39 bool hold_file_rmap_lock :1; 40 + /* If .mmap_prepare changed the file, we don't need to pin. */ 41 + bool file_doesnt_need_get :1; 40 42 }; 41 43 42 44 #define MMAP_STATE(name, mm_, vmi_, addr_, len_, pgoff_, vm_flags_, file_) \ ··· 2452 2450 struct vma_iterator *vmi = map->vmi; 2453 2451 int error; 2454 2452 2455 - vma->vm_file = get_file(map->file); 2453 + vma->vm_file = map->file; 2454 + if (!map->file_doesnt_need_get) 2455 + get_file(map->file); 2456 2456 2457 2457 if (!map->file->f_op->mmap) 2458 2458 return 0; ··· 2642 2638 2643 2639 /* Update fields permitted to be changed. */ 2644 2640 map->pgoff = desc->pgoff; 2645 - map->file = desc->vm_file; 2641 + if (desc->vm_file != map->file) { 2642 + map->file_doesnt_need_get = true; 2643 + map->file = desc->vm_file; 2644 + } 2646 2645 map->vm_flags = desc->vm_flags; 2647 2646 map->page_prot = desc->page_prot; 2648 2647 /* User-defined fields. */
+11 -2
mm/vmscan.c
··· 7707 7707 return ret; 7708 7708 } 7709 7709 7710 + #else 7711 + 7712 + static unsigned long __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, 7713 + unsigned long nr_pages, 7714 + struct scan_control *sc) 7715 + { 7716 + return 0; 7717 + } 7718 + 7719 + #endif 7720 + 7710 7721 enum { 7711 7722 MEMORY_RECLAIM_SWAPPINESS = 0, 7712 7723 MEMORY_RECLAIM_SWAPPINESS_MAX, ··· 7824 7813 7825 7814 return 0; 7826 7815 } 7827 - 7828 - #endif 7829 7816 7830 7817 /** 7831 7818 * check_move_unevictable_folios - Move evictable folios to appropriate zone
+4 -3
net/core/skbuff.c
··· 1312 1312 has_mac = skb_mac_header_was_set(skb); 1313 1313 has_trans = skb_transport_header_was_set(skb); 1314 1314 1315 - printk("%sskb len=%u headroom=%u headlen=%u tailroom=%u\n" 1316 - "mac=(%d,%d) mac_len=%u net=(%d,%d) trans=%d\n" 1315 + printk("%sskb len=%u data_len=%u headroom=%u headlen=%u tailroom=%u\n" 1316 + "end-tail=%u mac=(%d,%d) mac_len=%u net=(%d,%d) trans=%d\n" 1317 1317 "shinfo(txflags=%u nr_frags=%u gso(size=%hu type=%u segs=%hu))\n" 1318 1318 "csum(0x%x start=%u offset=%u ip_summed=%u complete_sw=%u valid=%u level=%u)\n" 1319 1319 "hash(0x%x sw=%u l4=%u) proto=0x%04x pkttype=%u iif=%d\n" 1320 1320 "priority=0x%x mark=0x%x alloc_cpu=%u vlan_all=0x%x\n" 1321 1321 "encapsulation=%d inner(proto=0x%04x, mac=%u, net=%u, trans=%u)\n", 1322 - level, skb->len, headroom, skb_headlen(skb), tailroom, 1322 + level, skb->len, skb->data_len, headroom, skb_headlen(skb), 1323 + tailroom, skb->end - skb->tail, 1323 1324 has_mac ? skb->mac_header : -1, 1324 1325 has_mac ? skb_mac_header_len(skb) : -1, 1325 1326 skb->mac_len,
+1 -1
net/dsa/dsa.c
··· 158 158 bridge_num = find_next_zero_bit(&dsa_fwd_offloading_bridges, 159 159 DSA_MAX_NUM_OFFLOADING_BRIDGES, 160 160 1); 161 - if (bridge_num >= max) 161 + if (bridge_num > max) 162 162 return 0; 163 163 164 164 set_bit(bridge_num, &dsa_fwd_offloading_bridges);
+3
net/ipv4/fou_core.c
··· 215 215 return gue_control_message(skb, guehdr); 216 216 217 217 proto_ctype = guehdr->proto_ctype; 218 + if (unlikely(!proto_ctype)) 219 + goto drop; 220 + 218 221 __skb_pull(skb, sizeof(struct udphdr) + hdrlen); 219 222 skb_reset_transport_header(skb); 220 223
+1 -1
net/ipv4/fou_nl.c
··· 15 15 const struct nla_policy fou_nl_policy[FOU_ATTR_IFINDEX + 1] = { 16 16 [FOU_ATTR_PORT] = { .type = NLA_BE16, }, 17 17 [FOU_ATTR_AF] = { .type = NLA_U8, }, 18 - [FOU_ATTR_IPPROTO] = { .type = NLA_U8, }, 18 + [FOU_ATTR_IPPROTO] = NLA_POLICY_MIN(NLA_U8, 1), 19 19 [FOU_ATTR_TYPE] = { .type = NLA_U8, }, 20 20 [FOU_ATTR_REMCSUM_NOPARTIAL] = { .type = NLA_FLAG, }, 21 21 [FOU_ATTR_LOCAL_V4] = { .type = NLA_U32, },
+2 -2
net/ipv6/ndisc.c
··· 1555 1555 memcpy(&n, ((u8 *)(ndopts.nd_opts_mtu+1))+2, sizeof(mtu)); 1556 1556 mtu = ntohl(n); 1557 1557 1558 - if (in6_dev->ra_mtu != mtu) { 1559 - in6_dev->ra_mtu = mtu; 1558 + if (READ_ONCE(in6_dev->ra_mtu) != mtu) { 1559 + WRITE_ONCE(in6_dev->ra_mtu, mtu); 1560 1560 send_ifinfo_notify = true; 1561 1561 } 1562 1562
+5 -3
net/l2tp/l2tp_core.c
··· 1086 1086 tunnel = session->tunnel; 1087 1087 1088 1088 /* Check protocol version */ 1089 - if (version != tunnel->version) 1089 + if (version != tunnel->version) { 1090 + l2tp_session_put(session); 1090 1091 goto invalid; 1092 + } 1091 1093 1092 1094 if (version == L2TP_HDR_VER_3 && 1093 1095 l2tp_v3_ensure_opt_in_linear(session, skb, &ptr, &optr)) { ··· 1416 1414 { 1417 1415 struct l2tp_tunnel *tunnel = container_of(work, struct l2tp_tunnel, 1418 1416 del_work); 1419 - struct sock *sk = tunnel->sock; 1420 - struct socket *sock = sk->sk_socket; 1421 1417 1422 1418 l2tp_tunnel_closeall(tunnel); 1423 1419 ··· 1423 1423 * the sk API to release it here. 1424 1424 */ 1425 1425 if (tunnel->fd < 0) { 1426 + struct socket *sock = tunnel->sock->sk_socket; 1427 + 1426 1428 if (sock) { 1427 1429 kernel_sock_shutdown(sock, SHUT_RDWR); 1428 1430 sock_release(sock);
-2
net/mac80211/ieee80211_i.h
··· 451 451 struct ieee80211_conn_settings conn; 452 452 453 453 u16 status; 454 - 455 - bool disabled; 456 454 } link[IEEE80211_MLD_MAX_NUM_LINKS]; 457 455 458 456 u8 ap_addr[ETH_ALEN] __aligned(2);
+6 -2
net/mac80211/iface.c
··· 350 350 /* we hold the RTNL here so can safely walk the list */ 351 351 list_for_each_entry(nsdata, &local->interfaces, list) { 352 352 if (nsdata != sdata && ieee80211_sdata_running(nsdata)) { 353 + struct ieee80211_link_data *link; 354 + 353 355 /* 354 356 * Only OCB and monitor mode may coexist 355 357 */ ··· 378 376 * will not add another interface while any channel 379 377 * switch is active. 380 378 */ 381 - if (nsdata->vif.bss_conf.csa_active) 382 - return -EBUSY; 379 + for_each_link_data(nsdata, link) { 380 + if (link->conf->csa_active) 381 + return -EBUSY; 382 + } 383 383 384 384 /* 385 385 * The remaining checks are only performed for interfaces
+2 -1
net/mac80211/key.c
··· 987 987 988 988 if (ieee80211_sdata_running(sdata)) { 989 989 list_for_each_entry(key, &sdata->key_list, list) { 990 - increment_tailroom_need_count(sdata); 990 + if (!(key->flags & KEY_FLAG_TAINTED)) 991 + increment_tailroom_need_count(sdata); 991 992 ieee80211_key_enable_hw_accel(key); 992 993 } 993 994 }
+119 -94
net/mac80211/mlme.c
··· 6161 6161 return true; 6162 6162 } 6163 6163 6164 + static u16 ieee80211_get_ttlm(u8 bm_size, u8 *data) 6165 + { 6166 + if (bm_size == 1) 6167 + return *data; 6168 + 6169 + return get_unaligned_le16(data); 6170 + } 6171 + 6172 + static int 6173 + ieee80211_parse_adv_t2l(struct ieee80211_sub_if_data *sdata, 6174 + const struct ieee80211_ttlm_elem *ttlm, 6175 + struct ieee80211_adv_ttlm_info *ttlm_info) 6176 + { 6177 + /* The element size was already validated in 6178 + * ieee80211_tid_to_link_map_size_ok() 6179 + */ 6180 + u8 control, link_map_presence, map_size, tid; 6181 + u8 *pos; 6182 + 6183 + memset(ttlm_info, 0, sizeof(*ttlm_info)); 6184 + pos = (void *)ttlm->optional; 6185 + control = ttlm->control; 6186 + 6187 + if ((control & IEEE80211_TTLM_CONTROL_DIRECTION) != 6188 + IEEE80211_TTLM_DIRECTION_BOTH) { 6189 + sdata_info(sdata, "Invalid advertised T2L map direction\n"); 6190 + return -EINVAL; 6191 + } 6192 + 6193 + link_map_presence = *pos; 6194 + pos++; 6195 + 6196 + if (control & IEEE80211_TTLM_CONTROL_SWITCH_TIME_PRESENT) { 6197 + ttlm_info->switch_time = get_unaligned_le16(pos); 6198 + 6199 + /* Since ttlm_info->switch_time == 0 means no switch time, bump 6200 + * it by 1. 6201 + */ 6202 + if (!ttlm_info->switch_time) 6203 + ttlm_info->switch_time = 1; 6204 + 6205 + pos += 2; 6206 + } 6207 + 6208 + if (control & IEEE80211_TTLM_CONTROL_EXPECTED_DUR_PRESENT) { 6209 + ttlm_info->duration = pos[0] | pos[1] << 8 | pos[2] << 16; 6210 + pos += 3; 6211 + } 6212 + 6213 + if (control & IEEE80211_TTLM_CONTROL_DEF_LINK_MAP) { 6214 + ttlm_info->map = 0xffff; 6215 + return 0; 6216 + } 6217 + 6218 + if (control & IEEE80211_TTLM_CONTROL_LINK_MAP_SIZE) 6219 + map_size = 1; 6220 + else 6221 + map_size = 2; 6222 + 6223 + /* According to Draft P802.11be_D3.0 clause 35.3.7.1.7, an AP MLD shall 6224 + * not advertise a TID-to-link mapping that does not map all TIDs to the 6225 + * same link set, reject frame if not all links have mapping 6226 + */ 6227 + if (link_map_presence != 0xff) { 6228 + sdata_info(sdata, 6229 + "Invalid advertised T2L mapping presence indicator\n"); 6230 + return -EINVAL; 6231 + } 6232 + 6233 + ttlm_info->map = ieee80211_get_ttlm(map_size, pos); 6234 + if (!ttlm_info->map) { 6235 + sdata_info(sdata, 6236 + "Invalid advertised T2L map for TID 0\n"); 6237 + return -EINVAL; 6238 + } 6239 + 6240 + pos += map_size; 6241 + 6242 + for (tid = 1; tid < 8; tid++) { 6243 + u16 map = ieee80211_get_ttlm(map_size, pos); 6244 + 6245 + if (map != ttlm_info->map) { 6246 + sdata_info(sdata, "Invalid advertised T2L map for tid %d\n", 6247 + tid); 6248 + return -EINVAL; 6249 + } 6250 + 6251 + pos += map_size; 6252 + } 6253 + return 0; 6254 + } 6255 + 6164 6256 static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata, 6165 6257 struct ieee80211_mgmt *mgmt, 6166 6258 struct ieee802_11_elems *elems, ··· 6284 6192 continue; 6285 6193 6286 6194 valid_links |= BIT(link_id); 6287 - if (assoc_data->link[link_id].disabled) 6288 - dormant_links |= BIT(link_id); 6289 6195 6290 6196 if (link_id != assoc_data->assoc_link_id) { 6291 6197 err = ieee80211_sta_allocate_link(sta, link_id); 6292 6198 if (err) 6293 6199 goto out_err; 6294 6200 } 6201 + } 6202 + 6203 + /* 6204 + * We do not support setting a negotiated TTLM during 6205 + * association. As such, we can assume that if there is a TTLM, 6206 + * then it is the currently active advertised TTLM. 6207 + * In that case, there must be exactly one TTLM that does not 6208 + * have a switch time set. 
This mapping should also leave us 6209 + * with at least one usable link. 6210 + */ 6211 + if (elems->ttlm_num > 1) { 6212 + sdata_info(sdata, 6213 + "More than one advertised TTLM in association response\n"); 6214 + goto out_err; 6215 + } else if (elems->ttlm_num == 1) { 6216 + if (ieee80211_parse_adv_t2l(sdata, elems->ttlm[0], 6217 + &sdata->u.mgd.ttlm_info) || 6218 + sdata->u.mgd.ttlm_info.switch_time != 0 || 6219 + !(valid_links & sdata->u.mgd.ttlm_info.map)) { 6220 + sdata_info(sdata, 6221 + "Invalid advertised TTLM in association response\n"); 6222 + goto out_err; 6223 + } 6224 + 6225 + sdata->u.mgd.ttlm_info.active = true; 6226 + dormant_links = 6227 + valid_links & ~sdata->u.mgd.ttlm_info.map; 6295 6228 } 6296 6229 6297 6230 ieee80211_vif_set_links(sdata, valid_links, dormant_links); ··· 7107 6990 7108 6991 sdata->u.mgd.ttlm_info.active = true; 7109 6992 sdata->u.mgd.ttlm_info.switch_time = 0; 7110 - } 7111 - 7112 - static u16 ieee80211_get_ttlm(u8 bm_size, u8 *data) 7113 - { 7114 - if (bm_size == 1) 7115 - return *data; 7116 - else 7117 - return get_unaligned_le16(data); 7118 - } 7119 - 7120 - static int 7121 - ieee80211_parse_adv_t2l(struct ieee80211_sub_if_data *sdata, 7122 - const struct ieee80211_ttlm_elem *ttlm, 7123 - struct ieee80211_adv_ttlm_info *ttlm_info) 7124 - { 7125 - /* The element size was already validated in 7126 - * ieee80211_tid_to_link_map_size_ok() 7127 - */ 7128 - u8 control, link_map_presence, map_size, tid; 7129 - u8 *pos; 7130 - 7131 - memset(ttlm_info, 0, sizeof(*ttlm_info)); 7132 - pos = (void *)ttlm->optional; 7133 - control = ttlm->control; 7134 - 7135 - if ((control & IEEE80211_TTLM_CONTROL_DEF_LINK_MAP) || 7136 - !(control & IEEE80211_TTLM_CONTROL_SWITCH_TIME_PRESENT)) 7137 - return 0; 7138 - 7139 - if ((control & IEEE80211_TTLM_CONTROL_DIRECTION) != 7140 - IEEE80211_TTLM_DIRECTION_BOTH) { 7141 - sdata_info(sdata, "Invalid advertised T2L map direction\n"); 7142 - return -EINVAL; 7143 - } 7144 - 7145 - link_map_presence = *pos; 7146 - pos++; 7147 - 7148 - ttlm_info->switch_time = get_unaligned_le16(pos); 7149 - 7150 - /* Since ttlm_info->switch_time == 0 means no switch time, bump it 7151 - * by 1. 
7152 - */ 7153 - if (!ttlm_info->switch_time) 7154 - ttlm_info->switch_time = 1; 7155 - 7156 - pos += 2; 7157 - 7158 - if (control & IEEE80211_TTLM_CONTROL_EXPECTED_DUR_PRESENT) { 7159 - ttlm_info->duration = pos[0] | pos[1] << 8 | pos[2] << 16; 7160 - pos += 3; 7161 - } 7162 - 7163 - if (control & IEEE80211_TTLM_CONTROL_LINK_MAP_SIZE) 7164 - map_size = 1; 7165 - else 7166 - map_size = 2; 7167 - 7168 - /* According to Draft P802.11be_D3.0 clause 35.3.7.1.7, an AP MLD shall 7169 - * not advertise a TID-to-link mapping that does not map all TIDs to the 7170 - * same link set, reject frame if not all links have mapping 7171 - */ 7172 - if (link_map_presence != 0xff) { 7173 - sdata_info(sdata, 7174 - "Invalid advertised T2L mapping presence indicator\n"); 7175 - return -EINVAL; 7176 - } 7177 - 7178 - ttlm_info->map = ieee80211_get_ttlm(map_size, pos); 7179 - if (!ttlm_info->map) { 7180 - sdata_info(sdata, 7181 - "Invalid advertised T2L map for TID 0\n"); 7182 - return -EINVAL; 7183 - } 7184 - 7185 - pos += map_size; 7186 - 7187 - for (tid = 1; tid < 8; tid++) { 7188 - u16 map = ieee80211_get_ttlm(map_size, pos); 7189 - 7190 - if (map != ttlm_info->map) { 7191 - sdata_info(sdata, "Invalid advertised T2L map for tid %d\n", 7192 - tid); 7193 - return -EINVAL; 7194 - } 7195 - 7196 - pos += map_size; 7197 - } 7198 - return 0; 7199 6993 } 7200 6994 7201 6995 static void ieee80211_process_adv_ttlm(struct ieee80211_sub_if_data *sdata, ··· 9765 9737 req, true, i, 9766 9738 &assoc_data->link[i].conn); 9767 9739 assoc_data->link[i].bss = link_cbss; 9768 - assoc_data->link[i].disabled = req->links[i].disabled; 9769 9740 9770 9741 if (!bss->uapsd_supported) 9771 9742 uapsd_supported = false; ··· 10746 10719 &data->link[link_id].conn); 10747 10720 10748 10721 data->link[link_id].bss = link_cbss; 10749 - data->link[link_id].disabled = 10750 - req->add_links[link_id].disabled; 10751 10722 data->link[link_id].elems = 10752 10723 (u8 *)req->add_links[link_id].elems; 10753 10724 data->link[link_id].elems_len =
+7 -2
net/mac80211/scan.c
··· 347 347 mgmt->da)) 348 348 return; 349 349 } else { 350 - /* Beacons are expected only with broadcast address */ 351 - if (!is_broadcast_ether_addr(mgmt->da)) 350 + /* 351 + * Non-S1G beacons are expected only with broadcast address. 352 + * S1G beacons only carry the SA so no DA check is required 353 + * nor possible. 354 + */ 355 + if (!ieee80211_is_s1g_beacon(mgmt->frame_control) && 356 + !is_broadcast_ether_addr(mgmt->da)) 352 357 return; 353 358 } 354 359
+9 -4
net/netrom/nr_route.c
··· 752 752 unsigned char *dptr; 753 753 ax25_cb *ax25s; 754 754 int ret; 755 - struct sk_buff *skbn; 755 + struct sk_buff *nskb, *oskb; 756 756 757 757 /* 758 758 * Reject malformed packets early. Check that it contains at least 2 ··· 811 811 /* We are going to change the netrom headers so we should get our 812 812 own skb, we also did not know until now how much header space 813 813 we had to reserve... - RXQ */ 814 - if ((skbn=skb_copy_expand(skb, dev->hard_header_len, 0, GFP_ATOMIC)) == NULL) { 814 + nskb = skb_copy_expand(skb, dev->hard_header_len, 0, GFP_ATOMIC); 815 + 816 + if (!nskb) { 815 817 nr_node_unlock(nr_node); 816 818 nr_node_put(nr_node); 817 819 dev_put(dev); 818 820 return 0; 819 821 } 820 - kfree_skb(skb); 821 - skb=skbn; 822 + oskb = skb; 823 + skb = nskb; 822 824 skb->data[14]--; 823 825 824 826 dptr = skb_push(skb, 1); ··· 838 836 ret = (nr_neigh->ax25 != NULL); 839 837 nr_node_unlock(nr_node); 840 838 nr_node_put(nr_node); 839 + 840 + if (ret) 841 + kfree_skb(oskb); 841 842 842 843 return ret; 843 844 }
+6 -5
net/openvswitch/vport.c
··· 310 310 */ 311 311 int ovs_vport_get_upcall_stats(struct vport *vport, struct sk_buff *skb) 312 312 { 313 + u64 tx_success = 0, tx_fail = 0; 313 314 struct nlattr *nla; 314 315 int i; 315 316 316 - __u64 tx_success = 0; 317 - __u64 tx_fail = 0; 318 - 319 317 for_each_possible_cpu(i) { 320 318 const struct vport_upcall_stats_percpu *stats; 319 + u64 n_success, n_fail; 321 320 unsigned int start; 322 321 323 322 stats = per_cpu_ptr(vport->upcall_stats, i); 324 323 do { 325 324 start = u64_stats_fetch_begin(&stats->syncp); 326 - tx_success += u64_stats_read(&stats->n_success); 327 - tx_fail += u64_stats_read(&stats->n_fail); 325 + n_success = u64_stats_read(&stats->n_success); 326 + n_fail = u64_stats_read(&stats->n_fail); 328 327 } while (u64_stats_fetch_retry(&stats->syncp, start)); 328 + tx_success += n_success; 329 + tx_fail += n_fail; 329 330 } 330 331 331 332 nla = nla_nest_start_noflag(skb, OVS_VPORT_ATTR_UPCALL_STATS);
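The reworked loop reads each counter into locals inside the retry section and only accumulates once u64_stats_fetch_retry() confirms a consistent snapshot; the old code added to the running totals inside the loop, so a retry counted the same CPU twice. A simplified, compilable model of the discipline (the kernel helpers add memory barriers this sketch glosses over):

    #include <stdatomic.h>

    struct pcpu_stats {
        atomic_uint seq;                /* even = stable, odd = in update */
        unsigned long long n_ok, n_fail;
    };

    static unsigned read_begin(struct pcpu_stats *s)
    {
        return atomic_load(&s->seq);
    }

    static int read_retry(struct pcpu_stats *s, unsigned start)
    {
        return (start & 1) || atomic_load(&s->seq) != start;
    }

    static void sum_stats(struct pcpu_stats *percpu, int ncpu,
                          unsigned long long *ok, unsigned long long *fail)
    {
        for (int i = 0; i < ncpu; i++) {
            unsigned long long n_ok, n_fail;
            unsigned start;

            do {
                start = read_begin(&percpu[i]);
                n_ok = percpu[i].n_ok;      /* locals only in here */
                n_fail = percpu[i].n_fail;
            } while (read_retry(&percpu[i], start));

            *ok += n_ok;        /* accumulate once per CPU, after the
                                 * snapshot is known to be consistent */
            *fail += n_fail;
        }
    }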
+8 -1
net/rxrpc/ar-internal.h
··· 387 387 struct rb_root service_conns; /* Service connections */ 388 388 struct list_head keepalive_link; /* Link in net->peer_keepalive[] */ 389 389 unsigned long app_data; /* Application data (e.g. afs_server) */ 390 - time64_t last_tx_at; /* Last time packet sent here */ 390 + unsigned int last_tx_at; /* Last time packet sent here (time64_t LSW) */ 391 391 seqlock_t service_conn_lock; 392 392 spinlock_t lock; /* access lock */ 393 393 int debug_id; /* debug ID for printks */ ··· 1378 1378 void rxrpc_peer_keepalive_worker(struct work_struct *); 1379 1379 void rxrpc_input_probe_for_pmtud(struct rxrpc_connection *conn, rxrpc_serial_t acked_serial, 1380 1380 bool sendmsg_fail); 1381 + 1382 + /* Update the last transmission time on a peer for keepalive purposes. */ 1383 + static inline void rxrpc_peer_mark_tx(struct rxrpc_peer *peer) 1384 + { 1385 + /* To avoid tearing on 32-bit systems, we only keep the LSW. */ 1386 + WRITE_ONCE(peer->last_tx_at, ktime_get_seconds()); 1387 + } 1381 1388 1382 1389 /* 1383 1390 * peer_object.c
+1 -1
net/rxrpc/conn_event.c
··· 194 194 } 195 195 196 196 ret = kernel_sendmsg(conn->local->socket, &msg, iov, ioc, len); 197 - conn->peer->last_tx_at = ktime_get_seconds(); 197 + rxrpc_peer_mark_tx(conn->peer); 198 198 if (ret < 0) 199 199 trace_rxrpc_tx_fail(chan->call_debug_id, serial, ret, 200 200 rxrpc_tx_point_call_final_resend);
+7 -7
net/rxrpc/output.c
··· 275 275 rxrpc_local_dont_fragment(conn->local, why == rxrpc_propose_ack_ping_for_mtu_probe); 276 276 277 277 ret = do_udp_sendmsg(conn->local->socket, &msg, len); 278 - call->peer->last_tx_at = ktime_get_seconds(); 278 + rxrpc_peer_mark_tx(call->peer); 279 279 if (ret < 0) { 280 280 trace_rxrpc_tx_fail(call->debug_id, serial, ret, 281 281 rxrpc_tx_point_call_ack); ··· 411 411 412 412 iov_iter_kvec(&msg.msg_iter, WRITE, iov, 1, sizeof(pkt)); 413 413 ret = do_udp_sendmsg(conn->local->socket, &msg, sizeof(pkt)); 414 - conn->peer->last_tx_at = ktime_get_seconds(); 414 + rxrpc_peer_mark_tx(conn->peer); 415 415 if (ret < 0) 416 416 trace_rxrpc_tx_fail(call->debug_id, serial, ret, 417 417 rxrpc_tx_point_call_abort); ··· 698 698 ret = 0; 699 699 trace_rxrpc_tx_data(call, txb->seq, txb->serial, txb->flags, 700 700 rxrpc_txdata_inject_loss); 701 - conn->peer->last_tx_at = ktime_get_seconds(); 701 + rxrpc_peer_mark_tx(conn->peer); 702 702 goto done; 703 703 } 704 704 } ··· 711 711 */ 712 712 rxrpc_inc_stat(call->rxnet, stat_tx_data_send); 713 713 ret = do_udp_sendmsg(conn->local->socket, &msg, len); 714 - conn->peer->last_tx_at = ktime_get_seconds(); 714 + rxrpc_peer_mark_tx(conn->peer); 715 715 716 716 if (ret == -EMSGSIZE) { 717 717 rxrpc_inc_stat(call->rxnet, stat_tx_data_send_msgsize); ··· 797 797 798 798 trace_rxrpc_tx_packet(conn->debug_id, &whdr, rxrpc_tx_point_conn_abort); 799 799 800 - conn->peer->last_tx_at = ktime_get_seconds(); 800 + rxrpc_peer_mark_tx(conn->peer); 801 801 } 802 802 803 803 /* ··· 917 917 trace_rxrpc_tx_packet(peer->debug_id, &whdr, 918 918 rxrpc_tx_point_version_keepalive); 919 919 920 - peer->last_tx_at = ktime_get_seconds(); 920 + rxrpc_peer_mark_tx(peer); 921 921 _leave(""); 922 922 } 923 923 ··· 973 973 if (ret < 0) 974 974 goto fail; 975 975 976 - conn->peer->last_tx_at = ktime_get_seconds(); 976 + rxrpc_peer_mark_tx(conn->peer); 977 977 return; 978 978 979 979 fail:
+16 -1
net/rxrpc/peer_event.c
··· 238 238 } 239 239 240 240 /* 241 + * Reconstruct the last transmission time. The difference calculated should be 242 + * valid provided no more than ~68 years elapsed since the last transmission. 243 + */ 244 + static time64_t rxrpc_peer_get_tx_mark(const struct rxrpc_peer *peer, time64_t base) 245 + { 246 + s32 last_tx_at = READ_ONCE(peer->last_tx_at); 247 + s32 base_lsw = base; 248 + s32 diff = last_tx_at - base_lsw; 249 + 250 + diff = clamp(diff, -RXRPC_KEEPALIVE_TIME, RXRPC_KEEPALIVE_TIME); 251 + 252 + return diff + base; 253 + } 254 + 255 + /* 241 256 * Perform keep-alive pings. 242 257 */ 243 258 static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet, ··· 280 265 spin_unlock_bh(&rxnet->peer_hash_lock); 281 266 282 267 if (use) { 283 - keepalive_at = peer->last_tx_at + RXRPC_KEEPALIVE_TIME; 268 + keepalive_at = rxrpc_peer_get_tx_mark(peer, base) + RXRPC_KEEPALIVE_TIME; 284 269 slot = keepalive_at - base; 285 270 _debug("%02x peer %u t=%d {%pISp}", 286 271 cursor, peer->debug_id, slot, &peer->srx.transport);
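Storing only the least-significant 32 bits of the time64_t works because the keepalive logic only ever needs the distance from a nearby base: subtracting the two low words in s32 arithmetic recovers the true delta for any gap under 2^31 seconds (~68 years), wraps included, and the clamp bounds the result beyond that. A runnable worked example of the wraparound case:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t base = 0x100000005LL;    /* "now" as a full 64-bit time */
        uint32_t last_tx = 0xfffffffeu;  /* stored LSW, set before wrap */

        /* u32 subtraction wraps; reinterpreting as s32 gives the true
         * signed distance: -2 - 5 = -7 seconds. */
        int32_t diff = (int32_t)(last_tx - (uint32_t)base);

        printf("reconstructed = %lld\n", (long long)(base + diff));
        /* prints 4294967294, the original transmission time */
        return 0;
    }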
+2 -2
net/rxrpc/proc.c
··· 296 296 297 297 now = ktime_get_seconds(); 298 298 seq_printf(seq, 299 - "UDP %-47.47s %-47.47s %3u %4u %5u %6llus %8d %8d\n", 299 + "UDP %-47.47s %-47.47s %3u %4u %5u %6ds %8d %8d\n", 300 300 lbuff, 301 301 rbuff, 302 302 refcount_read(&peer->ref), 303 303 peer->cong_ssthresh, 304 304 peer->max_data, 305 - now - peer->last_tx_at, 305 + (s32)now - (s32)READ_ONCE(peer->last_tx_at), 306 306 READ_ONCE(peer->recent_srtt_us), 307 307 READ_ONCE(peer->recent_rto_us)); 308 308
+15 -4
net/rxrpc/recvmsg.c
··· 518 518 if (rxrpc_call_has_failed(call)) 519 519 goto call_failed; 520 520 521 - if (!skb_queue_empty(&call->recvmsg_queue)) 521 + if (!(flags & MSG_PEEK) && 522 + !skb_queue_empty(&call->recvmsg_queue)) 522 523 rxrpc_notify_socket(call); 523 524 goto not_yet_complete; 524 525 ··· 550 549 error_requeue_call: 551 550 if (!(flags & MSG_PEEK)) { 552 551 spin_lock_irq(&rx->recvmsg_lock); 553 - list_add(&call->recvmsg_link, &rx->recvmsg_q); 554 - spin_unlock_irq(&rx->recvmsg_lock); 552 + if (list_empty(&call->recvmsg_link)) { 553 + list_add(&call->recvmsg_link, &rx->recvmsg_q); 554 + rxrpc_see_call(call, rxrpc_call_see_recvmsg_requeue); 555 + spin_unlock_irq(&rx->recvmsg_lock); 556 + } else if (list_is_first(&call->recvmsg_link, &rx->recvmsg_q)) { 557 + spin_unlock_irq(&rx->recvmsg_lock); 558 + rxrpc_put_call(call, rxrpc_call_see_recvmsg_requeue_first); 559 + } else { 560 + list_move(&call->recvmsg_link, &rx->recvmsg_q); 561 + spin_unlock_irq(&rx->recvmsg_lock); 562 + rxrpc_put_call(call, rxrpc_call_see_recvmsg_requeue_move); 563 + } 555 564 trace_rxrpc_recvmsg(call_debug_id, rxrpc_recvmsg_requeue, 0); 556 565 } else { 557 - rxrpc_put_call(call, rxrpc_call_put_recvmsg); 566 + rxrpc_put_call(call, rxrpc_call_put_recvmsg_peek_nowait); 558 567 } 559 568 error_no_call: 560 569 release_sock(&rx->sk);
+1 -1
net/rxrpc/rxgk.c
··· 678 678 679 679 ret = do_udp_sendmsg(conn->local->socket, &msg, len); 680 680 if (ret > 0) 681 - conn->peer->last_tx_at = ktime_get_seconds(); 681 + rxrpc_peer_mark_tx(conn->peer); 682 682 __free_page(page); 683 683 684 684 if (ret < 0) {
+1 -1
net/rxrpc/rxkad.c
··· 694 694 return -EAGAIN; 695 695 } 696 696 697 - conn->peer->last_tx_at = ktime_get_seconds(); 697 + rxrpc_peer_mark_tx(conn->peer); 698 698 trace_rxrpc_tx_packet(conn->debug_id, &whdr, 699 699 rxrpc_tx_point_rxkad_challenge); 700 700 _leave(" = 0");
+4 -2
net/sched/act_ife.c
··· 821 821 /* could be stupid policy setup or mtu config 822 822 * so lets be conservative.. */ 823 823 if ((action == TC_ACT_SHOT) || exceed_mtu) { 824 + drop: 824 825 qstats_drop_inc(this_cpu_ptr(ife->common.cpu_qstats)); 825 826 return TC_ACT_SHOT; 826 827 } ··· 830 829 skb_push(skb, skb->dev->hard_header_len); 831 830 832 831 ife_meta = ife_encode(skb, metalen); 832 + if (!ife_meta) 833 + goto drop; 833 834 834 835 spin_lock(&ife->tcf_lock); 835 836 ··· 847 844 if (err < 0) { 848 845 /* too corrupt to keep around if overwritten */ 849 846 spin_unlock(&ife->tcf_lock); 850 - qstats_drop_inc(this_cpu_ptr(ife->common.cpu_qstats)); 851 - return TC_ACT_SHOT; 847 + goto drop; 852 848 } 853 849 skboff += err; 854 850 }
+1 -1
net/sched/sch_qfq.c
··· 373 373 /* Deschedule class and remove it from its parent aggregate. */ 374 374 static void qfq_deact_rm_from_agg(struct qfq_sched *q, struct qfq_class *cl) 375 375 { 376 - if (cl->qdisc->q.qlen > 0) /* class is active */ 376 + if (cl_is_active(cl)) /* class is active */ 377 377 qfq_deactivate_class(q, cl); 378 378 379 379 qfq_rm_from_agg(q, cl);
+5
net/sched/sch_teql.c
··· 178 178 if (m->dev == dev) 179 179 return -ELOOP; 180 180 181 + if (sch->parent != TC_H_ROOT) { 182 + NL_SET_ERR_MSG_MOD(extack, "teql can only be used as root"); 183 + return -EOPNOTSUPP; 184 + } 185 + 181 186 q->m = m; 182 187 183 188 skb_queue_head_init(&q->q);
+5 -5
net/sctp/sm_statefuns.c
··· 603 603 sctp_add_cmd_sf(commands, SCTP_CMD_PEER_INIT, 604 604 SCTP_PEER_INIT(initchunk)); 605 605 606 + /* SCTP-AUTH: generate the association shared keys so that 607 + * we can potentially sign the COOKIE-ECHO. 608 + */ 609 + sctp_add_cmd_sf(commands, SCTP_CMD_ASSOC_SHKEY, SCTP_NULL()); 610 + 606 611 /* Reset init error count upon receipt of INIT-ACK. */ 607 612 sctp_add_cmd_sf(commands, SCTP_CMD_INIT_COUNTER_RESET, SCTP_NULL()); 608 613 ··· 621 616 SCTP_TO(SCTP_EVENT_TIMEOUT_T1_COOKIE)); 622 617 sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE, 623 618 SCTP_STATE(SCTP_STATE_COOKIE_ECHOED)); 624 - 625 - /* SCTP-AUTH: generate the association shared keys so that 626 - * we can potentially sign the COOKIE-ECHO. 627 - */ 628 - sctp_add_cmd_sf(commands, SCTP_CMD_ASSOC_SHKEY, SCTP_NULL()); 629 619 630 620 /* 5.1 C) "A" shall then send the State Cookie received in the 631 621 * INIT ACK chunk in a COOKIE ECHO chunk, ...
+25 -11
net/vmw_vsock/virtio_transport_common.c
··· 28 28 29 29 static void virtio_transport_cancel_close_work(struct vsock_sock *vsk, 30 30 bool cancel_timeout); 31 + static s64 virtio_transport_has_space(struct virtio_vsock_sock *vvs); 31 32 32 33 static const struct virtio_transport * 33 34 virtio_transport_get_ops(struct vsock_sock *vsk) ··· 500 499 return 0; 501 500 502 501 spin_lock_bh(&vvs->tx_lock); 503 - ret = vvs->peer_buf_alloc - (vvs->tx_cnt - vvs->peer_fwd_cnt); 504 - if (ret > credit) 505 - ret = credit; 502 + ret = min_t(u32, credit, virtio_transport_has_space(vvs)); 506 503 vvs->tx_cnt += ret; 507 504 vvs->bytes_unsent += ret; 508 505 spin_unlock_bh(&vvs->tx_lock); ··· 821 822 } 822 823 EXPORT_SYMBOL_GPL(virtio_transport_seqpacket_dequeue); 823 824 825 + static u32 virtio_transport_tx_buf_size(struct virtio_vsock_sock *vvs) 826 + { 827 + /* The peer advertises its receive buffer via peer_buf_alloc, but we 828 + * cap it to our local buf_alloc so a remote peer cannot force us to 829 + * queue more data than our own buffer configuration allows. 830 + */ 831 + return min(vvs->peer_buf_alloc, vvs->buf_alloc); 832 + } 833 + 824 834 int 825 835 virtio_transport_seqpacket_enqueue(struct vsock_sock *vsk, 826 836 struct msghdr *msg, ··· 839 831 840 832 spin_lock_bh(&vvs->tx_lock); 841 833 842 - if (len > vvs->peer_buf_alloc) { 834 + if (len > virtio_transport_tx_buf_size(vvs)) { 843 835 spin_unlock_bh(&vvs->tx_lock); 844 836 return -EMSGSIZE; 845 837 } ··· 885 877 } 886 878 EXPORT_SYMBOL_GPL(virtio_transport_seqpacket_has_data); 887 879 888 - static s64 virtio_transport_has_space(struct vsock_sock *vsk) 880 + static s64 virtio_transport_has_space(struct virtio_vsock_sock *vvs) 889 881 { 890 - struct virtio_vsock_sock *vvs = vsk->trans; 891 882 s64 bytes; 892 883 893 - bytes = (s64)vvs->peer_buf_alloc - (vvs->tx_cnt - vvs->peer_fwd_cnt); 884 + /* Use s64 arithmetic so if the peer shrinks peer_buf_alloc while 885 + * we have bytes in flight (tx_cnt - peer_fwd_cnt), the subtraction 886 + * does not underflow. 887 + */ 888 + bytes = (s64)virtio_transport_tx_buf_size(vvs) - 889 + (vvs->tx_cnt - vvs->peer_fwd_cnt); 894 890 if (bytes < 0) 895 891 bytes = 0; 896 892 ··· 907 895 s64 bytes; 908 896 909 897 spin_lock_bh(&vvs->tx_lock); 910 - bytes = virtio_transport_has_space(vsk); 898 + bytes = virtio_transport_has_space(vvs); 911 899 spin_unlock_bh(&vvs->tx_lock); 912 900 913 901 return bytes; ··· 1371 1359 1372 1360 /* Try to copy small packets into the buffer of last packet queued, 1373 1361 * to avoid wasting memory queueing the entire buffer with a small 1374 - * payload. 1362 + * payload. Skip non-linear (e.g. zerocopy) skbs; these carry payload 1363 + * in skb_shinfo. 1375 1364 */ 1376 - if (len <= GOOD_COPY_LEN && !skb_queue_empty(&vvs->rx_queue)) { 1365 + if (len <= GOOD_COPY_LEN && !skb_queue_empty(&vvs->rx_queue) && 1366 + !skb_is_nonlinear(skb)) { 1377 1367 struct virtio_vsock_hdr *last_hdr; 1378 1368 struct sk_buff *last_skb; 1379 1369 ··· 1504 1490 spin_lock_bh(&vvs->tx_lock); 1505 1491 vvs->peer_buf_alloc = le32_to_cpu(hdr->buf_alloc); 1506 1492 vvs->peer_fwd_cnt = le32_to_cpu(hdr->fwd_cnt); 1507 - space_available = virtio_transport_has_space(vsk); 1493 + space_available = virtio_transport_has_space(vvs); 1508 1494 spin_unlock_bh(&vvs->tx_lock); 1509 1495 return space_available; 1510 1496 }
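Two defensive changes meet in virtio_transport_has_space(): the advertised peer buffer is first capped by the local buf_alloc (so a remote cannot inflate our queueing), and the subtraction is done in s64 (so a peer that shrinks its advertisement below what is already in flight yields 0, not a huge unsigned value). A runnable worked example of the shrink case:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t peer_buf_alloc = 4096;  /* peer shrank this from 64K */
        uint32_t buf_alloc = 65536;      /* our local configuration   */
        uint32_t tx_cnt = 40000;         /* bytes sent so far         */
        uint32_t peer_fwd_cnt = 8000;    /* bytes the peer consumed   */

        uint32_t limit = peer_buf_alloc < buf_alloc ?
                         peer_buf_alloc : buf_alloc;
        int64_t space = (int64_t)limit - (tx_cnt - peer_fwd_cnt);

        if (space < 0)      /* 4096 - 32000 = -27904: clamp, don't wrap */
            space = 0;
        printf("tx space = %lld bytes\n", (long long)space);
        return 0;
    }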
-10
net/wireless/nl80211.c
··· 12241 12241 return -EINVAL; 12242 12242 } 12243 12243 } 12244 - 12245 - links[link_id].disabled = 12246 - nla_get_flag(attrs[NL80211_ATTR_MLO_LINK_DISABLED]); 12247 12244 } 12248 12245 12249 12246 return 0; ··· 12416 12419 if (req.links[req.link_id].elems_len) { 12417 12420 GENL_SET_ERR_MSG(info, 12418 12421 "cannot have per-link elems on assoc link"); 12419 - err = -EINVAL; 12420 - goto free; 12421 - } 12422 - 12423 - if (req.links[req.link_id].disabled) { 12424 - GENL_SET_ERR_MSG(info, 12425 - "cannot have assoc link disabled"); 12426 12422 err = -EINVAL; 12427 12423 goto free; 12428 12424 }
+5 -3
net/wireless/util.c
··· 1561 1561 tmp = result; 1562 1562 tmp *= SCALE; 1563 1563 do_div(tmp, mcs_divisors[rate->mcs]); 1564 - result = tmp; 1565 1564 1566 1565 /* and take NSS, DCM into account */ 1567 - result = (result * rate->nss) / 8; 1566 + tmp *= rate->nss; 1567 + do_div(tmp, 8); 1568 1568 if (rate->he_dcm) 1569 - result /= 2; 1569 + do_div(tmp, 2); 1570 + 1571 + result = tmp; 1570 1572 1571 1573 return result / 10000; 1572 1574 }
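The util.c hunk keeps every multiply and divide inside the 64-bit temporary and only assigns to `result` at the very end; the old code bounced through `result` between steps, which can truncate or overflow for large HE rates if `result` is narrower than the intermediate. A rough userspace model under that assumption, with plain u64 division standing in for the kernel's do_div() (used because native 64-bit division is not available on every 32-bit architecture); the SCALE constant here is an illustrative stand-in, not the value from util.c:

#include <stdint.h>

#define SCALE 10000ULL  /* illustrative stand-in for the real constant */

static uint32_t he_rate_model(uint64_t result, uint32_t mcs_divisor,
                              uint8_t nss, int he_dcm)
{
    uint64_t tmp = result * SCALE;

    tmp /= mcs_divisor;   /* do_div(tmp, mcs_divisors[rate->mcs]) */
    tmp *= nss;           /* NSS scales the rate linearly */
    tmp /= 8;             /* do_div(tmp, 8) */
    if (he_dcm)
        tmp /= 2;         /* do_div(tmp, 2) */

    /* Only now narrow to the result type, as the fix does. */
    return (uint32_t)(tmp / 10000);
}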
+33 -8
rust/kernel/auxiliary.rs
··· 23 23 /// An adapter for the registration of auxiliary drivers. 24 24 pub struct Adapter<T: Driver>(T); 25 25 26 - // SAFETY: A call to `unregister` for a given instance of `RegType` is guaranteed to be valid if 26 + // SAFETY: 27 + // - `bindings::auxiliary_driver` is a C type declared as `repr(C)`. 28 + // - `T` is the type of the driver's device private data. 29 + // - `struct auxiliary_driver` embeds a `struct device_driver`. 30 + // - `DEVICE_DRIVER_OFFSET` is the correct byte offset to the embedded `struct device_driver`. 31 + unsafe impl<T: Driver + 'static> driver::DriverLayout for Adapter<T> { 32 + type DriverType = bindings::auxiliary_driver; 33 + type DriverData = T; 34 + const DEVICE_DRIVER_OFFSET: usize = core::mem::offset_of!(Self::DriverType, driver); 35 + } 36 + 37 + // SAFETY: A call to `unregister` for a given instance of `DriverType` is guaranteed to be valid if 27 38 // a preceding call to `register` has been successful. 28 39 unsafe impl<T: Driver + 'static> driver::RegistrationOps for Adapter<T> { 29 - type RegType = bindings::auxiliary_driver; 30 - 31 40 unsafe fn register( 32 - adrv: &Opaque<Self::RegType>, 41 + adrv: &Opaque<Self::DriverType>, 33 42 name: &'static CStr, 34 43 module: &'static ThisModule, 35 44 ) -> Result { ··· 50 41 (*adrv.get()).id_table = T::ID_TABLE.as_ptr(); 51 42 } 52 43 53 - // SAFETY: `adrv` is guaranteed to be a valid `RegType`. 44 + // SAFETY: `adrv` is guaranteed to be a valid `DriverType`. 54 45 to_result(unsafe { 55 46 bindings::__auxiliary_driver_register(adrv.get(), module.0, name.as_char_ptr()) 56 47 }) 57 48 } 58 49 59 - unsafe fn unregister(adrv: &Opaque<Self::RegType>) { 60 - // SAFETY: `adrv` is guaranteed to be a valid `RegType`. 50 + unsafe fn unregister(adrv: &Opaque<Self::DriverType>) { 51 + // SAFETY: `adrv` is guaranteed to be a valid `DriverType`. 61 52 unsafe { bindings::auxiliary_driver_unregister(adrv.get()) } 62 53 } 63 54 } ··· 96 87 // SAFETY: `remove_callback` is only ever called after a successful call to 97 88 // `probe_callback`, hence it's guaranteed that `Device::set_drvdata()` has been called 98 89 // and stored a `Pin<KBox<T>>`. 99 - drop(unsafe { adev.as_ref().drvdata_obtain::<T>() }); 90 + let data = unsafe { adev.as_ref().drvdata_borrow::<T>() }; 91 + 92 + T::unbind(adev, data); 100 93 } 101 94 ··· 198 187 /// 199 188 /// Called when an auxiliary device matches a corresponding driver. 200 189 fn probe(dev: &Device<device::Core>, id_info: &Self::IdInfo) -> impl PinInit<Self, Error>; 190 + 191 + /// Auxiliary driver unbind. 192 + /// 193 + /// Called when a [`Device`] is unbound from its bound [`Driver`]. Implementing this callback 194 + /// is optional. 195 + /// 196 + /// This callback serves as a place for drivers to perform teardown operations that require a 197 + /// `&Device<Core>` or `&Device<Bound>` reference. For instance, drivers may try to perform I/O 198 + /// operations to gracefully tear down the device. 199 + /// 200 + /// Otherwise, release operations for driver resources should be performed in `Self::drop`. 201 + fn unbind(dev: &Device<device::Core>, this: Pin<&Self>) { 202 + let _ = (dev, this); 203 + } 201 204 } 202 205 203 206 /// The auxiliary device representation.
+11 -9
rust/kernel/device.rs
··· 232 232 /// 233 233 /// # Safety 234 234 /// 235 - /// - Must only be called once after a preceding call to [`Device::set_drvdata`]. 236 235 /// - The type `T` must match the type of the `ForeignOwnable` previously stored by 237 236 /// [`Device::set_drvdata`]. 238 - pub unsafe fn drvdata_obtain<T: 'static>(&self) -> Pin<KBox<T>> { 237 + pub(crate) unsafe fn drvdata_obtain<T: 'static>(&self) -> Option<Pin<KBox<T>>> { 239 238 // SAFETY: By the type invariants, `self.as_raw()` is a valid pointer to a `struct device`. 240 239 let ptr = unsafe { bindings::dev_get_drvdata(self.as_raw()) }; 241 240 242 241 // SAFETY: By the type invariants, `self.as_raw()` is a valid pointer to a `struct device`. 243 242 unsafe { bindings::dev_set_drvdata(self.as_raw(), core::ptr::null_mut()) }; 244 243 244 + if ptr.is_null() { 245 + return None; 246 + } 247 + 245 248 // SAFETY: 246 - // - By the safety requirements of this function, `ptr` comes from a previous call to 247 - // `into_foreign()`. 249 + // - If `ptr` is not NULL, it comes from a previous call to `into_foreign()`. 248 250 // - `dev_get_drvdata()` guarantees to return the same pointer given to `dev_set_drvdata()` 249 251 // in `into_foreign()`. 250 - unsafe { Pin::<KBox<T>>::from_foreign(ptr.cast()) } 252 + Some(unsafe { Pin::<KBox<T>>::from_foreign(ptr.cast()) }) 251 253 } 252 254 253 255 /// Borrow the driver's private data bound to this [`Device`]. 254 256 /// 255 257 /// # Safety 256 258 /// 257 - /// - Must only be called after a preceding call to [`Device::set_drvdata`] and before 258 - /// [`Device::drvdata_obtain`]. 259 + /// - Must only be called after a preceding call to [`Device::set_drvdata`] and before the 260 + /// device is fully unbound. 259 261 /// - The type `T` must match the type of the `ForeignOwnable` previously stored by 260 262 /// [`Device::set_drvdata`]. 261 263 pub unsafe fn drvdata_borrow<T: 'static>(&self) -> Pin<&T> { ··· 273 271 /// # Safety 274 272 /// 275 273 /// - Must only be called after a preceding call to [`Device::set_drvdata`] and before 276 - /// [`Device::drvdata_obtain`]. 274 + /// the device is fully unbound. 277 275 /// - The type `T` must match the type of the `ForeignOwnable` previously stored by 278 276 /// [`Device::set_drvdata`]. 279 277 unsafe fn drvdata_unchecked<T: 'static>(&self) -> Pin<&T> { ··· 322 320 323 321 // SAFETY: 324 322 // - The above check of `dev_get_drvdata()` guarantees that we are called after 325 - // `set_drvdata()` and before `drvdata_obtain()`. 323 + // `set_drvdata()`. 326 324 // - We've just checked that the type of the driver's private data is in fact `T`. 327 325 Ok(unsafe { self.drvdata_unchecked() }) 328 326 }
+70 -16
rust/kernel/driver.rs
··· 99 99 use core::pin::Pin; 100 100 use pin_init::{pin_data, pinned_drop, PinInit}; 101 101 102 + /// Trait describing the layout of a specific device driver. 103 + /// 104 + /// This trait describes the layout of a specific driver structure, such as `struct pci_driver` or 105 + /// `struct platform_driver`. 106 + /// 107 + /// # Safety 108 + /// 109 + /// Implementors must guarantee that: 110 + /// - `DriverType` is `repr(C)`, 111 + /// - `DriverData` is the type of the driver's device private data. 112 + /// - `DriverType` embeds a valid `struct device_driver` at byte offset `DEVICE_DRIVER_OFFSET`. 113 + pub unsafe trait DriverLayout { 114 + /// The specific driver type embedding a `struct device_driver`. 115 + type DriverType: Default; 116 + 117 + /// The type of the driver's device private data. 118 + type DriverData; 119 + 120 + /// Byte offset of the embedded `struct device_driver` within `DriverType`. 121 + /// 122 + /// This must correspond exactly to the location of the embedded `struct device_driver` field. 123 + const DEVICE_DRIVER_OFFSET: usize; 124 + } 125 + 102 126 /// The [`RegistrationOps`] trait serves as generic interface for subsystems (e.g., PCI, Platform, 103 127 /// Amba, etc.) to provide the corresponding subsystem specific implementation to register / 104 - /// unregister a driver of the particular type (`RegType`). 128 + /// unregister a driver of the particular type (`DriverType`). 105 129 /// 106 - /// For instance, the PCI subsystem would set `RegType` to `bindings::pci_driver` and call 130 + /// For instance, the PCI subsystem would set `DriverType` to `bindings::pci_driver` and call 107 131 /// `bindings::__pci_register_driver` from `RegistrationOps::register` and 108 132 /// `bindings::pci_unregister_driver` from `RegistrationOps::unregister`. 109 133 /// 110 134 /// # Safety 111 135 /// 112 - /// A call to [`RegistrationOps::unregister`] for a given instance of `RegType` is only valid if a 113 - /// preceding call to [`RegistrationOps::register`] has been successful. 114 - pub unsafe trait RegistrationOps { 115 - /// The type that holds information about the registration. This is typically a struct defined 116 - /// by the C portion of the kernel. 117 - type RegType: Default; 118 - 136 + /// A call to [`RegistrationOps::unregister`] for a given instance of `DriverType` is only valid if 137 + /// a preceding call to [`RegistrationOps::register`] has been successful. 138 + pub unsafe trait RegistrationOps: DriverLayout { 119 139 /// Registers a driver. 120 140 /// 121 141 /// # Safety ··· 143 123 /// On success, `reg` must remain pinned and valid until the matching call to 144 124 /// [`RegistrationOps::unregister`]. 145 125 unsafe fn register( 146 - reg: &Opaque<Self::RegType>, 126 + reg: &Opaque<Self::DriverType>, 147 127 name: &'static CStr, 148 128 module: &'static ThisModule, 149 129 ) -> Result; ··· 154 134 /// 155 135 /// Must only be called after a preceding successful call to [`RegistrationOps::register`] for 156 136 /// the same `reg`. 157 - unsafe fn unregister(reg: &Opaque<Self::RegType>); 137 + unsafe fn unregister(reg: &Opaque<Self::DriverType>); 158 138 } 159 139 160 140 /// A [`Registration`] is a generic type that represents the registration of some driver type (e.g. 
··· 166 146 #[pin_data(PinnedDrop)] 167 147 pub struct Registration<T: RegistrationOps> { 168 148 #[pin] 169 - reg: Opaque<T::RegType>, 149 + reg: Opaque<T::DriverType>, 170 150 } 171 151 172 152 // SAFETY: `Registration` has no fields or methods accessible via `&Registration`, so it is safe to ··· 177 157 // any thread, so `Registration` is `Send`. 178 158 unsafe impl<T: RegistrationOps> Send for Registration<T> {} 179 159 180 - impl<T: RegistrationOps> Registration<T> { 160 + impl<T: RegistrationOps + 'static> Registration<T> { 161 + extern "C" fn post_unbind_callback(dev: *mut bindings::device) { 162 + // SAFETY: The driver core only ever calls the post unbind callback with a valid pointer to 163 + // a `struct device`. 164 + // 165 + // INVARIANT: `dev` is valid for the duration of the `post_unbind_callback()`. 166 + let dev = unsafe { &*dev.cast::<device::Device<device::CoreInternal>>() }; 167 + 168 + // `remove()` and all devres callbacks have been completed at this point, hence drop the 169 + // driver's device private data. 170 + // 171 + // SAFETY: By the safety requirements of the `Driver` trait, `T::DriverData` is the 172 + // driver's device private data type. 173 + drop(unsafe { dev.drvdata_obtain::<T::DriverData>() }); 174 + } 175 + 176 + /// Attach generic `struct device_driver` callbacks. 177 + fn callbacks_attach(drv: &Opaque<T::DriverType>) { 178 + let ptr = drv.get().cast::<u8>(); 179 + 180 + // SAFETY: 181 + // - `drv.get()` yields a valid pointer to `Self::DriverType`. 182 + // - Adding `DEVICE_DRIVER_OFFSET` yields the address of the embedded `struct device_driver` 183 + // as guaranteed by the safety requirements of the `Driver` trait. 184 + let base = unsafe { ptr.add(T::DEVICE_DRIVER_OFFSET) }; 185 + 186 + // CAST: `base` points to the offset of the embedded `struct device_driver`. 187 + let base = base.cast::<bindings::device_driver>(); 188 + 189 + // SAFETY: It is safe to set the fields of `struct device_driver` on initialization. 190 + unsafe { (*base).p_cb.post_unbind_rust = Some(Self::post_unbind_callback) }; 191 + } 192 + 181 193 /// Creates a new instance of the registration object. 182 194 pub fn new(name: &'static CStr, module: &'static ThisModule) -> impl PinInit<Self, Error> { 183 195 try_pin_init!(Self { 184 - reg <- Opaque::try_ffi_init(|ptr: *mut T::RegType| { 196 + reg <- Opaque::try_ffi_init(|ptr: *mut T::DriverType| { 185 197 // SAFETY: `try_ffi_init` guarantees that `ptr` is valid for write. 186 - unsafe { ptr.write(T::RegType::default()) }; 198 + unsafe { ptr.write(T::DriverType::default()) }; 187 199 188 200 // SAFETY: `try_ffi_init` guarantees that `ptr` is valid for write, and it has 189 201 // just been initialised above, so it's also valid for read. 190 - let drv = unsafe { &*(ptr as *const Opaque<T::RegType>) }; 202 + let drv = unsafe { &*(ptr as *const Opaque<T::DriverType>) }; 203 + 204 + Self::callbacks_attach(drv); 191 205 192 206 // SAFETY: `drv` is guaranteed to be pinned until `T::unregister`. 193 207 unsafe { T::register(drv, name, module) }
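The new DriverLayout trait encodes on the Rust side the same layout contract that C bus drivers rely on: an outer driver structure embeds a struct device_driver at a fixed byte offset, and that offset alone is enough to reach the generic driver core fields. A toy C illustration of the contract (the outer struct here is a stand-in, not the real pci_driver):

#include <stddef.h>
#include <stdio.h>

/* Toy stand-ins, not the real kernel structs. */
struct device_driver { const char *name; };

struct toy_pci_driver {
    const void *id_table;
    struct device_driver driver;    /* embedded at a fixed offset */
};

int main(void)
{
    struct toy_pci_driver pdrv = { .driver = { .name = "demo" } };

    /* What DEVICE_DRIVER_OFFSET expresses: the outer struct's base
     * address plus the byte offset of the embedded device_driver. */
    char *base = (char *)&pdrv;
    struct device_driver *drv =
        (struct device_driver *)(base + offsetof(struct toy_pci_driver, driver));

    printf("%s\n", drv->name);      /* prints "demo" */
    return 0;
}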
+20 -11
rust/kernel/i2c.rs
··· 92 92 /// An adapter for the registration of I2C drivers. 93 93 pub struct Adapter<T: Driver>(T); 94 94 95 - // SAFETY: A call to `unregister` for a given instance of `RegType` is guaranteed to be valid if 95 + // SAFETY: 96 + // - `bindings::i2c_driver` is a C type declared as `repr(C)`. 97 + // - `T` is the type of the driver's device private data. 98 + // - `struct i2c_driver` embeds a `struct device_driver`. 99 + // - `DEVICE_DRIVER_OFFSET` is the correct byte offset to the embedded `struct device_driver`. 100 + unsafe impl<T: Driver + 'static> driver::DriverLayout for Adapter<T> { 101 + type DriverType = bindings::i2c_driver; 102 + type DriverData = T; 103 + const DEVICE_DRIVER_OFFSET: usize = core::mem::offset_of!(Self::DriverType, driver); 104 + } 105 + 106 + // SAFETY: A call to `unregister` for a given instance of `DriverType` is guaranteed to be valid if 96 107 // a preceding call to `register` has been successful. 97 108 unsafe impl<T: Driver + 'static> driver::RegistrationOps for Adapter<T> { 98 - type RegType = bindings::i2c_driver; 99 - 100 109 unsafe fn register( 101 - idrv: &Opaque<Self::RegType>, 110 + idrv: &Opaque<Self::DriverType>, 102 111 name: &'static CStr, 103 112 module: &'static ThisModule, 104 113 ) -> Result { ··· 142 133 (*idrv.get()).driver.acpi_match_table = acpi_table; 143 134 } 144 135 145 - // SAFETY: `idrv` is guaranteed to be a valid `RegType`. 136 + // SAFETY: `idrv` is guaranteed to be a valid `DriverType`. 146 137 to_result(unsafe { bindings::i2c_register_driver(module.0, idrv.get()) }) 147 138 } 148 139 149 - unsafe fn unregister(idrv: &Opaque<Self::RegType>) { 150 - // SAFETY: `idrv` is guaranteed to be a valid `RegType`. 140 + unsafe fn unregister(idrv: &Opaque<Self::DriverType>) { 141 + // SAFETY: `idrv` is guaranteed to be a valid `DriverType`. 151 142 unsafe { bindings::i2c_del_driver(idrv.get()) } 152 143 } 153 144 } ··· 178 169 // SAFETY: `remove_callback` is only ever called after a successful call to 179 170 // `probe_callback`, hence it's guaranteed that `I2cClient::set_drvdata()` has been called 180 171 // and stored a `Pin<KBox<T>>`. 181 - let data = unsafe { idev.as_ref().drvdata_obtain::<T>() }; 172 + let data = unsafe { idev.as_ref().drvdata_borrow::<T>() }; 182 173 183 - T::unbind(idev, data.as_ref()); 174 + T::unbind(idev, data); 184 175 } 185 176 186 177 extern "C" fn shutdown_callback(idev: *mut bindings::i2c_client) { ··· 190 181 // SAFETY: `shutdown_callback` is only ever called after a successful call to 191 182 // `probe_callback`, hence it's guaranteed that `Device::set_drvdata()` has been called 192 183 // and stored a `Pin<KBox<T>>`. 193 - let data = unsafe { idev.as_ref().drvdata_obtain::<T>() }; 184 + let data = unsafe { idev.as_ref().drvdata_borrow::<T>() }; 194 185 195 - T::shutdown(idev, data.as_ref()); 186 + T::shutdown(idev, data); 196 187 } 197 188 198 189 /// The [`i2c::IdTable`] of the corresponding driver.
+6 -3
rust/kernel/io.rs
··· 142 142 /// Bound checks are performed on compile time, hence if the offset is not known at compile 143 143 /// time, the build will fail. 144 144 $(#[$attr])* 145 - #[inline] 145 + // Always inline to optimize out error path of `io_addr_assert`. 146 + #[inline(always)] 146 147 pub fn $name(&self, offset: usize) -> $type_name { 147 148 let addr = self.io_addr_assert::<$type_name>(offset); 148 149 ··· 172 171 /// Bound checks are performed on compile time, hence if the offset is not known at compile 173 172 /// time, the build will fail. 174 173 $(#[$attr])* 175 - #[inline] 174 + // Always inline to optimize out error path of `io_addr_assert`. 175 + #[inline(always)] 176 176 pub fn $name(&self, value: $type_name, offset: usize) { 177 177 let addr = self.io_addr_assert::<$type_name>(offset); 178 178 ··· 241 239 self.addr().checked_add(offset).ok_or(EINVAL) 242 240 } 243 241 244 - #[inline] 242 + // Always inline to optimize out error path of `build_assert`. 243 + #[inline(always)] 245 244 fn io_addr_assert<U>(&self, offset: usize) -> usize { 246 245 build_assert!(Self::offset_valid::<U>(offset, SIZE)); 247 246
+2
rust/kernel/io/resource.rs
··· 226 226 /// Resource represents a memory region that must be ioremaped using `ioremap_np`. 227 227 pub const IORESOURCE_MEM_NONPOSTED: Flags = Flags::new(bindings::IORESOURCE_MEM_NONPOSTED); 228 228 229 + // Always inline to optimize out error path of `build_assert`. 230 + #[inline(always)] 229 231 const fn new(value: u32) -> Self { 230 232 crate::build_assert!(value as u64 <= c_ulong::MAX as u64); 231 233 Flags(value as c_ulong)
+2
rust/kernel/irq/flags.rs
··· 96 96 self.0 97 97 } 98 98 99 + // Always inline to optimize out error path of `build_assert`. 100 + #[inline(always)] 99 101 const fn new(value: u32) -> Self { 100 102 build_assert!(value as u64 <= c_ulong::MAX as u64); 101 103 Self(value as c_ulong)
+18 -9
rust/kernel/pci.rs
··· 50 50 /// An adapter for the registration of PCI drivers. 51 51 pub struct Adapter<T: Driver>(T); 52 52 53 - // SAFETY: A call to `unregister` for a given instance of `RegType` is guaranteed to be valid if 53 + // SAFETY: 54 + // - `bindings::pci_driver` is a C type declared as `repr(C)`. 55 + // - `T` is the type of the driver's device private data. 56 + // - `struct pci_driver` embeds a `struct device_driver`. 57 + // - `DEVICE_DRIVER_OFFSET` is the correct byte offset to the embedded `struct device_driver`. 58 + unsafe impl<T: Driver + 'static> driver::DriverLayout for Adapter<T> { 59 + type DriverType = bindings::pci_driver; 60 + type DriverData = T; 61 + const DEVICE_DRIVER_OFFSET: usize = core::mem::offset_of!(Self::DriverType, driver); 62 + } 63 + 64 + // SAFETY: A call to `unregister` for a given instance of `DriverType` is guaranteed to be valid if 54 65 // a preceding call to `register` has been successful. 55 66 unsafe impl<T: Driver + 'static> driver::RegistrationOps for Adapter<T> { 56 - type RegType = bindings::pci_driver; 57 - 58 67 unsafe fn register( 59 - pdrv: &Opaque<Self::RegType>, 68 + pdrv: &Opaque<Self::DriverType>, 60 69 name: &'static CStr, 61 70 module: &'static ThisModule, 62 71 ) -> Result { ··· 77 68 (*pdrv.get()).id_table = T::ID_TABLE.as_ptr(); 78 69 } 79 70 80 - // SAFETY: `pdrv` is guaranteed to be a valid `RegType`. 71 + // SAFETY: `pdrv` is guaranteed to be a valid `DriverType`. 81 72 to_result(unsafe { 82 73 bindings::__pci_register_driver(pdrv.get(), module.0, name.as_char_ptr()) 83 74 }) 84 75 } 85 76 86 - unsafe fn unregister(pdrv: &Opaque<Self::RegType>) { 87 - // SAFETY: `pdrv` is guaranteed to be a valid `RegType`. 77 + unsafe fn unregister(pdrv: &Opaque<Self::DriverType>) { 78 + // SAFETY: `pdrv` is guaranteed to be a valid `DriverType`. 88 79 unsafe { bindings::pci_unregister_driver(pdrv.get()) } 89 80 } 90 81 } ··· 123 114 // SAFETY: `remove_callback` is only ever called after a successful call to 124 115 // `probe_callback`, hence it's guaranteed that `Device::set_drvdata()` has been called 125 116 // and stored a `Pin<KBox<T>>`. 126 - let data = unsafe { pdev.as_ref().drvdata_obtain::<T>() }; 117 + let data = unsafe { pdev.as_ref().drvdata_borrow::<T>() }; 127 118 128 - T::unbind(pdev, data.as_ref()); 119 + T::unbind(pdev, data); 129 120 } 130 121 } 131 122
+18 -9
rust/kernel/platform.rs
··· 26 26 /// An adapter for the registration of platform drivers. 27 27 pub struct Adapter<T: Driver>(T); 28 28 29 - // SAFETY: A call to `unregister` for a given instance of `RegType` is guaranteed to be valid if 29 + // SAFETY: 30 + // - `bindings::platform_driver` is a C type declared as `repr(C)`. 31 + // - `T` is the type of the driver's device private data. 32 + // - `struct platform_driver` embeds a `struct device_driver`. 33 + // - `DEVICE_DRIVER_OFFSET` is the correct byte offset to the embedded `struct device_driver`. 34 + unsafe impl<T: Driver + 'static> driver::DriverLayout for Adapter<T> { 35 + type DriverType = bindings::platform_driver; 36 + type DriverData = T; 37 + const DEVICE_DRIVER_OFFSET: usize = core::mem::offset_of!(Self::DriverType, driver); 38 + } 39 + 40 + // SAFETY: A call to `unregister` for a given instance of `DriverType` is guaranteed to be valid if 30 41 // a preceding call to `register` has been successful. 31 42 unsafe impl<T: Driver + 'static> driver::RegistrationOps for Adapter<T> { 32 - type RegType = bindings::platform_driver; 33 - 34 43 unsafe fn register( 35 - pdrv: &Opaque<Self::RegType>, 44 + pdrv: &Opaque<Self::DriverType>, 36 45 name: &'static CStr, 37 46 module: &'static ThisModule, 38 47 ) -> Result { ··· 64 55 (*pdrv.get()).driver.acpi_match_table = acpi_table; 65 56 } 66 57 67 - // SAFETY: `pdrv` is guaranteed to be a valid `RegType`. 58 + // SAFETY: `pdrv` is guaranteed to be a valid `DriverType`. 68 59 to_result(unsafe { bindings::__platform_driver_register(pdrv.get(), module.0) }) 69 60 } 70 61 71 - unsafe fn unregister(pdrv: &Opaque<Self::RegType>) { 72 - // SAFETY: `pdrv` is guaranteed to be a valid `RegType`. 62 + unsafe fn unregister(pdrv: &Opaque<Self::DriverType>) { 63 + // SAFETY: `pdrv` is guaranteed to be a valid `DriverType`. 73 64 unsafe { bindings::platform_driver_unregister(pdrv.get()) }; 74 65 } 75 66 } ··· 101 92 // SAFETY: `remove_callback` is only ever called after a successful call to 102 93 // `probe_callback`, hence it's guaranteed that `Device::set_drvdata()` has been called 103 94 // and stored a `Pin<KBox<T>>`. 104 - let data = unsafe { pdev.as_ref().drvdata_obtain::<T>() }; 95 + let data = unsafe { pdev.as_ref().drvdata_borrow::<T>() }; 105 96 106 - T::unbind(pdev, data.as_ref()); 97 + T::unbind(pdev, data); 107 98 } 108 99 } 109 100
+18 -9
rust/kernel/usb.rs
··· 27 27 /// An adapter for the registration of USB drivers. 28 28 pub struct Adapter<T: Driver>(T); 29 29 30 - // SAFETY: A call to `unregister` for a given instance of `RegType` is guaranteed to be valid if 30 + // SAFETY: 31 + // - `bindings::usb_driver` is a C type declared as `repr(C)`. 32 + // - `T` is the type of the driver's device private data. 33 + // - `struct usb_driver` embeds a `struct device_driver`. 34 + // - `DEVICE_DRIVER_OFFSET` is the correct byte offset to the embedded `struct device_driver`. 35 + unsafe impl<T: Driver + 'static> driver::DriverLayout for Adapter<T> { 36 + type DriverType = bindings::usb_driver; 37 + type DriverData = T; 38 + const DEVICE_DRIVER_OFFSET: usize = core::mem::offset_of!(Self::DriverType, driver); 39 + } 40 + 41 + // SAFETY: A call to `unregister` for a given instance of `DriverType` is guaranteed to be valid if 31 42 // a preceding call to `register` has been successful. 32 43 unsafe impl<T: Driver + 'static> driver::RegistrationOps for Adapter<T> { 33 - type RegType = bindings::usb_driver; 34 - 35 44 unsafe fn register( 36 - udrv: &Opaque<Self::RegType>, 45 + udrv: &Opaque<Self::DriverType>, 37 46 name: &'static CStr, 38 47 module: &'static ThisModule, 39 48 ) -> Result { ··· 54 45 (*udrv.get()).id_table = T::ID_TABLE.as_ptr(); 55 46 } 56 47 57 - // SAFETY: `udrv` is guaranteed to be a valid `RegType`. 48 + // SAFETY: `udrv` is guaranteed to be a valid `DriverType`. 58 49 to_result(unsafe { 59 50 bindings::usb_register_driver(udrv.get(), module.0, name.as_char_ptr()) 60 51 }) 61 52 } 62 53 63 - unsafe fn unregister(udrv: &Opaque<Self::RegType>) { 64 - // SAFETY: `udrv` is guaranteed to be a valid `RegType`. 54 + unsafe fn unregister(udrv: &Opaque<Self::DriverType>) { 55 + // SAFETY: `udrv` is guaranteed to be a valid `DriverType`. 65 56 unsafe { bindings::usb_deregister(udrv.get()) }; 66 57 } 67 58 } ··· 103 94 // SAFETY: `disconnect_callback` is only ever called after a successful call to 104 95 // `probe_callback`, hence it's guaranteed that `Device::set_drvdata()` has been called 105 96 // and stored a `Pin<KBox<T>>`. 106 - let data = unsafe { dev.drvdata_obtain::<T>() }; 97 + let data = unsafe { dev.drvdata_borrow::<T>() }; 107 98 108 - T::disconnect(intf, data.as_ref()); 99 + T::disconnect(intf, data); 109 100 } 110 101 } 111 102
+1 -1
scripts/check-function-names.sh
··· 13 13 exit 1 14 14 fi 15 15 16 - bad_symbols=$(nm "$objfile" | awk '$2 ~ /^[TtWw]$/ {print $3}' | grep -E '^(startup|exit|split|unlikely|hot|unknown)(\.|$)') 16 + bad_symbols=$(${NM:-nm} "$objfile" | awk '$2 ~ /^[TtWw]$/ {print $3}' | grep -E '^(startup|exit|split|unlikely|hot|unknown)(\.|$)') 17 17 18 18 if [ -n "$bad_symbols" ]; then 19 19 echo "$bad_symbols" | while read -r sym; do
+6 -5
scripts/kconfig/nconf-cfg.sh
··· 6 6 cflags=$1 7 7 libs=$2 8 8 9 - PKG="ncursesw menuw panelw" 10 - PKG2="ncurses menu panel" 9 + # Keep library order for static linking (HOSTCC='cc -static') 10 + PKG="menuw panelw ncursesw" 11 + PKG2="menu panel ncurses" 11 12 12 13 if [ -n "$(command -v ${HOSTPKG_CONFIG})" ]; then 13 14 if ${HOSTPKG_CONFIG} --exists $PKG; then ··· 29 28 # find ncurses by pkg-config.) 30 29 if [ -f /usr/include/ncursesw/ncurses.h ]; then 31 30 echo -D_GNU_SOURCE -I/usr/include/ncursesw > ${cflags} 32 - echo -lncursesw -lmenuw -lpanelw > ${libs} 31 + echo -lmenuw -lpanelw -lncursesw > ${libs} 33 32 exit 0 34 33 fi 35 34 36 35 if [ -f /usr/include/ncurses/ncurses.h ]; then 37 36 echo -D_GNU_SOURCE -I/usr/include/ncurses > ${cflags} 38 - echo -lncurses -lmenu -lpanel > ${libs} 37 + echo -lmenu -lpanel -lncurses > ${libs} 39 38 exit 0 40 39 fi 41 40 42 41 if [ -f /usr/include/ncurses.h ]; then 43 42 echo -D_GNU_SOURCE > ${cflags} 44 - echo -lncurses -lmenu -lpanel > ${libs} 43 + echo -lmenu -lpanel -lncurses > ${libs} 45 44 exit 0 46 45 fi 47 46
+2
scripts/tracepoint-update.c
··· 49 49 array = realloc(array, sizeof(char *) * size); 50 50 if (!array) { 51 51 fprintf(stderr, "Failed memory allocation\n"); 52 + free(*vals); 53 + *vals = NULL; 52 54 return -1; 53 55 } 54 56 *vals = array;
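The tracepoint-update.c change plugs the classic realloc() failure leak: on failure realloc() returns NULL but leaves the old block allocated, so the surviving pointer must still be freed (or kept). A minimal sketch of the general idiom, with hypothetical names:

#include <stdlib.h>

/* Never let a failed realloc() overwrite the only copy of the
 * pointer; grow through a temporary instead. */
static int grow_array(char ***vals, size_t new_count)
{
    char **tmp = realloc(*vals, new_count * sizeof(*tmp));

    if (!tmp) {
        free(*vals);    /* the patch frees and NULLs on failure */
        *vals = NULL;
        return -1;
    }

    *vals = tmp;
    return 0;
}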
+2 -2
security/keys/trusted-keys/trusted_tpm2.c
··· 465 465 } 466 466 467 467 /** 468 - * tpm2_unseal_cmd() - execute a TPM2_Unload command 468 + * tpm2_unseal_cmd() - execute a TPM2_Unseal command 469 469 * 470 470 * @chip: TPM chip to use 471 471 * @payload: the key data in clear and encrypted form ··· 498 498 return rc; 499 499 } 500 500 501 - rc = tpm_buf_append_name(chip, &buf, options->keyhandle, NULL); 501 + rc = tpm_buf_append_name(chip, &buf, blob_handle, NULL); 502 502 if (rc) 503 503 goto out; 504 504
+28 -1
sound/hda/codecs/realtek/alc269.c
··· 3736 3736 ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE, 3737 3737 ALC287_FIXUP_YOGA7_14ITL_SPEAKERS, 3738 3738 ALC298_FIXUP_LENOVO_C940_DUET7, 3739 + ALC287_FIXUP_LENOVO_YOGA_BOOK_9I, 3739 3740 ALC287_FIXUP_13S_GEN2_SPEAKERS, 3740 3741 ALC256_FIXUP_SET_COEF_DEFAULTS, 3741 3742 ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE, ··· 3821 3820 id = ALC298_FIXUP_LENOVO_SPK_VOLUME; /* C940 */ 3822 3821 else 3823 3822 id = ALC287_FIXUP_YOGA7_14ITL_SPEAKERS; /* Duet 7 */ 3823 + __snd_hda_apply_fixup(codec, id, action, 0); 3824 + } 3825 + 3826 + /* A special fixup for Lenovo Yoga 9i and Yoga Book 9i 13IRU8 3827 + * both have the very same PCI SSID and vendor ID, so we need 3828 + * to apply different fixups depending on the subsystem ID 3829 + */ 3830 + static void alc287_fixup_lenovo_yoga_book_9i(struct hda_codec *codec, 3831 + const struct hda_fixup *fix, 3832 + int action) 3833 + { 3834 + int id; 3835 + 3836 + if (codec->core.subsystem_id == 0x17aa3881) 3837 + id = ALC287_FIXUP_TAS2781_I2C; /* Yoga Book 9i 13IRU8 */ 3838 + else 3839 + id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP; /* Yoga 9i */ 3824 3840 __snd_hda_apply_fixup(codec, id, action, 0); 3825 3841 } 3826 3842 ··· 5852 5834 .type = HDA_FIXUP_FUNC, 5853 5835 .v.func = alc298_fixup_lenovo_c940_duet7, 5854 5836 }, 5837 + [ALC287_FIXUP_LENOVO_YOGA_BOOK_9I] = { 5838 + .type = HDA_FIXUP_FUNC, 5839 + .v.func = alc287_fixup_lenovo_yoga_book_9i, 5840 + }, 5855 5841 [ALC287_FIXUP_13S_GEN2_SPEAKERS] = { 5856 5842 .type = HDA_FIXUP_VERBS, 5857 5843 .v.verbs = (const struct hda_verb[]) { ··· 7035 7013 SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_AMP), 7036 7014 SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_AMP), 7037 7015 SND_PCI_QUIRK(0x144d, 0xc832, "Samsung Galaxy Book Flex Alpha (NP730QCJ)", ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET), 7016 + SND_PCI_QUIRK(0x144d, 0xc876, "Samsung 730QED (NP730QED-KA2US)", ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET), 7038 7017 SND_PCI_QUIRK(0x144d, 0xca03, "Samsung Galaxy Book2 Pro 360 (NP930QED)", ALC298_FIXUP_SAMSUNG_AMP), 7039 7018 SND_PCI_QUIRK(0x144d, 0xca06, "Samsung Galaxy Book3 360 (NP730QFG)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET), 7040 7019 SND_PCI_QUIRK(0x144d, 0xc868, "Samsung Galaxy Book2 Pro (NP930XED)", ALC298_FIXUP_SAMSUNG_AMP), ··· 7214 7191 SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF), 7215 7192 SND_PCI_QUIRK(0x17aa, 0x3834, "Lenovo IdeaPad Slim 9i 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), 7216 7193 SND_PCI_QUIRK(0x17aa, 0x383d, "Legion Y9000X 2019", ALC285_FIXUP_LEGION_Y9000X_SPEAKERS), 7217 - SND_PCI_QUIRK(0x17aa, 0x3843, "Yoga 9i", ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP), 7194 + SND_PCI_QUIRK(0x17aa, 0x3843, "Lenovo Yoga 9i / Yoga Book 9i", ALC287_FIXUP_LENOVO_YOGA_BOOK_9I), 7218 7195 SND_PCI_QUIRK(0x17aa, 0x3847, "Legion 7 16ACHG6", ALC287_FIXUP_LEGION_16ACHG6), 7219 7196 SND_PCI_QUIRK(0x17aa, 0x384a, "Lenovo Yoga 7 15ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), 7220 7197 SND_PCI_QUIRK(0x17aa, 0x3852, "Lenovo Yoga 7 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), ··· 7805 7782 {0x12, 0x90a60140}, 7806 7783 {0x19, 0x04a11030}, 7807 7784 {0x21, 0x04211020}), 7785 + SND_HDA_PIN_QUIRK(0x10ec0274, 0x1d05, "TongFang", ALC274_FIXUP_HP_HEADSET_MIC, 7786 + {0x17, 0x90170110}, 7787 + {0x19, 0x03a11030}, 7788 + {0x21, 0x03211020}), 7808 7789 SND_HDA_PIN_QUIRK(0x10ec0282, 0x1025, "Acer", ALC282_FIXUP_ACER_DISABLE_LINEOUT, 7809 7790 ALC282_STANDARD_PINS, 7810 7791 {0x12, 0x90a609c0},
+2
sound/pci/ctxfi/ctamixer.c
··· 205 205 206 206 /* Set amixer specific operations */ 207 207 amixer->rsc.ops = &amixer_basic_rsc_ops; 208 + amixer->rsc.conj = 0; 208 209 amixer->ops = &amixer_ops; 209 210 amixer->input = NULL; 210 211 amixer->sum = NULL; ··· 368 367 return err; 369 368 370 369 sum->rsc.ops = &sum_basic_rsc_ops; 370 + sum->rsc.conj = 0; 371 371 372 372 return 0; 373 373 }
+17 -5
sound/usb/mixer.c
··· 1813 1813 1814 1814 range = (cval->max - cval->min) / cval->res; 1815 1815 /* 1816 - * Are there devices with volume range more than 255? I use a bit more 1817 - * to be sure. 384 is a resolution magic number found on Logitech 1818 - * devices. It will definitively catch all buggy Logitech devices. 1816 + * There are definitely devices with a range of ~20,000, so let's be 1817 + * conservative and allow for a bit more. 1819 1818 */ 1820 - if (range > 384) { 1819 + if (range > 65535) { 1821 1820 usb_audio_warn(mixer->chip, 1822 1821 "Warning! Unlikely big volume range (=%u), cval->res is probably wrong.", 1823 1822 range); ··· 2945 2946 2946 2947 static void snd_usb_mixer_free(struct usb_mixer_interface *mixer) 2947 2948 { 2949 + struct usb_mixer_elem_list *list, *next; 2950 + int id; 2951 + 2948 2952 /* kill pending URBs */ 2949 2953 snd_usb_mixer_disconnect(mixer); 2950 2954 2951 - kfree(mixer->id_elems); 2955 + /* Unregister controls first, snd_ctl_remove() frees the element */ 2956 + if (mixer->id_elems) { 2957 + for (id = 0; id < MAX_ID_ELEMS; id++) { 2958 + for (list = mixer->id_elems[id]; list; list = next) { 2959 + next = list->next_id_elem; 2960 + if (list->kctl) 2961 + snd_ctl_remove(mixer->chip->card, list->kctl); 2962 + } 2963 + } 2964 + kfree(mixer->id_elems); 2965 + } 2952 2966 if (mixer->urb) { 2953 2967 kfree(mixer->urb->transfer_buffer); 2954 2968 usb_free_urb(mixer->urb);
+3 -3
sound/usb/mixer_scarlett2.c
··· 2533 2533 err = scarlett2_usb_get(mixer, config_item->offset, buf, size); 2534 2534 if (err < 0) 2535 2535 return err; 2536 - if (size == 2) { 2536 + if (config_item->size == 16) { 2537 2537 u16 *buf_16 = buf; 2538 2538 2539 2539 for (i = 0; i < count; i++, buf_16++) 2540 2540 *buf_16 = le16_to_cpu(*(__le16 *)buf_16); 2541 - } else if (size == 4) { 2542 - u32 *buf_32 = buf; 2541 + } else if (config_item->size == 32) { 2542 + u32 *buf_32 = (u32 *)buf; 2543 2543 2544 2544 for (i = 0; i < count; i++, buf_32++) 2545 2545 *buf_32 = le32_to_cpu(*(__le32 *)buf_32);
+2 -1
sound/usb/pcm.c
··· 1553 1553 1554 1554 for (i = 0; i < ctx->packets; i++) { 1555 1555 counts = snd_usb_endpoint_next_packet_size(ep, ctx, i, avail); 1556 - if (counts < 0 || frames + counts >= ep->max_urb_frames) 1556 + if (counts < 0 || 1557 + (frames + counts) * stride > ctx->buffer_size) 1557 1558 break; 1558 1559 /* set up descriptor */ 1559 1560 urb->iso_frame_desc[i].offset = frames * stride;
+2
sound/usb/quirks.c
··· 2390 2390 QUIRK_FLAG_CTL_MSG_DELAY_1M), 2391 2391 DEVICE_FLG(0x2d99, 0x0026, /* HECATE G2 GAMING HEADSET */ 2392 2392 QUIRK_FLAG_MIXER_PLAYBACK_MIN_MUTE), 2393 + DEVICE_FLG(0x2fc6, 0xf06b, /* MOONDROP Moonriver2 Ti */ 2394 + QUIRK_FLAG_CTL_MSG_DELAY), 2393 2395 DEVICE_FLG(0x2fc6, 0xf0b7, /* iBasso DC07 Pro */ 2394 2396 QUIRK_FLAG_CTL_MSG_DELAY_1M), 2395 2397 DEVICE_FLG(0x30be, 0x0101, /* Schiit Hel */
+45 -16
tools/include/io_uring/mini_liburing.h
··· 6 6 #include <stdio.h> 7 7 #include <string.h> 8 8 #include <unistd.h> 9 + #include <sys/uio.h> 9 10 10 11 struct io_sq_ring { 11 12 unsigned int *head; ··· 56 55 struct io_uring_sq sq; 57 56 struct io_uring_cq cq; 58 57 int ring_fd; 58 + unsigned flags; 59 59 }; 60 60 61 61 #if defined(__x86_64) || defined(__i386__) ··· 74 72 void *ptr; 75 73 int ret; 76 74 77 - sq->ring_sz = p->sq_off.array + p->sq_entries * sizeof(unsigned int); 75 + if (p->flags & IORING_SETUP_NO_SQARRAY) { 76 + sq->ring_sz = p->cq_off.cqes; 77 + sq->ring_sz += p->cq_entries * sizeof(struct io_uring_cqe); 78 + } else { 79 + sq->ring_sz = p->sq_off.array; 80 + sq->ring_sz += p->sq_entries * sizeof(unsigned int); 81 + } 82 + 78 83 ptr = mmap(0, sq->ring_sz, PROT_READ | PROT_WRITE, 79 84 MAP_SHARED | MAP_POPULATE, fd, IORING_OFF_SQ_RING); 80 85 if (ptr == MAP_FAILED) ··· 92 83 sq->kring_entries = ptr + p->sq_off.ring_entries; 93 84 sq->kflags = ptr + p->sq_off.flags; 94 85 sq->kdropped = ptr + p->sq_off.dropped; 95 - sq->array = ptr + p->sq_off.array; 86 + if (!(p->flags & IORING_SETUP_NO_SQARRAY)) 87 + sq->array = ptr + p->sq_off.array; 96 88 97 89 size = p->sq_entries * sizeof(struct io_uring_sqe); 98 90 sq->sqes = mmap(0, size, PROT_READ | PROT_WRITE, ··· 136 126 flags, sig, _NSIG / 8); 137 127 } 138 128 129 + static inline int io_uring_queue_init_params(unsigned int entries, 130 + struct io_uring *ring, 131 + struct io_uring_params *p) 132 + { 133 + int fd, ret; 134 + 135 + memset(ring, 0, sizeof(*ring)); 136 + 137 + fd = io_uring_setup(entries, p); 138 + if (fd < 0) 139 + return fd; 140 + ret = io_uring_mmap(fd, p, &ring->sq, &ring->cq); 141 + if (!ret) { 142 + ring->ring_fd = fd; 143 + ring->flags = p->flags; 144 + } else { 145 + close(fd); 146 + } 147 + return ret; 148 + } 149 + 139 150 static inline int io_uring_queue_init(unsigned int entries, 140 151 struct io_uring *ring, 141 152 unsigned int flags) 142 153 { 143 154 struct io_uring_params p; 144 - int fd, ret; 145 155 146 - memset(ring, 0, sizeof(*ring)); 147 156 memset(&p, 0, sizeof(p)); 148 157 p.flags = flags; 149 158 150 - fd = io_uring_setup(entries, &p); 151 - if (fd < 0) 152 - return fd; 153 - ret = io_uring_mmap(fd, &p, &ring->sq, &ring->cq); 154 - if (!ret) 155 - ring->ring_fd = fd; 156 - else 157 - close(fd); 158 - return ret; 159 + return io_uring_queue_init_params(entries, ring, &p); 159 160 } 160 161 161 162 /* Get a sqe */ ··· 220 199 221 200 ktail = *sq->ktail; 222 201 to_submit = sq->sqe_tail - sq->sqe_head; 223 - for (submitted = 0; submitted < to_submit; submitted++) { 224 - read_barrier(); 225 - sq->array[ktail++ & mask] = sq->sqe_head++ & mask; 202 + 203 + if (!(ring->flags & IORING_SETUP_NO_SQARRAY)) { 204 + for (submitted = 0; submitted < to_submit; submitted++) { 205 + read_barrier(); 206 + sq->array[ktail++ & mask] = sq->sqe_head++ & mask; 207 + } 208 + } else { 209 + ktail += to_submit; 210 + sq->sqe_head += to_submit; 211 + submitted = to_submit; 226 212 } 213 + 227 214 if (!submitted) 228 215 return 0; 229 216
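With the params-based init helper added above, a caller can opt into IORING_SETUP_NO_SQARRAY explicitly. A small hypothetical caller, assuming this mini_liburing.h is in scope and the running kernel supports the flag:

#include <string.h>

static int setup_ring_no_sqarray(struct io_uring *ring)
{
    struct io_uring_params p;

    memset(&p, 0, sizeof(p));
    p.flags = IORING_SETUP_NO_SQARRAY;

    /* The params variant also lets the caller inspect what the
     * kernel actually set up (entries, offsets, feature bits). */
    return io_uring_queue_init_params(8, ring, &p);
}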
+2 -1
tools/net/ynl/Makefile
··· 41 41 rm -rf pyynl.egg-info 42 42 rm -rf build 43 43 44 - install: libynl.a lib/*.h 44 + install: libynl.a lib/*.h ynltool 45 45 @echo -e "\tINSTALL libynl.a" 46 46 @$(INSTALL) -d $(DESTDIR)$(libdir) 47 47 @$(INSTALL) -m 0644 libynl.a $(DESTDIR)$(libdir)/libynl.a ··· 51 51 @echo -e "\tINSTALL pyynl" 52 52 @pip install --prefix=$(DESTDIR)$(prefix) . 53 53 @make -C generated install 54 + @make -C ynltool install 54 55 55 56 run_tests: 56 57 @$(MAKE) -C tests run_tests
+1 -1
tools/net/ynl/ynl-regen.sh
··· 21 21 for f in $files; do 22 22 # params: 0 1 2 3 23 23 # $YAML YNL-GEN kernel $mode 24 - params=( $(git grep -B1 -h '/\* YNL-GEN' $f | sed 's@/\*\(.*\)\*/@\1@') ) 24 + params=( $(git grep --no-line-number -B1 -h '/\* YNL-GEN' $f | sed 's@/\*\(.*\)\*/@\1@') ) 25 25 args=$(sed -n 's@/\* YNL-ARG \(.*\) \*/@\1@p' $f) 26 26 27 27 if [ $f -nt ${params[0]} -a -z "$force" ]; then
+17 -4
tools/objtool/Makefile
··· 77 77 # We check using HOSTCC directly rather than the shared feature framework 78 78 # because objtool is a host tool that links against host libraries. 79 79 # 80 - HAVE_LIBOPCODES := $(shell echo 'int main(void) { return 0; }' | \ 81 - $(HOSTCC) -xc - -o /dev/null -lopcodes 2>/dev/null && echo y) 80 + # When using shared libraries, -lopcodes is sufficient as dependencies are 81 + # resolved automatically. With static libraries, we must explicitly link 82 + # against libopcodes' dependencies: libbfd, libiberty, and sometimes libz. 83 + # Try each combination and use the first one that succeeds. 84 + # 85 + LIBOPCODES_LIBS := $(shell \ 86 + for libs in "-lopcodes" \ 87 + "-lopcodes -lbfd" \ 88 + "-lopcodes -lbfd -liberty" \ 89 + "-lopcodes -lbfd -liberty -lz"; do \ 90 + echo 'extern void disassemble_init_for_target(void *);' \ 91 + 'int main(void) { disassemble_init_for_target(0); return 0; }' | \ 92 + $(HOSTCC) -xc - -o /dev/null $$libs 2>/dev/null && \ 93 + echo "$$libs" && break; \ 94 + done) 82 95 83 96 # Styled disassembler support requires binutils >= 2.39 84 97 HAVE_DISASM_STYLED := $(shell echo '$(pound)include <dis-asm.h>' | \ ··· 99 86 100 87 BUILD_DISAS := n 101 88 102 - ifeq ($(HAVE_LIBOPCODES),y) 89 + ifneq ($(LIBOPCODES_LIBS),) 103 90 BUILD_DISAS := y 104 91 OBJTOOL_CFLAGS += -DDISAS -DPACKAGE='"objtool"' 105 - OBJTOOL_LDFLAGS += -lopcodes 92 + OBJTOOL_LDFLAGS += $(LIBOPCODES_LIBS) 106 93 ifeq ($(HAVE_DISASM_STYLED),y) 107 94 OBJTOOL_CFLAGS += -DDISASM_INIT_STYLED 108 95 endif
+5 -2
tools/perf/util/parse-events.c
··· 251 251 event_attr_init(attr); 252 252 253 253 evsel = evsel__new_idx(attr, *idx); 254 - if (!evsel) 255 - goto out_err; 254 + if (!evsel) { 255 + perf_cpu_map__put(cpus); 256 + perf_cpu_map__put(pmu_cpus); 257 + return NULL; 258 + } 256 259 257 260 if (name) { 258 261 evsel->name = strdup(name);
-1
tools/testing/selftests/alsa/utimer-test.c
··· 141 141 TEST(wrong_timers_test) { 142 142 int timer_dev_fd; 143 143 int utimer_fd; 144 - size_t i; 145 144 struct snd_timer_uinfo wrong_timer = { 146 145 .resolution = 0, 147 146 .id = UTIMER_DEFAULT_ID,
+1
tools/testing/selftests/net/Makefile
··· 48 48 ipv6_flowlabel.sh \ 49 49 ipv6_force_forwarding.sh \ 50 50 ipv6_route_update_soft_lockup.sh \ 51 + ipvtap_test.sh \ 51 52 l2_tos_ttl_inherit.sh \ 52 53 l2tp.sh \ 53 54 link_netns.py \
+5 -2
tools/testing/selftests/net/amt.sh
··· 73 73 # +------------------------+ 74 74 #============================================================================== 75 75 76 + source lib.sh 77 + 76 78 readonly LISTENER=$(mktemp -u listener-XXXXXXXX) 77 79 readonly GATEWAY=$(mktemp -u gateway-XXXXXXXX) 78 80 readonly RELAY=$(mktemp -u relay-XXXXXXXX) ··· 248 246 249 247 send_mcast4() 250 248 { 251 - sleep 2 249 + sleep 5 250 + wait_local_port_listen ${LISTENER} 4000 udp 252 251 ip netns exec "${SOURCE}" bash -c \ 253 252 'printf "%s %128s" 172.17.0.2 | nc -w 1 -u 239.0.0.1 4000' & 254 253 } 255 254 256 255 send_mcast6() 257 256 { 258 - sleep 2 257 + wait_local_port_listen ${LISTENER} 6000 udp 259 258 ip netns exec "${SOURCE}" bash -c \ 260 259 'printf "%s %128s" 2001:db8:3::2 | nc -w 1 -u ff0e::5:6 6000' & 261 260 }
+2
tools/testing/selftests/net/config
··· 48 48 CONFIG_IPV6_SIT=y 49 49 CONFIG_IPV6_VTI=y 50 50 CONFIG_IPVLAN=m 51 + CONFIG_IPVTAP=m 51 52 CONFIG_KALLSYMS=y 52 53 CONFIG_L2TP=m 53 54 CONFIG_L2TP_ETH=m ··· 117 116 CONFIG_PSAMPLE=m 118 117 CONFIG_RPS=y 119 118 CONFIG_SYSFS=y 119 + CONFIG_TAP=m 120 120 CONFIG_TCP_MD5SIG=y 121 121 CONFIG_TEST_BLACKHOLE_DEV=m 122 122 CONFIG_TEST_BPF=m
+168
tools/testing/selftests/net/ipvtap_test.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # Simple tests for ipvtap 5 + 6 + 7 + # 8 + # The testing environment looks this way: 9 + # 10 + # |------HNS-------| |------PHY-------| 11 + # | veth<----------------->veth | 12 + # |------|--|------| |----------------| 13 + # | | 14 + # | | |-----TST0-------| 15 + # | |------------|----ipvlan | 16 + # | |----------------| 17 + # | 18 + # | |-----TST1-------| 19 + # |---------------|----ipvlan | 20 + # |----------------| 21 + # 22 + 23 + ALL_TESTS=" 24 + test_ip_set 25 + " 26 + 27 + source lib.sh 28 + 29 + DEBUG=0 30 + 31 + VETH_HOST=vethtst.h 32 + VETH_PHY=vethtst.p 33 + 34 + NS_COUNT=32 35 + IP_ITERATIONS=1024 36 + IPSET_TIMEOUT="60s" 37 + 38 + ns_run() { 39 + ns=$1 40 + shift 41 + if [[ "$ns" == "global" ]]; then 42 + "$@" >/dev/null 43 + else 44 + ip netns exec "$ns" "$@" >/dev/null 45 + fi 46 + } 47 + 48 + test_ip_setup_env() { 49 + setup_ns NS_PHY 50 + setup_ns HST_NS 51 + 52 + # setup simulated other-host (phy) and host itself 53 + ns_run "$HST_NS" ip link add $VETH_HOST type veth peer name $VETH_PHY \ 54 + netns "$NS_PHY" >/dev/null 55 + ns_run "$HST_NS" ip link set $VETH_HOST up 56 + ns_run "$NS_PHY" ip link set $VETH_PHY up 57 + 58 + for ((i=0; i<NS_COUNT; i++)); do 59 + setup_ns ipvlan_ns_$i 60 + ns="ipvlan_ns_$i" 61 + if [ "$DEBUG" = "1" ]; then 62 + echo "created NS ${!ns}" 63 + fi 64 + if ! ns_run "$HST_NS" ip link add netns ${!ns} ipvlan0 \ 65 + link $VETH_HOST \ 66 + type ipvtap mode l2 bridge; then 67 + exit_error "FAIL: Failed to configure ipvlan link." 68 + fi 69 + done 70 + } 71 + 72 + test_ip_cleanup_env() { 73 + ns_run "$HST_NS" ip link del $VETH_HOST 74 + cleanup_all_ns 75 + } 76 + 77 + exit_error() { 78 + echo "$1" 79 + exit $ksft_fail 80 + } 81 + 82 + rnd() { 83 + echo $(( RANDOM % 32 + 16 )) 84 + } 85 + 86 + test_ip_set_thread() { 87 + # Here we are trying to create some IP conflicts between namespaces. 88 + # If we just add and remove one IP, nothing interesting will happen. 89 + # But if we add a random IP and then remove a random IP, 90 + # conflicts eventually start to appear. 91 + ip link set ipvlan0 up 92 + for ((i=0; i<IP_ITERATIONS; i++)); do 93 + v=$(rnd) 94 + ip a a "172.25.0.$v/24" dev ipvlan0 2>/dev/null 95 + ip a a "fc00::$v/64" dev ipvlan0 2>/dev/null 96 + v=$(rnd) 97 + ip a d "172.25.0.$v/24" dev ipvlan0 2>/dev/null 98 + ip a d "fc00::$v/64" dev ipvlan0 2>/dev/null 99 + done 100 + } 101 + 102 + test_ip_set() { 103 + RET=0 104 + 105 + trap test_ip_cleanup_env EXIT 106 + 107 + test_ip_setup_env 108 + 109 + declare -A ns_pids 110 + for ((i=0; i<NS_COUNT; i++)); do 111 + ns="ipvlan_ns_$i" 112 + ns_run ${!ns} timeout "$IPSET_TIMEOUT" \ 113 + bash -c "$0 test_ip_set_thread"& 114 + ns_pids[$i]=$!
115 + done 116 + 117 + for ((i=0; i<NS_COUNT; i++)); do 118 + wait "${ns_pids[$i]}" 119 + done 120 + 121 + declare -A all_ips 122 + for ((i=0; i<NS_COUNT; i++)); do 123 + ns="ipvlan_ns_$i" 124 + ip_output=$(ip netns exec ${!ns} ip a l dev ipvlan0 | grep inet) 125 + while IFS= read -r nsip_out; do 126 + if [[ -z $nsip_out ]]; then 127 + continue; 128 + fi 129 + nsip=$(awk '{print $2}' <<< "$nsip_out") 130 + if [[ -v all_ips[$nsip] ]]; then 131 + RET=$ksft_fail 132 + log_test "conflict for $nsip" 133 + return "$RET" 134 + else 135 + all_ips[$nsip]=$i 136 + fi 137 + done <<< "$ip_output" 138 + done 139 + 140 + if [ "$DEBUG" = "1" ]; then 141 + for key in "${!all_ips[@]}"; do 142 + echo "$key: ${all_ips[$key]}" 143 + done 144 + fi 145 + 146 + trap - EXIT 147 + test_ip_cleanup_env 148 + 149 + log_test "test multithreaded ip set" 150 + } 151 + 152 + if [[ "$1" == "-d" ]]; then 153 + DEBUG=1 154 + shift 155 + fi 156 + 157 + if [[ "$1" == "-t" ]]; then 158 + shift 159 + TESTS="$*" 160 + fi 161 + 162 + if [[ "$1" == "test_ip_set_thread" ]]; then 163 + test_ip_set_thread 164 + else 165 + require_command ip 166 + 167 + tests_run 168 + fi
+25
tools/testing/selftests/tc-testing/tc-tests/qdiscs/teql.json
··· 81 81 "$TC qdisc del dev $DUMMY handle 1: root", 82 82 "$IP link del dev $DUMMY" 83 83 ] 84 + }, 85 + { 86 + "id": "124e", 87 + "name": "Try to add teql as a child qdisc", 88 + "category": [ 89 + "qdisc", 90 + "ets", 91 + "tbf" 92 + ], 93 + "plugins": { 94 + "requires": [ 95 + "nsPlugin" 96 + ] 97 + }, 98 + "setup": [ 99 + "$TC qdisc add dev $DUMMY root handle 1: qfq", 100 + "$TC class add dev $DUMMY parent 1: classid 1:1 qfq weight 15 maxpkt 16384" 101 + ], 102 + "cmdUnderTest": "$TC qdisc add dev $DUMMY parent 1:1 handle 2:1 teql0", 103 + "expExitCode": "2", 104 + "verifyCmd": "$TC -s -j qdisc ls dev $DUMMY parent 1:1", 105 + "matchJSON": [], 106 + "teardown": [ 107 + "$TC qdisc del dev $DUMMY root handle 1:" 108 + ] 84 109 } 85 110 ]
+7 -4
tools/testing/selftests/ublk/kublk.c
··· 753 753 754 754 static int ublk_thread_is_done(struct ublk_thread *t) 755 755 { 756 - return (t->state & UBLKS_T_STOPPING) && ublk_thread_is_idle(t); 756 + return (t->state & UBLKS_T_STOPPING) && ublk_thread_is_idle(t) && !t->cmd_inflight; 757 757 } 758 758 759 759 static inline void ublksrv_handle_tgt_cqe(struct ublk_thread *t, ··· 1054 1054 } 1055 1055 if (ret < 0) { 1056 1056 ublk_err("%s: ublk_ctrl_start_dev failed: %d\n", __func__, ret); 1057 - goto fail; 1057 + /* stop device so that inflight uring_cmd can be cancelled */ 1058 + ublk_ctrl_stop_dev(dev); 1059 + goto fail_start; 1058 1060 } 1059 1061 1060 1062 ublk_ctrl_get_info(dev); ··· 1064 1062 ublk_ctrl_dump(dev); 1065 1063 else 1066 1064 ublk_send_dev_event(ctx, dev, dev->dev_info.dev_id); 1067 - 1065 + fail_start: 1068 1066 /* wait until we are terminated */ 1069 1067 for (i = 0; i < dev->nthreads; i++) 1070 1068 pthread_join(tinfo[i].thread, &thread_ret); ··· 1274 1272 } 1275 1273 1276 1274 ret = ublk_start_daemon(ctx, dev); 1277 - ublk_dbg(UBLK_DBG_DEV, "%s: daemon exit %d\b", ret); 1275 + ublk_dbg(UBLK_DBG_DEV, "%s: daemon exit %d\n", __func__, ret); 1278 1276 if (ret < 0) 1279 1277 ublk_ctrl_del_dev(dev); 1280 1278 ··· 1620 1618 int option_idx, opt; 1621 1619 const char *cmd = argv[1]; 1622 1620 struct dev_ctx ctx = { 1621 + ._evtfd = -1, 1623 1622 .queue_depth = 128, 1624 1623 .nr_hw_queues = 2, 1625 1624 .dev_id = -1,
+1 -1
tools/testing/selftests/vDSO/vgetrandom-chacha.S
··· 14 14 #elif defined(__riscv) && __riscv_xlen == 64 15 15 #include "../../../../arch/riscv/kernel/vdso/vgetrandom-chacha.S" 16 16 #elif defined(__s390x__) 17 - #include "../../../../arch/s390/kernel/vdso64/vgetrandom-chacha.S" 17 + #include "../../../../arch/s390/kernel/vdso/vgetrandom-chacha.S" 18 18 #elif defined(__x86_64__) 19 19 #include "../../../../arch/x86/entry/vdso/vgetrandom-chacha.S" 20 20 #endif
+1 -1
tools/testing/vsock/util.h
··· 25 25 }; 26 26 27 27 static const char * const transport_ksyms[] = { 28 - #define x(name, symbol) "d " symbol "_transport", 28 + #define x(name, symbol) " " symbol "_transport", 29 29 KNOWN_TRANSPORTS(x) 30 30 #undef x 31 31 };
+117
tools/testing/vsock/vsock_test.c
··· 347 347 } 348 348 349 349 #define SOCK_BUF_SIZE (2 * 1024 * 1024) 350 + #define SOCK_BUF_SIZE_SMALL (64 * 1024) 350 351 #define MAX_MSG_PAGES 4 351 352 352 353 static void test_seqpacket_msg_bounds_client(const struct test_opts *opts) 353 354 { 355 + unsigned long long sock_buf_size; 354 356 unsigned long curr_hash; 355 357 size_t max_msg_size; 356 358 int page_size; ··· 364 362 perror("connect"); 365 363 exit(EXIT_FAILURE); 366 364 } 365 + 366 + sock_buf_size = SOCK_BUF_SIZE; 367 + 368 + setsockopt_ull_check(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_MAX_SIZE, 369 + sock_buf_size, 370 + "setsockopt(SO_VM_SOCKETS_BUFFER_MAX_SIZE)"); 371 + 372 + setsockopt_ull_check(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE, 373 + sock_buf_size, 374 + "setsockopt(SO_VM_SOCKETS_BUFFER_SIZE)"); 367 375 368 376 /* Wait, until receiver sets buffer size. */ 369 377 control_expectln("SRVREADY"); ··· 2231 2219 close(fd); 2232 2220 } 2233 2221 2222 + static void test_stream_tx_credit_bounds_client(const struct test_opts *opts) 2223 + { 2224 + unsigned long long sock_buf_size; 2225 + size_t total = 0; 2226 + char buf[4096]; 2227 + int fd; 2228 + 2229 + memset(buf, 'A', sizeof(buf)); 2230 + 2231 + fd = vsock_stream_connect(opts->peer_cid, opts->peer_port); 2232 + if (fd < 0) { 2233 + perror("connect"); 2234 + exit(EXIT_FAILURE); 2235 + } 2236 + 2237 + sock_buf_size = SOCK_BUF_SIZE_SMALL; 2238 + 2239 + setsockopt_ull_check(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_MAX_SIZE, 2240 + sock_buf_size, 2241 + "setsockopt(SO_VM_SOCKETS_BUFFER_MAX_SIZE)"); 2242 + 2243 + setsockopt_ull_check(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE, 2244 + sock_buf_size, 2245 + "setsockopt(SO_VM_SOCKETS_BUFFER_SIZE)"); 2246 + 2247 + if (fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK) < 0) { 2248 + perror("fcntl(F_SETFL)"); 2249 + exit(EXIT_FAILURE); 2250 + } 2251 + 2252 + control_expectln("SRVREADY"); 2253 + 2254 + for (;;) { 2255 + ssize_t sent = send(fd, buf, sizeof(buf), 0); 2256 + 2257 + if (sent == 0) { 2258 + fprintf(stderr, "unexpected EOF while sending bytes\n"); 2259 + exit(EXIT_FAILURE); 2260 + } 2261 + 2262 + if (sent < 0) { 2263 + if (errno == EINTR) 2264 + continue; 2265 + 2266 + if (errno == EAGAIN || errno == EWOULDBLOCK) 2267 + break; 2268 + 2269 + perror("send"); 2270 + exit(EXIT_FAILURE); 2271 + } 2272 + 2273 + total += sent; 2274 + } 2275 + 2276 + control_writeln("CLIDONE"); 2277 + close(fd); 2278 + 2279 + /* We should not be able to send more bytes than the value set as 2280 + * local buffer size. 
2281 + */ 2282 + if (total > sock_buf_size) { 2283 + fprintf(stderr, 2284 + "TX credit too large: queued %zu bytes (expected <= %llu)\n", 2285 + total, sock_buf_size); 2286 + exit(EXIT_FAILURE); 2287 + } 2288 + } 2289 + 2290 + static void test_stream_tx_credit_bounds_server(const struct test_opts *opts) 2291 + { 2292 + unsigned long long sock_buf_size; 2293 + int fd; 2294 + 2295 + fd = vsock_stream_accept(VMADDR_CID_ANY, opts->peer_port, NULL); 2296 + if (fd < 0) { 2297 + perror("accept"); 2298 + exit(EXIT_FAILURE); 2299 + } 2300 + 2301 + sock_buf_size = SOCK_BUF_SIZE; 2302 + 2303 + setsockopt_ull_check(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_MAX_SIZE, 2304 + sock_buf_size, 2305 + "setsockopt(SO_VM_SOCKETS_BUFFER_MAX_SIZE)"); 2306 + 2307 + setsockopt_ull_check(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE, 2308 + sock_buf_size, 2309 + "setsockopt(SO_VM_SOCKETS_BUFFER_SIZE)"); 2310 + 2311 + control_writeln("SRVREADY"); 2312 + control_expectln("CLIDONE"); 2313 + 2314 + close(fd); 2315 + } 2316 + 2234 2317 static struct test_case test_cases[] = { 2235 2318 { 2236 2319 .name = "SOCK_STREAM connection reset", ··· 2509 2402 .name = "SOCK_STREAM accept()ed socket custom setsockopt()", 2510 2403 .run_client = test_stream_accepted_setsockopt_client, 2511 2404 .run_server = test_stream_accepted_setsockopt_server, 2405 + }, 2406 + { 2407 + .name = "SOCK_STREAM virtio MSG_ZEROCOPY coalescence corruption", 2408 + .run_client = test_stream_msgzcopy_mangle_client, 2409 + .run_server = test_stream_msgzcopy_mangle_server, 2410 + }, 2411 + { 2412 + .name = "SOCK_STREAM TX credit bounds", 2413 + .run_client = test_stream_tx_credit_bounds_client, 2414 + .run_server = test_stream_tx_credit_bounds_server, 2512 2415 }, 2513 2416 {}, 2514 2417 };
+74
tools/testing/vsock/vsock_test_zerocopy.c
··· 9 9 #include <stdio.h> 10 10 #include <stdlib.h> 11 11 #include <string.h> 12 + #include <sys/ioctl.h> 12 13 #include <sys/mman.h> 13 14 #include <unistd.h> 14 15 #include <poll.h> 15 16 #include <linux/errqueue.h> 16 17 #include <linux/kernel.h> 18 + #include <linux/sockios.h> 19 + #include <linux/time64.h> 17 20 #include <errno.h> 18 21 19 22 #include "control.h" 23 + #include "timeout.h" 20 24 #include "vsock_test_zerocopy.h" 21 25 #include "msg_zerocopy_common.h" 22 26 ··· 358 354 } 359 355 360 356 control_expectln("DONE"); 357 + close(fd); 358 + } 359 + 360 + #define GOOD_COPY_LEN 128 /* net/vmw_vsock/virtio_transport_common.c */ 361 + 362 + void test_stream_msgzcopy_mangle_client(const struct test_opts *opts) 363 + { 364 + char sbuf1[PAGE_SIZE + 1], sbuf2[GOOD_COPY_LEN]; 365 + unsigned long hash; 366 + struct pollfd fds; 367 + int fd, i; 368 + 369 + fd = vsock_stream_connect(opts->peer_cid, opts->peer_port); 370 + if (fd < 0) { 371 + perror("connect"); 372 + exit(EXIT_FAILURE); 373 + } 374 + 375 + enable_so_zerocopy_check(fd); 376 + 377 + memset(sbuf1, 'x', sizeof(sbuf1)); 378 + send_buf(fd, sbuf1, sizeof(sbuf1), 0, sizeof(sbuf1)); 379 + 380 + for (i = 0; i < sizeof(sbuf2); i++) 381 + sbuf2[i] = rand() & 0xff; 382 + 383 + send_buf(fd, sbuf2, sizeof(sbuf2), MSG_ZEROCOPY, sizeof(sbuf2)); 384 + 385 + hash = hash_djb2(sbuf2, sizeof(sbuf2)); 386 + control_writeulong(hash); 387 + 388 + fds.fd = fd; 389 + fds.events = 0; 390 + 391 + if (poll(&fds, 1, TIMEOUT * MSEC_PER_SEC) != 1 || 392 + !(fds.revents & POLLERR)) { 393 + perror("poll"); 394 + exit(EXIT_FAILURE); 395 + } 396 + 397 + close(fd); 398 + } 399 + 400 + void test_stream_msgzcopy_mangle_server(const struct test_opts *opts) 401 + { 402 + unsigned long local_hash, remote_hash; 403 + char rbuf[PAGE_SIZE + 1]; 404 + int fd; 405 + 406 + fd = vsock_stream_accept(VMADDR_CID_ANY, opts->peer_port, NULL); 407 + if (fd < 0) { 408 + perror("accept"); 409 + exit(EXIT_FAILURE); 410 + } 411 + 412 + /* Wait, don't race the (buggy) skbs coalescence. */ 413 + vsock_ioctl_int(fd, SIOCINQ, PAGE_SIZE + 1 + GOOD_COPY_LEN); 414 + 415 + /* Discard the first packet. */ 416 + recv_buf(fd, rbuf, PAGE_SIZE + 1, 0, PAGE_SIZE + 1); 417 + 418 + recv_buf(fd, rbuf, GOOD_COPY_LEN, 0, GOOD_COPY_LEN); 419 + remote_hash = control_readulong(); 420 + local_hash = hash_djb2(rbuf, GOOD_COPY_LEN); 421 + 422 + if (local_hash != remote_hash) { 423 + fprintf(stderr, "Data received corrupted\n"); 424 + exit(EXIT_FAILURE); 425 + } 426 + 361 427 close(fd); 362 428 }
+3
tools/testing/vsock/vsock_test_zerocopy.h
··· 12 12 void test_stream_msgzcopy_empty_errq_client(const struct test_opts *opts); 13 13 void test_stream_msgzcopy_empty_errq_server(const struct test_opts *opts); 14 14 15 + void test_stream_msgzcopy_mangle_client(const struct test_opts *opts); 16 + void test_stream_msgzcopy_mangle_server(const struct test_opts *opts); 17 + 15 18 #endif /* VSOCK_TEST_ZEROCOPY_H */