Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

spi: fix explicit controller deregistration

Johan Hovold <johan@kernel.org> says:

Turns out we have a few drivers that get the teardown ordering wrong
even when not using device-managed registration (cf. [1] and [2]).

Fix this to avoid issues such as system errors due to unclocked accesses,
NULL-pointer dereferences, hangs or failed I/O during
deregistration (e.g. when powering down devices).

Johan

[1] https://lore.kernel.org/lkml/20260409120419.388546-2-johan@kernel.org/
[2] https://lore.kernel.org/lkml/20260410081757.503099-1-johan@kernel.org/
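
The correct ordering can be sketched in a driver's remove() path. This is a hypothetical `foo` driver for illustration only; `spi_unregister_controller()`, `spi_controller_get_devdata()`, `platform_get_drvdata()` and `clk_disable_unprepare()` are real kernel APIs, while the `foo_spi` names are made up:

```c
#include <linux/clk.h>
#include <linux/platform_device.h>
#include <linux/spi/spi.h>

/* Hypothetical driver state, for illustration. */
struct foo_spi {
	struct clk *clk;
};

static void foo_spi_remove(struct platform_device *pdev)
{
	struct spi_controller *ctlr = platform_get_drvdata(pdev);
	struct foo_spi *fs = spi_controller_get_devdata(ctlr);

	/*
	 * Deregister the controller first, while the hardware can still
	 * perform I/O: deregistration may flush queued messages and tear
	 * down attached SPI devices.
	 */
	spi_unregister_controller(ctlr);

	/*
	 * Only then disable clocks and other resources. Doing this in the
	 * opposite order is the bug class fixed by this series: unclocked
	 * accesses, NULL-pointer dereferences, hangs or failed I/O during
	 * deregistration.
	 */
	clk_disable_unprepare(fs->clk);
}
```

With device-managed (devm) registration the core already unwinds in the right order; the series fixes drivers that deregister explicitly.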

+2635 -1380
+2
.mailmap
··· 849 849 Tvrtko Ursulin <tursulin@ursulin.net> <tvrtko@ursulin.net> 850 850 Tycho Andersen <tycho@tycho.pizza> <tycho@tycho.ws> 851 851 Tzung-Bi Shih <tzungbi@kernel.org> <tzungbi@google.com> 852 + Ulf Hansson <ulfh@kernel.org> <ulf.hansson@linaro.org> 853 + Ulf Hansson <ulfh@kernel.org> <ulf.hansson@stericsson.com> 852 854 Umang Jain <uajain@igalia.com> <umang.jain@ideasonboard.com> 853 855 Uwe Kleine-König <ukleinek@informatik.uni-freiburg.de> 854 856 Uwe Kleine-König <u.kleine-koenig@baylibre.com> <ukleinek@baylibre.com>
+36 -21
Documentation/arch/riscv/zicfilp.rst
··· 76 76 4. prctl() enabling 77 77 -------------------- 78 78 79 - :c:macro:`PR_SET_INDIR_BR_LP_STATUS` / :c:macro:`PR_GET_INDIR_BR_LP_STATUS` / 80 - :c:macro:`PR_LOCK_INDIR_BR_LP_STATUS` are three prctls added to manage indirect 81 - branch tracking. These prctls are architecture-agnostic and return -EINVAL if 82 - the underlying functionality is not supported. 79 + Per-task indirect branch tracking state can be monitored and 80 + controlled via the :c:macro:`PR_GET_CFI` and :c:macro:`PR_SET_CFI` 81 + ``prctl()`` arguments (respectively), by supplying 82 + :c:macro:`PR_CFI_BRANCH_LANDING_PADS` as the second argument. These 83 + are architecture-agnostic, and will return -EINVAL if the underlying 84 + functionality is not supported. 83 85 84 - * prctl(PR_SET_INDIR_BR_LP_STATUS, unsigned long arg) 86 + * prctl(:c:macro:`PR_SET_CFI`, :c:macro:`PR_CFI_BRANCH_LANDING_PADS`, unsigned long arg) 85 87 86 - If arg1 is :c:macro:`PR_INDIR_BR_LP_ENABLE` and if CPU supports 87 - ``zicfilp`` then the kernel will enable indirect branch tracking for the 88 - task. The dynamic loader can issue this :c:macro:`prctl` once it has 88 + arg is a bitmask. 89 + 90 + If :c:macro:`PR_CFI_ENABLE` is set in arg, and the CPU supports 91 + ``zicfilp``, then the kernel will enable indirect branch tracking for 92 + the task. The dynamic loader can issue this ``prctl()`` once it has 89 93 determined that all the objects loaded in the address space support 90 - indirect branch tracking. Additionally, if there is a `dlopen` to an 91 - object which wasn't compiled with ``zicfilp``, the dynamic loader can 92 - issue this prctl with arg1 set to 0 (i.e. :c:macro:`PR_INDIR_BR_LP_ENABLE` 93 - cleared). 94 95 95 - * prctl(PR_GET_INDIR_BR_LP_STATUS, unsigned long * arg) 96 + Indirect branch tracking state can also be locked once enabled. This 97 + prevents the task from subsequently disabling it. This is done by 98 + setting the bit :c:macro:`PR_CFI_LOCK` in arg. Either indirect branch 99 + tracking must already be enabled for the task, or the bit 100 + :c:macro:`PR_CFI_ENABLE` must also be set in arg. This is intended 101 + for environments that wish to run with a strict security posture that 102 + do not wish to load objects without ``zicfilp`` support. 96 103 97 - Returns the current status of indirect branch tracking. If enabled 98 - it'll return :c:macro:`PR_INDIR_BR_LP_ENABLE` 104 + Indirect branch tracking can also be disabled for the task, assuming 105 + that it has not previously been enabled and locked. If there is a 106 + ``dlopen()`` to an object which wasn't compiled with ``zicfilp``, the 107 + dynamic loader can issue this ``prctl()`` with arg set to 108 + :c:macro:`PR_CFI_DISABLE`. Disabling indirect branch tracking for the 109 + task is not possible if it has previously been enabled and locked. 99 110 100 - * prctl(PR_LOCK_INDIR_BR_LP_STATUS, unsigned long arg) 101 111 102 - Locks the current status of indirect branch tracking on the task. User 103 - space may want to run with a strict security posture and wouldn't want 104 - loading of objects without ``zicfilp`` support in them, to disallow 105 - disabling of indirect branch tracking. In this case, user space can 106 - use this prctl to lock the current settings. 112 + * prctl(:c:macro:`PR_GET_CFI`, :c:macro:`PR_CFI_BRANCH_LANDING_PADS`, unsigned long * arg) 113 + 114 + Returns the current status of indirect branch tracking into a bitmask 115 + stored into the memory location pointed to by arg. The bitmask will 116 + have the :c:macro:`PR_CFI_ENABLE` bit set if indirect branch tracking 117 + is currently enabled for the task, and if it is locked, will 118 + additionally have the :c:macro:`PR_CFI_LOCK` bit set. If indirect 119 + branch tracking is currently disabled for the task, the 120 + :c:macro:`PR_CFI_DISABLE` bit will be set. 121 + 107 122 108 123 5. violations related to indirect branch tracking 109 124 --------------------------------------------------
+2 -3
Documentation/devicetree/bindings/display/msm/qcom,qcm2290-mdss.yaml
··· 33 33 - const: core 34 34 35 35 iommus: 36 - maxItems: 2 36 + maxItems: 1 37 37 38 38 interconnects: 39 39 items: ··· 107 107 interconnect-names = "mdp0-mem", 108 108 "cpu-cfg"; 109 109 110 - iommus = <&apps_smmu 0x420 0x2>, 111 - <&apps_smmu 0x421 0x0>; 110 + iommus = <&apps_smmu 0x420 0x2>; 112 111 ranges; 113 112 114 113 display-controller@5e01000 {
+2 -5
Documentation/devicetree/bindings/media/qcom,qcm2290-venus.yaml
··· 42 42 - const: vcodec0_bus 43 43 44 44 iommus: 45 - maxItems: 5 45 + maxItems: 2 46 46 47 47 interconnects: 48 48 maxItems: 2 ··· 102 102 memory-region = <&pil_video_mem>; 103 103 104 104 iommus = <&apps_smmu 0x860 0x0>, 105 - <&apps_smmu 0x880 0x0>, 106 - <&apps_smmu 0x861 0x04>, 107 - <&apps_smmu 0x863 0x0>, 108 - <&apps_smmu 0x804 0xe0>; 105 + <&apps_smmu 0x880 0x0>; 109 106 110 107 interconnects = <&mmnrt_virt MASTER_VIDEO_P0 RPM_ALWAYS_TAG 111 108 &bimc SLAVE_EBI1 RPM_ALWAYS_TAG>,
+2 -2
Documentation/devicetree/bindings/net/nvidia,tegra234-mgbe.yaml
··· 42 42 - const: mgbe 43 43 - const: mac 44 44 - const: mac-divider 45 - - const: ptp-ref 45 + - const: ptp_ref 46 46 - const: rx-input-m 47 47 - const: rx-input 48 48 - const: tx ··· 133 133 <&bpmp TEGRA234_CLK_MGBE0_RX_PCS_M>, 134 134 <&bpmp TEGRA234_CLK_MGBE0_RX_PCS>, 135 135 <&bpmp TEGRA234_CLK_MGBE0_TX_PCS>; 136 - clock-names = "mgbe", "mac", "mac-divider", "ptp-ref", "rx-input-m", 136 + clock-names = "mgbe", "mac", "mac-divider", "ptp_ref", "rx-input-m", 137 137 "rx-input", "tx", "eee-pcs", "rx-pcs-input", "rx-pcs-m", 138 138 "rx-pcs", "tx-pcs"; 139 139 resets = <&bpmp TEGRA234_RESET_MGBE0_MAC>,
+10 -3
Documentation/devicetree/bindings/sound/ti,tas2552.yaml
··· 12 12 - Baojun Xu <baojun.xu@ti.com> 13 13 14 14 description: > 15 - The TAS2552 can receive its reference clock via MCLK, BCLK, IVCLKIN pin or 16 - use the internal 1.8MHz. This CLKIN is used by the PLL. In addition to PLL, 15 + The TAS2552 can receive its reference clock via MCLK, BCLK, IVCLKIN pin or 16 + use the internal 1.8MHz. This CLKIN is used by the PLL. In addition to PLL, 17 17 the PDM reference clock is also selectable: PLL, IVCLKIN, BCLK or MCLK. 18 18 19 19 For system integration the dt-bindings/sound/tas2552.h header file provides ··· 34 34 maxItems: 1 35 35 description: gpio pin to enable/disable the device 36 36 37 + '#sound-dai-cells': 38 + const: 0 39 + 37 40 required: 38 41 - compatible 39 42 - reg ··· 44 41 - iovdd-supply 45 42 - avdd-supply 46 43 47 - additionalProperties: false 44 + allOf: 45 + - $ref: dai-common.yaml# 46 + 47 + unevaluatedProperties: false 48 48 49 49 examples: 50 50 - | ··· 60 54 audio-codec@41 { 61 55 compatible = "ti,tas2552"; 62 56 reg = <0x41>; 57 + #sound-dai-cells = <0>; 63 58 vbat-supply = <&reg_vbat>; 64 59 iovdd-supply = <&reg_iovdd>; 65 60 avdd-supply = <&reg_avdd>;
+10 -10
MAINTAINERS
··· 1292 1292 1293 1293 AMD XGBE DRIVER 1294 1294 M: Raju Rangoju <Raju.Rangoju@amd.com> 1295 + M: Prashanth Kumar K R <PrashanthKumar.K.R@amd.com> 1295 1296 L: netdev@vger.kernel.org 1296 1297 S: Maintained 1297 1298 F: arch/arm64/boot/dts/amd/amd-seattle-xgbe*.dtsi ··· 6718 6717 CPUIDLE DRIVER - ARM PSCI 6719 6718 M: Lorenzo Pieralisi <lpieralisi@kernel.org> 6720 6719 M: Sudeep Holla <sudeep.holla@kernel.org> 6721 - M: Ulf Hansson <ulf.hansson@linaro.org> 6720 + M: Ulf Hansson <ulfh@kernel.org> 6722 6721 L: linux-pm@vger.kernel.org 6723 6722 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 6724 6723 S: Supported ··· 6726 6725 F: drivers/cpuidle/cpuidle-psci.c 6727 6726 6728 6727 CPUIDLE DRIVER - ARM PSCI PM DOMAIN 6729 - M: Ulf Hansson <ulf.hansson@linaro.org> 6728 + M: Ulf Hansson <ulfh@kernel.org> 6730 6729 L: linux-pm@vger.kernel.org 6731 6730 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 6732 6731 S: Supported ··· 6735 6734 F: drivers/cpuidle/cpuidle-psci.h 6736 6735 6737 6736 CPUIDLE DRIVER - DT IDLE PM DOMAIN 6738 - M: Ulf Hansson <ulf.hansson@linaro.org> 6737 + M: Ulf Hansson <ulfh@kernel.org> 6739 6738 L: linux-pm@vger.kernel.org 6740 6739 S: Supported 6741 6740 T: git git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/linux-pm.git ··· 10731 10730 F: drivers/i2c/muxes/i2c-demux-pinctrl.c 10732 10731 10733 10732 GENERIC PM DOMAINS 10734 - M: Ulf Hansson <ulf.hansson@linaro.org> 10733 + M: Ulf Hansson <ulfh@kernel.org> 10735 10734 L: linux-pm@vger.kernel.org 10736 10735 S: Supported 10737 10736 F: Documentation/devicetree/bindings/power/power?domain* ··· 18091 18090 F: include/linux/spi/mmc_spi.h 18092 18091 18093 18092 MULTIMEDIA CARD (MMC), SECURE DIGITAL (SD) AND SDIO SUBSYSTEM 18094 - M: Ulf Hansson <ulf.hansson@linaro.org> 18093 + M: Ulf Hansson <ulfh@kernel.org> 18095 18094 L: linux-mmc@vger.kernel.org 18096 18095 S: Maintained 18097 18096 T: git git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc.git ··· 21077 21076 F: net/atm/pppoatm.c 21078 21077 21079 21078 PPP OVER ETHERNET 21080 - M: Michal Ostrowski <mostrows@earthlink.net> 21081 - S: Maintained 21079 + S: Orphan 21082 21080 F: drivers/net/ppp/pppoe.c 21083 21081 F: drivers/net/ppp/pppox.c 21084 21082 ··· 22131 22131 F: drivers/infiniband/sw/rdmavt 22132 22132 22133 22133 RDS - RELIABLE DATAGRAM SOCKETS 22134 - M: Allison Henderson <allison.henderson@oracle.com> 22134 + M: Allison Henderson <achender@kernel.org> 22135 22135 L: netdev@vger.kernel.org 22136 22136 L: linux-rdma@vger.kernel.org 22137 22137 L: rds-devel@oss.oracle.com (moderated for non-subscribers) ··· 24697 24697 SONY MEMORYSTICK SUBSYSTEM 24698 24698 M: Maxim Levitsky <maximlevitsky@gmail.com> 24699 24699 M: Alex Dubov <oakad@yahoo.com> 24700 - M: Ulf Hansson <ulf.hansson@linaro.org> 24700 + M: Ulf Hansson <ulfh@kernel.org> 24701 24701 L: linux-mmc@vger.kernel.org 24702 24702 S: Maintained 24703 24703 T: git git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc.git ··· 27616 27616 F: drivers/video/fbdev/uvesafb.* 27617 27617 27618 27618 Ux500 CLOCK DRIVERS 27619 - M: Ulf Hansson <ulf.hansson@linaro.org> 27619 + M: Ulf Hansson <ulfh@kernel.org> 27620 27620 L: linux-clk@vger.kernel.org 27621 27621 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 27622 27622 S: Maintained
+1 -1
Makefile
··· 2 2 VERSION = 7 3 3 PATCHLEVEL = 0 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc7 5 + EXTRAVERSION = 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
+1 -1
arch/arm/boot/dts/microchip/sam9x7.dtsi
··· 1226 1226 interrupt-controller; 1227 1227 #gpio-cells = <2>; 1228 1228 gpio-controller; 1229 - #gpio-lines = <26>; 1229 + #gpio-lines = <27>; 1230 1230 clocks = <&pmc PMC_TYPE_PERIPHERAL 3>; 1231 1231 }; 1232 1232
+1 -5
arch/arm/boot/dts/nxp/imx/imx6-logicpd-som.dtsi
··· 36 36 &gpmi { 37 37 pinctrl-names = "default"; 38 38 pinctrl-0 = <&pinctrl_gpmi_nand>; 39 + nand-on-flash-bbt; 39 40 status = "okay"; 40 - 41 - nand@0 { 42 - reg = <0>; 43 - nand-on-flash-bbt; 44 - }; 45 41 }; 46 42 47 43 &i2c3 {
+1 -5
arch/arm/boot/dts/nxp/imx/imx6qdl-icore.dtsi
··· 172 172 &gpmi { 173 173 pinctrl-names = "default"; 174 174 pinctrl-0 = <&pinctrl_gpmi_nand>; 175 + nand-on-flash-bbt; 175 176 status = "okay"; 176 - 177 - nand@0 { 178 - reg = <0>; 179 - nand-on-flash-bbt; 180 - }; 181 177 }; 182 178 183 179 &i2c1 {
+1 -5
arch/arm/boot/dts/nxp/imx/imx6qdl-phytec-pfla02.dtsi
··· 102 102 &gpmi { 103 103 pinctrl-names = "default"; 104 104 pinctrl-0 = <&pinctrl_gpmi_nand>; 105 + nand-on-flash-bbt; 105 106 status = "okay"; 106 - 107 - nand@0 { 108 - reg = <0>; 109 - nand-on-flash-bbt; 110 - }; 111 107 }; 112 108 113 109 &i2c1 {
+1 -5
arch/arm/boot/dts/nxp/imx/imx6qdl-phytec-phycore-som.dtsi
··· 73 73 &gpmi { 74 74 pinctrl-names = "default"; 75 75 pinctrl-0 = <&pinctrl_gpmi_nand>; 76 + nand-on-flash-bbt; 76 77 status = "disabled"; 77 - 78 - nand@0 { 79 - reg = <0>; 80 - nand-on-flash-bbt; 81 - }; 82 78 }; 83 79 84 80 &i2c3 {
+1 -5
arch/arm/boot/dts/nxp/imx/imx6qdl-skov-cpu.dtsi
··· 260 260 &gpmi { 261 261 pinctrl-names = "default"; 262 262 pinctrl-0 = <&pinctrl_gpmi_nand>; 263 + nand-on-flash-bbt; 263 264 #address-cells = <1>; 264 265 #size-cells = <0>; 265 266 status = "okay"; 266 - 267 - nand@0 { 268 - reg = <0>; 269 - nand-on-flash-bbt; 270 - }; 271 267 }; 272 268 273 269 &i2c3 {
+1 -5
arch/arm/boot/dts/nxp/imx/imx6qdl-tx6.dtsi
··· 252 252 &gpmi { 253 253 pinctrl-names = "default"; 254 254 pinctrl-0 = <&pinctrl_gpmi_nand>; 255 + nand-on-flash-bbt; 255 256 fsl,no-blockmark-swap; 256 257 status = "okay"; 257 - 258 - nand@0 { 259 - reg = <0>; 260 - nand-on-flash-bbt; 261 - }; 262 258 }; 263 259 264 260 &i2c1 {
+1 -5
arch/arm/boot/dts/nxp/imx/imx6ul-geam.dts
··· 133 133 &gpmi { 134 134 pinctrl-names = "default"; 135 135 pinctrl-0 = <&pinctrl_gpmi_nand>; 136 + nand-on-flash-bbt; 136 137 status = "okay"; 137 - 138 - nand@0 { 139 - reg = <0>; 140 - nand-on-flash-bbt; 141 - }; 142 138 }; 143 139 144 140 &i2c1 {
+1 -5
arch/arm/boot/dts/nxp/imx/imx6ul-isiot.dtsi
··· 101 101 &gpmi { 102 102 pinctrl-names = "default"; 103 103 pinctrl-0 = <&pinctrl_gpmi_nand>; 104 + nand-on-flash-bbt; 104 105 status = "disabled"; 105 - 106 - nand@0 { 107 - reg = <0>; 108 - nand-on-flash-bbt; 109 - }; 110 106 }; 111 107 112 108 &i2c1 {
+1 -5
arch/arm/boot/dts/nxp/imx/imx6ul-phytec-phycore-som.dtsi
··· 63 63 &gpmi { 64 64 pinctrl-names = "default"; 65 65 pinctrl-0 = <&pinctrl_gpmi_nand>; 66 + nand-on-flash-bbt; 66 67 status = "disabled"; 67 - 68 - nand@0 { 69 - reg = <0>; 70 - nand-on-flash-bbt; 71 - }; 72 68 }; 73 69 74 70 &i2c1 {
+1 -5
arch/arm/boot/dts/nxp/imx/imx6ul-tx6ul.dtsi
··· 296 296 &gpmi { 297 297 pinctrl-names = "default"; 298 298 pinctrl-0 = <&pinctrl_gpmi_nand>; 299 + nand-on-flash-bbt; 299 300 fsl,no-blockmark-swap; 300 301 status = "okay"; 301 - 302 - nand@0 { 303 - reg = <0>; 304 - nand-on-flash-bbt; 305 - }; 306 302 }; 307 303 308 304 &i2c2 {
+4 -8
arch/arm/boot/dts/nxp/imx/imx6ull-colibri.dtsi
··· 160 160 pinctrl-names = "default"; 161 161 pinctrl-0 = <&pinctrl_gpmi_nand>; 162 162 fsl,use-minimum-ecc; 163 + nand-on-flash-bbt; 164 + nand-ecc-mode = "hw"; 165 + nand-ecc-strength = <8>; 166 + nand-ecc-step-size = <512>; 163 167 status = "okay"; 164 - 165 - nand@0 { 166 - reg = <0>; 167 - nand-on-flash-bbt; 168 - nand-ecc-mode = "hw"; 169 - nand-ecc-strength = <8>; 170 - nand-ecc-step-size = <512>; 171 - }; 172 168 }; 173 169 174 170 /* I2C3_SDA/SCL on SODIMM 194/196 (e.g. RTC on carrier board) */
+4 -8
arch/arm/boot/dts/nxp/imx/imx6ull-engicam-microgea.dtsi
··· 43 43 &gpmi { 44 44 pinctrl-names = "default"; 45 45 pinctrl-0 = <&pinctrl_gpmi_nand>; 46 + nand-ecc-mode = "hw"; 47 + nand-ecc-strength = <0>; 48 + nand-ecc-step-size = <0>; 49 + nand-on-flash-bbt; 46 50 status = "okay"; 47 - 48 - nand@0 { 49 - reg = <0>; 50 - nand-ecc-mode = "hw"; 51 - nand-ecc-strength = <0>; 52 - nand-ecc-step-size = <0>; 53 - nand-on-flash-bbt; 54 - }; 55 51 }; 56 52 57 53 &iomuxc {
+1 -5
arch/arm/boot/dts/nxp/imx/imx6ull-myir-mys-6ulx.dtsi
··· 60 60 &gpmi { 61 61 pinctrl-names = "default"; 62 62 pinctrl-0 = <&pinctrl_gpmi_nand>; 63 + nand-on-flash-bbt; 63 64 status = "disabled"; 64 - 65 - nand@0 { 66 - reg = <0>; 67 - nand-on-flash-bbt; 68 - }; 69 65 }; 70 66 71 67 &uart1 {
+1 -5
arch/arm/boot/dts/nxp/imx/imx6ulz-bsh-smm-m2.dts
··· 25 25 &gpmi { 26 26 pinctrl-names = "default"; 27 27 pinctrl-0 = <&pinctrl_gpmi_nand>; 28 + nand-on-flash-bbt; 28 29 status = "okay"; 29 - 30 - nand@0 { 31 - reg = <0>; 32 - nand-on-flash-bbt; 33 - }; 34 30 }; 35 31 36 32 &snvs_poweroff {
+2 -6
arch/arm/boot/dts/nxp/imx/imx7-colibri.dtsi
··· 375 375 /* NAND on such SKUs */ 376 376 &gpmi { 377 377 fsl,use-minimum-ecc; 378 + nand-ecc-mode = "hw"; 379 + nand-on-flash-bbt; 378 380 pinctrl-names = "default"; 379 381 pinctrl-0 = <&pinctrl_gpmi_nand>; 380 - 381 - nand@0 { 382 - reg = <0>; 383 - nand-ecc-mode = "hw"; 384 - nand-on-flash-bbt; 385 - }; 386 382 }; 387 383 388 384 /* On-module Power I2C */
+1 -1
arch/arm64/boot/dts/allwinner/sun55i-a523.dtsi
··· 901 901 interrupts = <GIC_SPI 172 IRQ_TYPE_LEVEL_HIGH>; 902 902 clocks = <&r_ccu CLK_BUS_R_SPI>, <&r_ccu CLK_R_SPI>; 903 903 clock-names = "ahb", "mod"; 904 - dmas = <&dma 53>, <&dma 53>; 904 + dmas = <&mcu_dma 13>, <&mcu_dma 13>; 905 905 dma-names = "rx", "tx"; 906 906 resets = <&r_ccu RST_BUS_R_SPI>; 907 907 status = "disabled";
+1 -1
arch/arm64/boot/dts/freescale/imx8mq-librem5-r3.dts
··· 7 7 8 8 &a53_opp_table { 9 9 opp-1000000000 { 10 - opp-microvolt = <950000>; 10 + opp-microvolt = <1000000>; 11 11 }; 12 12 }; 13 13
+7 -17
arch/arm64/boot/dts/freescale/imx8mq-librem5.dtsi
··· 880 880 regulator-max-microvolt = <1300000>; 881 881 regulator-boot-on; 882 882 regulator-ramp-delay = <1250>; 883 - rohm,dvs-run-voltage = <880000>; 884 - rohm,dvs-idle-voltage = <820000>; 885 - rohm,dvs-suspend-voltage = <810000>; 883 + rohm,dvs-run-voltage = <900000>; 884 + rohm,dvs-idle-voltage = <850000>; 885 + rohm,dvs-suspend-voltage = <850000>; 886 886 regulator-always-on; 887 887 }; 888 888 ··· 892 892 regulator-max-microvolt = <1300000>; 893 893 regulator-boot-on; 894 894 regulator-ramp-delay = <1250>; 895 - rohm,dvs-run-voltage = <950000>; 896 - rohm,dvs-idle-voltage = <850000>; 895 + rohm,dvs-run-voltage = <1000000>; 896 + rohm,dvs-idle-voltage = <900000>; 897 897 regulator-always-on; 898 898 }; 899 899 ··· 902 902 regulator-min-microvolt = <700000>; 903 903 regulator-max-microvolt = <1300000>; 904 904 regulator-boot-on; 905 - rohm,dvs-run-voltage = <850000>; 905 + rohm,dvs-run-voltage = <900000>; 906 906 }; 907 907 908 908 buck4_reg: BUCK4 { 909 909 regulator-name = "buck4"; 910 910 regulator-min-microvolt = <700000>; 911 911 regulator-max-microvolt = <1300000>; 912 - rohm,dvs-run-voltage = <930000>; 912 + rohm,dvs-run-voltage = <1000000>; 913 913 }; 914 914 915 915 buck5_reg: BUCK5 { ··· 1447 1447 pinctrl-0 = <&pinctrl_wdog>; 1448 1448 fsl,ext-reset-output; 1449 1449 status = "okay"; 1450 - }; 1451 - 1452 - &a53_opp_table { 1453 - opp-1000000000 { 1454 - opp-microvolt = <850000>; 1455 - }; 1456 - 1457 - opp-1500000000 { 1458 - opp-microvolt = <950000>; 1459 - }; 1460 1450 };
+1 -1
arch/arm64/boot/dts/freescale/imx8mq.dtsi
··· 1632 1632 <&clk IMX8MQ_GPU_PLL_OUT>, 1633 1633 <&clk IMX8MQ_GPU_PLL>; 1634 1634 assigned-clock-rates = <800000000>, <800000000>, 1635 - <800000000>, <800000000>, <0>; 1635 + <800000000>, <400000000>, <0>; 1636 1636 power-domains = <&pgc_gpu>; 1637 1637 }; 1638 1638
+10 -10
arch/arm64/boot/dts/freescale/imx91-tqma9131.dtsi
··· 272 272 /* enable SION for data and cmd pad due to ERR052021 */ 273 273 pinctrl_usdhc1: usdhc1grp { 274 274 fsl,pins = /* PD | FSEL 3 | DSE X5 */ 275 - <MX91_PAD_SD1_CLK__USDHC1_CLK 0x5be>, 275 + <MX91_PAD_SD1_CLK__USDHC1_CLK 0x59e>, 276 276 /* HYS | FSEL 0 | no drive */ 277 277 <MX91_PAD_SD1_STROBE__USDHC1_STROBE 0x1000>, 278 278 /* HYS | FSEL 3 | X5 */ 279 - <MX91_PAD_SD1_CMD__USDHC1_CMD 0x400011be>, 279 + <MX91_PAD_SD1_CMD__USDHC1_CMD 0x4000139e>, 280 280 /* HYS | FSEL 3 | X4 */ 281 - <MX91_PAD_SD1_DATA0__USDHC1_DATA0 0x4000119e>, 282 - <MX91_PAD_SD1_DATA1__USDHC1_DATA1 0x4000119e>, 283 - <MX91_PAD_SD1_DATA2__USDHC1_DATA2 0x4000119e>, 284 - <MX91_PAD_SD1_DATA3__USDHC1_DATA3 0x4000119e>, 285 - <MX91_PAD_SD1_DATA4__USDHC1_DATA4 0x4000119e>, 286 - <MX91_PAD_SD1_DATA5__USDHC1_DATA5 0x4000119e>, 287 - <MX91_PAD_SD1_DATA6__USDHC1_DATA6 0x4000119e>, 288 - <MX91_PAD_SD1_DATA7__USDHC1_DATA7 0x4000119e>; 281 + <MX91_PAD_SD1_DATA0__USDHC1_DATA0 0x4000139e>, 282 + <MX91_PAD_SD1_DATA1__USDHC1_DATA1 0x4000139e>, 283 + <MX91_PAD_SD1_DATA2__USDHC1_DATA2 0x4000139e>, 284 + <MX91_PAD_SD1_DATA3__USDHC1_DATA3 0x4000139e>, 285 + <MX91_PAD_SD1_DATA4__USDHC1_DATA4 0x4000139e>, 286 + <MX91_PAD_SD1_DATA5__USDHC1_DATA5 0x4000139e>, 287 + <MX91_PAD_SD1_DATA6__USDHC1_DATA6 0x4000139e>, 288 + <MX91_PAD_SD1_DATA7__USDHC1_DATA7 0x4000139e>; 289 289 }; 290 290 291 291 pinctrl_wdog: wdoggrp {
+2
arch/arm64/boot/dts/freescale/imx93-9x9-qsb.dts
··· 507 507 pinctrl-2 = <&pinctrl_usdhc1_200mhz>; 508 508 bus-width = <8>; 509 509 non-removable; 510 + fsl,tuning-step = <1>; 510 511 status = "okay"; 511 512 }; 512 513 ··· 520 519 vmmc-supply = <&reg_usdhc2_vmmc>; 521 520 bus-width = <4>; 522 521 no-mmc; 522 + fsl,tuning-step = <1>; 523 523 status = "okay"; 524 524 }; 525 525
+13 -13
arch/arm64/boot/dts/freescale/imx93-tqma9352.dtsi
··· 271 271 /* enable SION for data and cmd pad due to ERR052021 */ 272 272 pinctrl_usdhc1: usdhc1grp { 273 273 fsl,pins = < 274 - /* PD | FSEL 3 | DSE X5 */ 275 - MX93_PAD_SD1_CLK__USDHC1_CLK 0x5be 274 + /* PD | FSEL 3 | DSE X4 */ 275 + MX93_PAD_SD1_CLK__USDHC1_CLK 0x59e 276 276 /* HYS | FSEL 0 | no drive */ 277 277 MX93_PAD_SD1_STROBE__USDHC1_STROBE 0x1000 278 - /* HYS | FSEL 3 | X5 */ 279 - MX93_PAD_SD1_CMD__USDHC1_CMD 0x400011be 280 - /* HYS | FSEL 3 | X4 */ 281 - MX93_PAD_SD1_DATA0__USDHC1_DATA0 0x4000119e 282 - MX93_PAD_SD1_DATA1__USDHC1_DATA1 0x4000119e 283 - MX93_PAD_SD1_DATA2__USDHC1_DATA2 0x4000119e 284 - MX93_PAD_SD1_DATA3__USDHC1_DATA3 0x4000119e 285 - MX93_PAD_SD1_DATA4__USDHC1_DATA4 0x4000119e 286 - MX93_PAD_SD1_DATA5__USDHC1_DATA5 0x4000119e 287 - MX93_PAD_SD1_DATA6__USDHC1_DATA6 0x4000119e 288 - MX93_PAD_SD1_DATA7__USDHC1_DATA7 0x4000119e 278 + /* HYS | PU | FSEL 3 | DSE X4 */ 279 + MX93_PAD_SD1_CMD__USDHC1_CMD 0x4000139e 280 + /* HYS | PU | FSEL 3 | DSE X4 */ 281 + MX93_PAD_SD1_DATA0__USDHC1_DATA0 0x4000139e 282 + MX93_PAD_SD1_DATA1__USDHC1_DATA1 0x4000139e 283 + MX93_PAD_SD1_DATA2__USDHC1_DATA2 0x4000139e 284 + MX93_PAD_SD1_DATA3__USDHC1_DATA3 0x4000139e 285 + MX93_PAD_SD1_DATA4__USDHC1_DATA4 0x4000139e 286 + MX93_PAD_SD1_DATA5__USDHC1_DATA5 0x4000139e 287 + MX93_PAD_SD1_DATA6__USDHC1_DATA6 0x4000139e 288 + MX93_PAD_SD1_DATA7__USDHC1_DATA7 0x4000139e 289 289 >; 290 290 }; 291 291
+1 -1
arch/arm64/boot/dts/hisilicon/hi3798cv200-poplar.dts
··· 179 179 }; 180 180 181 181 &pcie { 182 - reset-gpios = <&gpio4 4 GPIO_ACTIVE_HIGH>; 182 + reset-gpios = <&gpio4 4 GPIO_ACTIVE_LOW>; 183 183 vpcie-supply = <&reg_pcie>; 184 184 status = "okay"; 185 185 };
+1
arch/arm64/boot/dts/hisilicon/hi3798cv200.dtsi
··· 122 122 #address-cells = <1>; 123 123 #size-cells = <1>; 124 124 ranges = <0x0 0x0 0xf0000000 0x10000000>; 125 + dma-ranges = <0x0 0x0 0x0 0x40000000>; 125 126 126 127 crg: clock-reset-controller@8a22000 { 127 128 compatible = "hisilicon,hi3798cv200-crg", "syscon", "simple-mfd";
+3 -8
arch/arm64/boot/dts/qcom/agatti.dtsi
··· 1669 1669 &bimc SLAVE_EBI1 RPM_ALWAYS_TAG>; 1670 1670 interconnect-names = "gfx-mem"; 1671 1671 1672 - iommus = <&adreno_smmu 0 1>, 1673 - <&adreno_smmu 2 0>; 1672 + iommus = <&adreno_smmu 0 1>; 1674 1673 operating-points-v2 = <&gpu_opp_table>; 1675 1674 power-domains = <&rpmpd QCM2290_VDDCX>; 1676 1675 qcom,gmu = <&gmu_wrapper>; ··· 1950 1951 1951 1952 power-domains = <&dispcc MDSS_GDSC>; 1952 1953 1953 - iommus = <&apps_smmu 0x420 0x2>, 1954 - <&apps_smmu 0x421 0x0>; 1954 + iommus = <&apps_smmu 0x420 0x2>; 1955 1955 interconnects = <&mmrt_virt MASTER_MDP0 RPM_ALWAYS_TAG 1956 1956 &bimc SLAVE_EBI1 RPM_ALWAYS_TAG>, 1957 1957 <&bimc MASTER_APPSS_PROC RPM_ALWAYS_TAG ··· 2434 2436 2435 2437 memory-region = <&pil_video_mem>; 2436 2438 iommus = <&apps_smmu 0x860 0x0>, 2437 - <&apps_smmu 0x880 0x0>, 2438 - <&apps_smmu 0x861 0x04>, 2439 - <&apps_smmu 0x863 0x0>, 2440 - <&apps_smmu 0x804 0xe0>; 2439 + <&apps_smmu 0x880 0x0>; 2441 2440 2442 2441 interconnects = <&mmnrt_virt MASTER_VIDEO_P0 RPM_ALWAYS_TAG 2443 2442 &bimc SLAVE_EBI1 RPM_ALWAYS_TAG>,
+1 -1
arch/arm64/boot/dts/qcom/hamoa.dtsi
··· 269 269 idle-state-name = "ret"; 270 270 arm,psci-suspend-param = <0x00000004>; 271 271 entry-latency-us = <180>; 272 - exit-latency-us = <500>; 272 + exit-latency-us = <320>; 273 273 min-residency-us = <600>; 274 274 }; 275 275 };
+7 -2
arch/arm64/boot/dts/qcom/monaco.dtsi
··· 765 765 hwlocks = <&tcsr_mutex 3>; 766 766 }; 767 767 768 + gunyah_md_mem: gunyah-md-region@91a80000 { 769 + reg = <0x0 0x91a80000 0x0 0x80000>; 770 + no-map; 771 + }; 772 + 768 773 lpass_machine_learning_mem: lpass-machine-learning-region@93b00000 { 769 774 reg = <0x0 0x93b00000 0x0 0xf00000>; 770 775 no-map; ··· 6419 6414 }; 6420 6415 6421 6416 qup_uart10_rts: qup-uart10-rts-state { 6422 - pins = "gpio84"; 6417 + pins = "gpio85"; 6423 6418 function = "qup1_se2"; 6424 6419 }; 6425 6420 6426 6421 qup_uart10_tx: qup-uart10-tx-state { 6427 - pins = "gpio85"; 6422 + pins = "gpio86"; 6428 6423 function = "qup1_se2"; 6429 6424 }; 6430 6425
+1 -1
arch/arm64/boot/dts/qcom/qcm6490-idp.dts
··· 177 177 pinctrl-0 = <&wcd_default>; 178 178 pinctrl-names = "default"; 179 179 180 - reset-gpios = <&tlmm 83 GPIO_ACTIVE_HIGH>; 180 + reset-gpios = <&tlmm 83 GPIO_ACTIVE_LOW>; 181 181 182 182 vdd-buck-supply = <&vreg_l17b_1p7>; 183 183 vdd-rxtx-supply = <&vreg_l18b_1p8>;
+10 -6
arch/arm64/boot/dts/qcom/x1-asus-zenbook-a14.dtsi
··· 1032 1032 }; 1033 1033 1034 1034 &pcie4 { 1035 - perst-gpios = <&tlmm 146 GPIO_ACTIVE_LOW>; 1036 - wake-gpios = <&tlmm 148 GPIO_ACTIVE_LOW>; 1037 - 1038 1035 pinctrl-0 = <&pcie4_default>; 1039 1036 pinctrl-names = "default"; 1040 1037 ··· 1045 1048 status = "okay"; 1046 1049 }; 1047 1050 1048 - &pcie6a { 1049 - perst-gpios = <&tlmm 152 GPIO_ACTIVE_LOW>; 1050 - wake-gpios = <&tlmm 154 GPIO_ACTIVE_LOW>; 1051 + &pcie4_port0 { 1052 + reset-gpios = <&tlmm 146 GPIO_ACTIVE_LOW>; 1053 + wake-gpios = <&tlmm 148 GPIO_ACTIVE_LOW>; 1054 + }; 1051 1055 1056 + &pcie6a { 1052 1057 vddpe-3v3-supply = <&vreg_nvme>; 1053 1058 1054 1059 pinctrl-0 = <&pcie6a_default>; ··· 1064 1065 vdda-pll-supply = <&vreg_l2j_1p2>; 1065 1066 1066 1067 status = "okay"; 1068 + }; 1069 + 1070 + &pcie6a_port0 { 1071 + reset-gpios = <&tlmm 152 GPIO_ACTIVE_LOW>; 1072 + wake-gpios = <&tlmm 154 GPIO_ACTIVE_LOW>; 1067 1073 }; 1068 1074 1069 1075 &pm8550_gpios {
+15 -9
arch/arm64/boot/dts/qcom/x1-crd.dtsi
··· 1216 1216 }; 1217 1217 1218 1218 &pcie4 { 1219 - perst-gpios = <&tlmm 146 GPIO_ACTIVE_LOW>; 1220 - wake-gpios = <&tlmm 148 GPIO_ACTIVE_LOW>; 1221 - 1222 1219 pinctrl-0 = <&pcie4_default>; 1223 1220 pinctrl-names = "default"; 1224 1221 1225 1222 status = "okay"; 1223 + }; 1224 + 1225 + &pcie4_port0 { 1226 + reset-gpios = <&tlmm 146 GPIO_ACTIVE_LOW>; 1227 + wake-gpios = <&tlmm 148 GPIO_ACTIVE_LOW>; 1226 1228 }; 1227 1229 1228 1230 &pcie4_phy { ··· 1235 1233 }; 1236 1234 1237 1235 &pcie5 { 1238 - perst-gpios = <&tlmm 149 GPIO_ACTIVE_LOW>; 1239 - wake-gpios = <&tlmm 151 GPIO_ACTIVE_LOW>; 1240 - 1241 1236 vddpe-3v3-supply = <&vreg_wwan>; 1242 1237 1243 1238 pinctrl-0 = <&pcie5_default>; ··· 1250 1251 status = "okay"; 1251 1252 }; 1252 1253 1253 - &pcie6a { 1254 - perst-gpios = <&tlmm 152 GPIO_ACTIVE_LOW>; 1255 - wake-gpios = <&tlmm 154 GPIO_ACTIVE_LOW>; 1254 + &pcie5_port0 { 1255 + reset-gpios = <&tlmm 149 GPIO_ACTIVE_LOW>; 1256 + wake-gpios = <&tlmm 151 GPIO_ACTIVE_LOW>; 1257 + }; 1256 1258 1259 + &pcie6a { 1257 1260 vddpe-3v3-supply = <&vreg_nvme>; 1258 1261 1259 1262 pinctrl-names = "default"; ··· 1269 1268 vdda-pll-supply = <&vreg_l2j_1p2>; 1270 1269 1271 1270 status = "okay"; 1271 + }; 1272 + 1273 + &pcie6a_port0 { 1274 + reset-gpios = <&tlmm 152 GPIO_ACTIVE_LOW>; 1275 + wake-gpios = <&tlmm 154 GPIO_ACTIVE_LOW>; 1272 1276 }; 1273 1277 1274 1278 &pm8550_gpios {
+8 -6
arch/arm64/boot/dts/qcom/x1-dell-thena.dtsi
··· 1081 1081 }; 1082 1082 1083 1083 &pcie4 { 1084 - perst-gpios = <&tlmm 146 GPIO_ACTIVE_LOW>; 1085 - wake-gpios = <&tlmm 148 GPIO_ACTIVE_LOW>; 1086 - 1087 1084 pinctrl-0 = <&pcie4_default>; 1088 1085 pinctrl-names = "default"; 1089 1086 ··· 1095 1098 }; 1096 1099 1097 1100 &pcie4_port0 { 1101 + reset-gpios = <&tlmm 146 GPIO_ACTIVE_LOW>; 1102 + wake-gpios = <&tlmm 148 GPIO_ACTIVE_LOW>; 1103 + 1098 1104 wifi@0 { 1099 1105 compatible = "pci17cb,1107"; 1100 1106 reg = <0x10000 0x0 0x0 0x0 0x0>; ··· 1115 1115 }; 1116 1116 1117 1117 &pcie6a { 1118 - perst-gpios = <&tlmm 152 GPIO_ACTIVE_LOW>; 1119 - wake-gpios = <&tlmm 154 GPIO_ACTIVE_LOW>; 1120 - 1121 1118 vddpe-3v3-supply = <&vreg_nvme>; 1122 1119 1123 1120 pinctrl-0 = <&pcie6a_default>; 1124 1121 pinctrl-names = "default"; 1125 1122 1126 1123 status = "okay"; 1124 + }; 1125 + 1126 + &pcie6a_port0 { 1127 + reset-gpios = <&tlmm 152 GPIO_ACTIVE_LOW>; 1128 + wake-gpios = <&tlmm 154 GPIO_ACTIVE_LOW>; 1127 1129 }; 1128 1130 1129 1131 &pcie6a_phy {
+8 -6
arch/arm64/boot/dts/qcom/x1-hp-omnibook-x14.dtsi
··· 1065 1065 }; 1066 1066 1067 1067 &pcie4 { 1068 - perst-gpios = <&tlmm 146 GPIO_ACTIVE_LOW>; 1069 - wake-gpios = <&tlmm 148 GPIO_ACTIVE_LOW>; 1070 - 1071 1068 pinctrl-0 = <&pcie4_default>; 1072 1069 pinctrl-names = "default"; 1073 1070 ··· 1079 1082 }; 1080 1083 1081 1084 &pcie4_port0 { 1085 + reset-gpios = <&tlmm 146 GPIO_ACTIVE_LOW>; 1086 + wake-gpios = <&tlmm 148 GPIO_ACTIVE_LOW>; 1087 + 1082 1088 wifi@0 { 1083 1089 compatible = "pci17cb,1107"; 1084 1090 reg = <0x10000 0x0 0x0 0x0 0x0>; ··· 1099 1099 }; 1100 1100 1101 1101 &pcie6a { 1102 - perst-gpios = <&tlmm 152 GPIO_ACTIVE_LOW>; 1103 - wake-gpios = <&tlmm 154 GPIO_ACTIVE_LOW>; 1104 - 1105 1102 vddpe-3v3-supply = <&vreg_nvme>; 1106 1103 1107 1104 pinctrl-0 = <&pcie6a_default>; 1108 1105 pinctrl-names = "default"; 1109 1106 1110 1107 status = "okay"; 1108 + }; 1109 + 1110 + &pcie6a_port0 { 1111 + reset-gpios = <&tlmm 152 GPIO_ACTIVE_LOW>; 1112 + wake-gpios = <&tlmm 154 GPIO_ACTIVE_LOW>; 1111 1113 }; 1112 1114 1113 1115 &pcie6a_phy {
+5 -3
arch/arm64/boot/dts/qcom/x1-microsoft-denali.dtsi
··· 964 964 }; 965 965 966 966 &pcie6a { 967 - perst-gpios = <&tlmm 152 GPIO_ACTIVE_LOW>; 968 - wake-gpios = <&tlmm 154 GPIO_ACTIVE_LOW>; 969 - 970 967 vddpe-3v3-supply = <&vreg_nvme>; 971 968 972 969 pinctrl-0 = <&pcie6a_default>; ··· 977 980 vdda-pll-supply = <&vreg_l2j_1p2>; 978 981 979 982 status = "okay"; 983 + }; 984 + 985 + &pcie6a_port0 { 986 + reset-gpios = <&tlmm 152 GPIO_ACTIVE_LOW>; 987 + wake-gpios = <&tlmm 154 GPIO_ACTIVE_LOW>; 980 988 }; 981 989 982 990 &pm8550_gpios {
+3 -3
arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
··· 1126 1126 }; 1127 1127 1128 1128 &pcie4 { 1129 - perst-gpios = <&tlmm 146 GPIO_ACTIVE_LOW>; 1130 - wake-gpios = <&tlmm 148 GPIO_ACTIVE_LOW>; 1131 - 1132 1129 pinctrl-0 = <&pcie4_default>; 1133 1130 pinctrl-names = "default"; 1134 1131 ··· 1140 1143 }; 1141 1144 1142 1145 &pcie4_port0 { 1146 + reset-gpios = <&tlmm 146 GPIO_ACTIVE_LOW>; 1147 + wake-gpios = <&tlmm 148 GPIO_ACTIVE_LOW>; 1148 + 1143 1149 wifi@0 { 1144 1150 compatible = "pci17cb,1107"; 1145 1151 reg = <0x10000 0x0 0x0 0x0 0x0>;
+8 -7
arch/arm64/boot/dts/qcom/x1e80100-medion-sprchrgd-14-s1.dts
··· 1033 1033 }; 1034 1034 1035 1035 &pcie4 { 1036 - perst-gpios = <&tlmm 146 GPIO_ACTIVE_LOW>; 1037 - wake-gpios = <&tlmm 148 GPIO_ACTIVE_LOW>; 1038 - 1039 1036 pinctrl-0 = <&pcie4_default>; 1040 1037 pinctrl-names = "default"; 1041 1038 ··· 1047 1050 }; 1048 1051 1049 1052 &pcie4_port0 { 1053 + reset-gpios = <&tlmm 146 GPIO_ACTIVE_LOW>; 1054 + wake-gpios = <&tlmm 148 GPIO_ACTIVE_LOW>; 1055 + 1050 1056 wifi@0 { 1051 1057 compatible = "pci17cb,1107"; 1052 1058 reg = <0x10000 0x0 0x0 0x0 0x0>; ··· 1067 1067 }; 1068 1068 1069 1069 &pcie6a { 1070 - perst-gpios = <&tlmm 152 GPIO_ACTIVE_LOW>; 1071 - 1072 - wake-gpios = <&tlmm 154 GPIO_ACTIVE_LOW>; 1073 - 1074 1070 vddpe-3v3-supply = <&vreg_nvme>; 1075 1071 1076 1072 pinctrl-0 = <&pcie6a_default>; ··· 1080 1084 vdda-pll-supply = <&vreg_l2j_1p2>; 1081 1085 1082 1086 status = "okay"; 1087 + }; 1088 + 1089 + &pcie6a_port0 { 1090 + reset-gpios = <&tlmm 152 GPIO_ACTIVE_LOW>; 1091 + wake-gpios = <&tlmm 154 GPIO_ACTIVE_LOW>; 1083 1092 }; 1084 1093 1085 1094 &pm8550_gpios {
+8 -6
arch/arm64/boot/dts/qcom/x1p42100-lenovo-thinkbook-16.dts
··· 1131 1131 }; 1132 1132 1133 1133 &pcie4 { 1134 - perst-gpios = <&tlmm 146 GPIO_ACTIVE_LOW>; 1135 - wake-gpios = <&tlmm 148 GPIO_ACTIVE_LOW>; 1136 - 1137 1134 pinctrl-0 = <&pcie4_default>; 1138 1135 pinctrl-names = "default"; 1139 1136 ··· 1145 1148 }; 1146 1149 1147 1150 &pcie4_port0 { 1151 + reset-gpios = <&tlmm 146 GPIO_ACTIVE_LOW>; 1152 + wake-gpios = <&tlmm 148 GPIO_ACTIVE_LOW>; 1153 + 1148 1154 wifi@0 { 1149 1155 compatible = "pci17cb,1107"; 1150 1156 reg = <0x10000 0x0 0x0 0x0 0x0>; ··· 1165 1165 }; 1166 1166 1167 1167 &pcie6a { 1168 - perst-gpios = <&tlmm 152 GPIO_ACTIVE_LOW>; 1169 - wake-gpios = <&tlmm 154 GPIO_ACTIVE_LOW>; 1170 - 1171 1168 vddpe-3v3-supply = <&vreg_nvme>; 1172 1169 1173 1170 pinctrl-0 = <&pcie6a_default>; ··· 1178 1181 vdda-pll-supply = <&vreg_l2j_1p2>; 1179 1182 1180 1183 status = "okay"; 1184 + }; 1185 + 1186 + &pcie6a_port0 { 1187 + reset-gpios = <&tlmm 152 GPIO_ACTIVE_LOW>; 1188 + wake-gpios = <&tlmm 154 GPIO_ACTIVE_LOW>; 1181 1189 }; 1182 1190 1183 1191 &pm8550_pwm {
+11
arch/arm64/boot/dts/renesas/r8a779g3-sparrow-hawk.dts
··· 118 118 reg = <0x6 0x00000000 0x1 0x00000000>; 119 119 }; 120 120 121 + reserved-memory { 122 + #address-cells = <2>; 123 + #size-cells = <2>; 124 + ranges; 125 + 126 + tfa@40000000 { 127 + reg = <0x0 0x40000000 0x0 0x8000000>; 128 + no-map; 129 + }; 130 + }; 131 + 121 132 /* Page 27 / DSI to Display */ 122 133 dp-con { 123 134 compatible = "dp-connector";
-18
arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
··· 879 879 }; 880 880 }; 881 881 882 - wifi { 883 - wifi_host_wake_l: wifi-host-wake-l { 884 - rockchip,pins = <0 RK_PA3 RK_FUNC_GPIO &pcfg_pull_none>; 885 - }; 886 - }; 887 - 888 882 wireless-bluetooth { 889 883 bt_wake_pin: bt-wake-pin { 890 884 rockchip,pins = <2 RK_PD3 RK_FUNC_GPIO &pcfg_pull_none>; ··· 936 942 pinctrl-names = "default"; 937 943 pinctrl-0 = <&sdio0_bus4 &sdio0_cmd &sdio0_clk>; 938 944 sd-uhs-sdr104; 939 - #address-cells = <1>; 940 - #size-cells = <0>; 941 945 status = "okay"; 942 - 943 - brcmf: wifi@1 { 944 - compatible = "brcm,bcm4329-fmac"; 945 - reg = <1>; 946 - interrupt-parent = <&gpio0>; 947 - interrupts = <RK_PA3 IRQ_TYPE_LEVEL_HIGH>; 948 - interrupt-names = "host-wake"; 949 - pinctrl-names = "default"; 950 - pinctrl-0 = <&wifi_host_wake_l>; 951 - }; 952 946 }; 953 947 954 948 &sdhci {
+4 -4
arch/riscv/include/asm/usercfi.h
··· 39 39 bool is_shstk_enabled(struct task_struct *task); 40 40 bool is_shstk_locked(struct task_struct *task); 41 41 bool is_shstk_allocated(struct task_struct *task); 42 - void set_shstk_lock(struct task_struct *task); 42 + void set_shstk_lock(struct task_struct *task, bool lock); 43 43 void set_shstk_status(struct task_struct *task, bool enable); 44 44 unsigned long get_active_shstk(struct task_struct *task); 45 45 int restore_user_shstk(struct task_struct *tsk, unsigned long shstk_ptr); ··· 47 47 bool is_indir_lp_enabled(struct task_struct *task); 48 48 bool is_indir_lp_locked(struct task_struct *task); 49 49 void set_indir_lp_status(struct task_struct *task, bool enable); 50 - void set_indir_lp_lock(struct task_struct *task); 50 + void set_indir_lp_lock(struct task_struct *task, bool lock); 51 51 52 52 #define PR_SHADOW_STACK_SUPPORTED_STATUS_MASK (PR_SHADOW_STACK_ENABLE) 53 53 ··· 69 69 70 70 #define is_shstk_allocated(task) false 71 71 72 - #define set_shstk_lock(task) do {} while (0) 72 + #define set_shstk_lock(task, lock) do {} while (0) 73 73 74 74 #define set_shstk_status(task, enable) do {} while (0) 75 75 ··· 79 79 80 80 #define set_indir_lp_status(task, enable) do {} while (0) 81 81 82 - #define set_indir_lp_lock(task) do {} while (0) 82 + #define set_indir_lp_lock(task, lock) do {} while (0) 83 83 84 84 #define restore_user_shstk(tsk, shstk_ptr) -EINVAL 85 85
+20 -18
arch/riscv/include/uapi/asm/ptrace.h
··· 132 132 unsigned long ss_ptr; /* shadow stack pointer */ 133 133 }; 134 134 135 - #define PTRACE_CFI_LP_EN_BIT 0 136 - #define PTRACE_CFI_LP_LOCK_BIT 1 137 - #define PTRACE_CFI_ELP_BIT 2 138 - #define PTRACE_CFI_SS_EN_BIT 3 139 - #define PTRACE_CFI_SS_LOCK_BIT 4 140 - #define PTRACE_CFI_SS_PTR_BIT 5 135 + #define PTRACE_CFI_BRANCH_LANDING_PAD_EN_BIT 0 136 + #define PTRACE_CFI_BRANCH_LANDING_PAD_LOCK_BIT 1 137 + #define PTRACE_CFI_BRANCH_EXPECTED_LANDING_PAD_BIT 2 138 + #define PTRACE_CFI_SHADOW_STACK_EN_BIT 3 139 + #define PTRACE_CFI_SHADOW_STACK_LOCK_BIT 4 140 + #define PTRACE_CFI_SHADOW_STACK_PTR_BIT 5 141 141 142 - #define PTRACE_CFI_LP_EN_STATE _BITUL(PTRACE_CFI_LP_EN_BIT) 143 - #define PTRACE_CFI_LP_LOCK_STATE _BITUL(PTRACE_CFI_LP_LOCK_BIT) 144 - #define PTRACE_CFI_ELP_STATE _BITUL(PTRACE_CFI_ELP_BIT) 145 - #define PTRACE_CFI_SS_EN_STATE _BITUL(PTRACE_CFI_SS_EN_BIT) 146 - #define PTRACE_CFI_SS_LOCK_STATE _BITUL(PTRACE_CFI_SS_LOCK_BIT) 147 - #define PTRACE_CFI_SS_PTR_STATE _BITUL(PTRACE_CFI_SS_PTR_BIT) 142 + #define PTRACE_CFI_BRANCH_LANDING_PAD_EN_STATE _BITUL(PTRACE_CFI_BRANCH_LANDING_PAD_EN_BIT) 143 + #define PTRACE_CFI_BRANCH_LANDING_PAD_LOCK_STATE \ 144 + _BITUL(PTRACE_CFI_BRANCH_LANDING_PAD_LOCK_BIT) 145 + #define PTRACE_CFI_BRANCH_EXPECTED_LANDING_PAD_STATE \ 146 + _BITUL(PTRACE_CFI_BRANCH_EXPECTED_LANDING_PAD_BIT) 147 + #define PTRACE_CFI_SHADOW_STACK_EN_STATE _BITUL(PTRACE_CFI_SHADOW_STACK_EN_BIT) 148 + #define PTRACE_CFI_SHADOW_STACK_LOCK_STATE _BITUL(PTRACE_CFI_SHADOW_STACK_LOCK_BIT) 149 + #define PTRACE_CFI_SHADOW_STACK_PTR_STATE _BITUL(PTRACE_CFI_SHADOW_STACK_PTR_BIT) 148 150 149 - #define PRACE_CFI_STATE_INVALID_MASK ~(PTRACE_CFI_LP_EN_STATE | \ 150 - PTRACE_CFI_LP_LOCK_STATE | \ 151 - PTRACE_CFI_ELP_STATE | \ 152 - PTRACE_CFI_SS_EN_STATE | \ 153 - PTRACE_CFI_SS_LOCK_STATE | \ 154 - PTRACE_CFI_SS_PTR_STATE) 151 + #define PTRACE_CFI_STATE_INVALID_MASK ~(PTRACE_CFI_BRANCH_LANDING_PAD_EN_STATE | \ 152 + PTRACE_CFI_BRANCH_LANDING_PAD_LOCK_STATE | \ 153 + PTRACE_CFI_BRANCH_EXPECTED_LANDING_PAD_STATE | \ 154 + PTRACE_CFI_SHADOW_STACK_EN_STATE | \ 155 + PTRACE_CFI_SHADOW_STACK_LOCK_STATE | \ 156 + PTRACE_CFI_SHADOW_STACK_PTR_STATE) 155 157 156 158 struct __cfi_status { 157 159 __u64 cfi_state;
+2
arch/riscv/kernel/process.c
··· 160 160 * clear shadow stack state on exec. 161 161 * libc will set it later via prctl. 162 162 */ 163 + set_shstk_lock(current, false); 163 164 set_shstk_status(current, false); 164 165 set_shstk_base(current, 0, 0); 165 166 set_active_shstk(current, 0); ··· 168 167 * disable indirect branch tracking on exec. 169 168 * libc will enable it later via prctl. 170 169 */ 170 + set_indir_lp_lock(current, false); 171 171 set_indir_lp_status(current, false); 172 172 173 173 #ifdef CONFIG_64BIT
+11 -11
arch/riscv/kernel/ptrace.c
··· 303 303 regs = task_pt_regs(target); 304 304 305 305 if (is_indir_lp_enabled(target)) { 306 - user_cfi.cfi_status.cfi_state |= PTRACE_CFI_LP_EN_STATE; 306 + user_cfi.cfi_status.cfi_state |= PTRACE_CFI_BRANCH_LANDING_PAD_EN_STATE; 307 307 user_cfi.cfi_status.cfi_state |= is_indir_lp_locked(target) ? 308 - PTRACE_CFI_LP_LOCK_STATE : 0; 308 + PTRACE_CFI_BRANCH_LANDING_PAD_LOCK_STATE : 0; 309 309 user_cfi.cfi_status.cfi_state |= (regs->status & SR_ELP) ? 310 - PTRACE_CFI_ELP_STATE : 0; 310 + PTRACE_CFI_BRANCH_EXPECTED_LANDING_PAD_STATE : 0; 311 311 } 312 312 313 313 if (is_shstk_enabled(target)) { 314 - user_cfi.cfi_status.cfi_state |= (PTRACE_CFI_SS_EN_STATE | 315 - PTRACE_CFI_SS_PTR_STATE); 314 + user_cfi.cfi_status.cfi_state |= (PTRACE_CFI_SHADOW_STACK_EN_STATE | 315 + PTRACE_CFI_SHADOW_STACK_PTR_STATE); 316 316 user_cfi.cfi_status.cfi_state |= is_shstk_locked(target) ? 317 - PTRACE_CFI_SS_LOCK_STATE : 0; 317 + PTRACE_CFI_SHADOW_STACK_LOCK_STATE : 0; 318 318 user_cfi.shstk_ptr = get_active_shstk(target); 319 319 } 320 320 ··· 349 349 * rsvd field should be set to zero so that if those fields are needed in future 350 350 */ 351 351 if ((user_cfi.cfi_status.cfi_state & 352 - (PTRACE_CFI_LP_EN_STATE | PTRACE_CFI_LP_LOCK_STATE | 353 - PTRACE_CFI_SS_EN_STATE | PTRACE_CFI_SS_LOCK_STATE)) || 354 - (user_cfi.cfi_status.cfi_state & PRACE_CFI_STATE_INVALID_MASK)) 352 + (PTRACE_CFI_BRANCH_LANDING_PAD_EN_STATE | PTRACE_CFI_BRANCH_LANDING_PAD_LOCK_STATE | 353 + PTRACE_CFI_SHADOW_STACK_EN_STATE | PTRACE_CFI_SHADOW_STACK_LOCK_STATE)) || 354 + (user_cfi.cfi_status.cfi_state & PTRACE_CFI_STATE_INVALID_MASK)) 355 355 return -EINVAL; 356 356 357 357 /* If lpad is enabled on target and ptrace requests to set / clear elp, do that */ 358 358 if (is_indir_lp_enabled(target)) { 359 359 if (user_cfi.cfi_status.cfi_state & 360 - PTRACE_CFI_ELP_STATE) /* set elp state */ 360 + PTRACE_CFI_BRANCH_EXPECTED_LANDING_PAD_STATE) /* set elp state */ 361 361 regs->status |= SR_ELP; 362 362 else 363 363 regs->status &= ~SR_ELP; /* clear elp state */ ··· 365 365 366 366 /* If shadow stack enabled on target, set new shadow stack pointer */ 367 367 if (is_shstk_enabled(target) && 368 - (user_cfi.cfi_status.cfi_state & PTRACE_CFI_SS_PTR_STATE)) 368 + (user_cfi.cfi_status.cfi_state & PTRACE_CFI_SHADOW_STACK_PTR_STATE)) 369 369 set_active_shstk(target, user_cfi.shstk_ptr); 370 370 371 371 return 0;
+19 -20
arch/riscv/kernel/usercfi.c
··· 74 74 csr_write(CSR_ENVCFG, task->thread.envcfg); 75 75 } 76 76 77 - void set_shstk_lock(struct task_struct *task) 77 + void set_shstk_lock(struct task_struct *task, bool lock) 78 78 { 79 - task->thread_info.user_cfi_state.ubcfi_locked = 1; 79 + task->thread_info.user_cfi_state.ubcfi_locked = lock; 80 80 } 81 81 82 82 bool is_indir_lp_enabled(struct task_struct *task) ··· 104 104 csr_write(CSR_ENVCFG, task->thread.envcfg); 105 105 } 106 106 107 - void set_indir_lp_lock(struct task_struct *task) 107 + void set_indir_lp_lock(struct task_struct *task, bool lock) 108 108 { 109 - task->thread_info.user_cfi_state.ufcfi_locked = 1; 109 + task->thread_info.user_cfi_state.ufcfi_locked = lock; 110 110 } 111 111 /* 112 112 * If size is 0, then to be compatible with regular stack we want it to be as big as ··· 452 452 !is_shstk_enabled(task) || arg != 0) 453 453 return -EINVAL; 454 454 455 - set_shstk_lock(task); 455 + set_shstk_lock(task, true); 456 456 457 457 return 0; 458 458 } 459 459 460 - int arch_get_indir_br_lp_status(struct task_struct *t, unsigned long __user *status) 460 + int arch_prctl_get_branch_landing_pad_state(struct task_struct *t, 461 + unsigned long __user *state) 461 462 { 462 463 unsigned long fcfi_status = 0; 463 464 464 465 if (!is_user_lpad_enabled()) 465 466 return -EINVAL; 466 467 467 - /* indirect branch tracking is enabled on the task or not */ 468 - fcfi_status |= (is_indir_lp_enabled(t) ? PR_INDIR_BR_LP_ENABLE : 0); 468 + fcfi_status = (is_indir_lp_enabled(t) ? PR_CFI_ENABLE : PR_CFI_DISABLE); 469 + fcfi_status |= (is_indir_lp_locked(t) ? PR_CFI_LOCK : 0); 469 470 470 - return copy_to_user(status, &fcfi_status, sizeof(fcfi_status)) ? -EFAULT : 0; 471 + return copy_to_user(state, &fcfi_status, sizeof(fcfi_status)) ? -EFAULT : 0; 471 472 } 472 473 473 - int arch_set_indir_br_lp_status(struct task_struct *t, unsigned long status) 474 + int arch_prctl_set_branch_landing_pad_state(struct task_struct *t, unsigned long state) 474 475 { 475 - bool enable_indir_lp = false; 476 - 477 476 if (!is_user_lpad_enabled()) 478 477 return -EINVAL; 479 478 ··· 480 481 if (is_indir_lp_locked(t)) 481 482 return -EINVAL; 482 483 483 - /* Reject unknown flags */ 484 - if (status & ~PR_INDIR_BR_LP_ENABLE) 484 + if (!(state & (PR_CFI_ENABLE | PR_CFI_DISABLE))) 485 485 return -EINVAL; 486 486 487 - enable_indir_lp = (status & PR_INDIR_BR_LP_ENABLE); 488 - set_indir_lp_status(t, enable_indir_lp); 487 + if (state & PR_CFI_ENABLE && state & PR_CFI_DISABLE) 488 + return -EINVAL; 489 + 490 + set_indir_lp_status(t, !!(state & PR_CFI_ENABLE)); 489 491 490 492 return 0; 491 493 } 492 494 493 - int arch_lock_indir_br_lp_status(struct task_struct *task, 494 - unsigned long arg) 495 + int arch_prctl_lock_branch_landing_pad_state(struct task_struct *task) 495 496 { 496 497 /* 497 498 * If indirect branch tracking is not supported or not enabled on task, 498 499 * nothing to lock here 499 500 */ 500 501 if (!is_user_lpad_enabled() || 501 - !is_indir_lp_enabled(task) || arg != 0) 502 + !is_indir_lp_enabled(task)) 502 503 return -EINVAL; 503 504 504 - set_indir_lp_lock(task); 505 + set_indir_lp_lock(task, true); 505 506 506 507 return 0; 507 508 }
+5 -4
arch/s390/kvm/gaccess.c
··· 1449 1449 pgste_set_unlock(ptep_h, pgste); 1450 1450 if (rc) 1451 1451 return rc; 1452 - if (!sg->parent) 1452 + if (sg->invalidated) 1453 1453 return -EAGAIN; 1454 1454 1455 1455 newpte = _pte(f->pfn, 0, !p, 0); ··· 1479 1479 1480 1480 do { 1481 1481 /* _gmap_crstep_xchg_atomic() could have unshadowed this shadow gmap */ 1482 - if (!sg->parent) 1482 + if (sg->invalidated) 1483 1483 return -EAGAIN; 1484 1484 oldcrste = READ_ONCE(*host); 1485 1485 newcrste = _crste_fc1(f->pfn, oldcrste.h.tt, f->writable, !p); ··· 1492 1492 if (!newcrste.h.p && !f->writable) 1493 1493 return -EOPNOTSUPP; 1494 1494 } while (!_gmap_crstep_xchg_atomic(sg->parent, host, oldcrste, newcrste, f->gfn, false)); 1495 - if (!sg->parent) 1495 + if (sg->invalidated) 1496 1496 return -EAGAIN; 1497 1497 1498 1498 newcrste = _crste_fc1(f->pfn, oldcrste.h.tt, 0, !p); ··· 1545 1545 entries[i].pfn, i + 1, entries[i].writable); 1546 1546 if (rc) 1547 1547 return rc; 1548 - if (!sg->parent) 1548 + if (sg->invalidated) 1549 1549 return -EAGAIN; 1550 1550 } 1551 1551 ··· 1601 1601 scoped_guard(spinlock, &parent->children_lock) { 1602 1602 if (READ_ONCE(sg->parent) != parent) 1603 1603 return -EAGAIN; 1604 + sg->invalidated = false; 1604 1605 rc = _gaccess_do_shadow(vcpu->arch.mc, sg, saddr, walk); 1605 1606 } 1606 1607 if (rc == -ENOMEM)
+3
arch/s390/kvm/gmap.c
··· 181 181 182 182 list_del(&child->list); 183 183 child->parent = NULL; 184 + child->invalidated = true; 184 185 } 185 186 186 187 /** ··· 1070 1069 if (level > TABLE_TYPE_PAGE_TABLE) 1071 1070 align = 1UL << (11 * level + _SEGMENT_SHIFT); 1072 1071 kvm_s390_vsie_gmap_notifier(sg, ALIGN_DOWN(gaddr, align), ALIGN(gaddr + 1, align)); 1072 + sg->invalidated = true; 1073 1073 if (dat_entry_walk(NULL, r_gfn, sg->asce, 0, level, &crstep, &ptep)) 1074 1074 return; 1075 1075 if (ptep) { ··· 1176 1174 scoped_guard(spinlock, &parent->children_lock) { 1177 1175 if (READ_ONCE(sg->parent) != parent) 1178 1176 return -EAGAIN; 1177 + sg->invalidated = false; 1179 1178 for (i = 0; i < CRST_TABLE_PAGES; i++) { 1180 1179 if (!context->f[i].valid) 1181 1180 continue;
+1
arch/s390/kvm/gmap.h
··· 60 60 struct gmap { 61 61 unsigned long flags; 62 62 unsigned char edat_level; 63 + bool invalidated; 63 64 struct kvm *kvm; 64 65 union asce asce; 65 66 struct list_head list;
+1
arch/x86/events/intel/uncore.c
··· 67 67 return bus ? pci_domain_nr(bus) : -EINVAL; 68 68 } 69 69 70 + /* Note: This API can only be used when NUMA information is available. */ 70 71 int uncore_device_to_die(struct pci_dev *dev) 71 72 { 72 73 int node = pcibus_to_node(dev->bus);
+11 -6
arch/x86/events/intel/uncore_discovery.c
··· 264 264 struct uncore_unit_discovery unit; 265 265 void __iomem *io_addr; 266 266 unsigned long size; 267 + int ret = 0; 267 268 int i; 268 269 269 270 size = UNCORE_DISCOVERY_GLOBAL_MAP_SIZE; ··· 274 273 275 274 /* Read Global Discovery State */ 276 275 memcpy_fromio(&global, io_addr, sizeof(struct uncore_global_discovery)); 276 + iounmap(io_addr); 277 + 277 278 if (uncore_discovery_invalid_unit(global)) { 278 279 pr_info("Invalid Global Discovery State: 0x%llx 0x%llx 0x%llx\n", 279 280 global.table1, global.ctl, global.table3); 280 - iounmap(io_addr); 281 281 return -EINVAL; 282 282 } 283 - iounmap(io_addr); 284 283 285 284 size = (1 + global.max_units) * global.stride * 8; 286 285 io_addr = ioremap(addr, size); 287 286 if (!io_addr) 288 287 return -ENOMEM; 289 288 290 - if (domain->global_init && domain->global_init(global.ctl)) 291 - return -ENODEV; 289 + if (domain->global_init && domain->global_init(global.ctl)) { 290 + ret = -ENODEV; 291 + goto out; 292 + } 292 293 293 294 /* Parsing Unit Discovery State */ 294 295 for (i = 0; i < global.max_units; i++) { ··· 310 307 } 311 308 312 309 *parsed = true; 310 + 311 + out: 313 312 iounmap(io_addr); 314 - return 0; 313 + return ret; 315 314 } 316 315 317 316 static int parse_discovery_table(struct uncore_discovery_domain *domain, ··· 371 366 (val & UNCORE_DISCOVERY_DVSEC2_BIR_MASK) * UNCORE_DISCOVERY_BIR_STEP; 372 367 373 368 die = get_device_die_id(dev); 374 - if (die < 0) 369 + if ((die < 0) || (die >= uncore_max_dies())) 375 370 continue; 376 371 377 372 parse_discovery_table(domain, dev, die, bar_offset, &parsed);
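The uncore_discovery change above converts early returns that leaked the `ioremap()` mapping into a single-exit cleanup path. A minimal stand-alone sketch of that goto-based pattern, using `malloc()`/`free()` as hypothetical stand-ins for `ioremap()`/`iounmap()` (the function and flag names here are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdlib.h>

/* Once the resource is acquired, every failure path funnels through
 * one label so it is released exactly once -- the shape the
 * uncore_discovery fix restores for the second ioremap() mapping. */
static int parse_with_cleanup(int fail_global_init, int *freed)
{
	int ret = 0;
	char *io = malloc(64);	/* stands in for ioremap() */

	if (!io)
		return -12;	/* -ENOMEM: nothing acquired yet */

	if (fail_global_init) {
		ret = -19;	/* -ENODEV: previously leaked the mapping */
		goto out;
	}

	/* ... parse unit discovery state from io ... */

out:
	free(io);		/* stands in for iounmap() */
	*freed = 1;
	return ret;
}
```

The key property is that the error path and the success path share the release step, so adding a new failure case cannot silently skip it.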
+30 -31
arch/x86/events/intel/uncore_snbep.c
··· 1459 1459 } 1460 1460 1461 1461 map->pbus_to_dieid[bus] = die_id = uncore_device_to_die(ubox_dev); 1462 - 1463 1462 raw_spin_unlock(&pci2phy_map_lock); 1464 - 1465 - if (WARN_ON_ONCE(die_id == -1)) { 1466 - err = -EINVAL; 1467 - break; 1468 - } 1469 1463 } 1470 1464 } 1471 1465 ··· 6414 6420 6415 6421 while ((dev = pci_get_device(PCI_VENDOR_ID_INTEL, device, dev)) != NULL) { 6416 6422 6417 - die = uncore_device_to_die(dev); 6423 + die = uncore_pcibus_to_dieid(dev->bus); 6418 6424 if (die < 0) 6419 6425 continue; 6420 6426 ··· 6438 6444 6439 6445 int spr_uncore_pci_init(void) 6440 6446 { 6447 + int ret = snbep_pci2phy_map_init(0x3250, SKX_CPUNODEID, SKX_GIDNIDMAP, true); 6448 + 6449 + if (ret) 6450 + return ret; 6451 + 6441 6452 /* 6442 6453 * The discovery table of UPI on some SPR variant is broken, 6443 6454 * which impacts the detection of both UPI and M3UPI uncore PMON. ··· 6934 6935 6935 6936 static struct uncore_event_desc dmr_uncore_iio_freerunning_events[] = { 6936 6937 /* ITC Free Running Data BW counter for inbound traffic */ 6937 - INTEL_UNCORE_FR_EVENT_DESC(inb_data_port0, 0x10, "3.814697266e-6"), 6938 - INTEL_UNCORE_FR_EVENT_DESC(inb_data_port1, 0x11, "3.814697266e-6"), 6939 - INTEL_UNCORE_FR_EVENT_DESC(inb_data_port2, 0x12, "3.814697266e-6"), 6940 - INTEL_UNCORE_FR_EVENT_DESC(inb_data_port3, 0x13, "3.814697266e-6"), 6941 - INTEL_UNCORE_FR_EVENT_DESC(inb_data_port4, 0x14, "3.814697266e-6"), 6942 - INTEL_UNCORE_FR_EVENT_DESC(inb_data_port5, 0x15, "3.814697266e-6"), 6943 - INTEL_UNCORE_FR_EVENT_DESC(inb_data_port6, 0x16, "3.814697266e-6"), 6944 - INTEL_UNCORE_FR_EVENT_DESC(inb_data_port7, 0x17, "3.814697266e-6"), 6938 + INTEL_UNCORE_FR_EVENT_DESC(inb_data_port0, 0x10, 3.814697266e-6), 6939 + INTEL_UNCORE_FR_EVENT_DESC(inb_data_port1, 0x11, 3.814697266e-6), 6940 + INTEL_UNCORE_FR_EVENT_DESC(inb_data_port2, 0x12, 3.814697266e-6), 6941 + INTEL_UNCORE_FR_EVENT_DESC(inb_data_port3, 0x13, 3.814697266e-6), 6942 + INTEL_UNCORE_FR_EVENT_DESC(inb_data_port4, 0x14, 3.814697266e-6), 6943 + INTEL_UNCORE_FR_EVENT_DESC(inb_data_port5, 0x15, 3.814697266e-6), 6944 + INTEL_UNCORE_FR_EVENT_DESC(inb_data_port6, 0x16, 3.814697266e-6), 6945 + INTEL_UNCORE_FR_EVENT_DESC(inb_data_port7, 0x17, 3.814697266e-6), 6945 6946 6946 6947 /* ITC Free Running BW IN counters */ 6947 - INTEL_UNCORE_FR_EVENT_DESC(bw_in_port0, 0x20, "3.814697266e-6"), 6948 - INTEL_UNCORE_FR_EVENT_DESC(bw_in_port1, 0x21, "3.814697266e-6"), 6949 - INTEL_UNCORE_FR_EVENT_DESC(bw_in_port2, 0x22, "3.814697266e-6"), 6950 - INTEL_UNCORE_FR_EVENT_DESC(bw_in_port3, 0x23, "3.814697266e-6"), 6951 - INTEL_UNCORE_FR_EVENT_DESC(bw_in_port4, 0x24, "3.814697266e-6"), 6952 - INTEL_UNCORE_FR_EVENT_DESC(bw_in_port5, 0x25, "3.814697266e-6"), 6953 - INTEL_UNCORE_FR_EVENT_DESC(bw_in_port6, 0x26, "3.814697266e-6"), 6954 - INTEL_UNCORE_FR_EVENT_DESC(bw_in_port7, 0x27, "3.814697266e-6"), 6948 + INTEL_UNCORE_FR_EVENT_DESC(bw_in_port0, 0x20, 3.814697266e-6), 6949 + INTEL_UNCORE_FR_EVENT_DESC(bw_in_port1, 0x21, 3.814697266e-6), 6950 + INTEL_UNCORE_FR_EVENT_DESC(bw_in_port2, 0x22, 3.814697266e-6), 6951 + INTEL_UNCORE_FR_EVENT_DESC(bw_in_port3, 0x23, 3.814697266e-6), 6952 + INTEL_UNCORE_FR_EVENT_DESC(bw_in_port4, 0x24, 3.814697266e-6), 6953 + INTEL_UNCORE_FR_EVENT_DESC(bw_in_port5, 0x25, 3.814697266e-6), 6954 + INTEL_UNCORE_FR_EVENT_DESC(bw_in_port6, 0x26, 3.814697266e-6), 6955 + INTEL_UNCORE_FR_EVENT_DESC(bw_in_port7, 0x27, 3.814697266e-6), 6955 6956 6956 6957 /* ITC Free Running BW OUT counters */ 6957 - INTEL_UNCORE_FR_EVENT_DESC(bw_out_port0, 0x30, "3.814697266e-6"), 6958 - INTEL_UNCORE_FR_EVENT_DESC(bw_out_port1, 0x31, "3.814697266e-6"), 6959 - INTEL_UNCORE_FR_EVENT_DESC(bw_out_port2, 0x32, "3.814697266e-6"), 6960 - INTEL_UNCORE_FR_EVENT_DESC(bw_out_port3, 0x33, "3.814697266e-6"), 6961 - INTEL_UNCORE_FR_EVENT_DESC(bw_out_port4, 0x34, "3.814697266e-6"), 6962 - INTEL_UNCORE_FR_EVENT_DESC(bw_out_port5, 0x35, "3.814697266e-6"), 6963 - INTEL_UNCORE_FR_EVENT_DESC(bw_out_port6, 0x36, "3.814697266e-6"), 6964 - INTEL_UNCORE_FR_EVENT_DESC(bw_out_port7, 0x37, "3.814697266e-6"), 6958 + INTEL_UNCORE_FR_EVENT_DESC(bw_out_port0, 0x30, 3.814697266e-6), 6959 + INTEL_UNCORE_FR_EVENT_DESC(bw_out_port1, 0x31, 3.814697266e-6), 6960 + INTEL_UNCORE_FR_EVENT_DESC(bw_out_port2, 0x32, 3.814697266e-6), 6961 + INTEL_UNCORE_FR_EVENT_DESC(bw_out_port3, 0x33, 3.814697266e-6), 6962 + INTEL_UNCORE_FR_EVENT_DESC(bw_out_port4, 0x34, 3.814697266e-6), 6963 + INTEL_UNCORE_FR_EVENT_DESC(bw_out_port5, 0x35, 3.814697266e-6), 6964 + INTEL_UNCORE_FR_EVENT_DESC(bw_out_port6, 0x36, 3.814697266e-6), 6965 + INTEL_UNCORE_FR_EVENT_DESC(bw_out_port7, 0x37, 3.814697266e-6), 6965 6966 6966 6967 /* Free Running Clock Counter */ 6967 6968 INTEL_UNCORE_EVENT_DESC(clockticks, "event=0xff,umask=0x40"),
+6 -6
arch/x86/include/uapi/asm/kvm.h
··· 197 197 __u32 nmsrs; /* number of msrs in entries */ 198 198 __u32 pad; 199 199 200 - struct kvm_msr_entry entries[]; 200 + __DECLARE_FLEX_ARRAY(struct kvm_msr_entry, entries); 201 201 }; 202 202 203 203 /* for KVM_GET_MSR_INDEX_LIST */ 204 204 struct kvm_msr_list { 205 205 __u32 nmsrs; /* number of msrs in entries */ 206 - __u32 indices[]; 206 + __DECLARE_FLEX_ARRAY(__u32, indices); 207 207 }; 208 208 209 209 /* Maximum size of any access bitmap in bytes */ ··· 245 245 struct kvm_cpuid { 246 246 __u32 nent; 247 247 __u32 padding; 248 - struct kvm_cpuid_entry entries[]; 248 + __DECLARE_FLEX_ARRAY(struct kvm_cpuid_entry, entries); 249 249 }; 250 250 251 251 struct kvm_cpuid_entry2 { ··· 267 267 struct kvm_cpuid2 { 268 268 __u32 nent; 269 269 __u32 padding; 270 - struct kvm_cpuid_entry2 entries[]; 270 + __DECLARE_FLEX_ARRAY(struct kvm_cpuid_entry2, entries); 271 271 }; 272 272 273 273 /* for KVM_GET_PIT and KVM_SET_PIT */ ··· 398 398 * the contents of CPUID leaf 0xD on the host. 399 399 */ 400 400 __u32 region[1024]; 401 - __u32 extra[]; 401 + __DECLARE_FLEX_ARRAY(__u32, extra); 402 402 }; 403 403 404 404 #define KVM_MAX_XCRS 16 ··· 566 566 __u32 fixed_counter_bitmap; 567 567 __u32 flags; 568 568 __u32 pad[4]; 569 - __u64 events[]; 569 + __DECLARE_FLEX_ARRAY(__u64, events); 570 570 }; 571 571 572 572 #define KVM_PMU_EVENT_ALLOW 0
+8
arch/x86/kernel/cpu/mce/amd.c
··· 604 604 enum smca_bank_types bank_type = smca_get_bank_type(m->extcpu, m->bank); 605 605 struct cpuinfo_x86 *c = &boot_cpu_data; 606 606 607 + /* Bogus hw errors on Cezanne A0. */ 608 + if (c->x86 == 0x19 && 609 + c->x86_model == 0x50 && 610 + c->x86_stepping == 0x0) { 611 + if (!(m->status & MCI_STATUS_EN)) 612 + return true; 613 + } 614 + 607 615 /* See Family 17h Models 10h-2Fh Erratum #1114. */ 608 616 if (c->x86 == 0x17 && 609 617 c->x86_model >= 0x10 && c->x86_model <= 0x2F &&
+2 -1
arch/x86/kernel/shstk.c
··· 351 351 need_to_check_vma = PAGE_ALIGN(*ssp) == *ssp; 352 352 353 353 if (need_to_check_vma) 354 - mmap_read_lock_killable(current->mm); 354 + if (mmap_read_lock_killable(current->mm)) 355 + return -EINTR; 355 356 356 357 err = get_shstk_data(&token_addr, (unsigned long __user *)*ssp); 357 358 if (unlikely(err))
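The shstk change above stops discarding the return value of `mmap_read_lock_killable()`, which fails with `-EINTR` when a fatal signal is pending. A toy sketch of why that failure must be propagated rather than ignored (the `fatal_signal` flag and the stub lock function are hypothetical stand-ins for this demo, not kernel API):

```c
#include <assert.h>
#include <errno.h>

static int fatal_signal;

/* Toy stand-in for mmap_read_lock_killable(): rather than sleeping
 * uninterruptibly for the lock, it gives up with -EINTR when a fatal
 * signal is pending. */
static int read_lock_killable(void)
{
	return fatal_signal ? -EINTR : 0;
}

/* The fixed pattern: acquisition can fail, so the caller bails out
 * instead of touching the protected state with no lock held. */
static int restore_token(int need_to_check_vma)
{
	if (need_to_check_vma)
		if (read_lock_killable())
			return -EINTR;
	/* ... walk the VMA / read the shadow-stack token under the lock ... */
	return 0;
}
```

In the original code the return value was dropped, so a killed task would fall through and access the VMA tree without the read lock.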
+4 -2
crypto/af_alg.c
··· 705 705 * Assumption: caller created af_alg_count_tsgl(len) 706 706 * SG entries in dst. 707 707 */ 708 - if (dst) { 709 - /* reassign page to dst after offset */ 708 + if (dst && plen) { 709 + /* reassign page to dst */ 710 710 get_page(page); 711 711 sg_set_page(dst + j, page, plen, sg[i].offset); 712 712 j++; ··· 1229 1229 1230 1230 seglen = min_t(size_t, (maxsize - len), 1231 1231 msg_data_left(msg)); 1232 + /* Never pin more pages than the remaining RX accounting budget. */ 1233 + seglen = min_t(size_t, seglen, af_alg_rcvbuf(sk)); 1232 1234 1233 1235 if (list_empty(&areq->rsgl_list)) { 1234 1236 rsgl = &areq->first_rsgl;
+1 -1
crypto/algif_aead.c
··· 144 144 if (usedpages < outlen) { 145 145 size_t less = outlen - usedpages; 146 146 147 - if (used < less) { 147 + if (used < less + (ctx->enc ? 0 : as)) { 148 148 err = -EINVAL; 149 149 goto free; 150 150 }
+5
crypto/algif_skcipher.c
··· 130 130 * full block size buffers. 131 131 */ 132 132 if (ctx->more || len < ctx->used) { 133 + if (len < bs) { 134 + err = -EINVAL; 135 + goto free; 136 + } 137 + 133 138 len -= len % bs; 134 139 cflags |= CRYPTO_SKCIPHER_REQ_NOTFINAL; 135 140 }
+4 -4
crypto/asymmetric_keys/x509_cert_parser.c
··· 609 609 * 0x04 is where keyCertSign lands in this bit string 610 610 * 0x80 is where digitalSignature lands in this bit string 611 611 */ 612 - if (v[0] != ASN1_BTS) 613 - return -EBADMSG; 614 612 if (vlen < 4) 613 + return -EBADMSG; 614 + if (v[0] != ASN1_BTS) 615 615 return -EBADMSG; 616 616 if (v[2] >= 8) 617 617 return -EBADMSG; ··· 645 645 * (Expect 0xFF if the CA is TRUE) 646 646 * vlen should match the entire extension size 647 647 */ 648 - if (v[0] != (ASN1_CONS_BIT | ASN1_SEQ)) 649 - return -EBADMSG; 650 648 if (vlen < 2) 649 + return -EBADMSG; 650 + if (v[0] != (ASN1_CONS_BIT | ASN1_SEQ)) 651 651 return -EBADMSG; 652 652 if (v[1] != vlen - 2) 653 653 return -EBADMSG;
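Both hunks in the x509 parser fix above apply the same rule: validate the value length before inspecting its bytes, so a truncated extension is rejected without reading past the buffer. A hypothetical stand-alone condensation of the keyUsage check (names and error value are illustrative):

```c
#include <assert.h>
#include <stddef.h>

#define ASN1_BTS 0x03	/* ASN.1 BIT STRING tag */

/* Length first, content second: with vlen < 4 we return before
 * trusting v[0] or v[2] at all -- the ordering the fix restores. */
static int check_key_usage(const unsigned char *v, size_t vlen)
{
	if (vlen < 4)
		return -74;	/* -EBADMSG */
	if (v[0] != ASN1_BTS)
		return -74;
	if (v[2] >= 8)
		return -74;
	return 0;
}
```

The original order was harmless only by accident (one byte was always present); checking the length first makes the bound explicit for every subsequent access.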
+1
drivers/accel/ethosu/Kconfig
··· 4 4 tristate "Arm Ethos-U65/U85 NPU" 5 5 depends on HAS_IOMEM 6 6 depends on DRM_ACCEL 7 + depends on ARM || ARM64 || COMPILE_TEST 7 8 select DRM_GEM_DMA_HELPER 8 9 select DRM_SCHED 9 10 select GENERIC_ALLOCATOR
+14
drivers/ata/ahci.c
··· 68 68 /* board IDs for specific chipsets in alphabetical order */ 69 69 board_ahci_al, 70 70 board_ahci_avn, 71 + board_ahci_jmb585, 71 72 board_ahci_mcp65, 72 73 board_ahci_mcp77, 73 74 board_ahci_mcp89, ··· 212 211 .pio_mask = ATA_PIO4, 213 212 .udma_mask = ATA_UDMA6, 214 213 .port_ops = &ahci_avn_ops, 214 + }, 215 + /* JMicron JMB582/585: 64-bit DMA is broken, force 32-bit */ 216 + [board_ahci_jmb585] = { 217 + AHCI_HFLAGS (AHCI_HFLAG_IGN_IRQ_IF_ERR | 218 + AHCI_HFLAG_32BIT_ONLY), 219 + .flags = AHCI_FLAG_COMMON, 220 + .pio_mask = ATA_PIO4, 221 + .udma_mask = ATA_UDMA6, 222 + .port_ops = &ahci_ops, 215 223 }, 216 224 [board_ahci_mcp65] = { 217 225 AHCI_HFLAGS (AHCI_HFLAG_NO_FPDMA_AA | AHCI_HFLAG_NO_PMP | ··· 448 438 { PCI_VDEVICE(INTEL, 0x02d7), board_ahci_pcs_quirk }, /* Comet Lake PCH RAID */ 449 439 /* Elkhart Lake IDs 0x4b60 & 0x4b62 https://sata-io.org/product/8803 not tested yet */ 450 440 { PCI_VDEVICE(INTEL, 0x4b63), board_ahci_pcs_quirk }, /* Elkhart Lake AHCI */ 441 + 442 + /* JMicron JMB582/585: force 32-bit DMA (broken 64-bit implementation) */ 443 + { PCI_VDEVICE(JMICRON, 0x0582), board_ahci_jmb585 }, 444 + { PCI_VDEVICE(JMICRON, 0x0585), board_ahci_jmb585 }, 451 445 452 446 /* JMicron 360/1/3/5/6, match class to avoid IDE function */ 453 447 { PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
+2 -2
drivers/base/class.c
··· 127 127 }; 128 128 129 129 int class_create_file_ns(const struct class *cls, const struct class_attribute *attr, 130 - const void *ns) 130 + const struct ns_common *ns) 131 131 { 132 132 struct subsys_private *sp = class_to_subsys(cls); 133 133 int error; ··· 143 143 EXPORT_SYMBOL_GPL(class_create_file_ns); 144 144 145 145 void class_remove_file_ns(const struct class *cls, const struct class_attribute *attr, 146 - const void *ns) 146 + const struct ns_common *ns) 147 147 { 148 148 struct subsys_private *sp = class_to_subsys(cls); 149 149
+3 -4
drivers/base/core.c
··· 2570 2570 kfree(p); 2571 2571 } 2572 2572 2573 - static const void *device_namespace(const struct kobject *kobj) 2573 + static const struct ns_common *device_namespace(const struct kobject *kobj) 2574 2574 { 2575 2575 const struct device *dev = kobj_to_dev(kobj); 2576 - const void *ns = NULL; 2577 2576 2578 2577 if (dev->class && dev->class->namespace) 2579 - ns = dev->class->namespace(dev); 2578 + return dev->class->namespace(dev); 2580 2579 2581 - return ns; 2580 + return NULL; 2582 2581 } 2583 2582 2584 2583 static void device_get_ownership(const struct kobject *kobj, kuid_t *uid, kgid_t *gid)
+3 -3
drivers/edac/edac_mc.c
··· 369 369 if (!mci->layers) 370 370 goto error; 371 371 372 + mci->dev.release = mci_release; 373 + device_initialize(&mci->dev); 374 + 372 375 mci->pvt_info = kzalloc(sz_pvt, GFP_KERNEL); 373 376 if (!mci->pvt_info) 374 377 goto error; 375 - 376 - mci->dev.release = mci_release; 377 - device_initialize(&mci->dev); 378 378 379 379 /* setup index and various internal pointers */ 380 380 mci->mc_idx = mc_num;
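The edac_mc reordering above moves `device_initialize()` (which arms the `mci_release()` callback) ahead of the `pvt_info` allocation, so a later allocation failure can unwind through the refcount machinery. A toy refcounted object showing the same ordering; all names here are illustrative for this sketch, not the EDAC or driver-core API:

```c
#include <assert.h>
#include <stdlib.h>

struct obj {
	int refs;
	char *pvt;	/* allocated after init, freed by release */
	int *released;	/* observability hook for this demo */
};

/* Release callback: owns cleanup of every member allocation. */
static void obj_release(struct obj *o)
{
	free(o->pvt);	/* free(NULL) is a safe no-op */
	*o->released = 1;
}

static void obj_init(struct obj *o, int *released)
{
	o->refs = 1;
	o->pvt = NULL;
	o->released = released;
}

static void obj_put(struct obj *o)
{
	if (--o->refs == 0)
		obj_release(o);
}

/* Initialize first, allocate second: on allocation failure the error
 * path just drops its reference and the release callback cleans up --
 * the ordering the edac_mc fix restores around device_initialize(). */
static int obj_create(struct obj *o, int fail_alloc, int *released)
{
	obj_init(o, released);
	o->pvt = fail_alloc ? NULL : malloc(16);
	if (!o->pvt) {
		obj_put(o);
		return -12;	/* -ENOMEM */
	}
	return 0;
}
```

With the old order, an allocation failure before `device_initialize()` meant the release callback was never armed, so error handling had to duplicate the cleanup by hand.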
+1 -1
drivers/firmware/efi/efi-init.c
··· 60 60 * x86 defines its own instance of sysfb_primary_display and uses 61 61 * it even without EFI, everything else can get them from here. 62 62 */ 63 - #if !defined(CONFIG_X86) && (defined(CONFIG_SYSFB) || defined(CONFIG_EFI_EARLYCON)) || defined(CONFIG_FIRMWARE_EDID) 63 + #if !defined(CONFIG_X86) && (defined(CONFIG_SYSFB) || defined(CONFIG_EFI_EARLYCON) || defined(CONFIG_FIRMWARE_EDID)) 64 64 struct sysfb_display_info sysfb_primary_display __section(".data"); 65 65 EXPORT_SYMBOL_GPL(sysfb_primary_display); 66 66 #endif
+6 -4
drivers/firmware/microchip/mpfs-auto-update.c
··· 113 113 * be added here. 114 114 */ 115 115 116 - priv->flash = mpfs_sys_controller_get_flash(priv->sys_controller); 117 - if (!priv->flash) 118 - return FW_UPLOAD_ERR_HW_ERROR; 119 - 120 116 erase_size = round_up(erase_size, (u64)priv->flash->erasesize); 121 117 122 118 /* ··· 422 426 if (IS_ERR(priv->sys_controller)) 423 427 return dev_err_probe(dev, PTR_ERR(priv->sys_controller), 424 428 "Could not register as a sub device of the system controller\n"); 429 + 430 + priv->flash = mpfs_sys_controller_get_flash(priv->sys_controller); 431 + if (IS_ERR_OR_NULL(priv->flash)) { 432 + dev_dbg(dev, "No flash connected to the system controller, auto-update not supported\n"); 433 + return -ENODEV; 434 + } 425 435 426 436 priv->dev = dev; 427 437 platform_set_drvdata(pdev, priv);
+3 -4
drivers/firmware/thead,th1520-aon.c
··· 170 170 hdr->func = TH1520_AON_PM_FUNC_SET_RESOURCE_POWER_MODE; 171 171 hdr->size = TH1520_AON_RPC_MSG_NUM; 172 172 173 - RPC_SET_BE16(&msg.resource, 0, rsrc); 174 - RPC_SET_BE16(&msg.resource, 2, 175 - (power_on ? TH1520_AON_PM_PW_MODE_ON : 176 - TH1520_AON_PM_PW_MODE_OFF)); 173 + msg.resource = cpu_to_be16(rsrc); 174 + msg.mode = cpu_to_be16(power_on ? TH1520_AON_PM_PW_MODE_ON : 175 + TH1520_AON_PM_PW_MODE_OFF); 177 176 178 177 ret = th1520_aon_call_rpc(aon_chan, &msg); 179 178 if (ret)
+2
drivers/gpio/gpio-bd72720.c
··· 256 256 g->dev = dev; 257 257 g->chip.parent = parent; 258 258 g->regmap = dev_get_regmap(parent, NULL); 259 + if (!g->regmap) 260 + return -ENODEV; 259 261 260 262 return devm_gpiochip_add_data(dev, &g->chip, g); 261 263 }
+2 -2
drivers/gpio/gpio-tegra.c
··· 595 595 struct tegra_gpio_info *tgi = gpiochip_get_data(chip); 596 596 597 597 gpiochip_relres_irq(chip, d->hwirq); 598 - tegra_gpio_enable(tgi, d->hwirq); 598 + tegra_gpio_disable(tgi, d->hwirq); 599 599 } 600 600 601 601 static void tegra_gpio_irq_print_chip(struct irq_data *d, struct seq_file *s) ··· 698 698 699 699 tgi = devm_kzalloc(&pdev->dev, sizeof(*tgi), GFP_KERNEL); 700 700 if (!tgi) 701 - return -ENODEV; 701 + return -ENOMEM; 702 702 703 703 tgi->soc = of_device_get_match_data(&pdev->dev); 704 704 tgi->dev = &pdev->dev;
+19 -11
drivers/gpu/drm/i915/display/intel_psr.c
··· 2678 2678 2679 2679 static void clip_area_update(struct drm_rect *overlap_damage_area, 2680 2680 struct drm_rect *damage_area, 2681 - struct drm_rect *pipe_src) 2681 + struct drm_rect *display_area) 2682 2682 { 2683 - if (!drm_rect_intersect(damage_area, pipe_src)) 2683 + if (!drm_rect_intersect(damage_area, display_area)) 2684 2684 return; 2685 2685 2686 2686 if (overlap_damage_area->y1 == -1) { ··· 2731 2731 static void 2732 2732 intel_psr2_sel_fetch_et_alignment(struct intel_atomic_state *state, 2733 2733 struct intel_crtc *crtc, 2734 + struct drm_rect *display_area, 2734 2735 bool *cursor_in_su_area) 2735 2736 { 2736 2737 struct intel_crtc_state *crtc_state = intel_atomic_get_new_crtc_state(state, crtc); ··· 2759 2758 continue; 2760 2759 2761 2760 clip_area_update(&crtc_state->psr2_su_area, &new_plane_state->uapi.dst, 2762 - &crtc_state->pipe_src); 2761 + display_area); 2763 2762 *cursor_in_su_area = true; 2764 2763 } 2765 2764 } ··· 2856 2855 struct intel_crtc_state *crtc_state = intel_atomic_get_new_crtc_state(state, crtc); 2857 2856 struct intel_plane_state *new_plane_state, *old_plane_state; 2858 2857 struct intel_plane *plane; 2858 + struct drm_rect display_area = { 2859 + .x1 = 0, 2860 + .y1 = 0, 2861 + .x2 = crtc_state->hw.adjusted_mode.crtc_hdisplay, 2862 + .y2 = crtc_state->hw.adjusted_mode.crtc_vdisplay, 2863 + }; 2859 2864 bool full_update = false, su_area_changed; 2860 2865 int i, ret; 2861 2866 ··· 2875 2868 2876 2869 crtc_state->psr2_su_area.x1 = 0; 2877 2870 crtc_state->psr2_su_area.y1 = -1; 2878 - crtc_state->psr2_su_area.x2 = drm_rect_width(&crtc_state->pipe_src); 2871 + crtc_state->psr2_su_area.x2 = drm_rect_width(&display_area); 2879 2872 crtc_state->psr2_su_area.y2 = -1; 2880 2873 2881 2874 /* ··· 2913 2906 damaged_area.y1 = old_plane_state->uapi.dst.y1; 2914 2907 damaged_area.y2 = old_plane_state->uapi.dst.y2; 2915 2908 clip_area_update(&crtc_state->psr2_su_area, &damaged_area, 2916 - &crtc_state->pipe_src); 2909 + &display_area); 2917 2910 } 2918 2911 2919 2912 if (new_plane_state->uapi.visible) { 2920 2913 damaged_area.y1 = new_plane_state->uapi.dst.y1; 2921 2914 damaged_area.y2 = new_plane_state->uapi.dst.y2; 2922 2915 clip_area_update(&crtc_state->psr2_su_area, &damaged_area, 2923 2916 &display_area); 2924 2917 } 2925 2918 continue; 2926 2919 } else if (new_plane_state->uapi.alpha != old_plane_state->uapi.alpha) { ··· 2928 2921 damaged_area.y1 = new_plane_state->uapi.dst.y1; 2929 2922 damaged_area.y2 = new_plane_state->uapi.dst.y2; 2930 2923 clip_area_update(&crtc_state->psr2_su_area, &damaged_area, 2931 - &crtc_state->pipe_src); 2924 + &display_area); 2932 2925 continue; 2933 2926 } 2934 2927 ··· 2944 2937 damaged_area.x1 += new_plane_state->uapi.dst.x1 - src.x1; 2945 2938 damaged_area.x2 += new_plane_state->uapi.dst.x1 - src.x1; 2946 2939 2947 - clip_area_update(&crtc_state->psr2_su_area, &damaged_area, &crtc_state->pipe_src); 2940 + clip_area_update(&crtc_state->psr2_su_area, &damaged_area, &display_area); 2948 2941 2949 2942 /* ··· 2979 2972 * cursor is added into affected planes even when 2980 2973 * cursor is not updated by itself. 2981 2974 */ 2982 - intel_psr2_sel_fetch_et_alignment(state, crtc, &cursor_in_su_area); 2975 + intel_psr2_sel_fetch_et_alignment(state, crtc, &display_area, 2976 + &cursor_in_su_area); 2983 2977 2984 2978 su_area_changed = intel_psr2_sel_fetch_pipe_alignment(crtc_state); 2985 2979 ··· 3056 3048 3057 3049 skip_sel_fetch_set_loop: 3058 3050 if (full_update) 3059 - clip_area_update(&crtc_state->psr2_su_area, &crtc_state->pipe_src, 3060 - &crtc_state->pipe_src); 3051 + clip_area_update(&crtc_state->psr2_su_area, &display_area, 3052 + &display_area); 3061 3053 3062 3054 psr2_man_trk_ctl_calc(crtc_state, full_update); 3063 3055 crtc_state->pipe_srcsz_early_tpt =
+1 -1
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
··· 896 896 897 897 rcu_read_lock(); 898 898 vma = radix_tree_lookup(&eb->gem_context->handles_vma, handle); 899 - if (likely(vma && vma->vm == vm)) 899 + if (likely(vma)) 900 900 vma = i915_vma_tryget(vma); 901 901 else 902 902 vma = NULL;
+18 -8
drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
··· 148 148 /* Just in case everything has gone horribly wrong, give it a kick */ 149 149 intel_engine_flush_submission(engine); 150 150 151 - rq = engine->heartbeat.systole; 152 - if (rq && i915_request_completed(rq)) { 153 - i915_request_put(rq); 154 - engine->heartbeat.systole = NULL; 151 + rq = xchg(&engine->heartbeat.systole, NULL); 152 + if (rq) { 153 + if (i915_request_completed(rq)) 154 + i915_request_put(rq); 155 + else 156 + engine->heartbeat.systole = rq; 155 157 } 156 158 157 159 if (!intel_engine_pm_get_if_awake(engine)) ··· 234 232 unlock: 235 233 mutex_unlock(&ce->timeline->mutex); 236 234 out: 237 - if (!engine->i915->params.enable_hangcheck || !next_heartbeat(engine)) 238 - i915_request_put(fetch_and_zero(&engine->heartbeat.systole)); 235 + if (!engine->i915->params.enable_hangcheck || !next_heartbeat(engine)) { 236 + rq = xchg(&engine->heartbeat.systole, NULL); 237 + if (rq) 238 + i915_request_put(rq); 239 + } 239 240 intel_engine_pm_put(engine); 240 241 } 241 242 ··· 252 247 253 248 void intel_engine_park_heartbeat(struct intel_engine_cs *engine) 254 249 { 255 - if (cancel_delayed_work(&engine->heartbeat.work)) 256 - i915_request_put(fetch_and_zero(&engine->heartbeat.systole)); 250 + if (cancel_delayed_work(&engine->heartbeat.work)) { 251 + struct i915_request *rq; 252 + 253 + rq = xchg(&engine->heartbeat.systole, NULL); 254 + if (rq) 255 + i915_request_put(rq); 256 + } 257 257 } 258 258 259 259 void intel_gt_unpark_heartbeats(struct intel_gt *gt)
+3
drivers/gpu/drm/vc4/vc4_bo.c
··· 738 738 return -EINVAL; 739 739 } 740 740 741 + mutex_lock(&bo->madv_lock); 741 742 if (bo->madv != VC4_MADV_WILLNEED) { 742 743 DRM_DEBUG("mmapping of %s BO not allowed\n", 743 744 bo->madv == VC4_MADV_DONTNEED ? 744 745 "purgeable" : "purged"); 746 + mutex_unlock(&bo->madv_lock); 745 747 return -EINVAL; 746 748 } 749 + mutex_unlock(&bo->madv_lock); 747 750 748 751 return drm_gem_dma_mmap(&bo->base, vma); 749 752 }
+11 -8
drivers/gpu/drm/vc4/vc4_gem.c
··· 62 62 for (i = 0; i < state->user_state.bo_count; i++) 63 63 drm_gem_object_put(state->bo[i]); 64 64 65 + kfree(state->bo); 65 66 kfree(state); 66 67 } 67 68 ··· 171 170 spin_lock_irqsave(&vc4->job_lock, irqflags); 172 171 exec[0] = vc4_first_bin_job(vc4); 173 172 exec[1] = vc4_first_render_job(vc4); 174 - if (!exec[0] && !exec[1]) { 175 - spin_unlock_irqrestore(&vc4->job_lock, irqflags); 176 - return; 177 - } 173 + if (!exec[0] && !exec[1]) 174 + goto err_free_state; 178 175 179 176 /* Get the bos from both binner and renderer into hang state. */ 180 177 state->bo_count = 0; ··· 189 190 kernel_state->bo = kzalloc_objs(*kernel_state->bo, state->bo_count, 190 191 GFP_ATOMIC); 191 192 192 - if (!kernel_state->bo) { 193 - spin_unlock_irqrestore(&vc4->job_lock, irqflags); 194 - return; 195 - } 193 + if (!kernel_state->bo) 194 + goto err_free_state; 196 195 197 196 k = 0; 198 197 for (i = 0; i < 2; i++) { ··· 282 285 vc4->hang_state = kernel_state; 283 286 spin_unlock_irqrestore(&vc4->job_lock, irqflags); 284 287 } 288 + 289 + return; 290 + 291 + err_free_state: 292 + spin_unlock_irqrestore(&vc4->job_lock, irqflags); 293 + kfree(kernel_state); 285 294 } 286 295 287 296 static void
+1
drivers/gpu/drm/vc4/vc4_v3d.c
··· 481 481 482 482 pm_runtime_use_autosuspend(dev); 483 483 pm_runtime_set_autosuspend_delay(dev, 40); /* a little over 2 frames. */ 484 + pm_runtime_put_autosuspend(dev); 484 485 485 486 return 0; 486 487
+1 -2
drivers/gpu/drm/xe/xe_hw_engine.c
··· 595 595 maxcnt *= maxcnt_units_ns; 596 596 597 597 if (xe_gt_WARN_ON(gt, idledly >= maxcnt || inhibit_switch)) { 598 - idledly = DIV_ROUND_CLOSEST(((maxcnt - 1) * maxcnt_units_ns), 598 + idledly = DIV_ROUND_CLOSEST(((maxcnt - 1) * 1000), 599 599 idledly_units_ps); 600 - idledly = DIV_ROUND_CLOSEST(idledly, 1000); 601 600 xe_mmio_write32(&gt->mmio, RING_IDLEDLY(hwe->mmio_base), idledly); 602 601 } 603 602 }
+2 -1
drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
··· 413 413 rc = amd_sfh_hid_client_init(mp2); 414 414 if (rc) { 415 415 amd_sfh_clear_intr(mp2); 416 - dev_err(&pdev->dev, "amd_sfh_hid_client_init failed err %d\n", rc); 416 + if (rc != -EOPNOTSUPP) 417 + dev_err(&pdev->dev, "amd_sfh_hid_client_init failed err %d\n", rc); 417 418 return; 418 419 } 419 420
+6
drivers/hid/hid-debug.c
··· 990 990 { 0x0c, 0x01c9, "ALContactSync" }, 991 991 { 0x0c, 0x01ca, "ALNavigation" }, 992 992 { 0x0c, 0x01cb, "ALContextawareDesktopAssistant" }, 993 + { 0x0c, 0x01cc, "ALActionOnSelection" }, 994 + { 0x0c, 0x01cd, "ALContextualInsertion" }, 995 + { 0x0c, 0x01ce, "ALContextualQuery" }, 993 996 { 0x0c, 0x0200, "GenericGUIApplicationControls" }, 994 997 { 0x0c, 0x0201, "ACNew" }, 995 998 { 0x0c, 0x0202, "ACOpen" }, ··· 3378 3375 [KEY_BRIGHTNESS_MIN] = "BrightnessMin", 3379 3376 [KEY_BRIGHTNESS_MAX] = "BrightnessMax", 3380 3377 [KEY_BRIGHTNESS_AUTO] = "BrightnessAuto", 3378 + [KEY_ACTION_ON_SELECTION] = "ActionOnSelection", 3379 + [KEY_CONTEXTUAL_INSERT] = "ContextualInsert", 3380 + [KEY_CONTEXTUAL_QUERY] = "ContextualQuery", 3381 3381 [KEY_KBDINPUTASSIST_PREV] = "KbdInputAssistPrev", 3382 3382 [KEY_KBDINPUTASSIST_NEXT] = "KbdInputAssistNext", 3383 3383 [KEY_KBDINPUTASSIST_PREVGROUP] = "KbdInputAssistPrevGroup",
+7
drivers/hid/hid-ids.h
··· 22 22 #define USB_DEVICE_ID_3M2256 0x0502 23 23 #define USB_DEVICE_ID_3M3266 0x0506 24 24 25 + #define USB_VENDOR_ID_8BITDO 0x2dc8 26 + #define USB_DEVICE_ID_8BITDO_PRO_3 0x6009 27 + 25 28 #define USB_VENDOR_ID_A4TECH 0x09da 26 29 #define USB_DEVICE_ID_A4TECH_WCP32PU 0x0006 27 30 #define USB_DEVICE_ID_A4TECH_X5_005D 0x000a ··· 1473 1470 1474 1471 #define USB_VENDOR_ID_VTL 0x0306 1475 1472 #define USB_DEVICE_ID_VTL_MULTITOUCH_FF3F 0xff3f 1473 + 1474 + #define USB_VENDOR_ID_VXE 0x3554 1475 + #define USB_DEVICE_ID_VXE_DRAGONFLY_R1_PRO_DONGLE 0xf58a 1476 + #define USB_DEVICE_ID_VXE_DRAGONFLY_R1_PRO_WIRED 0xf58c 1476 1477 1477 1478 #define USB_VENDOR_ID_WACOM 0x056a 1478 1479 #define USB_DEVICE_ID_WACOM_GRAPHIRE_BLUETOOTH 0x81
+3
drivers/hid/hid-input.c
··· 1227 1227 case 0x1bc: map_key_clear(KEY_MESSENGER); break; 1228 1228 case 0x1bd: map_key_clear(KEY_INFO); break; 1229 1229 case 0x1cb: map_key_clear(KEY_ASSISTANT); break; 1230 + case 0x1cc: map_key_clear(KEY_ACTION_ON_SELECTION); break; 1231 + case 0x1cd: map_key_clear(KEY_CONTEXTUAL_INSERT); break; 1232 + case 0x1ce: map_key_clear(KEY_CONTEXTUAL_QUERY); break; 1230 1233 case 0x201: map_key_clear(KEY_NEW); break; 1231 1234 case 0x202: map_key_clear(KEY_OPEN); break; 1232 1235 case 0x203: map_key_clear(KEY_CLOSE); break;
+2
drivers/hid/hid-kysona.c
··· 272 272 static const struct hid_device_id kysona_devices[] = { 273 273 { HID_USB_DEVICE(USB_VENDOR_ID_KYSONA, USB_DEVICE_ID_KYSONA_M600_DONGLE) }, 274 274 { HID_USB_DEVICE(USB_VENDOR_ID_KYSONA, USB_DEVICE_ID_KYSONA_M600_WIRED) }, 275 + { HID_USB_DEVICE(USB_VENDOR_ID_VXE, USB_DEVICE_ID_VXE_DRAGONFLY_R1_PRO_DONGLE) }, 276 + { HID_USB_DEVICE(USB_VENDOR_ID_VXE, USB_DEVICE_ID_VXE_DRAGONFLY_R1_PRO_WIRED) }, 275 277 { } 276 278 }; 277 279 MODULE_DEVICE_TABLE(hid, kysona_devices);
+1
drivers/hid/hid-quirks.c
··· 25 25 */ 26 26 27 27 static const struct hid_device_id hid_quirks[] = { 28 + { HID_USB_DEVICE(USB_VENDOR_ID_8BITDO, USB_DEVICE_ID_8BITDO_PRO_3), HID_QUIRK_ALWAYS_POLL }, 28 29 { HID_USB_DEVICE(USB_VENDOR_ID_AASHIMA, USB_DEVICE_ID_AASHIMA_GAMEPAD), HID_QUIRK_BADPAD }, 29 30 { HID_USB_DEVICE(USB_VENDOR_ID_AASHIMA, USB_DEVICE_ID_AASHIMA_PREDATOR), HID_QUIRK_BADPAD }, 30 31 { HID_USB_DEVICE(USB_VENDOR_ID_ADATA_XPG, USB_VENDOR_ID_ADATA_XPG_WL_GAMING_MOUSE), HID_QUIRK_ALWAYS_POLL },
+2
drivers/hid/hid-roccat.c
··· 257 257 if (!new_value) 258 258 return -ENOMEM; 259 259 260 + mutex_lock(&device->readers_lock); 260 261 mutex_lock(&device->cbuf_lock); 261 262 262 263 report = &device->cbuf[device->cbuf_end]; ··· 280 279 } 281 280 282 281 mutex_unlock(&device->cbuf_lock); 282 + mutex_unlock(&device->readers_lock); 283 283 284 284 wake_up_interruptible(&device->wait); 285 285 return 0;
+7
drivers/hid/intel-thc-hid/intel-quicki2c/pci-quicki2c.c
··· 26 26 .max_interrupt_delay = MAX_RX_INTERRUPT_DELAY, 27 27 }; 28 28 29 + static struct quicki2c_ddata nvl_ddata = { 30 + .max_detect_size = MAX_RX_DETECT_SIZE_NVL, 31 + .max_interrupt_delay = MAX_RX_INTERRUPT_DELAY, 32 + }; 33 + 29 34 /* THC QuickI2C ACPI method to get device properties */ 30 35 /* HIDI2C device method */ 31 36 static guid_t i2c_hid_guid = ··· 1037 1032 { PCI_DEVICE_DATA(INTEL, THC_PTL_U_DEVICE_ID_I2C_PORT2, &ptl_ddata) }, 1038 1033 { PCI_DEVICE_DATA(INTEL, THC_WCL_DEVICE_ID_I2C_PORT1, &ptl_ddata) }, 1039 1034 { PCI_DEVICE_DATA(INTEL, THC_WCL_DEVICE_ID_I2C_PORT2, &ptl_ddata) }, 1035 + { PCI_DEVICE_DATA(INTEL, THC_NVL_H_DEVICE_ID_I2C_PORT1, &nvl_ddata) }, 1036 + { PCI_DEVICE_DATA(INTEL, THC_NVL_H_DEVICE_ID_I2C_PORT2, &nvl_ddata) }, 1040 1037 { } 1041 1038 }; 1042 1039 MODULE_DEVICE_TABLE(pci, quicki2c_pci_tbl);
+4
drivers/hid/intel-thc-hid/intel-quicki2c/quicki2c-dev.h
··· 15 15 #define PCI_DEVICE_ID_INTEL_THC_PTL_U_DEVICE_ID_I2C_PORT2 0xE44A 16 16 #define PCI_DEVICE_ID_INTEL_THC_WCL_DEVICE_ID_I2C_PORT1 0x4D48 17 17 #define PCI_DEVICE_ID_INTEL_THC_WCL_DEVICE_ID_I2C_PORT2 0x4D4A 18 + #define PCI_DEVICE_ID_INTEL_THC_NVL_H_DEVICE_ID_I2C_PORT1 0xD348 19 + #define PCI_DEVICE_ID_INTEL_THC_NVL_H_DEVICE_ID_I2C_PORT2 0xD34A 18 20 19 21 /* Packet size value, the unit is 16 bytes */ 20 22 #define MAX_PACKET_SIZE_VALUE_LNL 256 ··· 42 40 43 41 /* PTL Max packet size detection capability is 255 Bytes */ 44 42 #define MAX_RX_DETECT_SIZE_PTL 255 43 + /* NVL Max packet size detection capability is 64K Bytes */ 44 + #define MAX_RX_DETECT_SIZE_NVL 65535 45 45 /* Max interrupt delay capability is 2.56ms */ 46 46 #define MAX_RX_INTERRUPT_DELAY 256 47 47
+6
drivers/hid/intel-thc-hid/intel-quickspi/pci-quickspi.c
··· 37 37 .max_packet_size_value = MAX_PACKET_SIZE_VALUE_MTL, 38 38 }; 39 39 40 + struct quickspi_driver_data nvl = { 41 + .max_packet_size_value = MAX_PACKET_SIZE_VALUE_LNL, 42 + }; 43 + 40 44 /* THC QuickSPI ACPI method to get device properties */ 41 45 /* HIDSPI Method: {6e2ac436-0fcf-41af-a265-b32a220dcfab} */ 42 46 static guid_t hidspi_guid = ··· 986 982 {PCI_DEVICE_DATA(INTEL, THC_WCL_DEVICE_ID_SPI_PORT2, &ptl), }, 987 983 {PCI_DEVICE_DATA(INTEL, THC_ARL_DEVICE_ID_SPI_PORT1, &arl), }, 988 984 {PCI_DEVICE_DATA(INTEL, THC_ARL_DEVICE_ID_SPI_PORT2, &arl), }, 985 + {PCI_DEVICE_DATA(INTEL, THC_NVL_H_DEVICE_ID_SPI_PORT1, &nvl), }, 986 + {PCI_DEVICE_DATA(INTEL, THC_NVL_H_DEVICE_ID_SPI_PORT2, &nvl), }, 989 987 {} 990 988 }; 991 989 MODULE_DEVICE_TABLE(pci, quickspi_pci_tbl);
+2
drivers/hid/intel-thc-hid/intel-quickspi/quickspi-dev.h
··· 23 23 #define PCI_DEVICE_ID_INTEL_THC_WCL_DEVICE_ID_SPI_PORT2 0x4D4B 24 24 #define PCI_DEVICE_ID_INTEL_THC_ARL_DEVICE_ID_SPI_PORT1 0x7749 25 25 #define PCI_DEVICE_ID_INTEL_THC_ARL_DEVICE_ID_SPI_PORT2 0x774B 26 + #define PCI_DEVICE_ID_INTEL_THC_NVL_H_DEVICE_ID_SPI_PORT1 0xD349 27 + #define PCI_DEVICE_ID_INTEL_THC_NVL_H_DEVICE_ID_SPI_PORT2 0xD34B 26 28 27 29 /* HIDSPI special ACPI parameters DSM methods */ 28 30 #define ACPI_QUICKSPI_REVISION_NUM 2
+12 -3
drivers/hv/mshv_root_main.c
··· 630 630 { 631 631 struct mshv_partition *p = vp->vp_partition; 632 632 struct mshv_mem_region *region; 633 - bool ret; 633 + bool ret = false; 634 634 u64 gfn; 635 635 #if defined(CONFIG_X86_64) 636 636 struct hv_x64_memory_intercept_message *msg = ··· 641 641 (struct hv_arm64_memory_intercept_message *) 642 642 vp->vp_intercept_msg_page->u.payload; 643 643 #endif 644 + enum hv_intercept_access_type access_type = 645 + msg->header.intercept_access_type; 644 646 645 647 gfn = HVPFN_DOWN(msg->guest_physical_address); 646 648 ··· 650 648 if (!region) 651 649 return false; 652 650 651 + if (access_type == HV_INTERCEPT_ACCESS_WRITE && 652 + !(region->hv_map_flags & HV_MAP_GPA_WRITABLE)) 653 + goto put_region; 654 + 655 + if (access_type == HV_INTERCEPT_ACCESS_EXECUTE && 656 + !(region->hv_map_flags & HV_MAP_GPA_EXECUTABLE)) 657 + goto put_region; 658 + 653 659 /* Only movable memory ranges are supported for GPA intercepts */ 654 660 if (region->mreg_type == MSHV_REGION_TYPE_MEM_MOVABLE) 655 661 ret = mshv_region_handle_gfn_fault(region, gfn); 656 - else 657 - ret = false; 658 662 663 + put_region: 659 664 mshv_region_put(region); 660 665 661 666 return ret;
+1 -1
drivers/i2c/busses/i2c-imx.c
··· 401 401 static int i2c_imx_dma_request(struct imx_i2c_struct *i2c_imx, dma_addr_t phy_addr) 402 402 { 403 403 struct imx_i2c_dma *dma; 404 - struct dma_slave_config dma_sconfig; 404 + struct dma_slave_config dma_sconfig = {}; 405 405 struct device *dev = i2c_imx->adapter.dev.parent; 406 406 int ret; 407 407
+3 -2
drivers/infiniband/core/device.c
··· 509 509 return 0; 510 510 } 511 511 512 - static const void *net_namespace(const struct device *d) 512 + static const struct ns_common *net_namespace(const struct device *d) 513 513 { 514 514 const struct ib_core_device *coredev = 515 515 container_of(d, struct ib_core_device, dev); 516 + struct net *net = read_pnet(&coredev->rdma_net); 516 517 517 - return read_pnet(&coredev->rdma_net); 518 + return net ? to_ns_common(net) : NULL; 518 519 } 519 520 520 521 static struct class ib_class = {
+4 -3
drivers/infiniband/ulp/srp/ib_srp.c
··· 43 43 #include <linux/jiffies.h> 44 44 #include <linux/lockdep.h> 45 45 #include <linux/inet.h> 46 + #include <net/net_namespace.h> 46 47 #include <rdma/ib_cache.h> 47 48 48 49 #include <linux/atomic.h> ··· 1049 1048 scsi_remove_host(target->scsi_host); 1050 1049 srp_stop_rport_timers(target->rport); 1051 1050 srp_disconnect_target(target); 1052 - kobj_ns_drop(KOBJ_NS_TYPE_NET, target->net); 1051 + kobj_ns_drop(KOBJ_NS_TYPE_NET, to_ns_common(target->net)); 1053 1052 for (i = 0; i < target->ch_count; i++) { 1054 1053 ch = &target->ch[i]; 1055 1054 srp_free_ch_ib(target, ch); ··· 3714 3713 3715 3714 target = host_to_target(target_host); 3716 3715 3717 - target->net = kobj_ns_grab_current(KOBJ_NS_TYPE_NET); 3716 + target->net = to_net_ns(kobj_ns_grab_current(KOBJ_NS_TYPE_NET)); 3718 3717 target->io_class = SRP_REV16A_IB_IO_CLASS; 3719 3718 target->scsi_host = target_host; 3720 3719 target->srp_host = host; ··· 3906 3905 * earlier in this function. 3907 3906 */ 3908 3907 if (target->state != SRP_TARGET_REMOVED) 3909 - kobj_ns_drop(KOBJ_NS_TYPE_NET, target->net); 3908 + kobj_ns_drop(KOBJ_NS_TYPE_NET, to_ns_common(target->net)); 3910 3909 scsi_host_put(target->scsi_host); 3911 3910 } 3912 3911
+28 -7
drivers/input/misc/uinput.c
··· 25 25 #include <linux/module.h> 26 26 #include <linux/init.h> 27 27 #include <linux/fs.h> 28 + #include <linux/lockdep.h> 28 29 #include <linux/miscdevice.h> 29 30 #include <linux/overflow.h> 31 + #include <linux/spinlock.h> 30 32 #include <linux/input/mt.h> 31 33 #include "../input-compat.h" 32 34 ··· 59 57 struct input_dev *dev; 60 58 struct mutex mutex; 61 59 enum uinput_state state; 60 + spinlock_t state_lock; 62 61 wait_queue_head_t waitq; 63 62 unsigned char ready; 64 63 unsigned char head; ··· 77 74 { 78 75 struct uinput_device *udev = input_get_drvdata(dev); 79 76 struct timespec64 ts; 77 + 78 + lockdep_assert_held(&dev->event_lock); 80 79 81 80 ktime_get_ts64(&ts); 82 81 ··· 151 146 static int uinput_request_send(struct uinput_device *udev, 152 147 struct uinput_request *request) 153 148 { 154 - int retval; 149 + unsigned long flags; 150 + int retval = 0; 155 151 156 - retval = mutex_lock_interruptible(&udev->mutex); 157 - if (retval) 158 - return retval; 152 + spin_lock(&udev->state_lock); 159 153 160 154 if (udev->state != UIST_CREATED) { 161 155 retval = -ENODEV; 162 156 goto out; 163 157 } 164 158 165 - init_completion(&request->done); 166 - 167 159 /* 168 160 * Tell our userspace application about this new request 169 161 * by queueing an input event. 170 162 */ 163 + spin_lock_irqsave(&udev->dev->event_lock, flags); 171 164 uinput_dev_event(udev->dev, EV_UINPUT, request->code, request->id); 165 + spin_unlock_irqrestore(&udev->dev->event_lock, flags); 172 166 173 167 out: 174 - mutex_unlock(&udev->mutex); 168 + spin_unlock(&udev->state_lock); 175 169 return retval; 176 170 } 177 171 ··· 178 174 struct uinput_request *request) 179 175 { 180 176 int retval; 177 + 178 + /* 179 + * Initialize completion before allocating the request slot. 180 + * Once the slot is allocated, uinput_flush_requests() may 181 + * complete it at any time, so it must be initialized first. 
182 + */ 183 + init_completion(&request->done); 181 184 182 185 retval = uinput_request_reserve_slot(udev, request); 183 186 if (retval) ··· 300 289 struct input_dev *dev = udev->dev; 301 290 enum uinput_state old_state = udev->state; 302 291 292 + /* 293 + * Update state under state_lock so that concurrent 294 + * uinput_request_send() sees the state change before we 295 + * flush pending requests and tear down the device. 296 + */ 297 + spin_lock(&udev->state_lock); 303 298 udev->state = UIST_NEW_DEVICE; 299 + spin_unlock(&udev->state_lock); 304 300 305 301 if (dev) { 306 302 name = dev->name; ··· 384 366 if (error) 385 367 goto fail2; 386 368 369 + spin_lock(&udev->state_lock); 387 370 udev->state = UIST_CREATED; 371 + spin_unlock(&udev->state_lock); 388 372 389 373 return 0; 390 374 ··· 404 384 return -ENOMEM; 405 385 406 386 mutex_init(&newdev->mutex); 387 + spin_lock_init(&newdev->state_lock); 407 388 spin_lock_init(&newdev->requests_lock); 408 389 init_waitqueue_head(&newdev->requests_waitq); 409 390 init_waitqueue_head(&newdev->waitq);
+6
drivers/iommu/iommu.c
··· 2717 2717 2718 2718 pr_debug("unmapped: iova 0x%lx size 0x%zx\n", 2719 2719 iova, unmapped_page); 2720 + /* 2721 + * If the driver itself isn't using the gather, make sure 2722 + * it looks non-empty so iotlb_sync will still be called. 2723 + */ 2724 + if (iotlb_gather->start >= iotlb_gather->end) 2725 + iommu_iotlb_gather_add_range(iotlb_gather, iova, size); 2720 2726 2721 2727 iova += unmapped_page; 2722 2728 unmapped += unmapped_page;
+12 -7
drivers/mmc/host/vub300.c
··· 369 369 static void vub300_delete(struct kref *kref) 370 370 { /* kref callback - softirq */ 371 371 struct vub300_mmc_host *vub300 = kref_to_vub300_mmc_host(kref); 372 + struct mmc_host *mmc = vub300->mmc; 373 + 372 374 usb_free_urb(vub300->command_out_urb); 373 375 vub300->command_out_urb = NULL; 374 376 usb_free_urb(vub300->command_res_urb); 375 377 vub300->command_res_urb = NULL; 376 378 usb_put_dev(vub300->udev); 379 + mmc_free_host(mmc); 377 380 /* 378 381 * and hence also frees vub300 379 382 * which is contained at the end of struct mmc ··· 2115 2112 goto error1; 2116 2113 } 2117 2114 /* this also allocates memory for our VUB300 mmc host device */ 2118 - mmc = devm_mmc_alloc_host(&udev->dev, sizeof(*vub300)); 2115 + mmc = mmc_alloc_host(sizeof(*vub300), &udev->dev); 2119 2116 if (!mmc) { 2120 2117 retval = -ENOMEM; 2121 2118 dev_err(&udev->dev, "not enough memory for the mmc_host\n"); ··· 2272 2269 dev_err(&vub300->udev->dev, 2273 2270 "Could not find two sets of bulk-in/out endpoint pairs\n"); 2274 2271 retval = -EINVAL; 2275 - goto error4; 2272 + goto err_free_host; 2276 2273 } 2277 2274 retval = 2278 2275 usb_control_msg(vub300->udev, usb_rcvctrlpipe(vub300->udev, 0), ··· 2281 2278 0x0000, 0x0000, &vub300->hc_info, 2282 2279 sizeof(vub300->hc_info), 1000); 2283 2280 if (retval < 0) 2284 - goto error4; 2281 + goto err_free_host; 2285 2282 retval = 2286 2283 usb_control_msg(vub300->udev, usb_sndctrlpipe(vub300->udev, 0), 2287 2284 SET_ROM_WAIT_STATES, 2288 2285 USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE, 2289 2286 firmware_rom_wait_states, 0x0000, NULL, 0, 1000); 2290 2287 if (retval < 0) 2291 - goto error4; 2288 + goto err_free_host; 2292 2289 dev_info(&vub300->udev->dev, 2293 2290 "operating_mode = %s %s %d MHz %s %d byte USB packets\n", 2294 2291 (mmc->caps & MMC_CAP_SDIO_IRQ) ? 
"IRQs" : "POLL", ··· 2303 2300 0x0000, 0x0000, &vub300->system_port_status, 2304 2301 sizeof(vub300->system_port_status), 1000); 2305 2302 if (retval < 0) { 2306 - goto error4; 2303 + goto err_free_host; 2307 2304 } else if (sizeof(vub300->system_port_status) == retval) { 2308 2305 vub300->card_present = 2309 2306 (0x0001 & vub300->system_port_status.port_flags) ? 1 : 0; ··· 2311 2308 (0x0010 & vub300->system_port_status.port_flags) ? 1 : 0; 2312 2309 } else { 2313 2310 retval = -EINVAL; 2314 - goto error4; 2311 + goto err_free_host; 2315 2312 } 2316 2313 usb_set_intfdata(interface, vub300); 2317 2314 INIT_DELAYED_WORK(&vub300->pollwork, vub300_pollwork_thread); ··· 2341 2338 return 0; 2342 2339 error6: 2343 2340 timer_delete_sync(&vub300->inactivity_timer); 2341 + err_free_host: 2342 + mmc_free_host(mmc); 2344 2343 /* 2345 2344 * and hence also frees vub300 2346 2345 * which is contained at the end of struct mmc ··· 2370 2365 usb_set_intfdata(interface, NULL); 2371 2366 /* prevent more I/O from starting */ 2372 2367 vub300->interface = NULL; 2373 - kref_put(&vub300->kref, vub300_delete); 2374 2368 mmc_remove_host(mmc); 2369 + kref_put(&vub300->kref, vub300_delete); 2375 2370 pr_info("USB vub300 remote SDIO host controller[%d]" 2376 2371 " now disconnected", ifnum); 2377 2372 return;
+2 -2
drivers/net/bonding/bond_sysfs.c
··· 808 808 sysfs_attr_init(&bn->class_attr_bonding_masters.attr); 809 809 810 810 ret = netdev_class_create_file_ns(&bn->class_attr_bonding_masters, 811 - bn->net); 811 + to_ns_common(bn->net)); 812 812 /* Permit multiple loads of the module by ignoring failures to 813 813 * create the bonding_masters sysfs file. Bonding devices 814 814 * created by second or subsequent loads of the module will ··· 835 835 /* Remove /sys/class/net/bonding_masters. */ 836 836 void __net_exit bond_destroy_sysfs(struct bond_net *bn) 837 837 { 838 - netdev_class_remove_file_ns(&bn->class_attr_bonding_masters, bn->net); 838 + netdev_class_remove_file_ns(&bn->class_attr_bonding_masters, to_ns_common(bn->net)); 839 839 } 840 840 841 841 /* Initialize sysfs for each bond. This sets up and registers
+1 -2
drivers/net/ethernet/airoha/airoha_eth.c
··· 697 697 if (q->skb) { 698 698 dev_kfree_skb(q->skb); 699 699 q->skb = NULL; 700 - } else { 701 - page_pool_put_full_page(q->page_pool, page, true); 702 700 } 701 + page_pool_put_full_page(q->page_pool, page, true); 703 702 } 704 703 airoha_qdma_fill_rx_queue(q); 705 704
+1
drivers/net/ethernet/altera/altera_tse_main.c
··· 570 570 DMA_TO_DEVICE); 571 571 if (dma_mapping_error(priv->device, dma_addr)) { 572 572 netdev_err(priv->dev, "%s: DMA mapping error\n", __func__); 573 + dev_kfree_skb_any(skb); 573 574 ret = NETDEV_TX_OK; 574 575 goto out; 575 576 }
+1 -1
drivers/net/ethernet/freescale/Kconfig
··· 28 28 depends on PTP_1588_CLOCK_OPTIONAL 29 29 select CRC32 30 30 select PHYLIB 31 - select FIXED_PHY if M5272 31 + select FIXED_PHY 32 32 select PAGE_POOL 33 33 imply PAGE_POOL_STATS 34 34 imply NET_SELFTESTS
+7 -1
drivers/net/ethernet/intel/e1000/e1000_ethtool.c
··· 496 496 */ 497 497 ret_val = e1000_read_eeprom(hw, first_word, 1, 498 498 &eeprom_buff[0]); 499 + if (ret_val) 500 + goto out; 501 + 499 502 ptr++; 500 503 } 501 - if (((eeprom->offset + eeprom->len) & 1) && (ret_val == 0)) { 504 + if ((eeprom->offset + eeprom->len) & 1) { 502 505 /* need read/modify/write of last changed EEPROM word 503 506 * only the first byte of the word is being modified 504 507 */ 505 508 ret_val = e1000_read_eeprom(hw, last_word, 1, 506 509 &eeprom_buff[last_word - first_word]); 510 + if (ret_val) 511 + goto out; 507 512 } 508 513 509 514 /* Device's eeprom is always little-endian, word addressable */ ··· 527 522 if ((ret_val == 0) && (first_word <= EEPROM_CHECKSUM_REG)) 528 523 e1000_update_eeprom_checksum(hw); 529 524 525 + out: 530 526 kfree(eeprom_buff); 531 527 return ret_val; 532 528 }
+19 -11
drivers/net/ethernet/intel/ice/ice_ptp.c
··· 1296 1296 if (pf->hw.reset_ongoing) 1297 1297 return; 1298 1298 1299 - if (hw->mac_type == ICE_MAC_GENERIC_3K_E825) { 1299 + if (hw->mac_type == ICE_MAC_GENERIC_3K_E825 && 1300 + test_bit(ICE_FLAG_DPLL, pf->flags)) { 1300 1301 int pin, err; 1301 - 1302 - if (!test_bit(ICE_FLAG_DPLL, pf->flags)) 1303 - return; 1304 1302 1305 1303 mutex_lock(&pf->dplls.lock); 1306 1304 for (pin = 0; pin < ICE_SYNCE_CLK_NUM; pin++) { ··· 1312 1314 port_num, 1313 1315 &active, 1314 1316 clk_pin); 1315 - if (WARN_ON_ONCE(err)) { 1316 - mutex_unlock(&pf->dplls.lock); 1317 - return; 1317 + if (err) { 1318 + dev_err_once(ice_pf_to_dev(pf), 1319 + "Failed to read SyncE bypass mux for pin %d, err %d\n", 1320 + pin, err); 1321 + break; 1318 1322 } 1319 1323 1320 1324 err = ice_tspll_cfg_synce_ethdiv_e825c(hw, clk_pin); 1321 - if (active && WARN_ON_ONCE(err)) { 1322 - mutex_unlock(&pf->dplls.lock); 1323 - return; 1325 + if (active && err) { 1326 + dev_err_once(ice_pf_to_dev(pf), 1327 + "Failed to configure SyncE ETH divider for pin %d, err %d\n", 1328 + pin, err); 1329 + break; 1324 1330 } 1325 1331 } 1326 1332 mutex_unlock(&pf->dplls.lock); ··· 3082 3080 struct ice_ptp *ctrl_ptp = ice_get_ctrl_ptp(pf); 3083 3081 struct ice_ptp *ptp = &pf->ptp; 3084 3082 3085 - if (WARN_ON(!ctrl_ptp) || pf->hw.mac_type == ICE_MAC_UNKNOWN) 3083 + if (!ctrl_ptp) { 3084 + dev_info(ice_pf_to_dev(pf), 3085 + "PTP unavailable: no controlling PF\n"); 3086 + return -EOPNOTSUPP; 3087 + } 3088 + 3089 + if (pf->hw.mac_type == ICE_MAC_UNKNOWN) 3086 3090 return -ENODEV; 3087 3091 3088 3092 INIT_LIST_HEAD(&ptp->port.list_node);
+11 -9
drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
··· 287 287 return err; 288 288 } 289 289 290 - /* API for virtchnl "transaction" support ("xn" for short). 291 - * 292 - * We are reusing the completion lock to serialize the accesses to the 293 - * transaction state for simplicity, but it could be its own separate synchro 294 - * as well. For now, this API is only used from within a workqueue context; 295 - * raw_spin_lock() is enough. 296 - */ 290 + /* API for virtchnl "transaction" support ("xn" for short). */ 291 + 297 292 /** 298 293 * idpf_vc_xn_lock - Request exclusive access to vc transaction 299 294 * @xn: struct idpf_vc_xn* to access 300 295 */ 301 296 #define idpf_vc_xn_lock(xn) \ 302 - raw_spin_lock(&(xn)->completed.wait.lock) 297 + spin_lock(&(xn)->lock) 303 298 304 299 /** 305 300 * idpf_vc_xn_unlock - Release exclusive access to vc transaction 306 301 * @xn: struct idpf_vc_xn* to access 307 302 */ 308 303 #define idpf_vc_xn_unlock(xn) \ 309 - raw_spin_unlock(&(xn)->completed.wait.lock) 304 + spin_unlock(&(xn)->lock) 310 305 311 306 /** 312 307 * idpf_vc_xn_release_bufs - Release reference to reply buffer(s) and ··· 333 338 xn->state = IDPF_VC_XN_IDLE; 334 339 xn->idx = i; 335 340 idpf_vc_xn_release_bufs(xn); 341 + spin_lock_init(&xn->lock); 336 342 init_completion(&xn->completed); 337 343 } 338 344 ··· 402 406 struct idpf_vc_xn *xn) 403 407 { 404 408 idpf_vc_xn_release_bufs(xn); 409 + spin_lock_bh(&vcxn_mngr->xn_bm_lock); 405 410 set_bit(xn->idx, vcxn_mngr->free_xn_bm); 411 + spin_unlock_bh(&vcxn_mngr->xn_bm_lock); 406 412 } 407 413 408 414 /** ··· 615 617 err = -ENXIO; 616 618 goto out_unlock; 617 619 case IDPF_VC_XN_ASYNC: 620 + /* Set reply_sz from the actual payload so that async_handler 621 + * can evaluate the response. 622 + */ 623 + xn->reply_sz = ctlq_msg->data_len; 618 624 err = idpf_vc_xn_forward_async(adapter, xn, ctlq_msg); 619 625 idpf_vc_xn_unlock(xn); 620 626 return err;
+3 -2
drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
··· 42 42 * struct idpf_vc_xn - Data structure representing virtchnl transactions 43 43 * @completed: virtchnl event loop uses that to signal when a reply is 44 44 * available, uses kernel completion API 45 - * @state: virtchnl event loop stores the data below, protected by the 46 - * completion's lock. 45 + * @lock: protects the transaction state fields below 46 + * @state: virtchnl event loop stores the data below, protected by @lock 47 47 * @reply_sz: Original size of reply, may be > reply_buf.iov_len; it will be 48 48 * truncated on its way to the receiver thread according to 49 49 * reply_buf.iov_len. ··· 58 58 */ 59 59 struct idpf_vc_xn { 60 60 struct completion completed; 61 + spinlock_t lock; 61 62 enum idpf_vc_xn_state state; 62 63 size_t reply_sz; 63 64 struct kvec reply;
+1 -2
drivers/net/ethernet/intel/igb/igb_main.c
··· 2203 2203 2204 2204 for (i = 0; i < adapter->num_q_vectors; i++) { 2205 2205 if (adapter->q_vector[i]) { 2206 - napi_synchronize(&adapter->q_vector[i]->napi); 2207 - igb_set_queue_napi(adapter, i, NULL); 2208 2206 napi_disable(&adapter->q_vector[i]->napi); 2207 + igb_set_queue_napi(adapter, i, NULL); 2209 2208 } 2210 2209 } 2211 2210
+1 -1
drivers/net/ethernet/intel/ixgbe/devlink/devlink.c
··· 474 474 adapter->flags2 &= ~(IXGBE_FLAG2_API_MISMATCH | 475 475 IXGBE_FLAG2_FW_ROLLBACK); 476 476 477 - return 0; 477 + return ixgbe_refresh_fw_version(adapter); 478 478 } 479 479 480 480 static const struct devlink_ops ixgbe_devlink_ops = {
+1 -1
drivers/net/ethernet/intel/ixgbe/ixgbe.h
··· 973 973 bool ixgbe_wol_supported(struct ixgbe_adapter *adapter, u16 device_id, 974 974 u16 subdevice_id); 975 975 void ixgbe_set_fw_version_e610(struct ixgbe_adapter *adapter); 976 - void ixgbe_refresh_fw_version(struct ixgbe_adapter *adapter); 976 + int ixgbe_refresh_fw_version(struct ixgbe_adapter *adapter); 977 977 #ifdef CONFIG_PCI_IOV 978 978 void ixgbe_full_sync_mac_table(struct ixgbe_adapter *adapter); 979 979 #endif
+7 -6
drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
··· 1155 1155 return ret_val; 1156 1156 } 1157 1157 1158 - void ixgbe_refresh_fw_version(struct ixgbe_adapter *adapter) 1158 + int ixgbe_refresh_fw_version(struct ixgbe_adapter *adapter) 1159 1159 { 1160 1160 struct ixgbe_hw *hw = &adapter->hw; 1161 + int err; 1161 1162 1162 - ixgbe_get_flash_data(hw); 1163 + err = ixgbe_get_flash_data(hw); 1164 + if (err) 1165 + return err; 1166 + 1163 1167 ixgbe_set_fw_version_e610(adapter); 1168 + return 0; 1164 1169 } 1165 1170 1166 1171 static void ixgbe_get_drvinfo(struct net_device *netdev, 1167 1172 struct ethtool_drvinfo *drvinfo) 1168 1173 { 1169 1174 struct ixgbe_adapter *adapter = ixgbe_from_netdev(netdev); 1170 - 1171 - /* need to refresh info for e610 in case fw reloads in runtime */ 1172 - if (adapter->hw.mac.type == ixgbe_mac_e610) 1173 - ixgbe_refresh_fw_version(adapter); 1174 1175 1175 1176 strscpy(drvinfo->driver, ixgbe_driver_name, sizeof(drvinfo->driver)); 1176 1177
+10
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 6289 6289 if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) 6290 6290 msleep(2000); 6291 6291 ixgbe_up(adapter); 6292 + 6293 + /* E610 has no FW event to notify all PFs of an EMPR reset, so 6294 + * refresh the FW version here to pick up any new FW version after 6295 + * a hardware reset (e.g. EMPR triggered by another PF's devlink 6296 + * reload). ixgbe_refresh_fw_version() updates both hw->flash and 6297 + * adapter->eeprom_id so ethtool -i reports the correct string. 6298 + */ 6299 + if (adapter->hw.mac.type == ixgbe_mac_e610) 6300 + (void)ixgbe_refresh_fw_version(adapter); 6301 + 6292 6302 clear_bit(__IXGBE_RESETTING, &adapter->state); 6293 6303 } 6294 6304
+7
drivers/net/ethernet/intel/ixgbevf/vf.c
··· 709 709 return err; 710 710 } 711 711 712 + static int ixgbevf_hv_negotiate_features_vf(struct ixgbe_hw *hw, 713 + u32 *pf_features) 714 + { 715 + return -EOPNOTSUPP; 716 + } 717 + 712 718 /** 713 719 * ixgbevf_set_vfta_vf - Set/Unset VLAN filter table address 714 720 * @hw: pointer to the HW structure ··· 1148 1142 .setup_link = ixgbevf_setup_mac_link_vf, 1149 1143 .check_link = ixgbevf_hv_check_mac_link_vf, 1150 1144 .negotiate_api_version = ixgbevf_hv_negotiate_api_version_vf, 1145 + .negotiate_features = ixgbevf_hv_negotiate_features_vf, 1151 1146 .set_rar = ixgbevf_hv_set_rar_vf, 1152 1147 .update_mc_addr_list = ixgbevf_hv_update_mc_addr_list_vf, 1153 1148 .update_xcast_mode = ixgbevf_hv_update_xcast_mode,
+1
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 2267 2267 { PCI_VDEVICE(MELLANOX, 0x1023) }, /* ConnectX-8 */ 2268 2268 { PCI_VDEVICE(MELLANOX, 0x1025) }, /* ConnectX-9 */ 2269 2269 { PCI_VDEVICE(MELLANOX, 0x1027) }, /* ConnectX-10 */ 2270 + { PCI_VDEVICE(MELLANOX, 0x2101) }, /* ConnectX-10 NVLink-C2C */ 2270 2271 { PCI_VDEVICE(MELLANOX, 0xa2d2) }, /* BlueField integrated ConnectX-5 network controller */ 2271 2272 { PCI_VDEVICE(MELLANOX, 0xa2d3), MLX5_PCI_DEV_IS_VF}, /* BlueField integrated ConnectX-5 network controller VF */ 2272 2273 { PCI_VDEVICE(MELLANOX, 0xa2d6) }, /* BlueField-2 integrated ConnectX-6 Dx network controller */
+24 -4
drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c
··· 91 91 pp_params.dma_dir = DMA_BIDIRECTIONAL; 92 92 93 93 rx->page_pool = page_pool_create(&pp_params); 94 + if (unlikely(IS_ERR(rx->page_pool))) 95 + return PTR_ERR(rx->page_pool); 94 96 95 97 for (int i = 0; i < lan966x->num_phys_ports; ++i) { 96 98 struct lan966x_port *port; ··· 119 117 return PTR_ERR(rx->page_pool); 120 118 121 119 err = fdma_alloc_coherent(lan966x->dev, fdma); 122 - if (err) 120 + if (err) { 121 + page_pool_destroy(rx->page_pool); 123 122 return err; 123 + } 124 124 125 125 fdma_dcbs_init(fdma, FDMA_DCB_INFO_DATAL(fdma->db_size), 126 126 FDMA_DCB_STATUS_INTR); ··· 812 808 813 809 static int lan966x_fdma_reload(struct lan966x *lan966x, int new_mtu) 814 810 { 811 + struct page *(*old_pages)[FDMA_RX_DCB_MAX_DBS]; 815 812 struct page_pool *page_pool; 816 813 struct fdma fdma_rx_old; 817 - int err; 814 + int err, i, j; 815 + 816 + old_pages = kmemdup(lan966x->rx.page, sizeof(lan966x->rx.page), 817 + GFP_KERNEL); 818 + if (!old_pages) 819 + return -ENOMEM; 818 820 819 821 /* Store these for later to free them */ 820 822 memcpy(&fdma_rx_old, &lan966x->rx.fdma, sizeof(struct fdma)); ··· 831 821 lan966x_fdma_stop_netdev(lan966x); 832 822 833 823 lan966x_fdma_rx_disable(&lan966x->rx); 834 - lan966x_fdma_rx_free_pages(&lan966x->rx); 835 824 lan966x->rx.page_order = round_up(new_mtu, PAGE_SIZE) / PAGE_SIZE - 1; 836 825 lan966x->rx.max_mtu = new_mtu; 837 826 err = lan966x_fdma_rx_alloc(&lan966x->rx); 838 827 if (err) 839 828 goto restore; 840 829 lan966x_fdma_rx_start(&lan966x->rx); 830 + 831 + for (i = 0; i < fdma_rx_old.n_dcbs; ++i) 832 + for (j = 0; j < fdma_rx_old.n_dbs; ++j) 833 + page_pool_put_full_page(page_pool, 834 + old_pages[i][j], false); 841 835 842 836 fdma_free_coherent(lan966x->dev, &fdma_rx_old); 843 837 ··· 850 836 lan966x_fdma_wakeup_netdev(lan966x); 851 837 napi_enable(&lan966x->napi); 852 838 853 - return err; 839 + kfree(old_pages); 840 + return 0; 854 841 restore: 855 842 lan966x->rx.page_pool = page_pool; 856 843 
memcpy(&lan966x->rx.fdma, &fdma_rx_old, sizeof(struct fdma)); 857 844 lan966x_fdma_rx_start(&lan966x->rx); 858 845 846 + lan966x_fdma_wakeup_netdev(lan966x); 847 + napi_enable(&lan966x->napi); 848 + 849 + kfree(old_pages); 859 850 return err; 860 851 } 861 852 ··· 974 955 err = lan966x_fdma_tx_alloc(&lan966x->tx); 975 956 if (err) { 976 957 fdma_free_coherent(lan966x->dev, &lan966x->rx.fdma); 958 + page_pool_destroy(lan966x->rx.page_pool); 977 959 return err; 978 960 } 979 961
+1 -1
drivers/net/ethernet/qualcomm/qca_uart.c
··· 100 100 if (!qca->rx_skb) { 101 101 netdev_dbg(netdev, "recv: out of RX resources\n"); 102 102 n_stats->rx_errors++; 103 - return i; 103 + return i + 1; 104 104 } 105 105 } 106 106 }
+6 -5
drivers/net/ethernet/stmicro/stmmac/chain_mode.c
··· 20 20 unsigned int nopaged_len = skb_headlen(skb); 21 21 struct stmmac_priv *priv = tx_q->priv_data; 22 22 unsigned int entry = tx_q->cur_tx; 23 - unsigned int bmax, des2; 23 + unsigned int bmax, buf_len, des2; 24 24 unsigned int i = 1, len; 25 25 struct dma_desc *desc; 26 26 ··· 31 31 else 32 32 bmax = BUF_SIZE_2KiB; 33 33 34 - len = nopaged_len - bmax; 34 + buf_len = min_t(unsigned int, nopaged_len, bmax); 35 + len = nopaged_len - buf_len; 35 36 36 37 des2 = dma_map_single(priv->device, skb->data, 37 - bmax, DMA_TO_DEVICE); 38 + buf_len, DMA_TO_DEVICE); 38 39 desc->des2 = cpu_to_le32(des2); 39 40 if (dma_mapping_error(priv->device, des2)) 40 41 return -1; 41 42 tx_q->tx_skbuff_dma[entry].buf = des2; 42 - tx_q->tx_skbuff_dma[entry].len = bmax; 43 + tx_q->tx_skbuff_dma[entry].len = buf_len; 43 44 /* do not close the descriptor and do not set own bit */ 44 - stmmac_prepare_tx_desc(priv, desc, 1, bmax, csum, STMMAC_CHAIN_MODE, 45 + stmmac_prepare_tx_desc(priv, desc, 1, buf_len, csum, STMMAC_CHAIN_MODE, 45 46 0, false, skb->len); 46 47 47 48 while (len != 0) {
+8
drivers/net/ethernet/stmicro/stmmac/dwmac-motorcomm.c
··· 6 6 */ 7 7 8 8 #include <linux/bits.h> 9 + #include <linux/delay.h> 9 10 #include <linux/dev_printk.h> 10 11 #include <linux/io.h> 11 12 #include <linux/iopoll.h> ··· 334 333 dev_warn(&pdev->dev, "failed to disable L1 state: %d\n", ret); 335 334 336 335 motorcomm_reset(priv); 336 + 337 + /* 338 + * After system reset, the eFuse controller needs time to load 339 + * its internal data. Without this delay, eFuse reads return 340 + * all zeros, causing MAC address detection to fail. 341 + */ 342 + usleep_range(2000, 5000); 337 343 338 344 ret = motorcomm_efuse_read_mac(&pdev->dev, priv, res.mac); 339 345 if (ret == -ENOENT) {
+17 -2
drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c
··· 9 9 #include "stmmac_platform.h" 10 10 11 11 static const char *const mgbe_clks[] = { 12 - "rx-pcs", "tx", "tx-pcs", "mac-divider", "mac", "mgbe", "ptp-ref", "mac" 12 + "rx-pcs", "tx", "tx-pcs", "mac-divider", "mac", "mgbe", "ptp_ref", "mac" 13 13 }; 14 14 15 15 struct tegra_mgbe { ··· 215 215 { 216 216 struct plat_stmmacenet_data *plat; 217 217 struct stmmac_resources res; 218 + bool use_legacy_ptp = false; 218 219 struct tegra_mgbe *mgbe; 219 220 int irq, err, i; 220 221 u32 value; ··· 258 257 if (!mgbe->clks) 259 258 return -ENOMEM; 260 259 261 - for (i = 0; i < ARRAY_SIZE(mgbe_clks); i++) 260 + /* Older device-trees use 'ptp-ref' rather than 'ptp_ref'. 261 + * Fall back when the legacy name is present. 262 + */ 263 + if (of_property_match_string(pdev->dev.of_node, "clock-names", 264 + "ptp-ref") >= 0) 265 + use_legacy_ptp = true; 266 + 267 + for (i = 0; i < ARRAY_SIZE(mgbe_clks); i++) { 262 268 mgbe->clks[i].id = mgbe_clks[i]; 269 + 270 + if (use_legacy_ptp && !strcmp(mgbe_clks[i], "ptp_ref")) { 271 + dev_warn(mgbe->dev, 272 + "Device-tree update needed for PTP clock!\n"); 273 + mgbe->clks[i].id = "ptp-ref"; 274 + } 275 + } 263 276 264 277 err = devm_clk_bulk_get(mgbe->dev, ARRAY_SIZE(mgbe_clks), mgbe->clks); 265 278 if (err < 0)
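The dwmac-tegra change above keeps old device trees working by probing for the legacy `ptp-ref` clock name before requesting the corrected `ptp_ref` spelling. The fallback logic can be modeled in plain C; here a string array stands in for the DT `clock-names` property, and the helper name and array contents are illustrative, not kernel API:

```c
#include <assert.h>
#include <string.h>

/*
 * Pick the PTP reference clock name to request: if the (modeled)
 * "clock-names" list contains the legacy "ptp-ref" spelling, keep it
 * so older device trees still bind; otherwise use the current name.
 */
static const char *ptp_clk_name(const char *const *dt_names, int n)
{
    for (int i = 0; i < n; i++)
        if (!strcmp(dt_names[i], "ptp-ref"))
            return "ptp-ref";   /* legacy DT: fall back */
    return "ptp_ref";           /* current binding */
}
```

The real driver additionally warns so that the device tree eventually gets updated, while still probing successfully in the meantime.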
+4 -4
drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
··· 424 424 char i2c_name[32]; 425 425 char sfp_name[32]; 426 426 char phylink_name[32]; 427 - struct property_entry gpio_props[1]; 428 - struct property_entry i2c_props[3]; 429 - struct property_entry sfp_props[8]; 430 - struct property_entry phylink_props[2]; 427 + struct property_entry gpio_props[2]; 428 + struct property_entry i2c_props[4]; 429 + struct property_entry sfp_props[9]; 430 + struct property_entry phylink_props[3]; 431 431 struct software_node_ref_args i2c_ref[1]; 432 432 struct software_node_ref_args gpio0_ref[1]; 433 433 struct software_node_ref_args gpio1_ref[1];
+5 -4
drivers/net/ipa/reg/gsi_reg-v5.0.c
··· 30 30 31 31 static const u32 reg_ch_c_cntxt_1_fmask[] = { 32 32 [CH_R_LENGTH] = GENMASK(23, 0), 33 - [ERINDEX] = GENMASK(31, 24), 33 + [CH_ERINDEX] = GENMASK(31, 24), 34 34 }; 35 35 36 36 REG_STRIDE_FIELDS(CH_C_CNTXT_1, ch_c_cntxt_1, ··· 156 156 157 157 static const u32 reg_generic_cmd_fmask[] = { 158 158 [GENERIC_OPCODE] = GENMASK(4, 0), 159 - [GENERIC_CHID] = GENMASK(9, 5), 160 - [GENERIC_EE] = GENMASK(13, 10), 161 - /* Bits 14-31 reserved */ 159 + [GENERIC_CHID] = GENMASK(12, 5), 160 + [GENERIC_EE] = GENMASK(16, 13), 161 + /* Bits 17-23 reserved */ 162 + [GENERIC_PARAMS] = GENMASK(31, 24), 162 163 }; 163 164 164 165 REG_FIELDS(GENERIC_CMD, generic_cmd, 0x00025018 + 0x12000 * GSI_EE_AP);
+3 -2
drivers/net/ipvlan/ipvtap.c
··· 30 30 static dev_t ipvtap_major; 31 31 static struct cdev ipvtap_cdev; 32 32 33 - static const void *ipvtap_net_namespace(const struct device *d) 33 + static const struct ns_common *ipvtap_net_namespace(const struct device *d) 34 34 { 35 35 const struct net_device *dev = to_net_dev(d->parent); 36 - return dev_net(dev); 36 + 37 + return to_ns_common(dev_net(dev)); 37 38 } 38 39 39 40 static struct class ipvtap_class = {
+3 -2
drivers/net/macvtap.c
··· 35 35 */ 36 36 static dev_t macvtap_major; 37 37 38 - static const void *macvtap_net_namespace(const struct device *d) 38 + static const struct ns_common *macvtap_net_namespace(const struct device *d) 39 39 { 40 40 const struct net_device *dev = to_net_dev(d->parent); 41 - return dev_net(dev); 41 + 42 + return to_ns_common(dev_net(dev)); 42 43 } 43 44 44 45 static struct class macvtap_class = {
+1 -2
drivers/net/mdio/mdio-realtek-rtl9300.c
··· 466 466 { 467 467 struct device *dev = &pdev->dev; 468 468 struct rtl9300_mdio_priv *priv; 469 - struct fwnode_handle *child; 470 469 int err; 471 470 472 471 priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); ··· 486 487 if (err) 487 488 return err; 488 489 489 - device_for_each_child_node(dev, child) { 490 + device_for_each_child_node_scoped(dev, child) { 490 491 err = rtl9300_mdiobus_probe_one(dev, priv, child); 491 492 if (err) 492 493 return err;
+16
drivers/net/phy/sfp.c
··· 543 543 SFP_QUIRK("HUAWEI", "MA5671A", sfp_quirk_2500basex, 544 544 sfp_fixup_ignore_tx_fault_and_los), 545 545 546 + // Hisense LXT-010S-H is a GPON ONT SFP (sold as LEOX LXT-010S-H) that 547 + // can operate at 2500base-X, but reports 1000BASE-LX / 1300MBd in its 548 + // EEPROM 549 + SFP_QUIRK("Hisense-Leox", "LXT-010S-H", sfp_quirk_2500basex, 550 + sfp_fixup_ignore_tx_fault), 551 + 552 + // Hisense ZNID-GPON-2311NA can operate at 2500base-X, but reports 553 + // 1000BASE-LX / 1300MBd in its EEPROM 554 + SFP_QUIRK("Hisense", "ZNID-GPON-2311NA", sfp_quirk_2500basex, 555 + sfp_fixup_ignore_tx_fault), 556 + 557 + // HSGQ HSGQ-XPON-Stick can operate at 2500base-X, but reports 558 + // 1000BASE-LX / 1300MBd in its EEPROM 559 + SFP_QUIRK("HSGQ", "HSGQ-XPON-Stick", sfp_quirk_2500basex, 560 + sfp_fixup_ignore_tx_fault), 561 + 546 562 // Lantech 8330-262D-E and 8330-265D can operate at 2500base-X, but 547 563 // incorrectly report 2500MBd NRZ in their EEPROM. 548 564 // Some 8330-265D modules have inverted LOS, while all of them report
+8 -5
drivers/net/wan/lapbether.c
··· 446 446 static int lapbeth_device_event(struct notifier_block *this, 447 447 unsigned long event, void *ptr) 448 448 { 449 - struct lapbethdev *lapbeth; 450 449 struct net_device *dev = netdev_notifier_info_to_dev(ptr); 450 + struct lapbethdev *lapbeth; 451 451 452 452 if (dev_net(dev) != &init_net) 453 453 return NOTIFY_DONE; 454 454 455 - if (!dev_is_ethdev(dev) && !lapbeth_get_x25_dev(dev)) 455 + lapbeth = lapbeth_get_x25_dev(dev); 456 + if (!dev_is_ethdev(dev) && !lapbeth) 456 457 return NOTIFY_DONE; 457 458 458 459 switch (event) { 459 460 case NETDEV_UP: 460 461 /* New ethernet device -> new LAPB interface */ 461 - if (!lapbeth_get_x25_dev(dev)) 462 + if (!lapbeth) 462 463 lapbeth_new_device(dev); 463 464 break; 464 465 case NETDEV_GOING_DOWN: 465 466 /* ethernet device closes -> close LAPB interface */ 466 - lapbeth = lapbeth_get_x25_dev(dev); 467 467 if (lapbeth) 468 468 dev_close(lapbeth->axdev); 469 469 break; 470 470 case NETDEV_UNREGISTER: 471 471 /* ethernet device disappears -> remove LAPB interface */ 472 - lapbeth = lapbeth_get_x25_dev(dev); 473 472 if (lapbeth) 474 473 lapbeth_free_device(lapbeth); 475 474 break; 475 + case NETDEV_PRE_TYPE_CHANGE: 476 + /* Our underlying device type must not change. */ 477 + if (lapbeth) 478 + return NOTIFY_BAD; 476 479 } 477 480 478 481 return NOTIFY_DONE;
+5
drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c
··· 153 153 bphy_err(drvr, "invalid interface index: %u\n", ifevent->ifidx); 154 154 return; 155 155 } 156 + if (ifevent->bsscfgidx >= BRCMF_MAX_IFS) { 157 + bphy_err(drvr, "invalid bsscfg index: %u\n", 158 + ifevent->bsscfgidx); 159 + return; 160 + } 156 161 157 162 ifp = drvr->iflist[ifevent->bsscfgidx]; 158 163
+1 -1
drivers/net/wireless/broadcom/brcm80211/brcmsmac/dma.c
··· 483 483 if (((desc_strtaddr + size - 1) & boundary) != (desc_strtaddr 484 484 & boundary)) { 485 485 *alignbits = dma_align_sizetobits(size); 486 - dma_free_coherent(di->dmadev, size, va, *descpa); 486 + dma_free_coherent(di->dmadev, *alloced, va, *descpa); 487 487 va = dma_alloc_consistent(di, size, *alignbits, 488 488 alloced, descpa); 489 489 }
+1 -1
drivers/net/wireless/ralink/rt2x00/rt2x00usb.c
··· 828 828 if (retval) 829 829 goto exit_free_device; 830 830 831 - rt2x00dev->anchor = devm_kmalloc(&usb_dev->dev, 831 + rt2x00dev->anchor = devm_kmalloc(&usb_intf->dev, 832 832 sizeof(struct usb_anchor), 833 833 GFP_KERNEL); 834 834 if (!rt2x00dev->anchor) {
+8 -3
drivers/nfc/pn533/uart.c
··· 211 211 212 212 timer_delete(&dev->cmd_timeout); 213 213 for (i = 0; i < count; i++) { 214 + if (!dev->recv_skb) { 215 + dev->recv_skb = alloc_skb(PN532_UART_SKB_BUFF_LEN, 216 + GFP_KERNEL); 217 + if (!dev->recv_skb) 218 + return i; 219 + } 220 + 214 221 if (unlikely(!skb_tailroom(dev->recv_skb))) 215 222 skb_trim(dev->recv_skb, 0); 216 223 ··· 226 219 continue; 227 220 228 221 pn533_recv_frame(dev->priv, dev->recv_skb, 0); 229 - dev->recv_skb = alloc_skb(PN532_UART_SKB_BUFF_LEN, GFP_KERNEL); 230 - if (!dev->recv_skb) 231 - return 0; 222 + dev->recv_skb = NULL; 232 223 } 233 224 234 225 return i;
+7 -3
drivers/nfc/s3fwrn5/uart.c
··· 58 58 size_t i; 59 59 60 60 for (i = 0; i < count; i++) { 61 + if (!phy->recv_skb) { 62 + phy->recv_skb = alloc_skb(NCI_SKB_BUFF_LEN, GFP_KERNEL); 63 + if (!phy->recv_skb) 64 + return i; 65 + } 66 + 61 67 skb_put_u8(phy->recv_skb, *data++); 62 68 63 69 if (phy->recv_skb->len < S3FWRN82_NCI_HEADER) ··· 75 69 76 70 s3fwrn5_recv_frame(phy->common.ndev, phy->recv_skb, 77 71 phy->common.mode); 78 - phy->recv_skb = alloc_skb(NCI_SKB_BUFF_LEN, GFP_KERNEL); 79 - if (!phy->recv_skb) 80 - return 0; 72 + phy->recv_skb = NULL; 81 73 } 82 74 83 75 return i;
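Both NFC UART fixes above (pn533 and s3fwrn5) switch to the same pattern: allocate the receive skb lazily at the top of the consume loop, and on allocation failure return the number of bytes already consumed so the TTY layer can resubmit the remainder instead of losing it. A userspace sketch of that contract, with `struct rx_ctx`, `consume_bytes`, and the "frame complete" condition made up for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical receive context modeling the drivers' phy/dev state. */
struct rx_ctx {
    char *buf;      /* lazily allocated frame buffer */
    size_t len;     /* bytes accumulated so far */
    size_t cap;     /* frame size, stand-in for the skb length */
};

/*
 * Consume up to 'count' bytes, allocating the frame buffer on demand.
 * Returns how many bytes were actually consumed, so a caller can
 * retry the remainder once memory is available again.
 */
static size_t consume_bytes(struct rx_ctx *ctx, const char *data, size_t count)
{
    size_t i;

    for (i = 0; i < count; i++) {
        if (!ctx->buf) {
            ctx->buf = malloc(ctx->cap);
            if (!ctx->buf)
                return i;       /* report partial progress, not 0 */
            ctx->len = 0;
        }
        ctx->buf[ctx->len++] = data[i];
        if (ctx->len == ctx->cap) {     /* "frame complete" */
            free(ctx->buf);             /* stand-in for recv_frame() */
            ctx->buf = NULL;            /* next byte reallocates */
        }
    }
    return i;
}
```

The key difference from the buggy versions is that allocation failure no longer returns 0 after bytes were already written into a frame, and no stale buffer pointer is reused across calls.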
+9 -3
drivers/pci/controller/pci-hyperv.c
··· 2485 2485 if (!hv_dev) 2486 2486 continue; 2487 2487 2488 + /* 2489 + * If the Hyper-V host doesn't provide a NUMA node for the 2490 + * device, default to node 0. With NUMA_NO_NODE the kernel 2491 + * may spread work across NUMA nodes, which degrades 2492 + * performance on Hyper-V. 2493 + */ 2494 + set_dev_node(&dev->dev, 0); 2495 + 2488 2496 if (hv_dev->desc.flags & HV_PCI_DEVICE_FLAG_NUMA_AFFINITY && 2489 2497 hv_dev->desc.virtual_numa_node < num_possible_nodes()) 2490 2498 /* ··· 3786 3778 hbus->bridge->domain_nr); 3787 3779 if (!hbus->wq) { 3788 3780 ret = -ENOMEM; 3789 - goto free_dom; 3781 + goto free_bus; 3790 3782 } 3791 3783 3792 3784 hdev->channel->next_request_id_callback = vmbus_next_request_id; ··· 3882 3874 vmbus_close(hdev->channel); 3883 3875 destroy_wq: 3884 3876 destroy_workqueue(hbus->wq); 3885 - free_dom: 3886 - pci_bus_release_emul_domain_nr(hbus->bridge->domain_nr); 3887 3877 free_bus: 3888 3878 kfree(hbus); 3889 3879 return ret;
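The pci-hyperv hunk above defaults every device to NUMA node 0 and only overrides that when the host supplies a valid virtual node. The selection logic reduces to a small pure function; the name `pick_numa_node` and its parameters are illustrative, not the driver's API:

```c
#include <assert.h>

/*
 * Default to node 0; honor the host-provided virtual NUMA node only
 * when the affinity flag is set and the index is in range. Leaving
 * the device at NUMA_NO_NODE would let the kernel spread work across
 * nodes, which the commit notes degrades performance on Hyper-V.
 */
static int pick_numa_node(int has_affinity, unsigned int vnode,
                          unsigned int nr_nodes)
{
    if (has_affinity && vnode < nr_nodes)
        return (int)vnode;
    return 0;
}
```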
+26 -10
drivers/pinctrl/intel/pinctrl-intel.c
··· 53 53 #define PADOWN_MASK(p) (GENMASK(3, 0) << PADOWN_SHIFT(p)) 54 54 #define PADOWN_GPP(p) ((p) / 8) 55 55 56 - #define PWMC 0x204 57 - 58 56 /* Offset from pad_regs */ 59 57 #define PADCFG0 0x000 60 58 #define PADCFG0_RXEVCFG_MASK GENMASK(26, 25) ··· 203 205 community = intel_get_community(pctrl, pin); 204 206 if (!community) 205 207 return false; 206 - if (!community->padown_offset) 208 + 209 + /* If padown_offset is not provided, assume host ownership */ 210 + padown = community->regs + community->padown_offset; 211 + if (padown == community->regs) 207 212 return true; 213 + 214 + /* New HW generations have extended PAD_OWN registers */ 215 + if (community->features & PINCTRL_FEATURE_3BIT_PAD_OWN) 216 + return !(readl(padown + pin_to_padno(community, pin) * 4) & 7); 208 217 209 218 padgrp = intel_community_get_padgroup(community, pin); 210 219 if (!padgrp) ··· 219 214 220 215 gpp_offset = padgroup_offset(padgrp, pin); 221 216 gpp = PADOWN_GPP(gpp_offset); 222 - offset = community->padown_offset + padgrp->padown_num * 4 + gpp * 4; 223 - padown = community->regs + offset; 217 + offset = padgrp->padown_num * 4 + gpp * 4; 224 218 225 - return !(readl(padown) & PADOWN_MASK(gpp_offset)); 219 + return !(readl(padown + offset) & PADOWN_MASK(gpp_offset)); 226 220 } 227 221 228 222 static bool intel_pad_acpi_mode(const struct intel_pinctrl *pctrl, unsigned int pin) ··· 1553 1549 } 1554 1550 1555 1551 static int intel_pinctrl_probe_pwm(struct intel_pinctrl *pctrl, 1556 - struct intel_community *community) 1552 + struct intel_community *community, 1553 + unsigned short capability_offset) 1557 1554 { 1555 + void __iomem *base = community->regs + capability_offset + 4; 1558 1556 static const struct pwm_lpss_boardinfo info = { 1559 1557 .clk_rate = 19200000, 1560 1558 .npwm = 1, ··· 1570 1564 if (!IS_REACHABLE(CONFIG_PWM_LPSS)) 1571 1565 return 0; 1572 1566 1573 - chip = devm_pwm_lpss_probe(pctrl->dev, community->regs + PWMC, &info); 1567 + chip = 
devm_pwm_lpss_probe(pctrl->dev, base, &info); 1574 1568 return PTR_ERR_OR_ZERO(chip); 1575 1569 } 1576 1570 ··· 1601 1595 1602 1596 for (i = 0; i < pctrl->ncommunities; i++) { 1603 1597 struct intel_community *community = &pctrl->communities[i]; 1598 + unsigned short capability_offset[6]; 1604 1599 void __iomem *regs; 1600 + u32 revision; 1605 1601 u32 offset; 1606 1602 u32 value; 1607 1603 ··· 1618 1610 value = readl(regs + REVID); 1619 1611 if (value == ~0u) 1620 1612 return -ENODEV; 1621 - if (((value & REVID_MASK) >> REVID_SHIFT) >= 0x94) { 1613 + 1614 + revision = (value & REVID_MASK) >> REVID_SHIFT; 1615 + if (revision >= 0x092) { 1622 1616 community->features |= PINCTRL_FEATURE_DEBOUNCE; 1623 1617 community->features |= PINCTRL_FEATURE_1K_PD; 1624 1618 } 1619 + if (revision >= 0x110) 1620 + community->features |= PINCTRL_FEATURE_3BIT_PAD_OWN; 1625 1621 1626 1622 /* Determine community features based on the capabilities */ 1627 1623 offset = CAPLIST; ··· 1634 1622 switch ((value & CAPLIST_ID_MASK) >> CAPLIST_ID_SHIFT) { 1635 1623 case CAPLIST_ID_GPIO_HW_INFO: 1636 1624 community->features |= PINCTRL_FEATURE_GPIO_HW_INFO; 1625 + capability_offset[CAPLIST_ID_GPIO_HW_INFO] = offset; 1637 1626 break; 1638 1627 case CAPLIST_ID_PWM: 1639 1628 community->features |= PINCTRL_FEATURE_PWM; 1629 + capability_offset[CAPLIST_ID_PWM] = offset; 1640 1630 break; 1641 1631 case CAPLIST_ID_BLINK: 1642 1632 community->features |= PINCTRL_FEATURE_BLINK; 1633 + capability_offset[CAPLIST_ID_BLINK] = offset; 1643 1634 break; 1644 1635 case CAPLIST_ID_EXP: 1645 1636 community->features |= PINCTRL_FEATURE_EXP; 1637 + capability_offset[CAPLIST_ID_EXP] = offset; 1646 1638 break; 1647 1639 default: 1648 1640 break; ··· 1669 1653 if (ret) 1670 1654 return ret; 1671 1655 1672 - ret = intel_pinctrl_probe_pwm(pctrl, community); 1656 + ret = intel_pinctrl_probe_pwm(pctrl, community, capability_offset[CAPLIST_ID_PWM]); 1673 1657 if (ret) 1674 1658 return ret; 1675 1659 }
+1
drivers/pinctrl/intel/pinctrl-intel.h
··· 150 150 #define PINCTRL_FEATURE_PWM BIT(3) 151 151 #define PINCTRL_FEATURE_BLINK BIT(4) 152 152 #define PINCTRL_FEATURE_EXP BIT(5) 153 + #define PINCTRL_FEATURE_3BIT_PAD_OWN BIT(6) 153 154 154 155 #define __INTEL_COMMUNITY(b, s, e, g, n, gs, gn, soc) \ 155 156 { \
+9
drivers/pinctrl/pinctrl-mcp23s08.c
··· 664 664 if (mcp->irq && mcp->irq_controller) { 665 665 struct gpio_irq_chip *girq = &mcp->chip.irq; 666 666 667 + /* 668 + * Disable all pin interrupts, to prevent the interrupt handler from 669 + * calling nested handlers for any currently-enabled interrupts that 670 + * do not (yet) have an actual handler. 671 + */ 672 + ret = mcp_write(mcp, MCP_GPINTEN, 0); 673 + if (ret < 0) 674 + return dev_err_probe(dev, ret, "can't disable interrupts\n"); 675 + 667 676 gpio_irq_chip_set_chip(girq, &mcp23s08_irq_chip); 668 677 /* This will let us handle the parent IRQ in the driver */ 669 678 girq->parent_handler = NULL;
+9
drivers/platform/x86/amd/pmc/pmc-quirks.c
··· 203 203 DMI_MATCH(DMI_PRODUCT_NAME, "82XQ"), 204 204 } 205 205 }, 206 + /* https://bugzilla.kernel.org/show_bug.cgi?id=221273 */ 207 + { 208 + .ident = "Thinkpad L14 Gen3", 209 + .driver_data = &quirk_s2idle_bug, 210 + .matches = { 211 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 212 + DMI_MATCH(DMI_PRODUCT_NAME, "21C6"), 213 + } 214 + }, 206 215 /* https://gitlab.freedesktop.org/drm/amd/-/issues/4434 */ 207 216 { 208 217 .ident = "Lenovo Yoga 6 13ALC6",
+86
drivers/platform/x86/asus-armoury.h
··· 594 594 }, 595 595 { 596 596 .matches = { 597 + DMI_MATCH(DMI_BOARD_NAME, "FA607NU"), 598 + }, 599 + .driver_data = &(struct power_data) { 600 + .ac_data = &(struct power_limits) { 601 + .ppt_pl1_spl_min = 15, 602 + .ppt_pl1_spl_max = 80, 603 + .ppt_pl2_sppt_min = 35, 604 + .ppt_pl2_sppt_max = 80, 605 + .ppt_pl3_fppt_min = 35, 606 + .ppt_pl3_fppt_max = 80, 607 + .nv_dynamic_boost_min = 5, 608 + .nv_dynamic_boost_max = 25, 609 + .nv_temp_target_min = 75, 610 + .nv_temp_target_max = 87, 611 + }, 612 + .dc_data = &(struct power_limits) { 613 + .ppt_pl1_spl_min = 25, 614 + .ppt_pl1_spl_def = 45, 615 + .ppt_pl1_spl_max = 65, 616 + .ppt_pl2_sppt_min = 25, 617 + .ppt_pl2_sppt_def = 54, 618 + .ppt_pl2_sppt_max = 65, 619 + .ppt_pl3_fppt_min = 25, 620 + .ppt_pl3_fppt_max = 65, 621 + .nv_temp_target_min = 75, 622 + .nv_temp_target_max = 87, 623 + }, 624 + }, 625 + }, 626 + { 627 + .matches = { 597 628 DMI_MATCH(DMI_BOARD_NAME, "FA607P"), 598 629 }, 599 630 .driver_data = &(struct power_data) { ··· 1342 1311 }, 1343 1312 { 1344 1313 .matches = { 1314 + DMI_MATCH(DMI_BOARD_NAME, "GU605MU"), 1315 + }, 1316 + .driver_data = &(struct power_data) { 1317 + .ac_data = &(struct power_limits) { 1318 + .ppt_pl1_spl_min = 28, 1319 + .ppt_pl1_spl_max = 90, 1320 + .ppt_pl2_sppt_min = 28, 1321 + .ppt_pl2_sppt_max = 135, 1322 + .nv_dynamic_boost_min = 5, 1323 + .nv_dynamic_boost_max = 20, 1324 + .nv_temp_target_min = 75, 1325 + .nv_temp_target_max = 87, 1326 + .nv_tgp_min = 55, 1327 + .nv_tgp_max = 85, 1328 + }, 1329 + .dc_data = &(struct power_limits) { 1330 + .ppt_pl1_spl_min = 25, 1331 + .ppt_pl1_spl_max = 35, 1332 + .ppt_pl2_sppt_min = 38, 1333 + .ppt_pl2_sppt_max = 53, 1334 + .nv_temp_target_min = 75, 1335 + .nv_temp_target_max = 87, 1336 + }, 1337 + .requires_fan_curve = true, 1338 + }, 1339 + }, 1340 + { 1341 + .matches = { 1345 1342 DMI_MATCH(DMI_BOARD_NAME, "GU605M"), 1346 1343 }, 1347 1344 .driver_data = &(struct power_data) { ··· 1418 1359 .ppt_pl1_spl_max = 45, 1419 1360 
.ppt_pl2_sppt_min = 25, 1420 1361 .ppt_pl2_sppt_max = 54, 1362 + .ppt_pl3_fppt_min = 35, 1363 + .ppt_pl3_fppt_max = 65, 1364 + .nv_temp_target_min = 75, 1365 + .nv_temp_target_max = 87, 1366 + }, 1367 + .dc_data = &(struct power_limits) { 1368 + .ppt_pl1_spl_min = 15, 1369 + .ppt_pl1_spl_max = 35, 1370 + .ppt_pl2_sppt_min = 25, 1371 + .ppt_pl2_sppt_max = 35, 1372 + .ppt_pl3_fppt_min = 35, 1373 + .ppt_pl3_fppt_max = 65, 1374 + .nv_temp_target_min = 75, 1375 + .nv_temp_target_max = 87, 1376 + }, 1377 + }, 1378 + }, 1379 + { 1380 + .matches = { 1381 + DMI_MATCH(DMI_BOARD_NAME, "GV302XU"), 1382 + }, 1383 + .driver_data = &(struct power_data) { 1384 + .ac_data = &(struct power_limits) { 1385 + .ppt_pl1_spl_min = 15, 1386 + .ppt_pl1_spl_max = 55, 1387 + .ppt_pl2_sppt_min = 25, 1388 + .ppt_pl2_sppt_max = 60, 1421 1389 .ppt_pl3_fppt_min = 35, 1422 1390 .ppt_pl3_fppt_max = 65, 1423 1391 .nv_temp_target_min = 75,
+3 -1
drivers/platform/x86/intel/speed_select_if/isst_tpmi_core.c
··· 36 36 37 37 /* Supported SST hardware version by this driver */ 38 38 #define ISST_MAJOR_VERSION 0 39 - #define ISST_MINOR_VERSION 2 39 + #define ISST_MINOR_VERSION 3 40 40 41 41 /* 42 42 * Used to indicate if value read from MMIO needs to get multiplied ··· 1460 1460 j * SST_TF_RATIO_0_WIDTH, SST_TF_RATIO_0_WIDTH, 1461 1461 SST_MUL_FACTOR_FREQ) 1462 1462 } 1463 + 1464 + memset(turbo_freq.bucket_core_counts, 0, sizeof(turbo_freq.bucket_core_counts)); 1463 1465 1464 1466 if (feature_rev >= 2) { 1465 1467 bool has_tf_info_8 = false;
+8 -2
drivers/platform/x86/intel/uncore-frequency/uncore-frequency-tpmi.c
··· 31 31 #include "uncore-frequency-common.h" 32 32 33 33 #define UNCORE_MAJOR_VERSION 0 34 - #define UNCORE_MINOR_VERSION 2 34 + #define UNCORE_MINOR_VERSION 3 35 35 #define UNCORE_ELC_SUPPORTED_VERSION 2 36 36 #define UNCORE_HEADER_INDEX 0 37 37 #define UNCORE_FABRIC_CLUSTER_OFFSET 8 ··· 537 537 #define UNCORE_VERSION_MASK GENMASK_ULL(7, 0) 538 538 #define UNCORE_LOCAL_FABRIC_CLUSTER_ID_MASK GENMASK_ULL(15, 8) 539 539 #define UNCORE_CLUSTER_OFF_MASK GENMASK_ULL(7, 0) 540 + #define UNCORE_AUTONOMOUS_UFS_DISABLED BIT(32) 540 541 #define UNCORE_MAX_CLUSTER_PER_DOMAIN 8 541 542 542 543 static int uncore_probe(struct auxiliary_device *auxdev, const struct auxiliary_device_id *id) ··· 599 598 600 599 for (i = 0; i < num_resources; ++i) { 601 600 struct tpmi_uncore_power_domain_info *pd_info; 601 + bool auto_ufs_enabled; 602 602 struct resource *res; 603 603 u64 cluster_offset; 604 604 u8 cluster_mask; ··· 649 647 continue; 650 648 } 651 649 650 + auto_ufs_enabled = !(header & UNCORE_AUTONOMOUS_UFS_DISABLED); 651 + 652 652 /* Find out number of clusters in this resource */ 653 653 pd_info->cluster_count = hweight8(cluster_mask); 654 654 ··· 693 689 694 690 cluster_info->uncore_root = tpmi_uncore; 695 691 696 - if (TPMI_MINOR_VERSION(pd_info->ufs_header_ver) >= UNCORE_ELC_SUPPORTED_VERSION) 692 + if ((TPMI_MINOR_VERSION(pd_info->ufs_header_ver) >= 693 + UNCORE_ELC_SUPPORTED_VERSION) && 694 + auto_ufs_enabled) 697 695 cluster_info->elc_supported = true; 698 696 699 697 ret = uncore_freq_add_entry(&cluster_info->uncore_data, 0);
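The uncore-frequency change above gates ELC support on two conditions: the UFS header minor version, and a new flag bit (bit 32 of the 64-bit header) indicating whether autonomous UFS is disabled. A minimal sketch of that predicate, reusing the mask name from the patch (the helper `elc_usable` itself is made up):

```c
#include <assert.h>
#include <stdint.h>

#define UNCORE_AUTONOMOUS_UFS_DISABLED (1ULL << 32)

/*
 * ELC is only exposed when the interface version is new enough AND
 * autonomous UFS is enabled, i.e. the "disabled" bit in the 64-bit
 * header register is clear.
 */
static int elc_usable(uint64_t header, int minor_version)
{
    int auto_ufs_enabled = !(header & UNCORE_AUTONOMOUS_UFS_DISABLED);

    return minor_version >= 2 /* UNCORE_ELC_SUPPORTED_VERSION */ &&
           auto_ufs_enabled;
}
```

Note the flag sits above bit 31, so it must be tested against the full 64-bit header value (hence `1ULL`), not a truncated 32-bit read.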
+1 -7
drivers/pmdomain/imx/imx8mp-blk-ctrl.c
··· 352 352 regmap_set_bits(bc->regmap, HDMI_RTX_RESET_CTL0, BIT(12)); 353 353 regmap_clear_bits(bc->regmap, HDMI_TX_CONTROL0, BIT(3)); 354 354 break; 355 - case IMX8MP_HDMIBLK_PD_HDCP: 356 - regmap_set_bits(bc->regmap, HDMI_RTX_CLK_CTL0, BIT(11)); 357 - break; 358 355 case IMX8MP_HDMIBLK_PD_HRV: 359 356 regmap_set_bits(bc->regmap, HDMI_RTX_CLK_CTL1, BIT(3) | BIT(4) | BIT(5)); 360 357 regmap_set_bits(bc->regmap, HDMI_RTX_RESET_CTL0, BIT(15)); ··· 405 408 regmap_clear_bits(bc->regmap, HDMI_RTX_CLK_CTL0, BIT(7)); 406 409 regmap_clear_bits(bc->regmap, HDMI_RTX_CLK_CTL1, BIT(22) | BIT(24)); 407 410 break; 408 - case IMX8MP_HDMIBLK_PD_HDCP: 409 - regmap_clear_bits(bc->regmap, HDMI_RTX_CLK_CTL0, BIT(11)); 410 - break; 411 411 case IMX8MP_HDMIBLK_PD_HRV: 412 412 regmap_clear_bits(bc->regmap, HDMI_RTX_RESET_CTL0, BIT(15)); 413 413 regmap_clear_bits(bc->regmap, HDMI_RTX_CLK_CTL1, BIT(3) | BIT(4) | BIT(5)); ··· 433 439 regmap_write(bc->regmap, HDMI_RTX_CLK_CTL0, 0x0); 434 440 regmap_write(bc->regmap, HDMI_RTX_CLK_CTL1, 0x0); 435 441 regmap_set_bits(bc->regmap, HDMI_RTX_CLK_CTL0, 436 - BIT(0) | BIT(1) | BIT(10)); 442 + BIT(0) | BIT(1) | BIT(10) | BIT(11)); 437 443 regmap_set_bits(bc->regmap, HDMI_RTX_RESET_CTL0, BIT(0)); 438 444 439 445 /*
+1 -1
drivers/regulator/bd71828-regulator.c
··· 785 785 }, 786 786 }; 787 787 788 - #define BD72720_BUCK10_DESC_INDEX 10 788 + #define BD72720_BUCK10_DESC_INDEX 9 789 789 #define BD72720_NUM_BUCK_VOLTS 0x100 790 790 #define BD72720_NUM_LDO_VOLTS 0x100 791 791 #define BD72720_NUM_LDO12346_VOLTS 0x80
-1
drivers/reset/core.c
··· 856 856 ret = __auxiliary_device_add(adev, "reset"); 857 857 if (ret) { 858 858 auxiliary_device_uninit(adev); 859 - kfree(adev); 860 859 return ret; 861 860 } 862 861
+1 -1
drivers/reset/reset-rzg2l-usbphy-ctrl.c
··· 350 350 351 351 MODULE_LICENSE("GPL v2"); 352 352 MODULE_DESCRIPTION("Renesas RZ/G2L USBPHY Control"); 353 - MODULE_AUTHOR("biju.das.jz@bp.renesas.com>"); 353 + MODULE_AUTHOR("Biju Das <biju.das.jz@bp.renesas.com>");
+36 -24
drivers/reset/spacemit/reset-spacemit-k3.c
··· 112 112 [RESET_APMU_SDH0] = RESET_DATA(APMU_SDH0_CLK_RES_CTRL, 0, BIT(1)), 113 113 [RESET_APMU_SDH1] = RESET_DATA(APMU_SDH1_CLK_RES_CTRL, 0, BIT(1)), 114 114 [RESET_APMU_SDH2] = RESET_DATA(APMU_SDH2_CLK_RES_CTRL, 0, BIT(1)), 115 - [RESET_APMU_USB2] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, 116 - BIT(1)|BIT(2)|BIT(3)), 117 - [RESET_APMU_USB3_PORTA] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, 118 - BIT(5)|BIT(6)|BIT(7)), 119 - [RESET_APMU_USB3_PORTB] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, 120 - BIT(9)|BIT(10)|BIT(11)), 121 - [RESET_APMU_USB3_PORTC] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, 122 - BIT(13)|BIT(14)|BIT(15)), 123 - [RESET_APMU_USB3_PORTD] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, 124 - BIT(17)|BIT(18)|BIT(19)), 115 + [RESET_APMU_USB2_AHB] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, BIT(1)), 116 + [RESET_APMU_USB2_VCC] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, BIT(2)), 117 + [RESET_APMU_USB2_PHY] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, BIT(3)), 118 + [RESET_APMU_USB3_A_AHB] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, BIT(5)), 119 + [RESET_APMU_USB3_A_VCC] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, BIT(6)), 120 + [RESET_APMU_USB3_A_PHY] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, BIT(7)), 121 + [RESET_APMU_USB3_B_AHB] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, BIT(9)), 122 + [RESET_APMU_USB3_B_VCC] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, BIT(10)), 123 + [RESET_APMU_USB3_B_PHY] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, BIT(11)), 124 + [RESET_APMU_USB3_C_AHB] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, BIT(13)), 125 + [RESET_APMU_USB3_C_VCC] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, BIT(14)), 126 + [RESET_APMU_USB3_C_PHY] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, BIT(15)), 127 + [RESET_APMU_USB3_D_AHB] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, BIT(17)), 128 + [RESET_APMU_USB3_D_VCC] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, BIT(18)), 129 + [RESET_APMU_USB3_D_PHY] = RESET_DATA(APMU_USB_CLK_RES_CTRL, 0, BIT(19)), 125 130 [RESET_APMU_QSPI] = RESET_DATA(APMU_QSPI_CLK_RES_CTRL, 0, BIT(1)), 126 131 [RESET_APMU_QSPI_BUS] = 
RESET_DATA(APMU_QSPI_CLK_RES_CTRL, 0, BIT(0)), 127 132 [RESET_APMU_DMA] = RESET_DATA(APMU_DMA_CLK_RES_CTRL, 0, BIT(0)), ··· 156 151 [RESET_APMU_CPU7_SW] = RESET_DATA(APMU_PMU_CC2_AP, BIT(26), 0), 157 152 [RESET_APMU_C1_MPSUB_SW] = RESET_DATA(APMU_PMU_CC2_AP, BIT(28), 0), 158 153 [RESET_APMU_MPSUB_DBG] = RESET_DATA(APMU_PMU_CC2_AP, BIT(29), 0), 159 - [RESET_APMU_UCIE] = RESET_DATA(APMU_UCIE_CTRL, 160 - BIT(1) | BIT(2) | BIT(3), 0), 161 - [RESET_APMU_RCPU] = RESET_DATA(APMU_RCPU_CLK_RES_CTRL, 0, 162 - BIT(3) | BIT(2) | BIT(0)), 154 + [RESET_APMU_UCIE_IP] = RESET_DATA(APMU_UCIE_CTRL, BIT(1), 0), 155 + [RESET_APMU_UCIE_HOT] = RESET_DATA(APMU_UCIE_CTRL, BIT(2), 0), 156 + [RESET_APMU_UCIE_MON] = RESET_DATA(APMU_UCIE_CTRL, BIT(3), 0), 157 + [RESET_APMU_RCPU_AUDIO_SYS] = RESET_DATA(APMU_RCPU_CLK_RES_CTRL, 0, BIT(0)), 158 + [RESET_APMU_RCPU_MCU_CORE] = RESET_DATA(APMU_RCPU_CLK_RES_CTRL, 0, BIT(2)), 159 + [RESET_APMU_RCPU_AUDIO_APMU] = RESET_DATA(APMU_RCPU_CLK_RES_CTRL, 0, BIT(3)), 163 160 [RESET_APMU_DSI4LN2_ESCCLK] = RESET_DATA(APMU_LCD_CLK_RES_CTRL3, 0, BIT(3)), 164 161 [RESET_APMU_DSI4LN2_LCD_SW] = RESET_DATA(APMU_LCD_CLK_RES_CTRL3, 0, BIT(4)), 165 162 [RESET_APMU_DSI4LN2_LCD_MCLK] = RESET_DATA(APMU_LCD_CLK_RES_CTRL4, 0, BIT(9)), ··· 171 164 [RESET_APMU_UFS_ACLK] = RESET_DATA(APMU_UFS_CLK_RES_CTRL, 0, BIT(0)), 172 165 [RESET_APMU_EDP0] = RESET_DATA(APMU_LCD_EDP_CTRL, 0, BIT(0)), 173 166 [RESET_APMU_EDP1] = RESET_DATA(APMU_LCD_EDP_CTRL, 0, BIT(16)), 174 - [RESET_APMU_PCIE_PORTA] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_A, 0, 175 - BIT(5) | BIT(4) | BIT(3)), 176 - [RESET_APMU_PCIE_PORTB] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_B, 0, 177 - BIT(5) | BIT(4) | BIT(3)), 178 - [RESET_APMU_PCIE_PORTC] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_C, 0, 179 - BIT(5) | BIT(4) | BIT(3)), 180 - [RESET_APMU_PCIE_PORTD] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_D, 0, 181 - BIT(5) | BIT(4) | BIT(3)), 182 - [RESET_APMU_PCIE_PORTE] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_E, 0, 183 - BIT(5) | BIT(4) | BIT(3)), 167 + 
[RESET_APMU_PCIE_A_DBI] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_A, 0, BIT(3)), 168 + [RESET_APMU_PCIE_A_SLAVE] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_A, 0, BIT(4)), 169 + [RESET_APMU_PCIE_A_MASTER] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_A, 0, BIT(5)), 170 + [RESET_APMU_PCIE_B_DBI] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_B, 0, BIT(3)), 171 + [RESET_APMU_PCIE_B_SLAVE] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_B, 0, BIT(4)), 172 + [RESET_APMU_PCIE_B_MASTER] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_B, 0, BIT(5)), 173 + [RESET_APMU_PCIE_C_DBI] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_C, 0, BIT(3)), 174 + [RESET_APMU_PCIE_C_SLAVE] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_C, 0, BIT(4)), 175 + [RESET_APMU_PCIE_C_MASTER] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_C, 0, BIT(5)), 176 + [RESET_APMU_PCIE_D_DBI] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_D, 0, BIT(3)), 177 + [RESET_APMU_PCIE_D_SLAVE] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_D, 0, BIT(4)), 178 + [RESET_APMU_PCIE_D_MASTER] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_D, 0, BIT(5)), 179 + [RESET_APMU_PCIE_E_DBI] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_E, 0, BIT(3)), 180 + [RESET_APMU_PCIE_E_SLAVE] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_E, 0, BIT(4)), 181 + [RESET_APMU_PCIE_E_MASTER] = RESET_DATA(APMU_PCIE_CLK_RES_CTRL_E, 0, BIT(5)), 184 182 [RESET_APMU_EMAC0] = RESET_DATA(APMU_EMAC0_CLK_RES_CTRL, 0, BIT(1)), 185 183 [RESET_APMU_EMAC1] = RESET_DATA(APMU_EMAC1_CLK_RES_CTRL, 0, BIT(1)), 186 184 [RESET_APMU_EMAC2] = RESET_DATA(APMU_EMAC2_CLK_RES_CTRL, 0, BIT(1)),
+1 -1
drivers/soc/aspeed/aspeed-socinfo.c
··· 39 39 unsigned int i; 40 40 41 41 for (i = 0 ; i < ARRAY_SIZE(rev_table) ; ++i) { 42 - if (rev_table[i].id == id) 42 + if ((rev_table[i].id & 0xff00ffff) == id) 43 43 return rev_table[i].name; 44 44 } 45 45
+4 -2
drivers/soc/microchip/mpfs-control-scb.c
··· 14 14 { 15 15 struct device *dev = &pdev->dev; 16 16 17 - return mfd_add_devices(dev, PLATFORM_DEVID_NONE, mpfs_control_scb_devs, 18 - ARRAY_SIZE(mpfs_control_scb_devs), NULL, 0, NULL); 17 + return devm_mfd_add_devices(dev, PLATFORM_DEVID_NONE, 18 + mpfs_control_scb_devs, 19 + ARRAY_SIZE(mpfs_control_scb_devs), NULL, 0, 20 + NULL); 19 21 } 20 22 21 23 static const struct of_device_id mpfs_control_scb_of_match[] = {
+4 -2
drivers/soc/microchip/mpfs-mss-top-sysreg.c
··· 16 16 struct device *dev = &pdev->dev; 17 17 int ret; 18 18 19 - ret = mfd_add_devices(dev, PLATFORM_DEVID_NONE, mpfs_mss_top_sysreg_devs, 20 - ARRAY_SIZE(mpfs_mss_top_sysreg_devs) , NULL, 0, NULL); 19 + ret = devm_mfd_add_devices(dev, PLATFORM_DEVID_NONE, 20 + mpfs_mss_top_sysreg_devs, 21 + ARRAY_SIZE(mpfs_mss_top_sysreg_devs), NULL, 22 + 0, NULL); 21 23 if (ret) 22 24 return ret; 23 25
+1 -1
drivers/soc/qcom/pdr_internal.h
··· 84 84 85 85 struct servreg_loc_pfr_req { 86 86 char service[SERVREG_NAME_LENGTH + 1]; 87 - char reason[257]; 87 + char reason[SERVREG_PFR_LENGTH + 1]; 88 88 }; 89 89 90 90 struct servreg_loc_pfr_resp {
+1 -1
drivers/soc/qcom/qcom_pdr_msg.c
··· 325 325 }, 326 326 { 327 327 .data_type = QMI_STRING, 328 - .elem_len = SERVREG_NAME_LENGTH + 1, 328 + .elem_len = SERVREG_PFR_LENGTH + 1, 329 329 .elem_size = sizeof(char), 330 330 .array_type = VAR_LEN_ARRAY, 331 331 .tlv_type = 0x02,
+2 -2
drivers/spi/spi-cadence-quadspi.c
··· 2016 2016 2017 2017 ddata = of_device_get_match_data(dev); 2018 2018 2019 + spi_unregister_controller(cqspi->host); 2020 + 2019 2021 refcount_set(&cqspi->refcount, 0); 2020 2022 2021 2023 if (!refcount_dec_and_test(&cqspi->inflight_ops)) 2022 2024 cqspi_wait_idle(cqspi); 2023 - 2024 - spi_unregister_controller(cqspi->host); 2025 2025 2026 2026 if (cqspi->rx_chan) 2027 2027 dma_release_channel(cqspi->rx_chan);
+5 -1
drivers/spi/spi-cadence.c
··· 777 777 struct spi_controller *ctlr = platform_get_drvdata(pdev); 778 778 struct cdns_spi *xspi = spi_controller_get_devdata(ctlr); 779 779 780 + spi_controller_get(ctlr); 781 + 782 + spi_unregister_controller(ctlr); 783 + 780 784 cdns_spi_write(xspi, CDNS_SPI_ER, CDNS_SPI_ER_DISABLE); 781 785 782 786 if (!spi_controller_is_target(ctlr)) { ··· 788 784 pm_runtime_set_suspended(&pdev->dev); 789 785 } 790 786 791 - spi_unregister_controller(ctlr); 787 + spi_controller_put(ctlr); 792 788 } 793 789 794 790 /**
+4 -2
drivers/spi/spi-mpc52xx.c
··· 517 517 struct mpc52xx_spi *ms = spi_controller_get_devdata(host); 518 518 int i; 519 519 520 - cancel_work_sync(&ms->work); 520 + spi_unregister_controller(host); 521 + 521 522 free_irq(ms->irq0, ms); 522 523 free_irq(ms->irq1, ms); 524 + 525 + cancel_work_sync(&ms->work); 523 526 524 527 for (i = 0; i < ms->gpio_cs_count; i++) 525 528 gpiod_put(ms->gpio_cs[i]); 526 529 527 530 kfree(ms->gpio_cs); 528 - spi_unregister_controller(host); 529 531 iounmap(ms->regs); 530 532 spi_controller_put(host); 531 533 }
+2 -1
drivers/spi/spi-mxic.c
··· 832 832 struct spi_controller *host = platform_get_drvdata(pdev); 833 833 struct mxic_spi *mxic = spi_controller_get_devdata(host); 834 834 835 + spi_unregister_controller(host); 836 + 835 837 pm_runtime_disable(&pdev->dev); 836 838 mxic_spi_mem_ecc_remove(mxic); 837 - spi_unregister_controller(host); 838 839 } 839 840 840 841 static const struct of_device_id mxic_spi_of_ids[] = {
+6 -1
drivers/spi/spi-orion.c
··· 801 801 struct spi_controller *host = platform_get_drvdata(pdev); 802 802 struct orion_spi *spi = spi_controller_get_devdata(host); 803 803 804 + spi_controller_get(host); 805 + 806 + spi_unregister_controller(host); 807 + 804 808 pm_runtime_get_sync(&pdev->dev); 805 809 clk_disable_unprepare(spi->axi_clk); 806 810 807 - spi_unregister_controller(host); 811 + spi_controller_put(host); 812 + 808 813 pm_runtime_disable(&pdev->dev); 809 814 } 810 815
+8 -3
drivers/spi/spi-topcliff-pch.c
··· 1406 1406 dev_dbg(&plat_dev->dev, "%s:[ch%d] irq=%d\n", 1407 1407 __func__, plat_dev->id, board_dat->pdev->irq); 1408 1408 1409 - if (use_dma) 1410 - pch_free_dma_buf(board_dat, data); 1409 + spi_controller_get(data->host); 1410 + 1411 + spi_unregister_controller(data->host); 1411 1412 1412 1413 /* check for any pending messages; no action is taken if the queue 1413 1414 * is still full; but at least we tried. Unload anyway */ ··· 1433 1432 free_irq(board_dat->pdev->irq, data); 1434 1433 } 1435 1434 1435 + if (use_dma) 1436 + pch_free_dma_buf(board_dat, data); 1437 + 1436 1438 pci_iounmap(board_dat->pdev, data->io_remap_addr); 1437 - spi_unregister_controller(data->host); 1439 + 1440 + spi_controller_put(data->host); 1438 1441 } 1439 1442 #ifdef CONFIG_PM 1440 1443 static int pch_spi_pd_suspend(struct platform_device *pd_dev,
+2 -1
drivers/usb/typec/ucsi/ucsi.c
··· 44 44 return; 45 45 46 46 if (UCSI_CCI_CONNECTOR(cci)) { 47 - if (UCSI_CCI_CONNECTOR(cci) <= ucsi->cap.num_connectors) 47 + if (!ucsi->cap.num_connectors || 48 + UCSI_CCI_CONNECTOR(cci) <= ucsi->cap.num_connectors) 48 49 ucsi_connector_change(ucsi, UCSI_CCI_CONNECTOR(cci)); 49 50 else 50 51 dev_err(ucsi->dev, "bogus connector number in CCI: %lu\n",
+5
fs/cachefiles/namei.c
··· 810 810 if (ret < 0) 811 811 goto error_unlock; 812 812 813 + /* 814 + * cachefiles_bury_object() expects 2 references to 'victim', 815 + * and drops one. 816 + */ 817 + dget(victim); 813 818 ret = cachefiles_bury_object(cache, NULL, dir, victim, 814 819 FSCACHE_OBJECT_WAS_CULLED); 815 820 dput(victim);
+5 -1
fs/eventpoll.c
··· 226 226 */ 227 227 refcount_t refcount; 228 228 229 + /* used to defer freeing past ep_get_upwards_depth_proc() RCU walk */ 230 + struct rcu_head rcu; 231 + 229 232 #ifdef CONFIG_NET_RX_BUSY_POLL 230 233 /* used to track busy poll napi_id */ 231 234 unsigned int napi_id; ··· 822 819 mutex_destroy(&ep->mtx); 823 820 free_uid(ep->user); 824 821 wakeup_source_unregister(ep->ws); 825 - kfree(ep); 822 + /* ep_get_upwards_depth_proc() may still hold epi->ep under RCU */ 823 + kfree_rcu(ep, rcu); 826 824 } 827 825 828 826 /*
+48 -20
fs/kernfs/dir.c
··· 14 14 #include <linux/slab.h> 15 15 #include <linux/security.h> 16 16 #include <linux/hash.h> 17 + #include <linux/ns_common.h> 17 18 18 19 #include "kernfs-internal.h" 19 20 ··· 307 306 return parent; 308 307 } 309 308 309 + /* 310 + * kernfs_ns_id - return the namespace id for a given namespace 311 + * @ns: namespace tag (may be NULL) 312 + * 313 + * Use the 64-bit namespace id instead of raw pointers for hashing 314 + * and comparison to avoid leaking kernel addresses to userspace. 315 + */ 316 + static u64 kernfs_ns_id(const struct ns_common *ns) 317 + { 318 + return ns ? ns->ns_id : 0; 319 + } 320 + 310 321 /** 311 322 * kernfs_name_hash - calculate hash of @ns + @name 312 323 * @name: Null terminated string to hash ··· 326 313 * 327 314 * Return: 31-bit hash of ns + name (so it fits in an off_t) 328 315 */ 329 - static unsigned int kernfs_name_hash(const char *name, const void *ns) 316 + static unsigned int kernfs_name_hash(const char *name, 317 + const struct ns_common *ns) 330 318 { 331 - unsigned long hash = init_name_hash(ns); 319 + unsigned long hash = init_name_hash(kernfs_ns_id(ns)); 332 320 unsigned int len = strlen(name); 333 321 while (len--) 334 322 hash = partial_name_hash(*name++, hash); ··· 344 330 } 345 331 346 332 static int kernfs_name_compare(unsigned int hash, const char *name, 347 - const void *ns, const struct kernfs_node *kn) 333 + const struct ns_common *ns, const struct kernfs_node *kn) 348 334 { 335 + u64 ns_id = kernfs_ns_id(ns); 336 + u64 kn_ns_id = kernfs_ns_id(kn->ns); 337 + 349 338 if (hash < kn->hash) 350 339 return -1; 351 340 if (hash > kn->hash) 352 341 return 1; 353 - if (ns < kn->ns) 342 + if (ns_id < kn_ns_id) 354 343 return -1; 355 - if (ns > kn->ns) 344 + if (ns_id > kn_ns_id) 356 345 return 1; 357 346 return strcmp(name, kernfs_rcu_name(kn)); 358 347 } ··· 873 856 */ 874 857 static struct kernfs_node *kernfs_find_ns(struct kernfs_node *parent, 875 858 const unsigned char *name, 876 - const void *ns) 859 + const 
struct ns_common *ns) 877 860 { 878 861 struct rb_node *node = parent->dir.children.rb_node; 879 862 bool has_ns = kernfs_ns_enabled(parent); ··· 906 889 907 890 static struct kernfs_node *kernfs_walk_ns(struct kernfs_node *parent, 908 891 const unsigned char *path, 909 - const void *ns) 892 + const struct ns_common *ns) 910 893 { 911 894 ssize_t len; 912 895 char *p, *name; ··· 947 930 * Return: pointer to the found kernfs_node on success, %NULL on failure. 948 931 */ 949 932 struct kernfs_node *kernfs_find_and_get_ns(struct kernfs_node *parent, 950 - const char *name, const void *ns) 933 + const char *name, 934 + const struct ns_common *ns) 951 935 { 952 936 struct kernfs_node *kn; 953 937 struct kernfs_root *root = kernfs_root(parent); ··· 974 956 * Return: pointer to the found kernfs_node on success, %NULL on failure. 975 957 */ 976 958 struct kernfs_node *kernfs_walk_and_get_ns(struct kernfs_node *parent, 977 - const char *path, const void *ns) 959 + const char *path, 960 + const struct ns_common *ns) 978 961 { 979 962 struct kernfs_node *kn; 980 963 struct kernfs_root *root = kernfs_root(parent); ··· 1098 1079 struct kernfs_node *kernfs_create_dir_ns(struct kernfs_node *parent, 1099 1080 const char *name, umode_t mode, 1100 1081 kuid_t uid, kgid_t gid, 1101 - void *priv, const void *ns) 1082 + void *priv, 1083 + const struct ns_common *ns) 1102 1084 { 1103 1085 struct kernfs_node *kn; 1104 1086 int rc; ··· 1219 1199 1220 1200 /* The kernfs node has been moved to a different namespace */ 1221 1201 if (parent && kernfs_ns_enabled(parent) && 1222 - kernfs_info(dentry->d_sb)->ns != kn->ns) 1202 + kernfs_ns_id(kernfs_info(dentry->d_sb)->ns) != kernfs_ns_id(kn->ns)) 1223 1203 goto out_bad; 1224 1204 1225 1205 up_read(&root->kernfs_rwsem); ··· 1241 1221 struct kernfs_node *kn; 1242 1222 struct kernfs_root *root; 1243 1223 struct inode *inode = NULL; 1244 - const void *ns = NULL; 1224 + const struct ns_common *ns = NULL; 1245 1225 1246 1226 root = 
kernfs_root(parent); 1247 1227 down_read(&root->kernfs_rwsem); ··· 1722 1702 * Return: %0 on success, -ENOENT if such entry doesn't exist. 1723 1703 */ 1724 1704 int kernfs_remove_by_name_ns(struct kernfs_node *parent, const char *name, 1725 - const void *ns) 1705 + const struct ns_common *ns) 1726 1706 { 1727 1707 struct kernfs_node *kn; 1728 1708 struct kernfs_root *root; ··· 1761 1741 * Return: %0 on success, -errno on failure. 1762 1742 */ 1763 1743 int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent, 1764 - const char *new_name, const void *new_ns) 1744 + const char *new_name, const struct ns_common *new_ns) 1765 1745 { 1766 1746 struct kernfs_node *old_parent; 1767 1747 struct kernfs_root *root; ··· 1791 1771 old_name = kernfs_rcu_name(kn); 1792 1772 if (!new_name) 1793 1773 new_name = old_name; 1794 - if ((old_parent == new_parent) && (kn->ns == new_ns) && 1774 + if ((old_parent == new_parent) && 1775 + (kernfs_ns_id(kn->ns) == kernfs_ns_id(new_ns)) && 1795 1776 (strcmp(old_name, new_name) == 0)) 1796 1777 goto out; /* nothing to rename */ 1797 1778 ··· 1853 1832 return 0; 1854 1833 } 1855 1834 1856 - static struct kernfs_node *kernfs_dir_pos(const void *ns, 1835 + static struct kernfs_node *kernfs_dir_pos(const struct ns_common *ns, 1857 1836 struct kernfs_node *parent, loff_t hash, struct kernfs_node *pos) 1858 1837 { 1859 1838 if (pos) { ··· 1866 1845 } 1867 1846 if (!pos && (hash > 1) && (hash < INT_MAX)) { 1868 1847 struct rb_node *node = parent->dir.children.rb_node; 1848 + u64 ns_id = kernfs_ns_id(ns); 1869 1849 while (node) { 1870 1850 pos = rb_to_kn(node); 1871 1851 ··· 1874 1852 node = node->rb_left; 1875 1853 else if (hash > pos->hash) 1876 1854 node = node->rb_right; 1855 + else if (ns_id < kernfs_ns_id(pos->ns)) 1856 + node = node->rb_left; 1857 + else if (ns_id > kernfs_ns_id(pos->ns)) 1858 + node = node->rb_right; 1877 1859 else 1878 1860 break; 1879 1861 } 1880 1862 } 1881 1863 /* Skip over entries which are dying/dead 
or in the wrong namespace */ 1882 - while (pos && (!kernfs_active(pos) || pos->ns != ns)) { 1864 + while (pos && (!kernfs_active(pos) || 1865 + kernfs_ns_id(pos->ns) != kernfs_ns_id(ns))) { 1883 1866 struct rb_node *node = rb_next(&pos->rb); 1884 1867 if (!node) 1885 1868 pos = NULL; ··· 1894 1867 return pos; 1895 1868 } 1896 1869 1897 - static struct kernfs_node *kernfs_dir_next_pos(const void *ns, 1870 + static struct kernfs_node *kernfs_dir_next_pos(const struct ns_common *ns, 1898 1871 struct kernfs_node *parent, ino_t ino, struct kernfs_node *pos) 1899 1872 { 1900 1873 pos = kernfs_dir_pos(ns, parent, ino, pos); ··· 1905 1878 pos = NULL; 1906 1879 else 1907 1880 pos = rb_to_kn(node); 1908 - } while (pos && (!kernfs_active(pos) || pos->ns != ns)); 1881 + } while (pos && (!kernfs_active(pos) || 1882 + kernfs_ns_id(pos->ns) != kernfs_ns_id(ns))); 1909 1883 } 1910 1884 return pos; 1911 1885 } ··· 1917 1889 struct kernfs_node *parent = kernfs_dentry_node(dentry); 1918 1890 struct kernfs_node *pos = file->private_data; 1919 1891 struct kernfs_root *root; 1920 - const void *ns = NULL; 1892 + const struct ns_common *ns = NULL; 1921 1893 1922 1894 if (!dir_emit_dots(file, ctx)) 1923 1895 return 0;
+1 -1
fs/kernfs/file.c
··· 1045 1045 umode_t mode, kuid_t uid, kgid_t gid, 1046 1046 loff_t size, 1047 1047 const struct kernfs_ops *ops, 1048 - void *priv, const void *ns, 1048 + void *priv, const struct ns_common *ns, 1049 1049 struct lock_class_key *key) 1050 1050 { 1051 1051 struct kernfs_node *kn;
+1 -1
fs/kernfs/kernfs-internal.h
··· 97 97 * instance. If multiple tags become necessary, make the following 98 98 * an array and compare kernfs_node tag against every entry. 99 99 */ 100 - const void *ns; 100 + const struct ns_common *ns; 101 101 102 102 /* anchored at kernfs_root->supers, protected by kernfs_rwsem */ 103 103 struct list_head node;
+1 -1
fs/kernfs/mount.c
··· 345 345 * 346 346 * Return: the namespace tag associated with kernfs super_block @sb. 347 347 */ 348 - const void *kernfs_super_ns(struct super_block *sb) 348 + const struct ns_common *kernfs_super_ns(struct super_block *sb) 349 349 { 350 350 struct kernfs_super_info *info = kernfs_info(sb); 351 351
+10 -6
fs/nfs/sysfs.c
··· 11 11 #include <linux/netdevice.h> 12 12 #include <linux/string.h> 13 13 #include <linux/nfs_fs.h> 14 + #include <net/net_namespace.h> 14 15 #include <linux/rcupdate.h> 15 16 #include <linux/lockd/lockd.h> 16 17 ··· 128 127 kfree(rcu_dereference_raw(c->identifier)); 129 128 } 130 129 131 - static const void *nfs_netns_client_namespace(const struct kobject *kobj) 130 + static const struct ns_common *nfs_netns_client_namespace(const struct kobject *kobj) 132 131 { 133 - return container_of(kobj, struct nfs_netns_client, kobject)->net; 132 + return to_ns_common(container_of(kobj, struct nfs_netns_client, 133 + kobject)->net); 134 134 } 135 135 136 136 static struct kobj_attribute nfs_netns_client_id = __ATTR(identifier, ··· 158 156 kfree(c); 159 157 } 160 158 161 - static const void *nfs_netns_namespace(const struct kobject *kobj) 159 + static const struct ns_common *nfs_netns_namespace(const struct kobject *kobj) 162 160 { 163 - return container_of(kobj, struct nfs_netns_client, nfs_net_kobj)->net; 161 + return to_ns_common(container_of(kobj, struct nfs_netns_client, 162 + nfs_net_kobj)->net); 164 163 } 165 164 166 165 static struct kobj_type nfs_netns_object_type = { ··· 353 350 /* no-op: why? see lib/kobject.c kobject_cleanup() */ 354 351 } 355 352 356 - static const void *nfs_netns_server_namespace(const struct kobject *kobj) 353 + static const struct ns_common *nfs_netns_server_namespace(const struct kobject *kobj) 357 354 { 358 - return container_of(kobj, struct nfs_server, kobj)->nfs_client->cl_net; 355 + return to_ns_common(container_of(kobj, struct nfs_server, 356 + kobj)->nfs_client->cl_net); 359 357 } 360 358 361 359 static struct kobj_type nfs_sb_ktype = {
+10
fs/ocfs2/inode.c
··· 1505 1505 goto bail; 1506 1506 } 1507 1507 1508 + if (le16_to_cpu(data->id_count) > 1509 + ocfs2_max_inline_data_with_xattr(sb, di)) { 1510 + rc = ocfs2_error(sb, 1511 + "Invalid dinode #%llu: inline data id_count %u exceeds max %d\n", 1512 + (unsigned long long)bh->b_blocknr, 1513 + le16_to_cpu(data->id_count), 1514 + ocfs2_max_inline_data_with_xattr(sb, di)); 1515 + goto bail; 1516 + } 1517 + 1508 1518 if (le64_to_cpu(di->i_size) > le16_to_cpu(data->id_count)) { 1509 1519 rc = ocfs2_error(sb, 1510 1520 "Invalid dinode #%llu: inline data i_size %llu exceeds id_count %u\n",
+3 -3
fs/sysfs/dir.c
··· 37 37 * @kobj: object we're creating directory for 38 38 * @ns: the namespace tag to use 39 39 */ 40 - int sysfs_create_dir_ns(struct kobject *kobj, const void *ns) 40 + int sysfs_create_dir_ns(struct kobject *kobj, const struct ns_common *ns) 41 41 { 42 42 struct kernfs_node *parent, *kn; 43 43 kuid_t uid; ··· 103 103 } 104 104 105 105 int sysfs_rename_dir_ns(struct kobject *kobj, const char *new_name, 106 - const void *new_ns) 106 + const struct ns_common *new_ns) 107 107 { 108 108 struct kernfs_node *parent; 109 109 int ret; ··· 115 115 } 116 116 117 117 int sysfs_move_dir_ns(struct kobject *kobj, struct kobject *new_parent_kobj, 118 - const void *new_ns) 118 + const struct ns_common *new_ns) 119 119 { 120 120 struct kernfs_node *kn = kobj->sd; 121 121 struct kernfs_node *new_parent;
+4 -4
fs/sysfs/file.c
··· 272 272 273 273 int sysfs_add_file_mode_ns(struct kernfs_node *parent, 274 274 const struct attribute *attr, umode_t mode, kuid_t uid, 275 - kgid_t gid, const void *ns) 275 + kgid_t gid, const struct ns_common *ns) 276 276 { 277 277 struct kobject *kobj = parent->priv; 278 278 const struct sysfs_ops *sysfs_ops = kobj->ktype->sysfs_ops; ··· 322 322 323 323 int sysfs_add_bin_file_mode_ns(struct kernfs_node *parent, 324 324 const struct bin_attribute *battr, umode_t mode, size_t size, 325 - kuid_t uid, kgid_t gid, const void *ns) 325 + kuid_t uid, kgid_t gid, const struct ns_common *ns) 326 326 { 327 327 const struct attribute *attr = &battr->attr; 328 328 struct lock_class_key *key = NULL; ··· 362 362 * @ns: namespace the new file should belong to 363 363 */ 364 364 int sysfs_create_file_ns(struct kobject *kobj, const struct attribute *attr, 365 - const void *ns) 365 + const struct ns_common *ns) 366 366 { 367 367 kuid_t uid; 368 368 kgid_t gid; ··· 505 505 * Hash the attribute name and namespace tag and kill the victim. 506 506 */ 507 507 void sysfs_remove_file_ns(struct kobject *kobj, const struct attribute *attr, 508 - const void *ns) 508 + const struct ns_common *ns) 509 509 { 510 510 struct kernfs_node *parent = kobj->sd; 511 511
+6 -4
fs/sysfs/mount.c
··· 55 55 static int sysfs_init_fs_context(struct fs_context *fc) 56 56 { 57 57 struct kernfs_fs_context *kfc; 58 - struct net *netns; 58 + struct ns_common *ns; 59 59 60 60 if (!(fc->sb_flags & SB_KERNMOUNT)) { 61 61 if (!kobj_ns_current_may_mount(KOBJ_NS_TYPE_NET)) ··· 66 66 if (!kfc) 67 67 return -ENOMEM; 68 68 69 - kfc->ns_tag = netns = kobj_ns_grab_current(KOBJ_NS_TYPE_NET); 69 + kfc->ns_tag = ns = kobj_ns_grab_current(KOBJ_NS_TYPE_NET); 70 70 kfc->root = sysfs_root; 71 71 kfc->magic = SYSFS_MAGIC; 72 72 fc->fs_private = kfc; 73 73 fc->ops = &sysfs_fs_context_ops; 74 - if (netns) { 74 + if (ns) { 75 + struct net *netns = to_net_ns(ns); 76 + 75 77 put_user_ns(fc->user_ns); 76 78 fc->user_ns = get_user_ns(netns->user_ns); 77 79 } ··· 83 81 84 82 static void sysfs_kill_sb(struct super_block *sb) 85 83 { 86 - void *ns = (void *)kernfs_super_ns(sb); 84 + struct ns_common *ns = (struct ns_common *)kernfs_super_ns(sb); 87 85 88 86 kernfs_kill_sb(sb); 89 87 kobj_ns_drop(KOBJ_NS_TYPE_NET, ns);
+4 -3
fs/sysfs/symlink.c
··· 121 121 void sysfs_delete_link(struct kobject *kobj, struct kobject *targ, 122 122 const char *name) 123 123 { 124 - const void *ns = NULL; 124 + const struct ns_common *ns = NULL; 125 125 126 126 /* 127 127 * We don't own @target and it may be removed at any time. ··· 164 164 * A helper function for the common rename symlink idiom. 165 165 */ 166 166 int sysfs_rename_link_ns(struct kobject *kobj, struct kobject *targ, 167 - const char *old, const char *new, const void *new_ns) 167 + const char *old, const char *new, 168 + const struct ns_common *new_ns) 168 169 { 169 170 struct kernfs_node *parent, *kn = NULL; 170 - const void *old_ns = NULL; 171 + const struct ns_common *old_ns = NULL; 171 172 int result; 172 173 173 174 if (!kobj)
+2 -2
fs/sysfs/sysfs.h
··· 29 29 */ 30 30 int sysfs_add_file_mode_ns(struct kernfs_node *parent, 31 31 const struct attribute *attr, umode_t amode, kuid_t uid, 32 - kgid_t gid, const void *ns); 32 + kgid_t gid, const struct ns_common *ns); 33 33 int sysfs_add_bin_file_mode_ns(struct kernfs_node *parent, 34 34 const struct bin_attribute *battr, umode_t mode, size_t size, 35 - kuid_t uid, kgid_t gid, const void *ns); 35 + kuid_t uid, kgid_t gid, const struct ns_common *ns); 36 36 37 37 /* 38 38 * symlink.c
+36 -12
include/dt-bindings/reset/spacemit,k3-resets.h
··· 97 97 #define RESET_APMU_SDH0 13 98 98 #define RESET_APMU_SDH1 14 99 99 #define RESET_APMU_SDH2 15 100 - #define RESET_APMU_USB2 16 101 - #define RESET_APMU_USB3_PORTA 17 102 - #define RESET_APMU_USB3_PORTB 18 103 - #define RESET_APMU_USB3_PORTC 19 104 - #define RESET_APMU_USB3_PORTD 20 100 + #define RESET_APMU_USB2_AHB 16 101 + #define RESET_APMU_USB2_VCC 17 102 + #define RESET_APMU_USB2_PHY 18 103 + #define RESET_APMU_USB3_A_AHB 19 104 + #define RESET_APMU_USB3_A_VCC 20 105 105 #define RESET_APMU_QSPI 21 106 106 #define RESET_APMU_QSPI_BUS 22 107 107 #define RESET_APMU_DMA 23 ··· 132 132 #define RESET_APMU_CPU7_SW 48 133 133 #define RESET_APMU_C1_MPSUB_SW 49 134 134 #define RESET_APMU_MPSUB_DBG 50 135 - #define RESET_APMU_UCIE 51 136 - #define RESET_APMU_RCPU 52 135 + #define RESET_APMU_USB3_A_PHY 51 /* USB3 A */ 136 + #define RESET_APMU_USB3_B_AHB 52 137 137 #define RESET_APMU_DSI4LN2_ESCCLK 53 138 138 #define RESET_APMU_DSI4LN2_LCD_SW 54 139 139 #define RESET_APMU_DSI4LN2_LCD_MCLK 55 ··· 143 143 #define RESET_APMU_UFS_ACLK 59 144 144 #define RESET_APMU_EDP0 60 145 145 #define RESET_APMU_EDP1 61 146 - #define RESET_APMU_PCIE_PORTA 62 147 - #define RESET_APMU_PCIE_PORTB 63 148 - #define RESET_APMU_PCIE_PORTC 64 149 - #define RESET_APMU_PCIE_PORTD 65 150 - #define RESET_APMU_PCIE_PORTE 66 146 + #define RESET_APMU_USB3_B_VCC 62 /* USB3 B */ 147 + #define RESET_APMU_USB3_B_PHY 63 148 + #define RESET_APMU_USB3_C_AHB 64 149 + #define RESET_APMU_USB3_C_VCC 65 150 + #define RESET_APMU_USB3_C_PHY 66 151 151 #define RESET_APMU_EMAC0 67 152 152 #define RESET_APMU_EMAC1 68 153 153 #define RESET_APMU_EMAC2 69 154 154 #define RESET_APMU_ESPI_MCLK 70 155 155 #define RESET_APMU_ESPI_SCLK 71 156 + #define RESET_APMU_USB3_D_AHB 72 /* USB3 D */ 157 + #define RESET_APMU_USB3_D_VCC 73 158 + #define RESET_APMU_USB3_D_PHY 74 159 + #define RESET_APMU_UCIE_IP 75 160 + #define RESET_APMU_UCIE_HOT 76 161 + #define RESET_APMU_UCIE_MON 77 162 + #define RESET_APMU_RCPU_AUDIO_SYS 78 163 + 
#define RESET_APMU_RCPU_MCU_CORE 79 164 + #define RESET_APMU_RCPU_AUDIO_APMU 80 165 + #define RESET_APMU_PCIE_A_DBI 81 166 + #define RESET_APMU_PCIE_A_SLAVE 82 167 + #define RESET_APMU_PCIE_A_MASTER 83 168 + #define RESET_APMU_PCIE_B_DBI 84 169 + #define RESET_APMU_PCIE_B_SLAVE 85 170 + #define RESET_APMU_PCIE_B_MASTER 86 171 + #define RESET_APMU_PCIE_C_DBI 87 172 + #define RESET_APMU_PCIE_C_SLAVE 88 173 + #define RESET_APMU_PCIE_C_MASTER 89 174 + #define RESET_APMU_PCIE_D_DBI 90 175 + #define RESET_APMU_PCIE_D_SLAVE 91 176 + #define RESET_APMU_PCIE_D_MASTER 92 177 + #define RESET_APMU_PCIE_E_DBI 93 178 + #define RESET_APMU_PCIE_E_SLAVE 94 179 + #define RESET_APMU_PCIE_E_MASTER 95 156 180 157 181 /* DCIU resets*/ 158 182 #define RESET_DCIU_HDMA 0
+6
include/hyperv/hvgdk_mini.h
··· 1533 1533 u8 data[HV_HYPERCALL_MMIO_MAX_DATA_LENGTH]; 1534 1534 } __packed; 1535 1535 1536 + enum hv_intercept_access_type { 1537 + HV_INTERCEPT_ACCESS_READ = 0, 1538 + HV_INTERCEPT_ACCESS_WRITE = 1, 1539 + HV_INTERCEPT_ACCESS_EXECUTE = 2 1540 + }; 1541 + 1536 1542 #endif /* _HV_HVGDK_MINI_H */
+2 -2
include/hyperv/hvhdk.h
··· 779 779 u32 vp_index; 780 780 u8 instruction_length:4; 781 781 u8 cr8:4; /* Only set for exo partitions */ 782 - u8 intercept_access_type; 782 + u8 intercept_access_type; /* enum hv_intercept_access_type */ 783 783 union hv_x64_vp_execution_state execution_state; 784 784 struct hv_x64_segment_register cs_segment; 785 785 u64 rip; ··· 825 825 struct hv_arm64_intercept_message_header { 826 826 u32 vp_index; 827 827 u8 instruction_length; 828 - u8 intercept_access_type; 828 + u8 intercept_access_type; /* enum hv_intercept_access_type */ 829 829 union hv_arm64_vp_execution_state execution_state; 830 830 u64 pc; 831 831 u64 cpsr;
+2
include/linux/clockchips.h
··· 80 80 * @shift: nanoseconds to cycles divisor (power of two) 81 81 * @state_use_accessors:current state of the device, assigned by the core code 82 82 * @features: features 83 + * @next_event_forced: True if the last programming was a forced event 83 84 * @retries: number of forced programming retries 84 85 * @set_state_periodic: switch state to periodic 85 86 * @set_state_oneshot: switch state to oneshot ··· 109 108 u32 shift; 110 109 enum clock_event_state state_use_accessors; 111 110 unsigned int features; 111 + unsigned int next_event_forced; 112 112 unsigned long retries; 113 113 114 114 int (*set_state_periodic)(struct clock_event_device *);
+3 -3
include/linux/cpu.h
··· 229 229 #define smt_mitigations SMT_MITIGATIONS_OFF 230 230 #endif 231 231 232 - int arch_get_indir_br_lp_status(struct task_struct *t, unsigned long __user *status); 233 - int arch_set_indir_br_lp_status(struct task_struct *t, unsigned long status); 234 - int arch_lock_indir_br_lp_status(struct task_struct *t, unsigned long status); 232 + int arch_prctl_get_branch_landing_pad_state(struct task_struct *t, unsigned long __user *state); 233 + int arch_prctl_set_branch_landing_pad_state(struct task_struct *t, unsigned long state); 234 + int arch_prctl_lock_branch_landing_pad_state(struct task_struct *t); 235 235 236 236 #endif /* _LINUX_CPU_H_ */
+3 -3
include/linux/device/class.h
··· 62 62 int (*shutdown_pre)(struct device *dev); 63 63 64 64 const struct kobj_ns_type_operations *ns_type; 65 - const void *(*namespace)(const struct device *dev); 65 + const struct ns_common *(*namespace)(const struct device *dev); 66 66 67 67 void (*get_ownership)(const struct device *dev, kuid_t *uid, kgid_t *gid); 68 68 ··· 180 180 struct class_attribute class_attr_##_name = __ATTR_WO(_name) 181 181 182 182 int __must_check class_create_file_ns(const struct class *class, const struct class_attribute *attr, 183 - const void *ns); 183 + const struct ns_common *ns); 184 184 void class_remove_file_ns(const struct class *class, const struct class_attribute *attr, 185 - const void *ns); 185 + const struct ns_common *ns); 186 186 187 187 static inline int __must_check class_create_file(const struct class *class, 188 188 const struct class_attribute *attr)
-74
include/linux/firmware/thead/thead,th1520-aon.h
··· 97 97 #define RPC_GET_SVC_FLAG_ACK_TYPE(MESG) (((MESG)->svc & 0x40) >> 6) 98 98 #define RPC_SET_SVC_FLAG_ACK_TYPE(MESG, ACK) ((MESG)->svc |= (ACK) << 6) 99 99 100 - #define RPC_SET_BE64(MESG, OFFSET, SET_DATA) \ 101 - do { \ 102 - u8 *data = (u8 *)(MESG); \ 103 - u64 _offset = (OFFSET); \ 104 - u64 _set_data = (SET_DATA); \ 105 - data[_offset + 7] = _set_data & 0xFF; \ 106 - data[_offset + 6] = (_set_data & 0xFF00) >> 8; \ 107 - data[_offset + 5] = (_set_data & 0xFF0000) >> 16; \ 108 - data[_offset + 4] = (_set_data & 0xFF000000) >> 24; \ 109 - data[_offset + 3] = (_set_data & 0xFF00000000) >> 32; \ 110 - data[_offset + 2] = (_set_data & 0xFF0000000000) >> 40; \ 111 - data[_offset + 1] = (_set_data & 0xFF000000000000) >> 48; \ 112 - data[_offset + 0] = (_set_data & 0xFF00000000000000) >> 56; \ 113 - } while (0) 114 - 115 - #define RPC_SET_BE32(MESG, OFFSET, SET_DATA) \ 116 - do { \ 117 - u8 *data = (u8 *)(MESG); \ 118 - u64 _offset = (OFFSET); \ 119 - u64 _set_data = (SET_DATA); \ 120 - data[_offset + 3] = (_set_data) & 0xFF; \ 121 - data[_offset + 2] = (_set_data & 0xFF00) >> 8; \ 122 - data[_offset + 1] = (_set_data & 0xFF0000) >> 16; \ 123 - data[_offset + 0] = (_set_data & 0xFF000000) >> 24; \ 124 - } while (0) 125 - 126 - #define RPC_SET_BE16(MESG, OFFSET, SET_DATA) \ 127 - do { \ 128 - u8 *data = (u8 *)(MESG); \ 129 - u64 _offset = (OFFSET); \ 130 - u64 _set_data = (SET_DATA); \ 131 - data[_offset + 1] = (_set_data) & 0xFF; \ 132 - data[_offset + 0] = (_set_data & 0xFF00) >> 8; \ 133 - } while (0) 134 - 135 - #define RPC_SET_U8(MESG, OFFSET, SET_DATA) \ 136 - do { \ 137 - u8 *data = (u8 *)(MESG); \ 138 - data[OFFSET] = (SET_DATA) & 0xFF; \ 139 - } while (0) 140 - 141 - #define RPC_GET_BE64(MESG, OFFSET, PTR) \ 142 - do { \ 143 - u8 *data = (u8 *)(MESG); \ 144 - u64 _offset = (OFFSET); \ 145 - *(u32 *)(PTR) = \ 146 - (data[_offset + 7] | data[_offset + 6] << 8 | \ 147 - data[_offset + 5] << 16 | data[_offset + 4] << 24 | \ 148 - data[_offset + 3] << 32 | 
data[_offset + 2] << 40 | \ 149 - data[_offset + 1] << 48 | data[_offset + 0] << 56); \ 150 - } while (0) 151 - 152 - #define RPC_GET_BE32(MESG, OFFSET, PTR) \ 153 - do { \ 154 - u8 *data = (u8 *)(MESG); \ 155 - u64 _offset = (OFFSET); \ 156 - *(u32 *)(PTR) = \ 157 - (data[_offset + 3] | data[_offset + 2] << 8 | \ 158 - data[_offset + 1] << 16 | data[_offset + 0] << 24); \ 159 - } while (0) 160 - 161 - #define RPC_GET_BE16(MESG, OFFSET, PTR) \ 162 - do { \ 163 - u8 *data = (u8 *)(MESG); \ 164 - u64 _offset = (OFFSET); \ 165 - *(u16 *)(PTR) = (data[_offset + 1] | data[_offset + 0] << 8); \ 166 - } while (0) 167 - 168 - #define RPC_GET_U8(MESG, OFFSET, PTR) \ 169 - do { \ 170 - u8 *data = (u8 *)(MESG); \ 171 - *(u8 *)(PTR) = (data[OFFSET]); \ 172 - } while (0) 173 - 174 100 /* 175 101 * Defines for SC PM Power Mode 176 102 */
+24 -16
include/linux/kernfs.h
··· 23 23 struct file; 24 24 struct dentry; 25 25 struct iattr; 26 + struct ns_common; 26 27 struct seq_file; 27 28 struct vm_area_struct; 28 29 struct vm_operations_struct; ··· 210 209 211 210 struct rb_node rb; 212 211 213 - const void *ns; /* namespace tag */ 212 + const struct ns_common *ns; /* namespace tag */ 214 213 unsigned int hash; /* ns + name hash */ 215 214 unsigned short flags; 216 215 umode_t mode; ··· 332 331 */ 333 332 struct kernfs_fs_context { 334 333 struct kernfs_root *root; /* Root of the hierarchy being mounted */ 335 - void *ns_tag; /* Namespace tag of the mount (or NULL) */ 334 + struct ns_common *ns_tag; /* Namespace tag of the mount (or NULL) */ 336 335 unsigned long magic; /* File system specific magic number */ 337 336 338 337 /* The following are set/used by kernfs_mount() */ ··· 407 406 void pr_cont_kernfs_path(struct kernfs_node *kn); 408 407 struct kernfs_node *kernfs_get_parent(struct kernfs_node *kn); 409 408 struct kernfs_node *kernfs_find_and_get_ns(struct kernfs_node *parent, 410 - const char *name, const void *ns); 409 + const char *name, 410 + const struct ns_common *ns); 411 411 struct kernfs_node *kernfs_walk_and_get_ns(struct kernfs_node *parent, 412 - const char *path, const void *ns); 412 + const char *path, 413 + const struct ns_common *ns); 413 414 void kernfs_get(struct kernfs_node *kn); 414 415 void kernfs_put(struct kernfs_node *kn); 415 416 ··· 429 426 struct kernfs_node *kernfs_create_dir_ns(struct kernfs_node *parent, 430 427 const char *name, umode_t mode, 431 428 kuid_t uid, kgid_t gid, 432 - void *priv, const void *ns); 429 + void *priv, 430 + const struct ns_common *ns); 433 431 struct kernfs_node *kernfs_create_empty_dir(struct kernfs_node *parent, 434 432 const char *name); 435 433 struct kernfs_node *__kernfs_create_file(struct kernfs_node *parent, ··· 438 434 kuid_t uid, kgid_t gid, 439 435 loff_t size, 440 436 const struct kernfs_ops *ops, 441 - void *priv, const void *ns, 437 + void *priv, 438 + const 
struct ns_common *ns, 442 439 struct lock_class_key *key); 443 440 struct kernfs_node *kernfs_create_link(struct kernfs_node *parent, 444 441 const char *name, ··· 451 446 void kernfs_unbreak_active_protection(struct kernfs_node *kn); 452 447 bool kernfs_remove_self(struct kernfs_node *kn); 453 448 int kernfs_remove_by_name_ns(struct kernfs_node *parent, const char *name, 454 - const void *ns); 449 + const struct ns_common *ns); 455 450 int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent, 456 - const char *new_name, const void *new_ns); 451 + const char *new_name, const struct ns_common *new_ns); 457 452 int kernfs_setattr(struct kernfs_node *kn, const struct iattr *iattr); 458 453 __poll_t kernfs_generic_poll(struct kernfs_open_file *of, 459 454 struct poll_table_struct *pt); ··· 464 459 int kernfs_xattr_set(struct kernfs_node *kn, const char *name, 465 460 const void *value, size_t size, int flags); 466 461 467 - const void *kernfs_super_ns(struct super_block *sb); 462 + const struct ns_common *kernfs_super_ns(struct super_block *sb); 468 463 int kernfs_get_tree(struct fs_context *fc); 469 464 void kernfs_free_fs_context(struct fs_context *fc); 470 465 void kernfs_kill_sb(struct super_block *sb); ··· 499 494 500 495 static inline struct kernfs_node * 501 496 kernfs_find_and_get_ns(struct kernfs_node *parent, const char *name, 502 - const void *ns) 497 + const struct ns_common *ns) 503 498 { return NULL; } 504 499 static inline struct kernfs_node * 505 500 kernfs_walk_and_get_ns(struct kernfs_node *parent, const char *path, 506 - const void *ns) 501 + const struct ns_common *ns) 507 502 { return NULL; } 508 503 509 504 static inline void kernfs_get(struct kernfs_node *kn) { } ··· 531 526 static inline struct kernfs_node * 532 527 kernfs_create_dir_ns(struct kernfs_node *parent, const char *name, 533 528 umode_t mode, kuid_t uid, kgid_t gid, 534 - void *priv, const void *ns) 529 + void *priv, const struct ns_common *ns) 535 530 { return 
ERR_PTR(-ENOSYS); } 536 531 537 532 static inline struct kernfs_node * 538 533 __kernfs_create_file(struct kernfs_node *parent, const char *name, 539 534 umode_t mode, kuid_t uid, kgid_t gid, 540 535 loff_t size, const struct kernfs_ops *ops, 541 - void *priv, const void *ns, struct lock_class_key *key) 536 + void *priv, const struct ns_common *ns, 537 + struct lock_class_key *key) 542 538 { return ERR_PTR(-ENOSYS); } 543 539 544 540 static inline struct kernfs_node * ··· 555 549 { return false; } 556 550 557 551 static inline int kernfs_remove_by_name_ns(struct kernfs_node *kn, 558 - const char *name, const void *ns) 552 + const char *name, 553 + const struct ns_common *ns) 559 554 { return -ENOSYS; } 560 555 561 556 static inline int kernfs_rename_ns(struct kernfs_node *kn, 562 557 struct kernfs_node *new_parent, 563 - const char *new_name, const void *new_ns) 558 + const char *new_name, 559 + const struct ns_common *new_ns) 564 560 { return -ENOSYS; } 565 561 566 562 static inline int kernfs_setattr(struct kernfs_node *kn, ··· 583 575 const void *value, size_t size, int flags) 584 576 { return -ENOSYS; } 585 577 586 - static inline const void *kernfs_super_ns(struct super_block *sb) 578 + static inline const struct ns_common *kernfs_super_ns(struct super_block *sb) 587 579 { return NULL; } 588 580 589 581 static inline int kernfs_get_tree(struct fs_context *fc)
+2 -2
include/linux/kobject.h
··· 109 109 struct kobject * __must_check kobject_get_unless_zero(struct kobject *kobj); 110 110 void kobject_put(struct kobject *kobj); 111 111 112 - const void *kobject_namespace(const struct kobject *kobj); 112 + const struct ns_common *kobject_namespace(const struct kobject *kobj); 113 113 void kobject_get_ownership(const struct kobject *kobj, kuid_t *uid, kgid_t *gid); 114 114 char *kobject_get_path(const struct kobject *kobj, gfp_t flag); 115 115 ··· 118 118 const struct sysfs_ops *sysfs_ops; 119 119 const struct attribute_group **default_groups; 120 120 const struct kobj_ns_type_operations *(*child_ns_type)(const struct kobject *kobj); 121 - const void *(*namespace)(const struct kobject *kobj); 121 + const struct ns_common *(*namespace)(const struct kobject *kobj); 122 122 void (*get_ownership)(const struct kobject *kobj, kuid_t *uid, kgid_t *gid); 123 123 }; 124 124
+7 -6
include/linux/kobject_ns.h
··· 16 16 #ifndef _LINUX_KOBJECT_NS_H 17 17 #define _LINUX_KOBJECT_NS_H 18 18 19 + struct ns_common; 19 20 struct sock; 20 21 struct kobject; 21 22 ··· 40 39 struct kobj_ns_type_operations { 41 40 enum kobj_ns_type type; 42 41 bool (*current_may_mount)(void); 43 - void *(*grab_current_ns)(void); 44 - const void *(*netlink_ns)(struct sock *sk); 45 - const void *(*initial_ns)(void); 46 - void (*drop_ns)(void *); 42 + struct ns_common *(*grab_current_ns)(void); 43 + const struct ns_common *(*netlink_ns)(struct sock *sk); 44 + const struct ns_common *(*initial_ns)(void); 45 + void (*drop_ns)(struct ns_common *); 47 46 }; 48 47 49 48 int kobj_ns_type_register(const struct kobj_ns_type_operations *ops); ··· 52 51 const struct kobj_ns_type_operations *kobj_ns_ops(const struct kobject *kobj); 53 52 54 53 bool kobj_ns_current_may_mount(enum kobj_ns_type type); 55 - void *kobj_ns_grab_current(enum kobj_ns_type type); 56 - void kobj_ns_drop(enum kobj_ns_type type, void *ns); 54 + struct ns_common *kobj_ns_grab_current(enum kobj_ns_type type); 55 + void kobj_ns_drop(enum kobj_ns_type type, struct ns_common *ns); 57 56 58 57 #endif /* _LINUX_KOBJECT_NS_H */
+3 -3
include/linux/mmap_lock.h
··· 546 546 __mmap_lock_trace_acquire_returned(mm, true, true); 547 547 } 548 548 549 - static inline int mmap_write_lock_killable(struct mm_struct *mm) 549 + static inline int __must_check mmap_write_lock_killable(struct mm_struct *mm) 550 550 { 551 551 int ret; 552 552 ··· 593 593 __mmap_lock_trace_acquire_returned(mm, false, true); 594 594 } 595 595 596 - static inline int mmap_read_lock_killable(struct mm_struct *mm) 596 + static inline int __must_check mmap_read_lock_killable(struct mm_struct *mm) 597 597 { 598 598 int ret; 599 599 ··· 603 603 return ret; 604 604 } 605 605 606 - static inline bool mmap_read_trylock(struct mm_struct *mm) 606 + static inline bool __must_check mmap_read_trylock(struct mm_struct *mm) 607 607 { 608 608 bool ret; 609 609
+2 -2
include/linux/netdevice.h
··· 5339 5339 } 5340 5340 5341 5341 int netdev_class_create_file_ns(const struct class_attribute *class_attr, 5342 - const void *ns); 5342 + const struct ns_common *ns); 5343 5343 void netdev_class_remove_file_ns(const struct class_attribute *class_attr, 5344 - const void *ns); 5344 + const struct ns_common *ns); 5345 5345 5346 5346 extern const struct kobj_ns_type_operations net_ns_type_operations; 5347 5347
+1
include/linux/soc/qcom/pdr.h
··· 5 5 #include <linux/soc/qcom/qmi.h> 6 6 7 7 #define SERVREG_NAME_LENGTH 64 8 + #define SERVREG_PFR_LENGTH 256 8 9 9 10 struct pdr_service; 10 11 struct pdr_handle;
+12 -12
include/linux/sysfs.h
··· 396 396 397 397 #ifdef CONFIG_SYSFS 398 398 399 - int __must_check sysfs_create_dir_ns(struct kobject *kobj, const void *ns); 399 + int __must_check sysfs_create_dir_ns(struct kobject *kobj, const struct ns_common *ns); 400 400 void sysfs_remove_dir(struct kobject *kobj); 401 401 int __must_check sysfs_rename_dir_ns(struct kobject *kobj, const char *new_name, 402 - const void *new_ns); 402 + const struct ns_common *new_ns); 403 403 int __must_check sysfs_move_dir_ns(struct kobject *kobj, 404 404 struct kobject *new_parent_kobj, 405 - const void *new_ns); 405 + const struct ns_common *new_ns); 406 406 int __must_check sysfs_create_mount_point(struct kobject *parent_kobj, 407 407 const char *name); 408 408 void sysfs_remove_mount_point(struct kobject *parent_kobj, ··· 410 410 411 411 int __must_check sysfs_create_file_ns(struct kobject *kobj, 412 412 const struct attribute *attr, 413 - const void *ns); 413 + const struct ns_common *ns); 414 414 int __must_check sysfs_create_files(struct kobject *kobj, 415 415 const struct attribute * const *attr); 416 416 int __must_check sysfs_chmod_file(struct kobject *kobj, ··· 419 419 const struct attribute *attr); 420 420 void sysfs_unbreak_active_protection(struct kernfs_node *kn); 421 421 void sysfs_remove_file_ns(struct kobject *kobj, const struct attribute *attr, 422 - const void *ns); 422 + const struct ns_common *ns); 423 423 bool sysfs_remove_file_self(struct kobject *kobj, const struct attribute *attr); 424 424 void sysfs_remove_files(struct kobject *kobj, const struct attribute * const *attr); 425 425 ··· 437 437 438 438 int sysfs_rename_link_ns(struct kobject *kobj, struct kobject *target, 439 439 const char *old_name, const char *new_name, 440 - const void *new_ns); 440 + const struct ns_common *new_ns); 441 441 442 442 void sysfs_delete_link(struct kobject *dir, struct kobject *targ, 443 443 const char *name); ··· 502 502 503 503 #else /* CONFIG_SYSFS */ 504 504 505 - static inline int sysfs_create_dir_ns(struct 
kobject *kobj, const void *ns) 505 + static inline int sysfs_create_dir_ns(struct kobject *kobj, const struct ns_common *ns) 506 506 { 507 507 return 0; 508 508 } ··· 512 512 } 513 513 514 514 static inline int sysfs_rename_dir_ns(struct kobject *kobj, 515 - const char *new_name, const void *new_ns) 515 + const char *new_name, const struct ns_common *new_ns) 516 516 { 517 517 return 0; 518 518 } 519 519 520 520 static inline int sysfs_move_dir_ns(struct kobject *kobj, 521 521 struct kobject *new_parent_kobj, 522 - const void *new_ns) 522 + const struct ns_common *new_ns) 523 523 { 524 524 return 0; 525 525 } ··· 537 537 538 538 static inline int sysfs_create_file_ns(struct kobject *kobj, 539 539 const struct attribute *attr, 540 - const void *ns) 540 + const struct ns_common *ns) 541 541 { 542 542 return 0; 543 543 } ··· 567 567 568 568 static inline void sysfs_remove_file_ns(struct kobject *kobj, 569 569 const struct attribute *attr, 570 - const void *ns) 570 + const struct ns_common *ns) 571 571 { 572 572 } 573 573 ··· 612 612 613 613 static inline int sysfs_rename_link_ns(struct kobject *k, struct kobject *t, 614 614 const char *old_name, 615 - const char *new_name, const void *ns) 615 + const char *new_name, const struct ns_common *ns) 616 616 { 617 617 return 0; 618 618 }
+1 -1
include/net/ip_tunnels.h
··· 32 32 * recursion involves route lookups and full IP output, consuming much 33 33 * more stack per level, so a lower limit is needed. 34 34 */ 35 - #define IP_TUNNEL_RECURSION_LIMIT 4 35 + #define IP_TUNNEL_RECURSION_LIMIT 5 36 36 37 37 /* Keep error state on tunnel for 30 sec */ 38 38 #define IPTUNNEL_ERR_TIMEO (30*HZ)
+4 -4
include/net/net_namespace.h
··· 264 264 #define ipx_unregister_sysctl() 265 265 #endif 266 266 267 - #ifdef CONFIG_NET_NS 268 - void __put_net(struct net *net); 269 - 270 267 static inline struct net *to_net_ns(struct ns_common *ns) 271 268 { 272 269 return container_of(ns, struct net, ns); 273 270 } 271 + 272 + #ifdef CONFIG_NET_NS 273 + void __put_net(struct net *net); 274 274 275 275 /* Try using get_net_track() instead */ 276 276 static inline struct net *get_net(struct net *net) ··· 309 309 return ns_ref_read(net) != 0; 310 310 } 311 311 312 - void net_drop_ns(void *); 312 + void net_drop_ns(struct ns_common *); 313 313 void net_passive_dec(struct net *net); 314 314 315 315 #else
+1
include/net/netfilter/nf_conntrack_timeout.h
··· 14 14 struct nf_ct_timeout { 15 15 __u16 l3num; 16 16 const struct nf_conntrack_l4proto *l4proto; 17 + struct rcu_head rcu; 17 18 char data[]; 18 19 }; 19 20
-1
include/net/netfilter/nf_queue.h
··· 23 23 struct nf_hook_state state; 24 24 bool nf_ct_is_unconfirmed; 25 25 u16 size; /* sizeof(entry) + saved route keys */ 26 - u16 queue_num; 27 26 28 27 /* extra space to store route keys */ 29 28 };
+1 -1
include/net/xdp_sock.h
··· 14 14 #include <linux/mm.h> 15 15 #include <net/sock.h> 16 16 17 - #define XDP_UMEM_SG_FLAG (1 << 1) 17 + #define XDP_UMEM_SG_FLAG BIT(3) 18 18 19 19 struct net_device; 20 20 struct xsk_queue;
+22 -1
include/net/xdp_sock_drv.h
··· 41 41 return XDP_PACKET_HEADROOM + pool->headroom; 42 42 } 43 43 44 + static inline u32 xsk_pool_get_tailroom(bool mbuf) 45 + { 46 + return mbuf ? SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) : 0; 47 + } 48 + 44 49 static inline u32 xsk_pool_get_chunk_size(struct xsk_buff_pool *pool) 45 50 { 46 51 return pool->chunk_size; 47 52 } 48 53 49 - static inline u32 xsk_pool_get_rx_frame_size(struct xsk_buff_pool *pool) 54 + static inline u32 __xsk_pool_get_rx_frame_size(struct xsk_buff_pool *pool) 50 55 { 51 56 return xsk_pool_get_chunk_size(pool) - xsk_pool_get_headroom(pool); 57 + } 58 + 59 + static inline u32 xsk_pool_get_rx_frame_size(struct xsk_buff_pool *pool) 60 + { 61 + u32 frame_size = __xsk_pool_get_rx_frame_size(pool); 62 + struct xdp_umem *umem = pool->umem; 63 + bool mbuf; 64 + 65 + /* Reserve tailroom only for zero-copy pools that opted into 66 + * multi-buffer. The reserved area is used for skb_shared_info, 67 + * matching the XDP core's xdp_data_hard_end() layout. 68 + */ 69 + mbuf = pool->dev && (umem->flags & XDP_UMEM_SG_FLAG); 70 + frame_size -= xsk_pool_get_tailroom(mbuf); 71 + 72 + return ALIGN_DOWN(frame_size, 128); 52 73 } 53 74 54 75 static inline u32 xsk_pool_get_rx_frag_step(struct xsk_buff_pool *pool)
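The reworked `xsk_pool_get_rx_frame_size()` above computes the usable RX frame as chunk size minus headroom, minus `skb_shared_info` tailroom for multi-buffer zero-copy pools, rounded down to a 128-byte boundary. A minimal user-space sketch of that arithmetic (`XDP_PACKET_HEADROOM` and the 128-byte rounding come from the kernel and the hunk above; the 320-byte shared-info size and the helper names are assumptions for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define XDP_PACKET_HEADROOM 256u  /* real UAPI value */
#define SHARED_INFO_SIZE    320u  /* stand-in for SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) */

static uint32_t align_down(uint32_t x, uint32_t a)
{
	return x - (x % a);
}

/* Mirrors the shape of the patched xsk_pool_get_rx_frame_size():
 * chunk minus headroom, minus tailroom when multi-buffer is enabled,
 * rounded down to 128 bytes. */
static uint32_t rx_frame_size(uint32_t chunk_size, uint32_t pool_headroom, bool mbuf)
{
	uint32_t frame = chunk_size - (XDP_PACKET_HEADROOM + pool_headroom);

	frame -= mbuf ? SHARED_INFO_SIZE : 0;
	return align_down(frame, 128);
}
```

With a 4 KiB chunk and no extra pool headroom, enabling multi-buffer under these assumed sizes shrinks the frame from 3840 to 3456 bytes.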
+5
include/sound/sdca_interrupts.h
··· 69 69 int sdca_irq_request(struct device *dev, struct sdca_interrupt_info *interrupt_info, 70 70 int sdca_irq, const char *name, irq_handler_t handler, 71 71 void *data); 72 + void sdca_irq_free(struct device *dev, struct sdca_interrupt_info *interrupt_info, 73 + int sdca_irq, const char *name, void *data); 72 74 int sdca_irq_data_populate(struct device *dev, struct regmap *function_regmap, 73 75 struct snd_soc_component *component, 74 76 struct sdca_function_data *function, ··· 82 80 struct sdca_interrupt_info *info); 83 81 int sdca_irq_populate(struct sdca_function_data *function, 84 82 struct snd_soc_component *component, 83 + struct sdca_interrupt_info *info); 84 + void sdca_irq_cleanup(struct device *dev, 85 + struct sdca_function_data *function, 85 86 struct sdca_interrupt_info *info); 86 87 struct sdca_interrupt_info *sdca_irq_allocate(struct device *dev, 87 88 struct regmap *regmap, int irq);
+3 -1
include/trace/events/rxrpc.h
··· 185 185 EM(rxrpc_skb_put_input, "PUT input ") \ 186 186 EM(rxrpc_skb_put_jumbo_subpacket, "PUT jumbo-sub") \ 187 187 EM(rxrpc_skb_put_oob, "PUT oob ") \ 188 + EM(rxrpc_skb_put_old_response, "PUT old-resp ") \ 188 189 EM(rxrpc_skb_put_purge, "PUT purge ") \ 189 190 EM(rxrpc_skb_put_purge_oob, "PUT purge-oob") \ 190 191 EM(rxrpc_skb_put_response, "PUT response ") \ ··· 348 347 EM(rxrpc_call_see_release, "SEE release ") \ 349 348 EM(rxrpc_call_see_userid_exists, "SEE u-exists") \ 350 349 EM(rxrpc_call_see_waiting_call, "SEE q-conn ") \ 351 - E_(rxrpc_call_see_zap, "SEE zap ") 350 + E_(rxrpc_call_see_still_live, "SEE !still-l") 352 351 353 352 #define rxrpc_txqueue_traces \ 354 353 EM(rxrpc_txqueue_await_reply, "AWR") \ ··· 521 520 #define rxrpc_req_ack_traces \ 522 521 EM(rxrpc_reqack_ack_lost, "ACK-LOST ") \ 523 522 EM(rxrpc_reqack_app_stall, "APP-STALL ") \ 523 + EM(rxrpc_reqack_jumbo_win, "JUMBO-WIN ") \ 524 524 EM(rxrpc_reqack_more_rtt, "MORE-RTT ") \ 525 525 EM(rxrpc_reqack_no_srv_last, "NO-SRVLAST") \ 526 526 EM(rxrpc_reqack_old_rtt, "OLD-RTT ") \
+4
include/uapi/linux/input-event-codes.h
··· 643 643 #define KEY_EPRIVACY_SCREEN_ON 0x252 644 644 #define KEY_EPRIVACY_SCREEN_OFF 0x253 645 645 646 + #define KEY_ACTION_ON_SELECTION 0x254 /* AL Action on Selection (HUTRR119) */ 647 + #define KEY_CONTEXTUAL_INSERT 0x255 /* AL Contextual Insertion (HUTRR119) */ 648 + #define KEY_CONTEXTUAL_QUERY 0x256 /* AL Contextual Query (HUTRR119) */ 649 + 646 650 #define KEY_KBDINPUTASSIST_PREV 0x260 647 651 #define KEY_KBDINPUTASSIST_NEXT 0x261 648 652 #define KEY_KBDINPUTASSIST_PREVGROUP 0x262
+6 -5
include/uapi/linux/kvm.h
··· 11 11 #include <linux/const.h> 12 12 #include <linux/types.h> 13 13 #include <linux/compiler.h> 14 + #include <linux/stddef.h> 14 15 #include <linux/ioctl.h> 15 16 #include <asm/kvm.h> 16 17 ··· 543 542 544 543 struct kvm_coalesced_mmio_ring { 545 544 __u32 first, last; 546 - struct kvm_coalesced_mmio coalesced_mmio[]; 545 + __DECLARE_FLEX_ARRAY(struct kvm_coalesced_mmio, coalesced_mmio); 547 546 }; 548 547 549 548 #define KVM_COALESCED_MMIO_MAX \ ··· 593 592 /* for KVM_SET_SIGNAL_MASK */ 594 593 struct kvm_signal_mask { 595 594 __u32 len; 596 - __u8 sigset[]; 595 + __DECLARE_FLEX_ARRAY(__u8, sigset); 597 596 }; 598 597 599 598 /* for KVM_TPR_ACCESS_REPORTING */ ··· 1052 1051 struct kvm_irq_routing { 1053 1052 __u32 nr; 1054 1053 __u32 flags; 1055 - struct kvm_irq_routing_entry entries[]; 1054 + __DECLARE_FLEX_ARRAY(struct kvm_irq_routing_entry, entries); 1056 1055 }; 1057 1056 1058 1057 #define KVM_IRQFD_FLAG_DEASSIGN (1 << 0) ··· 1143 1142 1144 1143 struct kvm_reg_list { 1145 1144 __u64 n; /* number of regs */ 1146 - __u64 reg[]; 1145 + __DECLARE_FLEX_ARRAY(__u64, reg); 1147 1146 }; 1148 1147 1149 1148 struct kvm_one_reg { ··· 1609 1608 #ifdef __KERNEL__ 1610 1609 char name[KVM_STATS_NAME_SIZE]; 1611 1610 #else 1612 - char name[]; 1611 + __DECLARE_FLEX_ARRAY(char, name); 1613 1612 #endif 1614 1613 }; 1615 1614
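The kvm.h hunks replace bare flexible-array members with `__DECLARE_FLEX_ARRAY()` because a flexible array may not be the only member of a struct in ISO C; the UAPI macro wraps the array together with an empty struct so the declaration stays valid for the header's consumers. A simplified stand-in for the macro (GNU C, as in the kernel; the struct name below is hypothetical):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified version of the UAPI helper: pairing the flexible array
 * with a zero-size empty struct inside an anonymous aggregate keeps
 * the declaration legal even when the array would otherwise be the
 * struct's only member. */
#define DECLARE_FLEX_ARRAY(TYPE, NAME)		\
	struct {				\
		struct { } __empty_ ## NAME;	\
		TYPE NAME[];			\
	}

/* Hypothetical mirror of struct kvm_signal_mask from the hunk above. */
struct kvm_signal_mask_like {
	unsigned int len;
	DECLARE_FLEX_ARRAY(unsigned char, sigset);
};
```

The empty struct occupies no space, so `sigset` still starts immediately after `len`.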
+15 -22
include/uapi/linux/prctl.h
··· 397 397 # define PR_RSEQ_SLICE_EXT_ENABLE 0x01 398 398 399 399 /* 400 - * Get the current indirect branch tracking configuration for the current 401 - * thread, this will be the value configured via PR_SET_INDIR_BR_LP_STATUS. 400 + * Get or set the control flow integrity (CFI) configuration for the 401 + * current thread. 402 + * 403 + * Some per-thread control flow integrity settings are not yet 404 + * controlled through this prctl(); see for example 405 + * PR_{GET,SET,LOCK}_SHADOW_STACK_STATUS 402 406 */ 403 - #define PR_GET_INDIR_BR_LP_STATUS 80 404 - 407 + #define PR_GET_CFI 80 408 + #define PR_SET_CFI 81 405 409 /* 406 - * Set the indirect branch tracking configuration. PR_INDIR_BR_LP_ENABLE will 407 - * enable cpu feature for user thread, to track all indirect branches and ensure 408 - * they land on arch defined landing pad instruction. 409 - * x86 - If enabled, an indirect branch must land on an ENDBRANCH instruction. 410 - * arch64 - If enabled, an indirect branch must land on a BTI instruction. 411 - * riscv - If enabled, an indirect branch must land on an lpad instruction. 412 - * PR_INDIR_BR_LP_DISABLE will disable feature for user thread and indirect 413 - * branches will no more be tracked by cpu to land on arch defined landing pad 414 - * instruction. 410 + * Forward-edge CFI variants (excluding ARM64 BTI, which has its own 411 + * prctl()s). 415 412 */ 416 - #define PR_SET_INDIR_BR_LP_STATUS 81 417 - # define PR_INDIR_BR_LP_ENABLE (1UL << 0) 418 - 419 - /* 420 - * Prevent further changes to the specified indirect branch tracking 421 - * configuration. All bits may be locked via this call, including 422 - * undefined bits. 423 - */ 424 - #define PR_LOCK_INDIR_BR_LP_STATUS 82 413 + #define PR_CFI_BRANCH_LANDING_PADS 0 414 + /* Return and control values for PR_{GET,SET}_CFI */ 415 + # define PR_CFI_ENABLE _BITUL(0) 416 + # define PR_CFI_DISABLE _BITUL(1) 417 + # define PR_CFI_LOCK _BITUL(2) 425 418 426 419 #endif /* _LINUX_PRCTL_H */
+1
kernel/dma/debug.c
··· 615 615 } else if (rc == -EEXIST && 616 616 !(attrs & DMA_ATTR_SKIP_CPU_SYNC) && 617 617 !(entry->is_cache_clean && overlap_cache_clean) && 618 + dma_get_cache_alignment() >= L1_CACHE_BYTES && 618 619 !(IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && 619 620 is_swiotlb_active(entry->dev))) { 620 621 err_printk(entry->dev, entry,
+7 -2
kernel/liveupdate/luo_session.c
··· 558 558 } 559 559 560 560 scoped_guard(mutex, &session->mutex) { 561 - luo_file_deserialize(&session->file_set, 562 - &sh->ser[i].file_set_ser); 561 + err = luo_file_deserialize(&session->file_set, 562 + &sh->ser[i].file_set_ser); 563 + } 564 + if (err) { 565 + pr_warn("Failed to deserialize files for session [%s] %pe\n", 566 + session->name, ERR_PTR(err)); 567 + return err; 563 568 } 564 569 } 565 570
+1 -1
kernel/sched/deadline.c
··· 1027 1027 if (dl_time_before(dl_se->deadline, rq_clock(rq)) || 1028 1028 dl_entity_overflow(dl_se, rq_clock(rq))) { 1029 1029 1030 - if (unlikely(!dl_is_implicit(dl_se) && 1030 + if (unlikely((!dl_is_implicit(dl_se) || dl_se->dl_defer) && 1031 1031 !dl_time_before(dl_se->deadline, rq_clock(rq)) && 1032 1032 !is_dl_boosted(dl_se))) { 1033 1033 update_dl_revised_wakeup(dl_se, rq);
+17 -13
kernel/sys.c
··· 2388 2388 return -EINVAL; 2389 2389 } 2390 2390 2391 - int __weak arch_get_indir_br_lp_status(struct task_struct *t, unsigned long __user *status) 2391 + int __weak arch_prctl_get_branch_landing_pad_state(struct task_struct *t, 2392 + unsigned long __user *state) 2392 2393 { 2393 2394 return -EINVAL; 2394 2395 } 2395 2396 2396 - int __weak arch_set_indir_br_lp_status(struct task_struct *t, unsigned long status) 2397 + int __weak arch_prctl_set_branch_landing_pad_state(struct task_struct *t, unsigned long state) 2397 2398 { 2398 2399 return -EINVAL; 2399 2400 } 2400 2401 2401 - int __weak arch_lock_indir_br_lp_status(struct task_struct *t, unsigned long status) 2402 + int __weak arch_prctl_lock_branch_landing_pad_state(struct task_struct *t) 2402 2403 { 2403 2404 return -EINVAL; 2404 2405 } ··· 2889 2888 return -EINVAL; 2890 2889 error = rseq_slice_extension_prctl(arg2, arg3); 2891 2890 break; 2892 - case PR_GET_INDIR_BR_LP_STATUS: 2893 - if (arg3 || arg4 || arg5) 2891 + case PR_GET_CFI: 2892 + if (arg2 != PR_CFI_BRANCH_LANDING_PADS) 2894 2893 return -EINVAL; 2895 - error = arch_get_indir_br_lp_status(me, (unsigned long __user *)arg2); 2894 + if (arg4 || arg5) 2895 + return -EINVAL; 2896 + error = arch_prctl_get_branch_landing_pad_state(me, (unsigned long __user *)arg3); 2896 2897 break; 2897 - case PR_SET_INDIR_BR_LP_STATUS: 2898 - if (arg3 || arg4 || arg5) 2898 + case PR_SET_CFI: 2899 + if (arg2 != PR_CFI_BRANCH_LANDING_PADS) 2899 2900 return -EINVAL; 2900 - error = arch_set_indir_br_lp_status(me, arg2); 2901 - break; 2902 - case PR_LOCK_INDIR_BR_LP_STATUS: 2903 - if (arg3 || arg4 || arg5) 2901 + if (arg4 || arg5) 2904 2902 return -EINVAL; 2905 - error = arch_lock_indir_br_lp_status(me, arg2); 2903 + error = arch_prctl_set_branch_landing_pad_state(me, arg3); 2904 + if (error) 2905 + break; 2906 + if (arg3 & PR_CFI_LOCK && !(arg3 & PR_CFI_DISABLE)) 2907 + error = arch_prctl_lock_branch_landing_pad_state(me); 2906 2908 break; 2907 2909 default: 2908 2910 trace_task_prctl_unknown(option, arg2, arg3, arg4, arg5);
+19 -8
kernel/time/clockevents.c
··· 172 172 { 173 173 clockevents_switch_state(dev, CLOCK_EVT_STATE_SHUTDOWN); 174 174 dev->next_event = KTIME_MAX; 175 + dev->next_event_forced = 0; 175 176 } 176 177 177 178 /** ··· 306 305 { 307 306 unsigned long long clc; 308 307 int64_t delta; 309 - int rc; 310 308 311 309 if (WARN_ON_ONCE(expires < 0)) 312 310 return -ETIME; ··· 324 324 return dev->set_next_ktime(expires, dev); 325 325 326 326 delta = ktime_to_ns(ktime_sub(expires, ktime_get())); 327 - if (delta <= 0) 328 - return force ? clockevents_program_min_delta(dev) : -ETIME; 329 327 330 - delta = min(delta, (int64_t) dev->max_delta_ns); 331 - delta = max(delta, (int64_t) dev->min_delta_ns); 328 + /* Required for tick_periodic() during early boot */ 329 + if (delta <= 0 && !force) 330 + return -ETIME; 332 331 333 - clc = ((unsigned long long) delta * dev->mult) >> dev->shift; 334 - rc = dev->set_next_event((unsigned long) clc, dev); 332 + if (delta > (int64_t)dev->min_delta_ns) { 333 + delta = min(delta, (int64_t) dev->max_delta_ns); 334 + clc = ((unsigned long long) delta * dev->mult) >> dev->shift; 335 + if (!dev->set_next_event((unsigned long) clc, dev)) 336 + return 0; 337 + } 335 338 336 - return (rc && force) ? clockevents_program_min_delta(dev) : rc; 339 + if (dev->next_event_forced) 340 + return 0; 341 + 342 + if (dev->set_next_event(dev->min_delta_ticks, dev)) { 343 + if (!force || clockevents_program_min_delta(dev)) 344 + return -ETIME; 345 + } 346 + dev->next_event_forced = 1; 347 + return 0; 337 348 } 338 349 339 350 /*
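The restructured `clockevents_program_event()` still rests on the longstanding nanoseconds-to-ticks conversion: clamp the delta to the device's programmable range, then scale it with the device's mult/shift pair (`ticks = ns * mult >> shift`). A self-contained sketch of just that conversion (the mult/shift values used in testing are arbitrary, not from any real device):

```c
#include <assert.h>
#include <stdint.h>

/* Clamp a requested delta to the device's [min, max] programmable
 * window, then convert nanoseconds to hardware ticks using the
 * device's precomputed mult/shift pair. */
static uint64_t ns_to_ticks(int64_t delta_ns, int64_t min_ns, int64_t max_ns,
			    uint32_t mult, uint32_t shift)
{
	if (delta_ns > max_ns)
		delta_ns = max_ns;
	if (delta_ns < min_ns)
		delta_ns = min_ns;
	return ((uint64_t)delta_ns * mult) >> shift;
}
```

With `mult == 1 << shift` the conversion is the identity, which makes the clamping behaviour easy to see in isolation.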
+1
kernel/time/hrtimer.c
··· 1888 1888 BUG_ON(!cpu_base->hres_active); 1889 1889 cpu_base->nr_events++; 1890 1890 dev->next_event = KTIME_MAX; 1891 + dev->next_event_forced = 0; 1891 1892 1892 1893 raw_spin_lock_irqsave(&cpu_base->lock, flags); 1893 1894 entry_time = now = hrtimer_update_base(cpu_base);
+7 -1
kernel/time/tick-broadcast.c
··· 76 76 */ 77 77 static void tick_broadcast_start_periodic(struct clock_event_device *bc) 78 78 { 79 - if (bc) 79 + if (bc) { 80 + bc->next_event_forced = 0; 80 81 tick_setup_periodic(bc, 1); 82 + } 81 83 } 82 84 83 85 /* ··· 405 403 bool bc_local; 406 404 407 405 raw_spin_lock(&tick_broadcast_lock); 406 + tick_broadcast_device.evtdev->next_event_forced = 0; 408 407 409 408 /* Handle spurious interrupts gracefully */ 410 409 if (clockevent_state_shutdown(tick_broadcast_device.evtdev)) { ··· 699 696 700 697 raw_spin_lock(&tick_broadcast_lock); 701 698 dev->next_event = KTIME_MAX; 699 + tick_broadcast_device.evtdev->next_event_forced = 0; 702 700 next_event = KTIME_MAX; 703 701 cpumask_clear(tmpmask); 704 702 now = ktime_get(); ··· 1067 1063 1068 1064 1069 1065 bc->event_handler = tick_handle_oneshot_broadcast; 1066 + bc->next_event_forced = 0; 1070 1067 bc->next_event = KTIME_MAX; 1071 1068 1072 1069 /* ··· 1180 1175 } 1181 1176 1182 1177 /* This moves the broadcast assignment to this CPU: */ 1178 + bc->next_event_forced = 0; 1183 1179 clockevents_program_event(bc, bc->next_event, 1); 1184 1180 } 1185 1181 raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
+1
kernel/time/tick-common.c
··· 110 110 int cpu = smp_processor_id(); 111 111 ktime_t next = dev->next_event; 112 112 113 + dev->next_event_forced = 0; 113 114 tick_periodic(cpu); 114 115 115 116 /*
+2 -1
kernel/time/tick-sched.c
··· 345 345 int val = atomic_read(dep); 346 346 347 347 if (likely(!tracepoint_enabled(tick_stop))) 348 - return !val; 348 + return !!val; 349 349 350 350 if (val & TICK_DEP_MASK_POSIX_TIMER) { 351 351 trace_tick_stop(0, TICK_DEP_MASK_POSIX_TIMER); ··· 1513 1513 struct tick_sched *ts = this_cpu_ptr(&tick_cpu_sched); 1514 1514 1515 1515 dev->next_event = KTIME_MAX; 1516 + dev->next_event_forced = 0; 1516 1517 1517 1518 if (likely(tick_nohz_handler(&ts->sched_timer) == HRTIMER_RESTART)) 1518 1519 tick_program_event(hrtimer_get_expires(&ts->sched_timer), 1);
+1 -1
kernel/trace/trace_probe.c
··· 1068 1068 { 1069 1069 size_t len = strlen(str); 1070 1070 1071 - if (str[len - 1] != '"') { 1071 + if (!len || str[len - 1] != '"') { 1072 1072 trace_probe_log_err(offs + len, IMMSTR_NO_CLOSE); 1073 1073 return -EINVAL; 1074 1074 }
+13 -1
kernel/workqueue.c
··· 1849 1849 raw_spin_lock_irq(&pwq->pool->lock); 1850 1850 if (pwq->plugged) { 1851 1851 pwq->plugged = false; 1852 - if (pwq_activate_first_inactive(pwq, true)) 1852 + if (pwq_activate_first_inactive(pwq, true)) { 1853 + /* 1854 + * While plugged, queueing skips activation which 1855 + * includes bumping the nr_active count and adding the 1856 + * pwq to nna->pending_pwqs if the count can't be 1857 + * obtained. We need to restore both for the pwq being 1858 + * unplugged. The first call activates the first 1859 + * inactive work item and the second, if there are more 1860 + * inactive, puts the pwq on pending_pwqs. 1861 + */ 1862 + pwq_activate_first_inactive(pwq, false); 1863 + 1853 1864 kick_pool(pwq->pool); 1865 + } 1854 1866 } 1855 1867 raw_spin_unlock_irq(&pwq->pool->lock); 1856 1868 }
+4 -4
lib/kobject.c
··· 27 27 * and thus @kobj should have a namespace tag associated with it. Returns 28 28 * %NULL otherwise. 29 29 */ 30 - const void *kobject_namespace(const struct kobject *kobj) 30 + const struct ns_common *kobject_namespace(const struct kobject *kobj) 31 31 { 32 32 const struct kobj_ns_type_operations *ns_ops = kobj_ns_ops(kobj); 33 33 ··· 1083 1083 return may_mount; 1084 1084 } 1085 1085 1086 - void *kobj_ns_grab_current(enum kobj_ns_type type) 1086 + struct ns_common *kobj_ns_grab_current(enum kobj_ns_type type) 1087 1087 { 1088 - void *ns = NULL; 1088 + struct ns_common *ns = NULL; 1089 1089 1090 1090 spin_lock(&kobj_ns_type_lock); 1091 1091 if (kobj_ns_type_is_valid(type) && kobj_ns_ops_tbl[type]) ··· 1096 1096 } 1097 1097 EXPORT_SYMBOL_GPL(kobj_ns_grab_current); 1098 1098 1099 - void kobj_ns_drop(enum kobj_ns_type type, void *ns) 1099 + void kobj_ns_drop(enum kobj_ns_type type, struct ns_common *ns) 1100 1100 { 1101 1101 spin_lock(&kobj_ns_type_lock); 1102 1102 if (kobj_ns_type_is_valid(type) &&
+8 -5
lib/kobject_uevent.c
··· 238 238 239 239 ops = kobj_ns_ops(kobj); 240 240 if (ops) { 241 - const void *init_ns, *ns; 241 + const struct ns_common *init_ns, *ns; 242 242 243 243 ns = kobj->ktype->namespace(kobj); 244 244 init_ns = ops->initial_ns(); ··· 388 388 389 389 #ifdef CONFIG_NET 390 390 const struct kobj_ns_type_operations *ops; 391 - const struct net *net = NULL; 391 + const struct ns_common *ns = NULL; 392 392 393 393 ops = kobj_ns_ops(kobj); 394 394 if (!ops && kobj->kset) { ··· 404 404 */ 405 405 if (ops && ops->netlink_ns && kobj->ktype->namespace) 406 406 if (ops->type == KOBJ_NS_TYPE_NET) 407 - net = kobj->ktype->namespace(kobj); 407 + ns = kobj->ktype->namespace(kobj); 408 408 409 - if (!net) 409 + if (!ns) 410 410 ret = uevent_net_broadcast_untagged(env, action_string, 411 411 devpath); 412 - else 412 + else { 413 + const struct net *net = container_of(ns, struct net, ns); 414 + 413 415 ret = uevent_net_broadcast_tagged(net->uevent_sock->sk, env, 414 416 action_string, devpath); 417 + } 415 418 #endif 416 419 417 420 return ret;
+7
mm/damon/stat.c
··· 245 245 { 246 246 int err; 247 247 248 + if (damon_stat_context) { 249 + if (damon_is_running(damon_stat_context)) 250 + return -EAGAIN; 251 + damon_destroy_ctx(damon_stat_context); 252 + } 253 + 248 254 damon_stat_context = damon_stat_build_ctx(); 249 255 if (!damon_stat_context) 250 256 return -ENOMEM; ··· 267 261 { 268 262 damon_stop(&damon_stat_context, 1); 269 263 damon_destroy_ctx(damon_stat_context); 264 + damon_stat_context = NULL; 270 265 } 271 266 272 267 static int damon_stat_enabled_store(
+2 -1
mm/damon/sysfs.c
··· 1670 1670 repeat_call_control->data = kdamond; 1671 1671 repeat_call_control->repeat = true; 1672 1672 repeat_call_control->dealloc_on_cancel = true; 1673 - damon_call(ctx, repeat_call_control); 1673 + if (damon_call(ctx, repeat_call_control)) 1674 + kfree(repeat_call_control); 1674 1675 return err; 1675 1676 } 1676 1677
+8 -3
mm/filemap.c
··· 3883 3883 unsigned int nr_pages = 0, folio_type; 3884 3884 unsigned short mmap_miss = 0, mmap_miss_saved; 3885 3885 3886 + /* 3887 + * Recalculate end_pgoff based on file_end before calling 3888 + * next_uptodate_folio() to avoid races with concurrent 3889 + * truncation. 3890 + */ 3891 + file_end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE) - 1; 3892 + end_pgoff = min(end_pgoff, file_end); 3893 + 3886 3894 rcu_read_lock(); 3887 3895 folio = next_uptodate_folio(&xas, mapping, end_pgoff); 3888 3896 if (!folio) 3889 3897 goto out; 3890 - 3891 - file_end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE) - 1; 3892 - end_pgoff = min(end_pgoff, file_end); 3893 3898 3894 3899 /* 3895 3900 * Do not allow to map with PMD across i_size to preserve
+20
mm/memory_hotplug.c
··· 1209 1209 1210 1210 if (node_arg.nid >= 0) 1211 1211 node_set_state(nid, N_MEMORY); 1212 + /* 1213 + * Check whether we are adding normal memory to the node for the first 1214 + * time. 1215 + */ 1216 + if (!node_state(nid, N_NORMAL_MEMORY) && zone_idx(zone) <= ZONE_NORMAL) 1217 + node_set_state(nid, N_NORMAL_MEMORY); 1218 + 1212 1219 if (need_zonelists_rebuild) 1213 1220 build_all_zonelists(NULL); 1214 1221 ··· 1915 1908 unsigned long flags; 1916 1909 char *reason; 1917 1910 int ret; 1911 + unsigned long normal_pages = 0; 1912 + enum zone_type zt; 1918 1913 1919 1914 /* 1920 1915 * {on,off}lining is constrained to full memory sections (or more ··· 2064 2055 /* reinitialise watermarks and update pcp limits */ 2065 2056 init_per_zone_wmark_min(); 2066 2057 2058 + /* 2059 + * Check whether this operation removes the last normal memory from 2060 + * the node. We do this before clearing N_MEMORY to avoid the possible 2061 + * transient "!N_MEMORY && N_NORMAL_MEMORY" state. 2062 + */ 2063 + if (zone_idx(zone) <= ZONE_NORMAL) { 2064 + for (zt = 0; zt <= ZONE_NORMAL; zt++) 2065 + normal_pages += pgdat->node_zones[zt].present_pages; 2066 + if (!normal_pages) 2067 + node_clear_state(node, N_NORMAL_MEMORY); 2068 + } 2067 2069 /* 2068 2070 * Make sure to mark the node as memory-less before rebuilding the zone 2069 2071 * list. Otherwise this node would still appear in the fallback lists.
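The offline path above decides whether to clear `N_NORMAL_MEMORY` by summing `present_pages` over all zones at or below `ZONE_NORMAL` before touching `N_MEMORY`. A user-space sketch of that check (the zone enum values and sample data are illustrative; the real zone indices are configuration-dependent):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative zone indices; the kernel's actual layout depends on
 * CONFIG_ZONE_DMA/DMA32 etc. */
enum zone_type { ZONE_DMA, ZONE_DMA32, ZONE_NORMAL, ZONE_MOVABLE, MAX_NR_ZONES };

struct zone { unsigned long present_pages; };

/* Mirrors the patched offline check: the node keeps N_NORMAL_MEMORY
 * only while some zone at or below ZONE_NORMAL still has present
 * pages. */
static bool node_has_normal_memory(const struct zone zones[MAX_NR_ZONES])
{
	unsigned long normal_pages = 0;

	for (enum zone_type zt = 0; zt <= ZONE_NORMAL; zt++)
		normal_pages += zones[zt].present_pages;
	return normal_pages != 0;
}

/* Sample nodes: one with only movable memory left, one still holding
 * normal memory. */
static const struct zone movable_only[MAX_NR_ZONES] = { {0}, {0}, {0}, {4096} };
static const struct zone with_normal[MAX_NR_ZONES]  = { {0}, {0}, {512}, {4096} };
```

Doing this check before clearing `N_MEMORY`, as the hunk's comment notes, avoids a transient "`!N_MEMORY && N_NORMAL_MEMORY`" state.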
+21
mm/page-writeback.c
··· 1858 1858 break; 1859 1859 } 1860 1860 1861 + /* 1862 + * Unconditionally start background writeback if it's not 1863 + * already in progress. We need to do this because the global 1864 + * dirty threshold check above (nr_dirty > gdtc->bg_thresh) 1865 + * doesn't account for these cases: 1866 + * 1867 + * a) strictlimit BDIs: throttling is calculated using per-wb 1868 + * thresholds. The per-wb threshold can be exceeded even when 1869 + * nr_dirty < gdtc->bg_thresh 1870 + * 1871 + * b) memcg-based throttling: memcg uses its own dirty count and 1872 + * thresholds and can trigger throttling even when global 1873 + * nr_dirty < gdtc->bg_thresh 1874 + * 1875 + * Writeback needs to be started else the writer stalls in the 1876 + * throttle loop waiting for dirty pages to be written back 1877 + * while no writeback is running. 1878 + */ 1879 + if (unlikely(!writeback_in_progress(wb))) 1880 + wb_start_background_writeback(wb); 1881 + 1861 1882 mem_cgroup_flush_foreign(wb); 1862 1883 1863 1884 /*
+7
mm/vma.c
··· 2781 2781 if (map.charged) 2782 2782 vm_unacct_memory(map.charged); 2783 2783 abort_munmap: 2784 + /* 2785 + * This indicates that .mmap_prepare has set a new file, differing from 2786 + * desc->vm_file. But since we're aborting the operation, only the 2787 + * original file will be cleaned up. Ensure we clean up both. 2788 + */ 2789 + if (map.file_doesnt_need_get) 2790 + fput(map.file); 2784 2791 vms_abort_munmap_vmas(&map.vms, &map.mas_detach); 2785 2792 return error; 2786 2793 }
+18 -9
net/batman-adv/bridge_loop_avoidance.c
··· 2130 2130 struct batadv_bla_claim *claim) 2131 2131 { 2132 2132 const u8 *primary_addr = primary_if->net_dev->dev_addr; 2133 + struct batadv_bla_backbone_gw *backbone_gw; 2133 2134 u16 backbone_crc; 2134 2135 bool is_own; 2135 2136 void *hdr; ··· 2146 2145 2147 2146 genl_dump_check_consistent(cb, hdr); 2148 2147 2149 - is_own = batadv_compare_eth(claim->backbone_gw->orig, 2150 - primary_addr); 2148 + backbone_gw = batadv_bla_claim_get_backbone_gw(claim); 2151 2149 2152 - spin_lock_bh(&claim->backbone_gw->crc_lock); 2153 - backbone_crc = claim->backbone_gw->crc; 2154 - spin_unlock_bh(&claim->backbone_gw->crc_lock); 2150 + is_own = batadv_compare_eth(backbone_gw->orig, primary_addr); 2151 + 2152 + spin_lock_bh(&backbone_gw->crc_lock); 2153 + backbone_crc = backbone_gw->crc; 2154 + spin_unlock_bh(&backbone_gw->crc_lock); 2155 2155 2156 2156 if (is_own) 2157 2157 if (nla_put_flag(msg, BATADV_ATTR_BLA_OWN)) { 2158 2158 genlmsg_cancel(msg, hdr); 2159 - goto out; 2159 + goto put_backbone_gw; 2160 2160 } 2161 2161 2162 2162 if (nla_put(msg, BATADV_ATTR_BLA_ADDRESS, ETH_ALEN, claim->addr) || 2163 2163 nla_put_u16(msg, BATADV_ATTR_BLA_VID, claim->vid) || 2164 2164 nla_put(msg, BATADV_ATTR_BLA_BACKBONE, ETH_ALEN, 2165 - claim->backbone_gw->orig) || 2165 + backbone_gw->orig) || 2166 2166 nla_put_u16(msg, BATADV_ATTR_BLA_CRC, 2167 2167 backbone_crc)) { 2168 2168 genlmsg_cancel(msg, hdr); 2169 - goto out; 2169 + goto put_backbone_gw; 2170 2170 } 2171 2171 2172 2172 genlmsg_end(msg, hdr); 2173 2173 ret = 0; 2174 2174 2175 + put_backbone_gw: 2176 + batadv_backbone_gw_put(backbone_gw); 2175 2177 out: 2176 2178 return ret; 2177 2179 } ··· 2452 2448 bool batadv_bla_check_claim(struct batadv_priv *bat_priv, 2453 2449 u8 *addr, unsigned short vid) 2454 2450 { 2451 + struct batadv_bla_backbone_gw *backbone_gw; 2455 2452 struct batadv_bla_claim search_claim; 2456 2453 struct batadv_bla_claim *claim = NULL; 2457 2454 struct batadv_hard_iface *primary_if = NULL; ··· 2475 2470 * return 
false. 2476 2471 */ 2477 2472 if (claim) { 2478 - if (!batadv_compare_eth(claim->backbone_gw->orig, 2473 + backbone_gw = batadv_bla_claim_get_backbone_gw(claim); 2474 + 2475 + if (!batadv_compare_eth(backbone_gw->orig, 2479 2476 primary_if->net_dev->dev_addr)) 2480 2477 ret = false; 2478 + 2479 + batadv_backbone_gw_put(backbone_gw); 2481 2480 batadv_claim_put(claim); 2482 2481 } 2483 2482
+7 -2
net/batman-adv/translation-table.c
··· 798 798 { 799 799 u16 num_vlan = 0; 800 800 u16 num_entries = 0; 801 - u16 change_offset; 802 - u16 tvlv_len; 801 + u16 tvlv_len = 0; 802 + unsigned int change_offset; 803 803 struct batadv_tvlv_tt_vlan_data *tt_vlan; 804 804 struct batadv_orig_node_vlan *vlan; 805 805 u8 *tt_change_ptr; ··· 815 815 /* if tt_len is negative, allocate the space needed by the full table */ 816 816 if (*tt_len < 0) 817 817 *tt_len = batadv_tt_len(num_entries); 818 + 819 + if (change_offset > U16_MAX || *tt_len > U16_MAX - change_offset) { 820 + *tt_len = 0; 821 + goto out; 822 + } 818 823 819 824 tvlv_len = *tt_len; 820 825 tvlv_len += change_offset;
+6
net/bridge/br_fdb.c
··· 597 597 dev = br->dev; 598 598 } 599 599 600 + if (!vg) 601 + return; 602 + 600 603 list_for_each_entry(v, &vg->vlan_list, vlist) 601 604 br_fdb_find_delete_local(br, p, dev->dev_addr, v->vid); 602 605 } ··· 632 629 vg = br_vlan_group(br); 633 630 dev = br->dev; 634 631 } 632 + 633 + if (!vg) 634 + return 0; 635 635 636 636 list_for_each_entry(v, &vg->vlan_list, vlist) { 637 637 if (!br_vlan_should_use(v))
+25 -25
net/core/net-sysfs.c
··· 1181 1181 netdev_put(queue->dev, &queue->dev_tracker); 1182 1182 } 1183 1183 1184 - static const void *rx_queue_namespace(const struct kobject *kobj) 1184 + static const struct ns_common *rx_queue_namespace(const struct kobject *kobj) 1185 1185 { 1186 1186 struct netdev_rx_queue *queue = to_rx_queue(kobj); 1187 1187 struct device *dev = &queue->dev->dev; 1188 - const void *ns = NULL; 1189 1188 1190 1189 if (dev->class && dev->class->namespace) 1191 - ns = dev->class->namespace(dev); 1190 + return dev->class->namespace(dev); 1192 1191 1193 - return ns; 1192 + return NULL; 1194 1193 } 1195 1194 1196 1195 static void rx_queue_get_ownership(const struct kobject *kobj, 1197 1196 kuid_t *uid, kgid_t *gid) 1198 1197 { 1199 - const struct net *net = rx_queue_namespace(kobj); 1198 + const struct ns_common *ns = rx_queue_namespace(kobj); 1200 1199 1201 - net_ns_get_ownership(net, uid, gid); 1200 + net_ns_get_ownership(ns ? container_of(ns, struct net, ns) : NULL, 1201 + uid, gid); 1202 1202 } 1203 1203 1204 1204 static const struct kobj_type rx_queue_ktype = { ··· 1931 1931 netdev_put(queue->dev, &queue->dev_tracker); 1932 1932 } 1933 1933 1934 - static const void *netdev_queue_namespace(const struct kobject *kobj) 1934 + static const struct ns_common *netdev_queue_namespace(const struct kobject *kobj) 1935 1935 { 1936 1936 struct netdev_queue *queue = to_netdev_queue(kobj); 1937 1937 struct device *dev = &queue->dev->dev; 1938 - const void *ns = NULL; 1939 1938 1940 1939 if (dev->class && dev->class->namespace) 1941 - ns = dev->class->namespace(dev); 1940 + return dev->class->namespace(dev); 1942 1941 1943 - return ns; 1942 + return NULL; 1944 1943 } 1945 1944 1946 1945 static void netdev_queue_get_ownership(const struct kobject *kobj, 1947 1946 kuid_t *uid, kgid_t *gid) 1948 1947 { 1949 - const struct net *net = netdev_queue_namespace(kobj); 1948 + const struct ns_common *ns = netdev_queue_namespace(kobj); 1950 1949 1951 - net_ns_get_ownership(net, uid, gid); 1950 + 
net_ns_get_ownership(ns ? container_of(ns, struct net, ns) : NULL, 1951 + uid, gid); 1952 1952 } 1953 1953 1954 1954 static const struct kobj_type netdev_queue_ktype = { ··· 2185 2185 return ns_capable(net->user_ns, CAP_SYS_ADMIN); 2186 2186 } 2187 2187 2188 - static void *net_grab_current_ns(void) 2188 + static struct ns_common *net_grab_current_ns(void) 2189 2189 { 2190 - struct net *ns = current->nsproxy->net_ns; 2190 + struct net *net = current->nsproxy->net_ns; 2191 2191 #ifdef CONFIG_NET_NS 2192 - if (ns) 2193 - refcount_inc(&ns->passive); 2192 + if (net) 2193 + refcount_inc(&net->passive); 2194 2194 #endif 2195 - return ns; 2195 + return net ? to_ns_common(net) : NULL; 2196 2196 } 2197 2197 2198 - static const void *net_initial_ns(void) 2198 + static const struct ns_common *net_initial_ns(void) 2199 2199 { 2200 - return &init_net; 2200 + return to_ns_common(&init_net); 2201 2201 } 2202 2202 2203 - static const void *net_netlink_ns(struct sock *sk) 2203 + static const struct ns_common *net_netlink_ns(struct sock *sk) 2204 2204 { 2205 - return sock_net(sk); 2205 + return to_ns_common(sock_net(sk)); 2206 2206 } 2207 2207 2208 2208 const struct kobj_ns_type_operations net_ns_type_operations = { ··· 2252 2252 kvfree(dev); 2253 2253 } 2254 2254 2255 - static const void *net_namespace(const struct device *d) 2255 + static const struct ns_common *net_namespace(const struct device *d) 2256 2256 { 2257 2257 const struct net_device *dev = to_net_dev(d); 2258 2258 2259 - return dev_net(dev); 2259 + return to_ns_common(dev_net(dev)); 2260 2260 } 2261 2261 2262 2262 static void net_get_ownership(const struct device *d, kuid_t *uid, kgid_t *gid) ··· 2402 2402 } 2403 2403 2404 2404 int netdev_class_create_file_ns(const struct class_attribute *class_attr, 2405 - const void *ns) 2405 + const struct ns_common *ns) 2406 2406 { 2407 2407 return class_create_file_ns(&net_class, class_attr, ns); 2408 2408 } 2409 2409 EXPORT_SYMBOL(netdev_class_create_file_ns); 2410 2410 2411 2411 
void netdev_class_remove_file_ns(const struct class_attribute *class_attr, 2412 - const void *ns) 2412 + const struct ns_common *ns) 2413 2413 { 2414 2414 class_remove_file_ns(&net_class, class_attr, ns); 2415 2415 }
+3 -5
net/core/net_namespace.c
··· 540 540 } 541 541 } 542 542 543 - void net_drop_ns(void *p) 543 + void net_drop_ns(struct ns_common *ns) 544 544 { 545 - struct net *net = (struct net *)p; 546 - 547 - if (net) 548 - net_passive_dec(net); 545 + if (ns) 546 + net_passive_dec(to_net_ns(ns)); 549 547 } 550 548 551 549 struct net *copy_net_ns(u64 flags,
+1 -1
net/core/netdev_rx_queue.c
··· 117 117 struct netdev_rx_queue *rxq; 118 118 int ret; 119 119 120 - if (!netdev_need_ops_lock(dev)) 120 + if (!qops) 121 121 return -EOPNOTSUPP; 122 122 123 123 if (rxq_idx >= dev->real_num_rx_queues) {
+27 -13
net/core/rtnetlink.c
··· 3894 3894 goto out; 3895 3895 } 3896 3896 3897 - static struct net *rtnl_get_peer_net(const struct rtnl_link_ops *ops, 3897 + static struct net *rtnl_get_peer_net(struct sk_buff *skb, 3898 + const struct rtnl_link_ops *ops, 3898 3899 struct nlattr *tbp[], 3899 3900 struct nlattr *data[], 3900 3901 struct netlink_ext_ack *extack) 3901 3902 { 3902 - struct nlattr *tb[IFLA_MAX + 1]; 3903 + struct nlattr *tb[IFLA_MAX + 1], **attrs; 3904 + struct net *net; 3903 3905 int err; 3904 3906 3905 - if (!data || !data[ops->peer_type]) 3906 - return rtnl_link_get_net_ifla(tbp); 3907 - 3908 - err = rtnl_nla_parse_ifinfomsg(tb, data[ops->peer_type], extack); 3909 - if (err < 0) 3910 - return ERR_PTR(err); 3911 - 3912 - if (ops->validate) { 3913 - err = ops->validate(tb, NULL, extack); 3907 + if (!data || !data[ops->peer_type]) { 3908 + attrs = tbp; 3909 + } else { 3910 + err = rtnl_nla_parse_ifinfomsg(tb, data[ops->peer_type], extack); 3914 3911 if (err < 0) 3915 3912 return ERR_PTR(err); 3913 + 3914 + if (ops->validate) { 3915 + err = ops->validate(tb, NULL, extack); 3916 + if (err < 0) 3917 + return ERR_PTR(err); 3918 + } 3919 + 3920 + attrs = tb; 3916 3921 } 3917 3922 3918 - return rtnl_link_get_net_ifla(tb); 3923 + net = rtnl_link_get_net_ifla(attrs); 3924 + if (IS_ERR_OR_NULL(net)) 3925 + return net; 3926 + 3927 + if (!netlink_ns_capable(skb, net->user_ns, CAP_NET_ADMIN)) { 3928 + put_net(net); 3929 + return ERR_PTR(-EPERM); 3930 + } 3931 + 3932 + return net; 3919 3933 } 3920 3934 3921 3935 static int __rtnl_newlink(struct sk_buff *skb, struct nlmsghdr *nlh, ··· 4068 4054 } 4069 4055 4070 4056 if (ops->peer_type) { 4071 - peer_net = rtnl_get_peer_net(ops, tb, data, extack); 4057 + peer_net = rtnl_get_peer_net(skb, ops, tb, data, extack); 4072 4058 if (IS_ERR(peer_net)) { 4073 4059 ret = PTR_ERR(peer_net); 4074 4060 goto put_ops;
+1 -4
net/core/skbuff.c
··· 1083 1083 1084 1084 static void skb_kfree_head(void *head, unsigned int end_offset) 1085 1085 { 1086 - if (end_offset == SKB_SMALL_HEAD_HEADROOM) 1087 - kmem_cache_free(net_hotdata.skb_small_head_cache, head); 1088 - else 1089 - kfree(head); 1086 + kfree(head); 1090 1087 } 1091 1088 1092 1089 static void skb_free_head(struct sk_buff *skb)
+1 -1
net/devlink/health.c
··· 1327 1327 if (sk) { 1328 1328 devlink_fmsg_pair_nest_start(fmsg, "sk"); 1329 1329 devlink_fmsg_obj_nest_start(fmsg); 1330 - devlink_fmsg_put(fmsg, "family", sk->sk_type); 1330 + devlink_fmsg_put(fmsg, "family", sk->sk_family); 1331 1331 devlink_fmsg_put(fmsg, "type", sk->sk_type); 1332 1332 devlink_fmsg_put(fmsg, "proto", sk->sk_protocol); 1333 1333 devlink_fmsg_obj_nest_end(fmsg);
+7
net/ipv4/icmp.c
··· 1346 1346 if (iio->ident.addr.ctype3_hdr.addrlen != sizeof(struct in6_addr)) 1347 1347 goto send_mal_query; 1348 1348 dev = ipv6_stub->ipv6_dev_find(net, &iio->ident.addr.ip_addr.ipv6_addr, dev); 1349 + /* 1350 + * If IPv6 identifier lookup is unavailable, silently 1351 + * discard the request instead of misreporting NO_IF. 1352 + */ 1353 + if (IS_ERR(dev)) 1354 + return false; 1355 + 1349 1356 dev_hold(dev); 1350 1357 break; 1351 1358 #endif
+28 -13
net/ipv4/nexthop.c
··· 902 902 goto nla_put_failure; 903 903 904 904 if (op_flags & NHA_OP_FLAG_DUMP_STATS && 905 - (nla_put_u32(skb, NHA_HW_STATS_ENABLE, nhg->hw_stats) || 906 - nla_put_nh_group_stats(skb, nh, op_flags))) 905 + nla_put_nh_group_stats(skb, nh, op_flags)) 907 906 goto nla_put_failure; 908 907 909 908 return 0; ··· 1003 1004 nla_total_size_64bit(8);/* NHA_RES_GROUP_UNBALANCED_TIME */ 1004 1005 } 1005 1006 1006 - static size_t nh_nlmsg_size_grp(struct nexthop *nh) 1007 + static size_t nh_nlmsg_size_grp(struct nexthop *nh, u32 op_flags) 1007 1008 { 1008 1009 struct nh_group *nhg = rtnl_dereference(nh->nh_grp); 1009 1010 size_t sz = sizeof(struct nexthop_grp) * nhg->num_nh; 1010 1011 size_t tot = nla_total_size(sz) + 1011 - nla_total_size(2); /* NHA_GROUP_TYPE */ 1012 + nla_total_size(2) + /* NHA_GROUP_TYPE */ 1013 + nla_total_size(0); /* NHA_FDB */ 1012 1014 1013 1015 if (nhg->resilient) 1014 1016 tot += nh_nlmsg_size_grp_res(nhg); 1017 + 1018 + if (op_flags & NHA_OP_FLAG_DUMP_STATS) { 1019 + tot += nla_total_size(0) + /* NHA_GROUP_STATS */ 1020 + nla_total_size(4); /* NHA_HW_STATS_ENABLE */ 1021 + tot += nhg->num_nh * 1022 + (nla_total_size(0) + /* NHA_GROUP_STATS_ENTRY */ 1023 + nla_total_size(4) + /* NHA_GROUP_STATS_ENTRY_ID */ 1024 + nla_total_size_64bit(8)); /* NHA_GROUP_STATS_ENTRY_PACKETS */ 1025 + 1026 + if (op_flags & NHA_OP_FLAG_DUMP_HW_STATS) { 1027 + tot += nhg->num_nh * 1028 + nla_total_size_64bit(8); /* NHA_GROUP_STATS_ENTRY_PACKETS_HW */ 1029 + tot += nla_total_size(4); /* NHA_HW_STATS_USED */ 1030 + } 1031 + } 1015 1032 1016 1033 return tot; 1017 1034 } ··· 1063 1048 return sz; 1064 1049 } 1065 1050 1066 - static size_t nh_nlmsg_size(struct nexthop *nh) 1051 + static size_t nh_nlmsg_size(struct nexthop *nh, u32 op_flags) 1067 1052 { 1068 1053 size_t sz = NLMSG_ALIGN(sizeof(struct nhmsg)); 1069 1054 1070 1055 sz += nla_total_size(4); /* NHA_ID */ 1071 1056 1072 1057 if (nh->is_group) 1073 - sz += nh_nlmsg_size_grp(nh) + 1058 + sz += nh_nlmsg_size_grp(nh, 
op_flags) + 1074 1059 nla_total_size(4) + /* NHA_OP_FLAGS */ 1075 1060 0; 1076 1061 else ··· 1086 1071 struct sk_buff *skb; 1087 1072 int err = -ENOBUFS; 1088 1073 1089 - skb = nlmsg_new(nh_nlmsg_size(nh), gfp_any()); 1074 + skb = nlmsg_new(nh_nlmsg_size(nh, 0), gfp_any()); 1090 1075 if (!skb) 1091 1076 goto errout; 1092 1077 ··· 3392 3377 if (err) 3393 3378 return err; 3394 3379 3395 - err = -ENOBUFS; 3396 - skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL); 3397 - if (!skb) 3398 - goto out; 3399 - 3400 3380 err = -ENOENT; 3401 3381 nh = nexthop_find_by_id(net, id); 3402 3382 if (!nh) 3403 - goto errout_free; 3383 + goto out; 3384 + 3385 + err = -ENOBUFS; 3386 + skb = nlmsg_new(nh_nlmsg_size(nh, op_flags), GFP_KERNEL); 3387 + if (!skb) 3388 + goto out; 3404 3389 3405 3390 err = nh_fill_node(skb, nh, RTM_NEWNEXTHOP, NETLINK_CB(in_skb).portid, 3406 3391 nlh->nlmsg_seq, 0, op_flags);
+4 -1
net/ipv4/xfrm4_input.c
··· 50 50 { 51 51 struct xfrm_offload *xo = xfrm_offload(skb); 52 52 struct iphdr *iph = ip_hdr(skb); 53 + struct net_device *dev = skb->dev; 53 54 54 55 iph->protocol = XFRM_MODE_SKB_CB(skb)->protocol; 55 56 ··· 74 73 } 75 74 76 75 NF_HOOK(NFPROTO_IPV4, NF_INET_PRE_ROUTING, 77 - dev_net(skb->dev), NULL, skb, skb->dev, NULL, 76 + dev_net(dev), NULL, skb, dev, NULL, 78 77 xfrm4_rcv_encap_finish); 78 + if (async) 79 + dev_put(dev); 79 80 return 0; 80 81 } 81 82
+21 -12
net/ipv6/ioam6.c
··· 710 710 struct ioam6_schema *sc, 711 711 unsigned int sclen, bool is_input) 712 712 { 713 - struct net_device *dev = skb_dst_dev(skb); 713 + /* Note: skb_dst_dev_rcu() can't be NULL at this point. */ 714 + struct net_device *dev = skb_dst_dev_rcu(skb); 715 + struct inet6_dev *i_skb_dev, *idev; 714 716 struct timespec64 ts; 715 717 ktime_t tstamp; 716 718 u64 raw64; ··· 723 721 724 722 data = trace->data + trace->remlen * 4 - trace->nodelen * 4 - sclen * 4; 725 723 724 + i_skb_dev = skb->dev ? __in6_dev_get(skb->dev) : NULL; 725 + idev = __in6_dev_get(dev); 726 + 726 727 /* hop_lim and node_id */ 727 728 if (trace->type.bit0) { 728 729 byte = ipv6_hdr(skb)->hop_limit; 729 730 if (is_input) 730 731 byte--; 731 732 732 - raw32 = dev_net(dev)->ipv6.sysctl.ioam6_id; 733 + raw32 = READ_ONCE(dev_net(dev)->ipv6.sysctl.ioam6_id); 733 734 734 735 *(__be32 *)data = cpu_to_be32((byte << 24) | raw32); 735 736 data += sizeof(__be32); ··· 740 735 741 736 /* ingress_if_id and egress_if_id */ 742 737 if (trace->type.bit1) { 743 - if (!skb->dev) 738 + if (!i_skb_dev) 744 739 raw16 = IOAM6_U16_UNAVAILABLE; 745 740 else 746 - raw16 = (__force u16)READ_ONCE(__in6_dev_get(skb->dev)->cnf.ioam6_id); 741 + raw16 = (__force u16)READ_ONCE(i_skb_dev->cnf.ioam6_id); 747 742 748 743 *(__be16 *)data = cpu_to_be16(raw16); 749 744 data += sizeof(__be16); 750 745 751 - if (dev->flags & IFF_LOOPBACK) 746 + if ((dev->flags & IFF_LOOPBACK) || !idev) 752 747 raw16 = IOAM6_U16_UNAVAILABLE; 753 748 else 754 - raw16 = (__force u16)READ_ONCE(__in6_dev_get(dev)->cnf.ioam6_id); 749 + raw16 = (__force u16)READ_ONCE(idev->cnf.ioam6_id); 755 750 756 751 *(__be16 *)data = cpu_to_be16(raw16); 757 752 data += sizeof(__be16); ··· 803 798 struct Qdisc *qdisc; 804 799 __u32 qlen, backlog; 805 800 806 - if (dev->flags & IFF_LOOPBACK) { 801 + if (dev->flags & IFF_LOOPBACK || 802 + skb_get_queue_mapping(skb) >= dev->num_tx_queues) { 807 803 *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE); 808 804 } else { 809 
805 queue = skb_get_tx_queue(dev, skb); 810 806 qdisc = rcu_dereference(queue->qdisc); 807 + 808 + spin_lock_bh(qdisc_lock(qdisc)); 811 809 qdisc_qstats_qlen_backlog(qdisc, &qlen, &backlog); 810 + spin_unlock_bh(qdisc_lock(qdisc)); 812 811 813 812 *(__be32 *)data = cpu_to_be32(backlog); 814 813 } ··· 831 822 if (is_input) 832 823 byte--; 833 824 834 - raw64 = dev_net(dev)->ipv6.sysctl.ioam6_id_wide; 825 + raw64 = READ_ONCE(dev_net(dev)->ipv6.sysctl.ioam6_id_wide); 835 826 836 827 *(__be64 *)data = cpu_to_be64(((u64)byte << 56) | raw64); 837 828 data += sizeof(__be64); ··· 839 830 840 831 /* ingress_if_id and egress_if_id (wide) */ 841 832 if (trace->type.bit9) { 842 - if (!skb->dev) 833 + if (!i_skb_dev) 843 834 raw32 = IOAM6_U32_UNAVAILABLE; 844 835 else 845 - raw32 = READ_ONCE(__in6_dev_get(skb->dev)->cnf.ioam6_id_wide); 836 + raw32 = READ_ONCE(i_skb_dev->cnf.ioam6_id_wide); 846 837 847 838 *(__be32 *)data = cpu_to_be32(raw32); 848 839 data += sizeof(__be32); 849 840 850 - if (dev->flags & IFF_LOOPBACK) 841 + if ((dev->flags & IFF_LOOPBACK) || !idev) 851 842 raw32 = IOAM6_U32_UNAVAILABLE; 852 843 else 853 - raw32 = READ_ONCE(__in6_dev_get(dev)->cnf.ioam6_id_wide); 844 + raw32 = READ_ONCE(idev->cnf.ioam6_id_wide); 854 845 855 846 *(__be32 *)data = cpu_to_be32(raw32); 856 847 data += sizeof(__be32);
+1 -2
net/ipv6/netfilter/ip6t_eui64.c
··· 22 22 unsigned char eui64[8]; 23 23 24 24 if (!(skb_mac_header(skb) >= skb->head && 25 - skb_mac_header(skb) + ETH_HLEN <= skb->data) && 26 - par->fragoff != 0) { 25 + skb_mac_header(skb) + ETH_HLEN <= skb->data)) { 27 26 par->hotdrop = true; 28 27 return false; 29 28 }
+23 -11
net/ipv6/seg6_iptunnel.c
··· 48 48 } 49 49 50 50 struct seg6_lwt { 51 - struct dst_cache cache; 51 + struct dst_cache cache_input; 52 + struct dst_cache cache_output; 52 53 struct seg6_iptunnel_encap tuninfo[]; 53 54 }; 54 55 ··· 489 488 slwt = seg6_lwt_lwtunnel(lwtst); 490 489 491 490 local_bh_disable(); 492 - dst = dst_cache_get(&slwt->cache); 491 + dst = dst_cache_get(&slwt->cache_input); 493 492 local_bh_enable(); 494 493 495 494 err = seg6_do_srh(skb, dst); ··· 505 504 /* cache only if we don't create a dst reference loop */ 506 505 if (!dst->error && lwtst != dst->lwtstate) { 507 506 local_bh_disable(); 508 - dst_cache_set_ip6(&slwt->cache, dst, 507 + dst_cache_set_ip6(&slwt->cache_input, dst, 509 508 &ipv6_hdr(skb)->saddr); 510 509 local_bh_enable(); 511 510 } ··· 565 564 slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate); 566 565 567 566 local_bh_disable(); 568 - dst = dst_cache_get(&slwt->cache); 567 + dst = dst_cache_get(&slwt->cache_output); 569 568 local_bh_enable(); 570 569 571 570 err = seg6_do_srh(skb, dst); ··· 592 591 /* cache only if we don't create a dst reference loop */ 593 592 if (orig_dst->lwtstate != dst->lwtstate) { 594 593 local_bh_disable(); 595 - dst_cache_set_ip6(&slwt->cache, dst, &fl6.saddr); 594 + dst_cache_set_ip6(&slwt->cache_output, dst, &fl6.saddr); 596 595 local_bh_enable(); 597 596 } 598 597 ··· 702 701 703 702 slwt = seg6_lwt_lwtunnel(newts); 704 703 705 - err = dst_cache_init(&slwt->cache, GFP_ATOMIC); 706 - if (err) { 707 - kfree(newts); 708 - return err; 709 - } 704 + err = dst_cache_init(&slwt->cache_input, GFP_ATOMIC); 705 + if (err) 706 + goto err_free_newts; 707 + 708 + err = dst_cache_init(&slwt->cache_output, GFP_ATOMIC); 709 + if (err) 710 + goto err_destroy_input; 710 711 711 712 memcpy(&slwt->tuninfo, tuninfo, tuninfo_len); 712 713 ··· 723 720 *ts = newts; 724 721 725 722 return 0; 723 + 724 + err_destroy_input: 725 + dst_cache_destroy(&slwt->cache_input); 726 + err_free_newts: 727 + kfree(newts); 728 + return err; 726 729 } 727 730 728 731 
static void seg6_destroy_state(struct lwtunnel_state *lwt) 729 732 { 730 - dst_cache_destroy(&seg6_lwt_lwtunnel(lwt)->cache); 733 + struct seg6_lwt *slwt = seg6_lwt_lwtunnel(lwt); 734 + 735 + dst_cache_destroy(&slwt->cache_input); 736 + dst_cache_destroy(&slwt->cache_output); 731 737 } 732 738 733 739 static int seg6_fill_encap_info(struct sk_buff *skb,
+4 -1
net/ipv6/xfrm6_input.c
··· 43 43 int xfrm6_transport_finish(struct sk_buff *skb, int async) 44 44 { 45 45 struct xfrm_offload *xo = xfrm_offload(skb); 46 + struct net_device *dev = skb->dev; 46 47 int nhlen = -skb_network_offset(skb); 47 48 48 49 skb_network_header(skb)[IP6CB(skb)->nhoff] = ··· 69 68 } 70 69 71 70 NF_HOOK(NFPROTO_IPV6, NF_INET_PRE_ROUTING, 72 - dev_net(skb->dev), NULL, skb, skb->dev, NULL, 71 + dev_net(dev), NULL, skb, dev, NULL, 73 72 xfrm6_transport_finish2); 73 + if (async) 74 + dev_put(dev); 74 75 return 0; 75 76 } 76 77
+34 -18
net/key/af_key.c
··· 757 757 return 0; 758 758 } 759 759 760 + static unsigned int pfkey_sockaddr_fill_zero_tail(const xfrm_address_t *xaddr, 761 + __be16 port, 762 + struct sockaddr *sa, 763 + unsigned short family) 764 + { 765 + unsigned int prefixlen; 766 + int sockaddr_len = pfkey_sockaddr_len(family); 767 + int sockaddr_size = pfkey_sockaddr_size(family); 768 + 769 + prefixlen = pfkey_sockaddr_fill(xaddr, port, sa, family); 770 + if (sockaddr_size > sockaddr_len) 771 + memset((u8 *)sa + sockaddr_len, 0, sockaddr_size - sockaddr_len); 772 + 773 + return prefixlen; 774 + } 775 + 760 776 static struct sk_buff *__pfkey_xfrm_state2msg(const struct xfrm_state *x, 761 777 int add_keys, int hsc) 762 778 { ··· 3222 3206 addr->sadb_address_proto = 0; 3223 3207 addr->sadb_address_reserved = 0; 3224 3208 addr->sadb_address_prefixlen = 3225 - pfkey_sockaddr_fill(&x->props.saddr, 0, 3226 - (struct sockaddr *) (addr + 1), 3227 - x->props.family); 3209 + pfkey_sockaddr_fill_zero_tail(&x->props.saddr, 0, 3210 + (struct sockaddr *)(addr + 1), 3211 + x->props.family); 3228 3212 if (!addr->sadb_address_prefixlen) 3229 3213 BUG(); 3230 3214 ··· 3237 3221 addr->sadb_address_proto = 0; 3238 3222 addr->sadb_address_reserved = 0; 3239 3223 addr->sadb_address_prefixlen = 3240 - pfkey_sockaddr_fill(&x->id.daddr, 0, 3241 - (struct sockaddr *) (addr + 1), 3242 - x->props.family); 3224 + pfkey_sockaddr_fill_zero_tail(&x->id.daddr, 0, 3225 + (struct sockaddr *)(addr + 1), 3226 + x->props.family); 3243 3227 if (!addr->sadb_address_prefixlen) 3244 3228 BUG(); 3245 3229 ··· 3437 3421 addr->sadb_address_proto = 0; 3438 3422 addr->sadb_address_reserved = 0; 3439 3423 addr->sadb_address_prefixlen = 3440 - pfkey_sockaddr_fill(&x->props.saddr, 0, 3441 - (struct sockaddr *) (addr + 1), 3442 - x->props.family); 3424 + pfkey_sockaddr_fill_zero_tail(&x->props.saddr, 0, 3425 + (struct sockaddr *)(addr + 1), 3426 + x->props.family); 3443 3427 if (!addr->sadb_address_prefixlen) 3444 3428 BUG(); 3445 3429 ··· 3459 3443 
addr->sadb_address_proto = 0; 3460 3444 addr->sadb_address_reserved = 0; 3461 3445 addr->sadb_address_prefixlen = 3462 - pfkey_sockaddr_fill(ipaddr, 0, 3463 - (struct sockaddr *) (addr + 1), 3464 - x->props.family); 3446 + pfkey_sockaddr_fill_zero_tail(ipaddr, 0, 3447 + (struct sockaddr *)(addr + 1), 3448 + x->props.family); 3465 3449 if (!addr->sadb_address_prefixlen) 3466 3450 BUG(); 3467 3451 ··· 3490 3474 switch (type) { 3491 3475 case SADB_EXT_ADDRESS_SRC: 3492 3476 addr->sadb_address_prefixlen = sel->prefixlen_s; 3493 - pfkey_sockaddr_fill(&sel->saddr, 0, 3494 - (struct sockaddr *)(addr + 1), 3495 - sel->family); 3477 + pfkey_sockaddr_fill_zero_tail(&sel->saddr, 0, 3478 + (struct sockaddr *)(addr + 1), 3479 + sel->family); 3496 3480 break; 3497 3481 case SADB_EXT_ADDRESS_DST: 3498 3482 addr->sadb_address_prefixlen = sel->prefixlen_d; 3499 - pfkey_sockaddr_fill(&sel->daddr, 0, 3500 - (struct sockaddr *)(addr + 1), 3501 - sel->family); 3483 + pfkey_sockaddr_fill_zero_tail(&sel->daddr, 0, 3484 + (struct sockaddr *)(addr + 1), 3485 + sel->family); 3502 3486 break; 3503 3487 default: 3504 3488 return -EINVAL;
+5
net/l2tp/l2tp_core.c
··· 1290 1290 uh->source = inet->inet_sport; 1291 1291 uh->dest = inet->inet_dport; 1292 1292 udp_len = uhlen + session->hdr_len + data_len; 1293 + if (udp_len > U16_MAX) { 1294 + kfree_skb(skb); 1295 + ret = NET_XMIT_DROP; 1296 + goto out_unlock; 1297 + } 1293 1298 uh->len = htons(udp_len); 1294 1299 1295 1300 /* Calculate UDP checksum if configured to do so */
+5 -19
net/mptcp/pm_kernel.c
··· 720 720 721 721 static int mptcp_pm_nl_append_new_local_addr(struct pm_nl_pernet *pernet, 722 722 struct mptcp_pm_addr_entry *entry, 723 - bool needs_id, bool replace) 723 + bool replace) 724 724 { 725 725 struct mptcp_pm_addr_entry *cur, *del_entry = NULL; 726 726 int ret = -EINVAL; ··· 779 779 } 780 780 } 781 781 782 - if (!entry->addr.id && needs_id) { 782 + if (!entry->addr.id) { 783 783 find_next: 784 784 entry->addr.id = find_next_zero_bit(pernet->id_bitmap, 785 785 MPTCP_PM_MAX_ADDR_ID + 1, ··· 790 790 } 791 791 } 792 792 793 - if (!entry->addr.id && needs_id) 793 + if (!entry->addr.id) 794 794 goto out; 795 795 796 796 __set_bit(entry->addr.id, pernet->id_bitmap); ··· 923 923 return -ENOMEM; 924 924 925 925 entry->addr.port = 0; 926 - ret = mptcp_pm_nl_append_new_local_addr(pernet, entry, true, false); 926 + ret = mptcp_pm_nl_append_new_local_addr(pernet, entry, false); 927 927 if (ret < 0) 928 928 kfree(entry); 929 929 ··· 977 977 return 0; 978 978 } 979 979 980 - static bool mptcp_pm_has_addr_attr_id(const struct nlattr *attr, 981 - struct genl_info *info) 982 - { 983 - struct nlattr *tb[MPTCP_PM_ADDR_ATTR_MAX + 1]; 984 - 985 - if (!nla_parse_nested_deprecated(tb, MPTCP_PM_ADDR_ATTR_MAX, attr, 986 - mptcp_pm_address_nl_policy, info->extack) && 987 - tb[MPTCP_PM_ADDR_ATTR_ID]) 988 - return true; 989 - return false; 990 - } 991 - 992 980 /* Add an MPTCP endpoint */ 993 981 int mptcp_pm_nl_add_addr_doit(struct sk_buff *skb, struct genl_info *info) 994 982 { ··· 1025 1037 goto out_free; 1026 1038 } 1027 1039 } 1028 - ret = mptcp_pm_nl_append_new_local_addr(pernet, entry, 1029 - !mptcp_pm_has_addr_attr_id(attr, info), 1030 - true); 1040 + ret = mptcp_pm_nl_append_new_local_addr(pernet, entry, true); 1031 1041 if (ret < 0) { 1032 1042 GENL_SET_ERR_MSG_FMT(info, "too many addresses or duplicate one: %d", ret); 1033 1043 goto out_free;
+2
net/mptcp/protocol.c
··· 4660 4660 { 4661 4661 int err; 4662 4662 4663 + mptcp_subflow_v6_init(); 4664 + 4663 4665 mptcp_v6_prot = mptcp_prot; 4664 4666 strscpy(mptcp_v6_prot.name, "MPTCPv6", sizeof(mptcp_v6_prot.name)); 4665 4667 mptcp_v6_prot.slab = NULL;
+1
net/mptcp/protocol.h
··· 875 875 void __init mptcp_proto_init(void); 876 876 #if IS_ENABLED(CONFIG_MPTCP_IPV6) 877 877 int __init mptcp_proto_v6_init(void); 878 + void __init mptcp_subflow_v6_init(void); 878 879 #endif 879 880 880 881 struct sock *mptcp_sk_clone_init(const struct sock *sk,
+9 -6
net/mptcp/subflow.c
··· 2165 2165 tcp_prot_override.psock_update_sk_prot = NULL; 2166 2166 #endif 2167 2167 2168 + mptcp_diag_subflow_init(&subflow_ulp_ops); 2169 + 2170 + if (tcp_register_ulp(&subflow_ulp_ops) != 0) 2171 + panic("MPTCP: failed to register subflows to ULP\n"); 2172 + } 2173 + 2168 2174 #if IS_ENABLED(CONFIG_MPTCP_IPV6) 2175 + void __init mptcp_subflow_v6_init(void) 2176 + { 2169 2177 /* In struct mptcp_subflow_request_sock, we assume the TCP request sock 2170 2178 * structures for v4 and v6 have the same size. It should not changed in 2171 2179 * the future but better to make sure to be warned if it is no longer ··· 2212 2204 /* Disable sockmap processing for subflows */ 2213 2205 tcpv6_prot_override.psock_update_sk_prot = NULL; 2214 2206 #endif 2215 - #endif 2216 - 2217 - mptcp_diag_subflow_init(&subflow_ulp_ops); 2218 - 2219 - if (tcp_register_ulp(&subflow_ulp_ops) != 0) 2220 - panic("MPTCP: failed to register subflows to ULP\n"); 2221 2207 } 2208 + #endif
-1
net/netfilter/ipvs/ip_vs_ctl.c
··· 1452 1452 ret = ip_vs_bind_scheduler(svc, sched); 1453 1453 if (ret) 1454 1454 goto out_err; 1455 - sched = NULL; 1456 1455 } 1457 1456 1458 1457 ret = ip_vs_start_estimator(ipvs, &svc->stats);
+1 -1
net/netfilter/nft_ct.c
··· 1020 1020 nf_queue_nf_hook_drop(ctx->net); 1021 1021 nf_ct_untimeout(ctx->net, timeout); 1022 1022 nf_ct_netns_put(ctx->net, ctx->family); 1023 - kfree(priv->timeout); 1023 + kfree_rcu(priv->timeout, rcu); 1024 1024 } 1025 1025 1026 1026 static int nft_ct_timeout_obj_dump(struct sk_buff *skb,
+30 -4
net/netfilter/xt_multiport.c
··· 105 105 return ports_match_v1(multiinfo, ntohs(pptr[0]), ntohs(pptr[1])); 106 106 } 107 107 108 + static bool 109 + multiport_valid_ranges(const struct xt_multiport_v1 *multiinfo) 110 + { 111 + unsigned int i; 112 + 113 + for (i = 0; i < multiinfo->count; i++) { 114 + if (!multiinfo->pflags[i]) 115 + continue; 116 + 117 + if (++i >= multiinfo->count) 118 + return false; 119 + 120 + if (multiinfo->pflags[i]) 121 + return false; 122 + 123 + if (multiinfo->ports[i - 1] > multiinfo->ports[i]) 124 + return false; 125 + } 126 + 127 + return true; 128 + } 129 + 108 130 static inline bool 109 131 check(u_int16_t proto, 110 132 u_int8_t ip_invflags, ··· 149 127 const struct ipt_ip *ip = par->entryinfo; 150 128 const struct xt_multiport_v1 *multiinfo = par->matchinfo; 151 129 152 - return check(ip->proto, ip->invflags, multiinfo->flags, 153 - multiinfo->count) ? 0 : -EINVAL; 130 + if (!check(ip->proto, ip->invflags, multiinfo->flags, multiinfo->count)) 131 + return -EINVAL; 132 + 133 + return multiport_valid_ranges(multiinfo) ? 0 : -EINVAL; 154 134 } 155 135 156 136 static int multiport_mt6_check(const struct xt_mtchk_param *par) ··· 160 136 const struct ip6t_ip6 *ip = par->entryinfo; 161 137 const struct xt_multiport_v1 *multiinfo = par->matchinfo; 162 138 163 - return check(ip->proto, ip->invflags, multiinfo->flags, 164 - multiinfo->count) ? 0 : -EINVAL; 139 + if (!check(ip->proto, ip->invflags, multiinfo->flags, multiinfo->count)) 140 + return -EINVAL; 141 + 142 + return multiport_valid_ranges(multiinfo) ? 0 : -EINVAL; 165 143 } 166 144 167 145 static struct xt_match multiport_mt_reg[] __read_mostly = {
+24 -11
net/rfkill/core.c
··· 73 73 struct rfkill_event_ext ev; 74 74 }; 75 75 76 + /* Max rfkill events that can be "in-flight" for one data source */ 77 + #define MAX_RFKILL_EVENT 1000 76 78 struct rfkill_data { 77 79 struct list_head list; 78 80 struct list_head events; 79 81 struct mutex mtx; 80 82 wait_queue_head_t read_wait; 83 + u32 event_count; 81 84 bool input_handler; 82 85 u8 max_size; 83 86 }; ··· 258 255 } 259 256 #endif /* CONFIG_RFKILL_LEDS */ 260 257 261 - static void rfkill_fill_event(struct rfkill_event_ext *ev, 262 - struct rfkill *rfkill, 263 - enum rfkill_operation op) 258 + static int rfkill_fill_event(struct rfkill_int_event *int_ev, 259 + struct rfkill *rfkill, 260 + struct rfkill_data *data, 261 + enum rfkill_operation op) 264 262 { 263 + struct rfkill_event_ext *ev = &int_ev->ev; 265 264 unsigned long flags; 266 265 267 266 ev->idx = rfkill->idx; ··· 276 271 RFKILL_BLOCK_SW_PREV)); 277 272 ev->hard_block_reasons = rfkill->hard_block_reasons; 278 273 spin_unlock_irqrestore(&rfkill->lock, flags); 274 + 275 + scoped_guard(mutex, &data->mtx) { 276 + if (data->event_count++ > MAX_RFKILL_EVENT) { 277 + data->event_count--; 278 + return -ENOSPC; 279 + } 280 + list_add_tail(&int_ev->list, &data->events); 281 + } 282 + return 0; 279 283 } 280 284 281 285 static void rfkill_send_events(struct rfkill *rfkill, enum rfkill_operation op) ··· 296 282 ev = kzalloc_obj(*ev); 297 283 if (!ev) 298 284 continue; 299 - rfkill_fill_event(&ev->ev, rfkill, op); 300 - mutex_lock(&data->mtx); 301 - list_add_tail(&ev->list, &data->events); 302 - mutex_unlock(&data->mtx); 285 + if (rfkill_fill_event(ev, rfkill, data, op)) { 286 + kfree(ev); 287 + continue; 288 + } 303 289 wake_up_interruptible(&data->read_wait); 304 290 } 305 291 } ··· 1200 1186 if (!ev) 1201 1187 goto free; 1202 1188 rfkill_sync(rfkill); 1203 - rfkill_fill_event(&ev->ev, rfkill, RFKILL_OP_ADD); 1204 - mutex_lock(&data->mtx); 1205 - list_add_tail(&ev->list, &data->events); 1206 - mutex_unlock(&data->mtx); 1189 + if 
(rfkill_fill_event(ev, rfkill, data, RFKILL_OP_ADD)) 1190 + kfree(ev); 1207 1191 } 1208 1192 list_add(&data->list, &rfkill_fds); 1209 1193 mutex_unlock(&rfkill_global_mutex); ··· 1271 1259 ret = -EFAULT; 1272 1260 1273 1261 list_del(&ev->list); 1262 + data->event_count--; 1274 1263 kfree(ev); 1275 1264 out: 1276 1265 mutex_unlock(&data->mtx);
-6
net/rxrpc/af_rxrpc.c
··· 654 654 goto success; 655 655 656 656 case RXRPC_SECURITY_KEY: 657 - ret = -EINVAL; 658 - if (rx->key) 659 - goto error; 660 657 ret = -EISCONN; 661 658 if (rx->sk.sk_state != RXRPC_UNBOUND) 662 659 goto error; ··· 661 664 goto error; 662 665 663 666 case RXRPC_SECURITY_KEYRING: 664 - ret = -EINVAL; 665 - if (rx->key) 666 - goto error; 667 667 ret = -EISCONN; 668 668 if (rx->sk.sk_state != RXRPC_UNBOUND) 669 669 goto error;
+1 -1
net/rxrpc/ar-internal.h
··· 117 117 atomic_t stat_tx_jumbo[10]; 118 118 atomic_t stat_rx_jumbo[10]; 119 119 120 - atomic_t stat_why_req_ack[8]; 120 + atomic_t stat_why_req_ack[9]; 121 121 122 122 atomic_t stat_io_loop; 123 123 };
+10 -15
net/rxrpc/call_object.c
··· 654 654 if (dead) { 655 655 ASSERTCMP(__rxrpc_call_state(call), ==, RXRPC_CALL_COMPLETE); 656 656 657 - if (!list_empty(&call->link)) { 658 - spin_lock(&rxnet->call_lock); 659 - list_del_init(&call->link); 660 - spin_unlock(&rxnet->call_lock); 661 - } 657 + spin_lock(&rxnet->call_lock); 658 + list_del_rcu(&call->link); 659 + spin_unlock(&rxnet->call_lock); 662 660 663 661 rxrpc_cleanup_call(call); 664 662 } ··· 692 694 rxrpc_put_bundle(call->bundle, rxrpc_bundle_put_call); 693 695 rxrpc_put_peer(call->peer, rxrpc_peer_put_call); 694 696 rxrpc_put_local(call->local, rxrpc_local_put_call); 697 + key_put(call->key); 695 698 call_rcu(&call->rcu, rxrpc_rcu_free_call); 696 699 } 697 700 ··· 729 730 _enter(""); 730 731 731 732 if (!list_empty(&rxnet->calls)) { 733 + int shown = 0; 734 + 732 735 spin_lock(&rxnet->call_lock); 733 736 734 - while (!list_empty(&rxnet->calls)) { 735 - call = list_entry(rxnet->calls.next, 736 - struct rxrpc_call, link); 737 - _debug("Zapping call %p", call); 738 - 739 - rxrpc_see_call(call, rxrpc_call_see_zap); 740 - list_del_init(&call->link); 737 + list_for_each_entry(call, &rxnet->calls, link) { 738 + rxrpc_see_call(call, rxrpc_call_see_still_live); 741 739 742 740 pr_err("Call %p still in use (%d,%s,%lx,%lx)!\n", 743 741 call, refcount_read(&call->ref), 744 742 rxrpc_call_states[__rxrpc_call_state(call)], 745 743 call->flags, call->events); 746 744 747 - spin_unlock(&rxnet->call_lock); 748 - cond_resched(); 749 - spin_lock(&rxnet->call_lock); 745 + if (++shown >= 10) 746 + break; 750 747 } 751 748 752 749 spin_unlock(&rxnet->call_lock);
+15 -4
net/rxrpc/conn_event.c
··· 247 247 struct sk_buff *skb) 248 248 { 249 249 struct rxrpc_skb_priv *sp = rxrpc_skb(skb); 250 + bool secured = false; 250 251 int ret; 251 252 252 253 if (conn->state == RXRPC_CONN_ABORTED) ··· 263 262 return ret; 264 263 265 264 case RXRPC_PACKET_TYPE_RESPONSE: 265 + spin_lock_irq(&conn->state_lock); 266 + if (conn->state != RXRPC_CONN_SERVICE_CHALLENGING) { 267 + spin_unlock_irq(&conn->state_lock); 268 + return 0; 269 + } 270 + spin_unlock_irq(&conn->state_lock); 271 + 266 272 ret = conn->security->verify_response(conn, skb); 267 273 if (ret < 0) 268 274 return ret; ··· 280 272 return ret; 281 273 282 274 spin_lock_irq(&conn->state_lock); 283 - if (conn->state == RXRPC_CONN_SERVICE_CHALLENGING) 275 + if (conn->state == RXRPC_CONN_SERVICE_CHALLENGING) { 284 276 conn->state = RXRPC_CONN_SERVICE; 277 + secured = true; 278 + } 285 279 spin_unlock_irq(&conn->state_lock); 286 280 287 - if (conn->state == RXRPC_CONN_SERVICE) { 281 + if (secured) { 288 282 /* Offload call state flipping to the I/O thread. As 289 283 * we've already received the packet, put it on the 290 284 * front of the queue. ··· 567 557 spin_lock_irq(&local->lock); 568 558 old = conn->tx_response; 569 559 if (old) { 570 - struct rxrpc_skb_priv *osp = rxrpc_skb(skb); 560 + struct rxrpc_skb_priv *osp = rxrpc_skb(old); 571 561 572 562 /* Always go with the response to the most recent challenge. */ 573 563 if (after(sp->resp.challenge_serial, osp->resp.challenge_serial)) 574 - conn->tx_response = old; 564 + conn->tx_response = skb; 575 565 else 576 566 old = skb; 577 567 } else { ··· 579 569 } 580 570 spin_unlock_irq(&local->lock); 581 571 rxrpc_poke_conn(conn, rxrpc_conn_get_poke_response); 572 + rxrpc_free_skb(old, rxrpc_skb_put_old_response); 582 573 }
+1 -1
net/rxrpc/input_rack.c
··· 413 413 break; 414 414 //case RXRPC_CALL_RACKTIMER_ZEROWIN: 415 415 default: 416 - pr_warn("Unexpected rack timer %u", call->rack_timer_mode); 416 + pr_warn("Unexpected rack timer %u", mode); 417 417 } 418 418 }
+2 -1
net/rxrpc/io_thread.c
··· 419 419 420 420 if (sp->hdr.callNumber > chan->call_id) { 421 421 if (rxrpc_to_client(sp)) { 422 - rxrpc_put_call(call, rxrpc_call_put_input); 422 + if (call) 423 + rxrpc_put_call(call, rxrpc_call_put_input); 423 424 return rxrpc_protocol_error(skb, 424 425 rxrpc_eproto_unexpected_implicit_end); 425 426 }
+23 -17
net/rxrpc/key.c
··· 13 13 #include <crypto/skcipher.h> 14 14 #include <linux/module.h> 15 15 #include <linux/net.h> 16 + #include <linux/overflow.h> 16 17 #include <linux/skbuff.h> 17 18 #include <linux/key-type.h> 18 19 #include <linux/ctype.h> ··· 73 72 return -EKEYREJECTED; 74 73 75 74 plen = sizeof(*token) + sizeof(*token->kad) + tktlen; 76 - prep->quotalen = datalen + plen; 75 + prep->quotalen += datalen + plen; 77 76 78 77 plen -= sizeof(*token); 79 78 token = kzalloc_obj(*token); ··· 172 171 size_t plen; 173 172 const __be32 *ticket, *key; 174 173 s64 tmp; 175 - u32 tktlen, keylen; 174 + size_t raw_keylen, raw_tktlen, keylen, tktlen; 176 175 177 176 _enter(",{%x,%x,%x,%x},%x", 178 177 ntohl(xdr[0]), ntohl(xdr[1]), ntohl(xdr[2]), ntohl(xdr[3]), ··· 182 181 goto reject; 183 182 184 183 key = xdr + (6 * 2 + 1); 185 - keylen = ntohl(key[-1]); 186 - _debug("keylen: %x", keylen); 187 - keylen = round_up(keylen, 4); 184 + raw_keylen = ntohl(key[-1]); 185 + _debug("keylen: %zx", raw_keylen); 186 + if (raw_keylen > AFSTOKEN_GK_KEY_MAX) 187 + goto reject; 188 + keylen = round_up(raw_keylen, 4); 188 189 if ((6 * 2 + 2) * 4 + keylen > toklen) 189 190 goto reject; 190 191 191 192 ticket = xdr + (6 * 2 + 1 + (keylen / 4) + 1); 192 - tktlen = ntohl(ticket[-1]); 193 - _debug("tktlen: %x", tktlen); 194 - tktlen = round_up(tktlen, 4); 193 + raw_tktlen = ntohl(ticket[-1]); 194 + _debug("tktlen: %zx", raw_tktlen); 195 + if (raw_tktlen > AFSTOKEN_GK_TOKEN_MAX) 196 + goto reject; 197 + tktlen = round_up(raw_tktlen, 4); 195 198 if ((6 * 2 + 2) * 4 + keylen + tktlen != toklen) { 196 - kleave(" = -EKEYREJECTED [%x!=%x, %x,%x]", 199 + kleave(" = -EKEYREJECTED [%zx!=%x, %zx,%zx]", 197 200 (6 * 2 + 2) * 4 + keylen + tktlen, toklen, 198 201 keylen, tktlen); 199 202 goto reject; 200 203 } 201 204 202 205 plen = sizeof(*token) + sizeof(*token->rxgk) + tktlen + keylen; 203 - prep->quotalen = datalen + plen; 206 + prep->quotalen += datalen + plen; 204 207 205 208 plen -= sizeof(*token); 206 209 token = 
kzalloc_obj(*token); 207 210 if (!token) 208 211 goto nomem; 209 212 210 - token->rxgk = kzalloc(sizeof(*token->rxgk) + keylen, GFP_KERNEL); 213 + token->rxgk = kzalloc(struct_size_t(struct rxgk_key, _key, raw_keylen), GFP_KERNEL); 211 214 if (!token->rxgk) 212 215 goto nomem_token; 213 216 ··· 226 221 token->rxgk->enctype = tmp = xdr_dec64(xdr + 5 * 2); 227 222 if (tmp < 0 || tmp > UINT_MAX) 228 223 goto reject_token; 229 - token->rxgk->key.len = ntohl(key[-1]); 224 + token->rxgk->key.len = raw_keylen; 230 225 token->rxgk->key.data = token->rxgk->_key; 231 - token->rxgk->ticket.len = ntohl(ticket[-1]); 226 + token->rxgk->ticket.len = raw_tktlen; 232 227 233 228 if (token->rxgk->endtime != 0) { 234 229 expiry = rxrpc_s64_to_time64(token->rxgk->endtime); ··· 241 236 memcpy(token->rxgk->key.data, key, token->rxgk->key.len); 242 237 243 238 /* Pad the ticket so that we can use it directly in XDR */ 244 - token->rxgk->ticket.data = kzalloc(round_up(token->rxgk->ticket.len, 4), 245 - GFP_KERNEL); 239 + token->rxgk->ticket.data = kzalloc(tktlen, GFP_KERNEL); 246 240 if (!token->rxgk->ticket.data) 247 241 goto nomem_yrxgk; 248 242 memcpy(token->rxgk->ticket.data, ticket, token->rxgk->ticket.len); ··· 278 274 nomem: 279 275 return -ENOMEM; 280 276 reject_token: 277 + kfree(token->rxgk); 281 278 kfree(token); 282 279 reject: 283 280 return -EKEYREJECTED; ··· 465 460 memcpy(&kver, prep->data, sizeof(kver)); 466 461 prep->data += sizeof(kver); 467 462 prep->datalen -= sizeof(kver); 463 + prep->quotalen = 0; 468 464 469 465 _debug("KEY I/F VERSION: %u", kver); 470 466 ··· 503 497 goto error; 504 498 505 499 plen = sizeof(*token->kad) + v1->ticket_length; 506 - prep->quotalen = plen + sizeof(*token); 500 + prep->quotalen += plen + sizeof(*token); 507 501 508 502 ret = -ENOMEM; 509 503 token = kzalloc_obj(*token); ··· 622 616 623 617 _enter(""); 624 618 625 - if (optlen <= 0 || optlen > PAGE_SIZE - 1 || rx->securities) 619 + if (optlen <= 0 || optlen > PAGE_SIZE - 1 || rx->key) 
) 626 620 return -EINVAL; 627 621 628 622 description = memdup_sockptr_nul(optval, optlen);
+2
net/rxrpc/output.c
··· 479 479 why = rxrpc_reqack_old_rtt; 480 480 else if (!last && !after(READ_ONCE(call->send_top), txb->seq)) 481 481 why = rxrpc_reqack_app_stall; 482 + else if (call->tx_winsize <= (2 * req->n) || call->cong_cwnd <= (2 * req->n)) 483 + why = rxrpc_reqack_jumbo_win; 482 484 else 483 485 goto dont_set_request_ack; 484 486
+21 -16
net/rxrpc/proc.c
··· 10 10 #include <net/af_rxrpc.h> 11 11 #include "ar-internal.h" 12 12 13 + #define RXRPC_PROC_ADDRBUF_SIZE \ 14 + (sizeof("[xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:255.255.255.255]") + \ 15 + sizeof(":12345")) 16 + 13 17 static const char *const rxrpc_conn_states[RXRPC_CONN__NR_STATES] = { 14 18 [RXRPC_CONN_UNUSED] = "Unused ", 15 19 [RXRPC_CONN_CLIENT_UNSECURED] = "ClUnsec ", ··· 57 53 struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq)); 58 54 enum rxrpc_call_state state; 59 55 rxrpc_seq_t tx_bottom; 60 - char lbuff[50], rbuff[50]; 56 + char lbuff[RXRPC_PROC_ADDRBUF_SIZE], rbuff[RXRPC_PROC_ADDRBUF_SIZE]; 61 57 long timeout = 0; 62 58 63 59 if (v == &rxnet->calls) { ··· 73 69 74 70 local = call->local; 75 71 if (local) 76 - sprintf(lbuff, "%pISpc", &local->srx.transport); 72 + scnprintf(lbuff, sizeof(lbuff), "%pISpc", &local->srx.transport); 77 73 else 78 74 strcpy(lbuff, "no_local"); 79 75 80 - sprintf(rbuff, "%pISpc", &call->dest_srx.transport); 76 + scnprintf(rbuff, sizeof(rbuff), "%pISpc", &call->dest_srx.transport); 81 77 82 78 state = rxrpc_call_state(call); 83 79 if (state != RXRPC_CALL_SERVER_PREALLOC) ··· 146 142 struct rxrpc_connection *conn; 147 143 struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq)); 148 144 const char *state; 149 - char lbuff[50], rbuff[50]; 145 + char lbuff[RXRPC_PROC_ADDRBUF_SIZE], rbuff[RXRPC_PROC_ADDRBUF_SIZE]; 150 146 151 147 if (v == &rxnet->conn_proc_list) { 152 148 seq_puts(seq, ··· 165 161 goto print; 166 162 } 167 163 168 - sprintf(lbuff, "%pISpc", &conn->local->srx.transport); 169 - sprintf(rbuff, "%pISpc", &conn->peer->srx.transport); 164 + scnprintf(lbuff, sizeof(lbuff), "%pISpc", &conn->local->srx.transport); 165 + scnprintf(rbuff, sizeof(rbuff), "%pISpc", &conn->peer->srx.transport); 170 166 print: 171 167 state = rxrpc_is_conn_aborted(conn) ? 
172 168 rxrpc_call_completions[conn->completion] : ··· 232 228 { 233 229 struct rxrpc_bundle *bundle; 234 230 struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq)); 235 - char lbuff[50], rbuff[50]; 231 + char lbuff[RXRPC_PROC_ADDRBUF_SIZE], rbuff[RXRPC_PROC_ADDRBUF_SIZE]; 236 232 237 233 if (v == &rxnet->bundle_proc_list) { 238 234 seq_puts(seq, ··· 246 242 247 243 bundle = list_entry(v, struct rxrpc_bundle, proc_link); 248 244 249 - sprintf(lbuff, "%pISpc", &bundle->local->srx.transport); 250 - sprintf(rbuff, "%pISpc", &bundle->peer->srx.transport); 245 + scnprintf(lbuff, sizeof(lbuff), "%pISpc", &bundle->local->srx.transport); 246 + scnprintf(rbuff, sizeof(rbuff), "%pISpc", &bundle->peer->srx.transport); 251 247 seq_printf(seq, 252 248 "UDP %-47.47s %-47.47s %4x %3u %3d" 253 249 " %c%c%c %08x | %08x %08x %08x %08x %08x\n", ··· 283 279 { 284 280 struct rxrpc_peer *peer; 285 281 time64_t now; 286 - char lbuff[50], rbuff[50]; 282 + char lbuff[RXRPC_PROC_ADDRBUF_SIZE], rbuff[RXRPC_PROC_ADDRBUF_SIZE]; 287 283 288 284 if (v == SEQ_START_TOKEN) { 289 285 seq_puts(seq, ··· 294 290 295 291 peer = list_entry(v, struct rxrpc_peer, hash_link); 296 292 297 - sprintf(lbuff, "%pISpc", &peer->local->srx.transport); 293 + scnprintf(lbuff, sizeof(lbuff), "%pISpc", &peer->local->srx.transport); 298 294 299 - sprintf(rbuff, "%pISpc", &peer->srx.transport); 295 + scnprintf(rbuff, sizeof(rbuff), "%pISpc", &peer->srx.transport); 300 296 301 297 now = ktime_get_seconds(); 302 298 seq_printf(seq, ··· 405 401 static int rxrpc_local_seq_show(struct seq_file *seq, void *v) 406 402 { 407 403 struct rxrpc_local *local; 408 - char lbuff[50]; 404 + char lbuff[RXRPC_PROC_ADDRBUF_SIZE]; 409 405 410 406 if (v == SEQ_START_TOKEN) { 411 407 seq_puts(seq, ··· 416 412 417 413 local = hlist_entry(v, struct rxrpc_local, link); 418 414 419 - sprintf(lbuff, "%pISpc", &local->srx.transport); 415 + scnprintf(lbuff, sizeof(lbuff), "%pISpc", &local->srx.transport); 420 416 421 417 seq_printf(seq, 422 418 
"UDP %-47.47s %3u %3u %3u\n", ··· 522 518 atomic_read(&rxnet->stat_rx_acks[RXRPC_ACK_IDLE]), 523 519 atomic_read(&rxnet->stat_rx_acks[0])); 524 520 seq_printf(seq, 525 - "Why-Req-A: acklost=%u mrtt=%u ortt=%u stall=%u\n", 521 + "Why-Req-A: acklost=%u mrtt=%u ortt=%u stall=%u jwin=%u\n", 526 522 atomic_read(&rxnet->stat_why_req_ack[rxrpc_reqack_ack_lost]), 527 523 atomic_read(&rxnet->stat_why_req_ack[rxrpc_reqack_more_rtt]), 528 524 atomic_read(&rxnet->stat_why_req_ack[rxrpc_reqack_old_rtt]), 529 - atomic_read(&rxnet->stat_why_req_ack[rxrpc_reqack_app_stall])); 525 + atomic_read(&rxnet->stat_why_req_ack[rxrpc_reqack_app_stall]), 526 + atomic_read(&rxnet->stat_why_req_ack[rxrpc_reqack_jumbo_win])); 530 527 seq_printf(seq, 531 528 "Why-Req-A: nolast=%u retx=%u slows=%u smtxw=%u\n", 532 529 atomic_read(&rxnet->stat_why_req_ack[rxrpc_reqack_no_srv_last]),
+13 -6
net/rxrpc/rxgk.c
··· 1085 1085 1086 1086 _enter(""); 1087 1087 1088 + if ((end - p) * sizeof(__be32) < 24) 1089 + return rxrpc_abort_conn(conn, skb, RXGK_NOTAUTH, -EPROTO, 1090 + rxgk_abort_resp_short_auth); 1088 1091 if (memcmp(p, conn->rxgk.nonce, 20) != 0) 1089 1092 return rxrpc_abort_conn(conn, skb, RXGK_NOTAUTH, -EPROTO, 1090 1093 rxgk_abort_resp_bad_nonce); ··· 1101 1098 p += xdr_round_up(app_len) / sizeof(__be32); 1102 1099 if (end - p < 4) 1103 1100 return rxrpc_abort_conn(conn, skb, RXGK_NOTAUTH, -EPROTO, 1104 - rxgk_abort_resp_short_applen); 1101 + rxgk_abort_resp_short_auth); 1105 1102 1106 1103 level = ntohl(*p++); 1107 1104 epoch = ntohl(*p++); ··· 1167 1164 } 1168 1165 1169 1166 p = auth; 1170 - ret = rxgk_do_verify_authenticator(conn, krb5, skb, p, p + auth_len); 1167 + ret = rxgk_do_verify_authenticator(conn, krb5, skb, p, 1168 + p + auth_len / sizeof(*p)); 1171 1169 error: 1172 1170 kfree(auth); 1173 1171 return ret; ··· 1212 1208 1213 1209 token_offset = offset; 1214 1210 token_len = ntohl(rhdr.token_len); 1215 - if (xdr_round_up(token_len) + sizeof(__be32) > len) 1211 + if (token_len > len || 1212 + xdr_round_up(token_len) + sizeof(__be32) > len) 1216 1213 goto short_packet; 1217 1214 1218 1215 trace_rxrpc_rx_response(conn, sp->hdr.serial, 0, sp->hdr.cksum, token_len); ··· 1228 1223 1229 1224 auth_offset = offset; 1230 1225 auth_len = ntohl(xauth_len); 1231 - if (auth_len < len) 1226 + if (auth_len > len) 1232 1227 goto short_packet; 1233 1228 if (auth_len & 3) 1234 1229 goto inconsistent; ··· 1273 1268 if (ret < 0) { 1274 1269 rxrpc_abort_conn(conn, skb, RXGK_SEALEDINCON, ret, 1275 1270 rxgk_abort_resp_auth_dec); 1276 - goto out; 1271 + goto out_gk; 1277 1272 } 1278 1273 1279 1274 ret = rxgk_verify_authenticator(conn, krb5, skb, auth_offset, auth_len); 1280 1275 if (ret < 0) 1281 - goto out; 1276 + goto out_gk; 1282 1277 1283 1278 conn->key = key; 1284 1279 key = NULL; 1285 1280 ret = 0; 1281 + out_gk: 1282 + rxgk_put(gk); 1286 1283 out: 1287 1284 key_put(key); 
1288 1285 _leave(" = %d", ret);
+43 -20
net/rxrpc/rxkad.c
··· 197 197 struct rxrpc_crypt iv; 198 198 __be32 *tmpbuf; 199 199 size_t tmpsize = 4 * sizeof(__be32); 200 + int ret; 200 201 201 202 _enter(""); 202 203 ··· 226 225 skcipher_request_set_sync_tfm(req, ci); 227 226 skcipher_request_set_callback(req, 0, NULL, NULL); 228 227 skcipher_request_set_crypt(req, &sg, &sg, tmpsize, iv.x); 229 - crypto_skcipher_encrypt(req); 228 + ret = crypto_skcipher_encrypt(req); 230 229 skcipher_request_free(req); 231 230 232 231 memcpy(&conn->rxkad.csum_iv, tmpbuf + 2, sizeof(conn->rxkad.csum_iv)); 233 232 kfree(tmpbuf); 234 - _leave(" = 0"); 235 - return 0; 233 + _leave(" = %d", ret); 234 + return ret; 236 235 } 237 236 238 237 /* ··· 265 264 struct scatterlist sg; 266 265 size_t pad; 267 266 u16 check; 267 + int ret; 268 268 269 269 _enter(""); 270 270 ··· 288 286 skcipher_request_set_sync_tfm(req, call->conn->rxkad.cipher); 289 287 skcipher_request_set_callback(req, 0, NULL, NULL); 290 288 skcipher_request_set_crypt(req, &sg, &sg, 8, iv.x); 291 - crypto_skcipher_encrypt(req); 289 + ret = crypto_skcipher_encrypt(req); 292 290 skcipher_request_zero(req); 293 291 294 - _leave(" = 0"); 295 - return 0; 292 + _leave(" = %d", ret); 293 + return ret; 296 294 } 297 295 298 296 /* ··· 347 345 union { 348 346 __be32 buf[2]; 349 347 } crypto __aligned(8); 350 - u32 x, y; 348 + u32 x, y = 0; 351 349 int ret; 352 350 353 351 _enter("{%d{%x}},{#%u},%u,", ··· 378 376 skcipher_request_set_sync_tfm(req, call->conn->rxkad.cipher); 379 377 skcipher_request_set_callback(req, 0, NULL, NULL); 380 378 skcipher_request_set_crypt(req, &sg, &sg, 8, iv.x); 381 - crypto_skcipher_encrypt(req); 379 + ret = crypto_skcipher_encrypt(req); 382 380 skcipher_request_zero(req); 381 + if (ret < 0) 382 + goto out; 383 383 384 384 y = ntohl(crypto.buf[1]); 385 385 y = (y >> 16) & 0xffff; ··· 417 413 memset(p + txb->pkt_len, 0, gap); 418 414 } 419 415 416 + out: 420 417 skcipher_request_free(req); 421 418 _leave(" = %d [set %x]", ret, y); 422 419 return ret; ··· 458 453 
skcipher_request_set_sync_tfm(req, call->conn->rxkad.cipher); 459 454 skcipher_request_set_callback(req, 0, NULL, NULL); 460 455 skcipher_request_set_crypt(req, sg, sg, 8, iv.x); 461 - crypto_skcipher_decrypt(req); 456 + ret = crypto_skcipher_decrypt(req); 462 457 skcipher_request_zero(req); 458 + if (ret < 0) 459 + return ret; 463 460 464 461 /* Extract the decrypted packet length */ 465 462 if (skb_copy_bits(skb, sp->offset, &sechdr, sizeof(sechdr)) < 0) ··· 538 531 skcipher_request_set_sync_tfm(req, call->conn->rxkad.cipher); 539 532 skcipher_request_set_callback(req, 0, NULL, NULL); 540 533 skcipher_request_set_crypt(req, sg, sg, sp->len, iv.x); 541 - crypto_skcipher_decrypt(req); 534 + ret = crypto_skcipher_decrypt(req); 542 535 skcipher_request_zero(req); 543 536 if (sg != _sg) 544 537 kfree(sg); 538 + if (ret < 0) { 539 + WARN_ON_ONCE(ret != -ENOMEM); 540 + return ret; 541 + } 545 542 546 543 /* Extract the decrypted packet length */ 547 544 if (skb_copy_bits(skb, sp->offset, &sechdr, sizeof(sechdr)) < 0) ··· 613 602 skcipher_request_set_sync_tfm(req, call->conn->rxkad.cipher); 614 603 skcipher_request_set_callback(req, 0, NULL, NULL); 615 604 skcipher_request_set_crypt(req, &sg, &sg, 8, iv.x); 616 - crypto_skcipher_encrypt(req); 605 + ret = crypto_skcipher_encrypt(req); 617 606 skcipher_request_zero(req); 607 + if (ret < 0) 608 + goto out; 618 609 619 610 y = ntohl(crypto.buf[1]); 620 611 cksum = (y >> 16) & 0xffff; ··· 971 958 struct in_addr addr; 972 959 unsigned int life; 973 960 time64_t issue, now; 961 + int ret; 974 962 bool little_endian; 975 963 u8 *p, *q, *name, *end; 976 964 ··· 991 977 sg_init_one(&sg[0], ticket, ticket_len); 992 978 skcipher_request_set_callback(req, 0, NULL, NULL); 993 979 skcipher_request_set_crypt(req, sg, sg, ticket_len, iv.x); 994 - crypto_skcipher_decrypt(req); 980 + ret = crypto_skcipher_decrypt(req); 995 981 skcipher_request_free(req); 982 + if (ret < 0) 983 + return rxrpc_abort_conn(conn, skb, RXKADBADTICKET, -EPROTO, 
984 + rxkad_abort_resp_tkt_short); 996 985 997 986 p = ticket; 998 987 end = p + ticket_len; ··· 1090 1073 /* 1091 1074 * decrypt the response packet 1092 1075 */ 1093 - static void rxkad_decrypt_response(struct rxrpc_connection *conn, 1094 - struct rxkad_response *resp, 1095 - const struct rxrpc_crypt *session_key) 1076 + static int rxkad_decrypt_response(struct rxrpc_connection *conn, 1077 + struct rxkad_response *resp, 1078 + const struct rxrpc_crypt *session_key) 1096 1079 { 1097 1080 struct skcipher_request *req = rxkad_ci_req; 1098 1081 struct scatterlist sg[1]; 1099 1082 struct rxrpc_crypt iv; 1083 + int ret; 1100 1084 1101 1085 _enter(",,%08x%08x", 1102 1086 ntohl(session_key->n[0]), ntohl(session_key->n[1])); 1103 1087 1104 1088 mutex_lock(&rxkad_ci_mutex); 1105 - if (crypto_sync_skcipher_setkey(rxkad_ci, session_key->x, 1106 - sizeof(*session_key)) < 0) 1107 - BUG(); 1089 + ret = crypto_sync_skcipher_setkey(rxkad_ci, session_key->x, 1090 + sizeof(*session_key)); 1091 + if (ret < 0) 1092 + goto unlock; 1108 1093 1109 1094 memcpy(&iv, session_key, sizeof(iv)); 1110 1095 ··· 1115 1096 skcipher_request_set_sync_tfm(req, rxkad_ci); 1116 1097 skcipher_request_set_callback(req, 0, NULL, NULL); 1117 1098 skcipher_request_set_crypt(req, sg, sg, sizeof(resp->encrypted), iv.x); 1118 - crypto_skcipher_decrypt(req); 1099 + ret = crypto_skcipher_decrypt(req); 1119 1100 skcipher_request_zero(req); 1120 1101 1102 + unlock: 1121 1103 mutex_unlock(&rxkad_ci_mutex); 1122 1104 1123 1105 _leave(""); 1106 + return ret; 1124 1107 } 1125 1108 1126 1109 /* ··· 1215 1194 1216 1195 /* use the session key from inside the ticket to decrypt the 1217 1196 * response */ 1218 - rxkad_decrypt_response(conn, response, &session_key); 1197 + ret = rxkad_decrypt_response(conn, response, &session_key); 1198 + if (ret < 0) 1199 + goto temporary_error_free_ticket; 1219 1200 1220 1201 if (ntohl(response->encrypted.epoch) != conn->proto.epoch || 1221 1202 ntohl(response->encrypted.cid) != 
conn->proto.cid ||
+1 -1
net/rxrpc/sendmsg.c
··· 637 637 memset(&cp, 0, sizeof(cp)); 638 638 cp.local = rx->local; 639 639 cp.peer = peer; 640 - cp.key = rx->key; 640 + cp.key = key; 641 641 cp.security_level = rx->min_sec_level; 642 642 cp.exclusive = rx->exclusive | p->exclusive; 643 643 cp.upgrade = p->upgrade;
+3
net/rxrpc/server_key.c
··· 125 125 126 126 _enter(""); 127 127 128 + if (rx->securities) 129 + return -EINVAL; 130 + 128 131 if (optlen <= 0 || optlen > PAGE_SIZE - 1) 129 132 return -EINVAL; 130 133
+5 -1
net/sched/act_csum.c
··· 604 604 protocol = skb->protocol; 605 605 orig_vlan_tag_present = true; 606 606 } else { 607 - struct vlan_hdr *vlan = (struct vlan_hdr *)skb->data; 607 + struct vlan_hdr *vlan; 608 608 609 + if (!pskb_may_pull(skb, VLAN_HLEN)) 610 + goto drop; 611 + 612 + vlan = (struct vlan_hdr *)skb->data; 609 613 protocol = vlan->h_vlan_encapsulated_proto; 610 614 skb_pull(skb, VLAN_HLEN); 611 615 skb_reset_network_header(skb);
+10 -7
net/sunrpc/sysfs.c
··· 6 6 #include <linux/kobject.h> 7 7 #include <linux/sunrpc/addr.h> 8 8 #include <linux/sunrpc/xprtsock.h> 9 + #include <net/net_namespace.h> 9 10 10 11 #include "sysfs.h" 11 12 ··· 554 553 kfree(xprt); 555 554 } 556 555 557 - static const void *rpc_sysfs_client_namespace(const struct kobject *kobj) 556 + static const struct ns_common *rpc_sysfs_client_namespace(const struct kobject *kobj) 558 557 { 559 - return container_of(kobj, struct rpc_sysfs_client, kobject)->net; 558 + return to_ns_common(container_of(kobj, struct rpc_sysfs_client, 559 + kobject)->net); 560 560 } 561 561 562 - static const void *rpc_sysfs_xprt_switch_namespace(const struct kobject *kobj) 562 + static const struct ns_common *rpc_sysfs_xprt_switch_namespace(const struct kobject *kobj) 563 563 { 564 - return container_of(kobj, struct rpc_sysfs_xprt_switch, kobject)->net; 564 + return to_ns_common(container_of(kobj, struct rpc_sysfs_xprt_switch, 565 + kobject)->net); 565 566 } 566 567 567 - static const void *rpc_sysfs_xprt_namespace(const struct kobject *kobj) 568 + static const struct ns_common *rpc_sysfs_xprt_namespace(const struct kobject *kobj) 568 569 { 569 - return container_of(kobj, struct rpc_sysfs_xprt, 570 - kobject)->xprt->xprt_net; 570 + return to_ns_common(container_of(kobj, struct rpc_sysfs_xprt, 571 + kobject)->xprt->xprt_net); 571 572 } 572 573 573 574 static struct kobj_attribute rpc_sysfs_clnt_version = __ATTR(rpc_version,
+5 -1
net/tipc/group.c
··· 746 746 u32 port = msg_origport(hdr); 747 747 struct tipc_member *m, *pm; 748 748 u16 remitted, in_flight; 749 + u16 acked; 749 750 750 751 if (!grp) 751 752 return; ··· 799 798 case GRP_ACK_MSG: 800 799 if (!m) 801 800 return; 802 - m->bc_acked = msg_grp_bc_acked(hdr); 801 + acked = msg_grp_bc_acked(hdr); 802 + if (less_eq(acked, m->bc_acked)) 803 + return; 804 + m->bc_acked = acked; 803 805 if (--grp->bc_ackers) 804 806 return; 805 807 list_del_init(&m->small_win);
+10
net/tls/tls_sw.c
··· 584 584 if (rc == -EBUSY) { 585 585 rc = tls_encrypt_async_wait(ctx); 586 586 rc = rc ?: -EINPROGRESS; 587 + /* 588 + * The async callback tls_encrypt_done() has already 589 + * decremented encrypt_pending and restored the sge on 590 + * both success and error. Skip the synchronous cleanup 591 + * below on error, just remove the record and return. 592 + */ 593 + if (rc != -EINPROGRESS) { 594 + list_del(&rec->list); 595 + return rc; 596 + } 587 597 } 588 598 if (!rc || rc != -EINPROGRESS) { 589 599 atomic_dec(&ctx->encrypt_pending);
+13 -8
net/unix/diag.c
··· 28 28 29 29 static int sk_diag_dump_vfs(struct sock *sk, struct sk_buff *nlskb) 30 30 { 31 - struct dentry *dentry = unix_sk(sk)->path.dentry; 31 + struct unix_diag_vfs uv; 32 + struct dentry *dentry; 33 + bool have_vfs = false; 32 34 35 + unix_state_lock(sk); 36 + dentry = unix_sk(sk)->path.dentry; 33 37 if (dentry) { 34 - struct unix_diag_vfs uv = { 35 - .udiag_vfs_ino = d_backing_inode(dentry)->i_ino, 36 - .udiag_vfs_dev = dentry->d_sb->s_dev, 37 - }; 38 - 39 - return nla_put(nlskb, UNIX_DIAG_VFS, sizeof(uv), &uv); 38 + uv.udiag_vfs_ino = d_backing_inode(dentry)->i_ino; 39 + uv.udiag_vfs_dev = dentry->d_sb->s_dev; 40 + have_vfs = true; 40 41 } 42 + unix_state_unlock(sk); 41 43 42 - return 0; 44 + if (!have_vfs) 45 + return 0; 46 + 47 + return nla_put(nlskb, UNIX_DIAG_VFS, sizeof(uv), &uv); 43 48 } 44 49 45 50 static int sk_diag_dump_peer(struct sock *sk, struct sk_buff *nlskb)
+2 -2
net/wireless/sysfs.c
··· 154 154 #define WIPHY_PM_OPS NULL 155 155 #endif 156 156 157 - static const void *wiphy_namespace(const struct device *d) 157 + static const struct ns_common *wiphy_namespace(const struct device *d) 158 158 { 159 159 struct wiphy *wiphy = container_of(d, struct wiphy, dev); 160 160 161 - return wiphy_net(wiphy); 161 + return to_ns_common(wiphy_net(wiphy)); 162 162 } 163 163 164 164 struct class ieee80211_class = {
+2 -1
net/xdp/xdp_umem.c
··· 203 203 if (!unaligned_chunks && chunks_rem) 204 204 return -EINVAL; 205 205 206 - if (headroom >= chunk_size - XDP_PACKET_HEADROOM) 206 + if (headroom > chunk_size - XDP_PACKET_HEADROOM - 207 + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) - 128) 207 208 return -EINVAL; 208 209 209 210 if (mr->flags & XDP_UMEM_TX_METADATA_LEN) {
+2 -2
net/xdp/xsk.c
··· 239 239 240 240 static int __xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len) 241 241 { 242 - u32 frame_size = xsk_pool_get_rx_frame_size(xs->pool); 242 + u32 frame_size = __xsk_pool_get_rx_frame_size(xs->pool); 243 243 void *copy_from = xsk_copy_xdp_start(xdp), *copy_to; 244 244 u32 from_len, meta_len, rem, num_desc; 245 245 struct xdp_buff_xsk *xskb; ··· 338 338 if (xs->dev != xdp->rxq->dev || xs->queue_id != xdp->rxq->queue_index) 339 339 return -EINVAL; 340 340 341 - if (len > xsk_pool_get_rx_frame_size(xs->pool) && !xs->sg) { 341 + if (len > __xsk_pool_get_rx_frame_size(xs->pool) && !xs->sg) { 342 342 xs->rx_dropped++; 343 343 return -ENOSPC; 344 344 }
+29 -3
net/xdp/xsk_buff_pool.c
··· 10 10 #include "xdp_umem.h" 11 11 #include "xsk.h" 12 12 13 + #define ETH_PAD_LEN (ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN) 14 + 13 15 void xp_add_xsk(struct xsk_buff_pool *pool, struct xdp_sock *xs) 14 16 { 15 17 if (!xs->tx) ··· 159 157 int xp_assign_dev(struct xsk_buff_pool *pool, 160 158 struct net_device *netdev, u16 queue_id, u16 flags) 161 159 { 160 + u32 needed = netdev->mtu + ETH_PAD_LEN; 161 + u32 segs = netdev->xdp_zc_max_segs; 162 + bool mbuf = flags & XDP_USE_SG; 162 163 bool force_zc, force_copy; 163 164 struct netdev_bpf bpf; 165 + u32 frame_size; 164 166 int err = 0; 165 167 166 168 ASSERT_RTNL(); ··· 184 178 if (err) 185 179 return err; 186 180 187 - if (flags & XDP_USE_SG) 181 + if (mbuf) 188 182 pool->umem->flags |= XDP_UMEM_SG_FLAG; 189 183 190 184 if (flags & XDP_USE_NEED_WAKEUP) ··· 206 200 goto err_unreg_pool; 207 201 } 208 202 209 - if (netdev->xdp_zc_max_segs == 1 && (flags & XDP_USE_SG)) { 210 - err = -EOPNOTSUPP; 203 + if (mbuf) { 204 + if (segs == 1) { 205 + err = -EOPNOTSUPP; 206 + goto err_unreg_pool; 207 + } 208 + } else { 209 + segs = 1; 210 + } 211 + 212 + /* open-code xsk_pool_get_rx_frame_size() as pool->dev is not 213 + * set yet at this point; we are before getting down to driver 214 + */ 215 + frame_size = __xsk_pool_get_rx_frame_size(pool) - 216 + xsk_pool_get_tailroom(mbuf); 217 + frame_size = ALIGN_DOWN(frame_size, 128); 218 + 219 + if (needed > frame_size * segs) { 220 + err = -EINVAL; 211 221 goto err_unreg_pool; 212 222 } 213 223 ··· 269 247 struct xdp_umem *umem = umem_xs->umem; 270 248 271 249 flags = umem->zc ? XDP_ZEROCOPY : XDP_COPY; 250 + 251 + if (umem->flags & XDP_UMEM_SG_FLAG) 252 + flags |= XDP_USE_SG; 253 + 272 254 if (umem_xs->pool->uses_need_wakeup) 273 255 flags |= XDP_USE_NEED_WAKEUP; 274 256
+14 -4
net/xfrm/xfrm_input.c
··· 506 506 /* An encap_type of -1 indicates async resumption. */ 507 507 if (encap_type == -1) { 508 508 async = 1; 509 - dev_put(skb->dev); 510 509 seq = XFRM_SKB_CB(skb)->seq.input.low; 511 510 spin_lock(&x->lock); 512 511 goto resume; ··· 658 659 dev_hold(skb->dev); 659 660 660 661 nexthdr = x->type->input(x, skb); 661 - if (nexthdr == -EINPROGRESS) 662 + if (nexthdr == -EINPROGRESS) { 663 + if (async) 664 + dev_put(skb->dev); 662 665 return 0; 666 + } 663 667 664 668 dev_put(skb->dev); 665 669 spin_lock(&x->lock); ··· 697 695 XFRM_MODE_SKB_CB(skb)->protocol = nexthdr; 698 696 699 697 err = xfrm_inner_mode_input(x, skb); 700 - if (err == -EINPROGRESS) 698 + if (err == -EINPROGRESS) { 699 + if (async) 700 + dev_put(skb->dev); 701 701 return 0; 702 - else if (err) { 702 + } else if (err) { 703 703 XFRM_INC_STATS(net, LINUX_MIB_XFRMINSTATEMODEERROR); 704 704 goto drop; 705 705 } ··· 738 734 sp->olen = 0; 739 735 if (skb_valid_dst(skb)) 740 736 skb_dst_drop(skb); 737 + if (async) 738 + dev_put(skb->dev); 741 739 gro_cells_receive(&gro_cells, skb); 742 740 return 0; 743 741 } else { ··· 759 753 sp->olen = 0; 760 754 if (skb_valid_dst(skb)) 761 755 skb_dst_drop(skb); 756 + if (async) 757 + dev_put(skb->dev); 762 758 gro_cells_receive(&gro_cells, skb); 763 759 return err; 764 760 } ··· 771 763 drop_unlock: 772 764 spin_unlock(&x->lock); 773 765 drop: 766 + if (async) 767 + dev_put(skb->dev); 774 768 xfrm_rcv_cb(skb, family, x && x->type ? x->type->proto : nexthdr, -1); 775 769 kfree_skb(skb); 776 770 return 0;
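The xfrm_input change moves the `dev_put()` from the top of the async-resume path to every exit of that path, so the reference taken before `->input()` stays valid while `skb->dev` is still dereferenced. A toy model of the balancing (the refcount and the always-async first pass are simulation devices, not the real API):

```c
#include <assert.h>
#include <stdbool.h>

static int refcnt;	/* stands in for the net_device reference count */

static void dev_hold(void) { refcnt++; }
static void dev_put(void)  { refcnt--; }

/* Sketch of the reordered reference handling: the reference taken on
 * the first (synchronous) pass must survive the async crypto round
 * trip, so the put happens only on the exits of the resume path. This
 * toy version models a first pass that always hands off to async. */
static int input_pass(bool async)
{
	if (!async)
		dev_hold();	/* first pass: pin the device */

	/* ... processing that still dereferences the device ... */

	if (async)
		dev_put();	/* resume path: drop it only when done */
	return 0;
}
```

Running the two passes back to back shows the reference held across the hand-off and released exactly once at the end.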
+2 -3
net/xfrm/xfrm_policy.c
··· 4290 4290 #endif 4291 4291 xfrm_policy_flush(net, XFRM_POLICY_TYPE_MAIN, false); 4292 4292 4293 + synchronize_rcu(); 4294 + 4293 4295 WARN_ON(!list_empty(&net->xfrm.policy_all)); 4294 4296 4295 4297 for (dir = 0; dir < XFRM_POLICY_MAX; dir++) { ··· 4528 4526 pol = xfrm_policy_lookup_bytype(net, type, &fl, sel->family, dir, if_id); 4529 4527 if (IS_ERR_OR_NULL(pol)) 4530 4528 goto out_unlock; 4531 - 4532 - if (!xfrm_pol_hold_rcu(pol)) 4533 - pol = NULL; 4534 4529 out_unlock: 4535 4530 rcu_read_unlock(); 4536 4531 return pol;
+12 -2
net/xfrm/xfrm_user.c
··· 2677 2677 + nla_total_size(4) /* XFRM_AE_RTHR */ 2678 2678 + nla_total_size(4) /* XFRM_AE_ETHR */ 2679 2679 + nla_total_size(sizeof(x->dir)) /* XFRMA_SA_DIR */ 2680 - + nla_total_size(4); /* XFRMA_SA_PCPU */ 2680 + + nla_total_size(4) /* XFRMA_SA_PCPU */ 2681 + + nla_total_size(sizeof(x->if_id)); /* XFRMA_IF_ID */ 2681 2682 } 2682 2683 2683 2684 static int build_aevent(struct sk_buff *skb, struct xfrm_state *x, const struct km_event *c) ··· 2790 2789 c.portid = nlh->nlmsg_pid; 2791 2790 2792 2791 err = build_aevent(r_skb, x, &c); 2793 - BUG_ON(err < 0); 2792 + if (err < 0) { 2793 + spin_unlock_bh(&x->lock); 2794 + xfrm_state_put(x); 2795 + kfree_skb(r_skb); 2796 + return err; 2797 + } 2794 2798 2795 2799 err = nlmsg_unicast(xfrm_net_nlsk(net, skb), r_skb, NETLINK_CB(skb).portid); 2796 2800 spin_unlock_bh(&x->lock); ··· 3966 3960 return err; 3967 3961 } 3968 3962 upe->hard = !!hard; 3963 + /* clear the padding bytes */ 3964 + memset_after(upe, 0, hard); 3969 3965 3970 3966 nlmsg_end(skb, nlh); 3971 3967 return 0; ··· 4125 4117 return -EMSGSIZE; 4126 4118 4127 4119 ur = nlmsg_data(nlh); 4120 + memset(ur, 0, sizeof(*ur)); 4128 4121 ur->proto = proto; 4129 4122 memcpy(&ur->sel, sel, sizeof(ur->sel)); 4130 4123 ··· 4173 4164 4174 4165 um = nlmsg_data(nlh); 4175 4166 4167 + memset(&um->id, 0, sizeof(um->id)); 4176 4168 memcpy(&um->id.daddr, &x->id.daddr, sizeof(um->id.daddr)); 4177 4169 um->id.spi = x->id.spi; 4178 4170 um->id.family = x->props.family;
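Several of the xfrm_user hunks zero structure padding (`memset_after()`, explicit `memset()` of `um->id` and `ur`) before the structures are copied to userspace over netlink, since uninitialized padding bytes leak kernel memory. A userspace stand-in for the kernel's `memset_after()` macro, using a hypothetical struct with trailing padding (the struct layout assumes a typical ABI where `int` is 4-byte aligned):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Userspace stand-in for the kernel's memset_after(): zero everything
 * in *ptr that follows the named member, padding included. */
#define memset_after(ptr, v, member)                                   \
	memset((char *)(ptr) + offsetof(__typeof__(*(ptr)), member) +  \
		       sizeof((ptr)->member),                          \
	       (v),                                                    \
	       sizeof(*(ptr)) - offsetof(__typeof__(*(ptr)), member) - \
		       sizeof((ptr)->member))

struct upe_like {
	int  id;
	char hard;	/* 3 padding bytes usually follow this member */
};

/* Hypothetical helper showing the pattern from the patch. */
static void fill(struct upe_like *upe, int id, char hard)
{
	upe->id = id;
	upe->hard = hard;
	memset_after(upe, 0, hard);	/* clear padding before copy-out */
}
```

Poisoning the struct first makes the cleared padding visible.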
+2 -1
scripts/Makefile.package
··· 195 195 .tmp_modules_cpio: FORCE 196 196 $(Q)$(MAKE) -f $(srctree)/Makefile 197 197 $(Q)rm -rf $@ 198 - $(Q)$(MAKE) -f $(srctree)/Makefile INSTALL_MOD_PATH=$@ modules_install 198 + $(Q)$(MAKE) -f $(srctree)/Makefile INSTALL_MOD_PATH=$@/$(INSTALL_MOD_PATH) modules_install 199 199 200 200 quiet_cmd_cpio = CPIO $@ 201 201 cmd_cpio = $(CONFIG_SHELL) $(srctree)/usr/gen_initramfs.sh -o $@ $< ··· 264 264 @echo ' tarxz-pkg - Build the kernel as a xz compressed tarball' 265 265 @echo ' tarzst-pkg - Build the kernel as a zstd compressed tarball' 266 266 @echo ' modules-cpio-pkg - Build the kernel modules as cpio archive' 267 + @echo ' (uses INSTALL_MOD_PATH inside the archive)' 267 268 @echo ' perf-tar-src-pkg - Build the perf source tarball with no compression' 268 269 @echo ' perf-targz-src-pkg - Build the perf source tarball with gzip compression' 269 270 @echo ' perf-tarbz2-src-pkg - Build the perf source tarball with bz2 compression'
+1 -1
scripts/mod/modpost.c
··· 56 56 57 57 static bool error_occurred; 58 58 59 - static bool extra_warn; 59 + static bool extra_warn __attribute__((unused)); 60 60 61 61 bool target_is_big_endian; 62 62 bool host_is_big_endian;
+1
sound/hda/codecs/realtek/alc269.c
··· 7680 7680 SND_PCI_QUIRK(0x17aa, 0x38fd, "ThinkBook plus Gen5 Hybrid", ALC287_FIXUP_TAS2781_I2C), 7681 7681 SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI), 7682 7682 SND_PCI_QUIRK(0x17aa, 0x390d, "Lenovo Yoga Pro 7 14ASP10", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN), 7683 + SND_PCI_QUIRK(0x17aa, 0x3911, "Lenovo Yoga Pro 7 14IAH10", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN), 7683 7684 SND_PCI_QUIRK(0x17aa, 0x3913, "Lenovo 145", ALC236_FIXUP_LENOVO_INV_DMIC), 7684 7685 SND_PCI_QUIRK(0x17aa, 0x391a, "Lenovo Yoga Slim 7 14AKP10", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN), 7685 7686 SND_PCI_QUIRK(0x17aa, 0x391f, "Yoga S990-16 pro Quad YC Quad", ALC287_FIXUP_TXNW2781_I2C),
-9
sound/hda/codecs/realtek/alc662.c
··· 313 313 ALC897_FIXUP_HEADSET_MIC_PIN2, 314 314 ALC897_FIXUP_UNIS_H3C_X500S, 315 315 ALC897_FIXUP_HEADSET_MIC_PIN3, 316 - ALC897_FIXUP_H610M_HP_PIN, 317 316 }; 318 317 319 318 static const struct hda_fixup alc662_fixups[] = { ··· 766 767 { } 767 768 }, 768 769 }, 769 - [ALC897_FIXUP_H610M_HP_PIN] = { 770 - .type = HDA_FIXUP_PINS, 771 - .v.pins = (const struct hda_pintbl[]) { 772 - { 0x19, 0x0321403f }, /* HP out */ 773 - { } 774 - }, 775 - }, 776 770 }; 777 771 778 772 static const struct hda_quirk alc662_fixup_tbl[] = { ··· 815 823 SND_PCI_QUIRK(0x1043, 0x8469, "ASUS mobo", ALC662_FIXUP_NO_JACK_DETECT), 816 824 SND_PCI_QUIRK(0x105b, 0x0cd6, "Foxconn", ALC662_FIXUP_ASUS_MODE2), 817 825 SND_PCI_QUIRK(0x144d, 0xc051, "Samsung R720", ALC662_FIXUP_IDEAPAD), 818 - SND_PCI_QUIRK(0x1458, 0xa194, "H610M H V2 DDR4", ALC897_FIXUP_H610M_HP_PIN), 819 826 SND_PCI_QUIRK(0x14cd, 0x5003, "USI", ALC662_FIXUP_USI_HEADSET_MODE), 820 827 SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC662_FIXUP_LENOVO_MULTI_CODECS), 821 828 SND_PCI_QUIRK(0x17aa, 0x1057, "Lenovo P360", ALC897_FIXUP_HEADSET_MIC_PIN),
+5 -2
sound/hda/controllers/intel.c
··· 295 295 #define AZX_DCAPS_INTEL_LNL \ 296 296 (AZX_DCAPS_INTEL_SKYLAKE | AZX_DCAPS_PIO_COMMANDS) 297 297 298 + #define AZX_DCAPS_INTEL_NVL \ 299 + (AZX_DCAPS_INTEL_LNL & ~AZX_DCAPS_NO_ALIGN_BUFSIZE) 300 + 298 301 /* quirks for ATI SB / AMD Hudson */ 299 302 #define AZX_DCAPS_PRESET_ATI_SB \ 300 303 (AZX_DCAPS_NO_TCSEL | AZX_DCAPS_POSFIX_LPIB |\ ··· 2568 2565 /* Wildcat Lake */ 2569 2566 { PCI_DEVICE_DATA(INTEL, HDA_WCL, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_LNL) }, 2570 2567 /* Nova Lake */ 2571 - { PCI_DEVICE_DATA(INTEL, HDA_NVL, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_LNL) }, 2572 - { PCI_DEVICE_DATA(INTEL, HDA_NVL_S, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_LNL) }, 2568 + { PCI_DEVICE_DATA(INTEL, HDA_NVL, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_NVL) }, 2569 + { PCI_DEVICE_DATA(INTEL, HDA_NVL_S, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_NVL) }, 2573 2570 /* Apollolake (Broxton-P) */ 2574 2571 { PCI_DEVICE_DATA(INTEL, HDA_APL, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_BROXTON) }, 2575 2572 /* Gemini-Lake */
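The new `AZX_DCAPS_INTEL_NVL` preset is derived by masking a quirk bit out of the existing LNL preset rather than re-listing every flag. A tiny sketch of that derive-and-clear pattern, with purely illustrative flag values (the real `AZX_DCAPS_*` constants differ):

```c
#include <assert.h>

/* Illustrative flag values; the real AZX_DCAPS_* constants differ. */
#define DCAPS_NO_ALIGN_BUFSIZE	(1u << 0)
#define DCAPS_PIO_COMMANDS	(1u << 1)
#define DCAPS_SKYLAKE		(1u << 2)

#define DCAPS_LNL \
	(DCAPS_SKYLAKE | DCAPS_PIO_COMMANDS | DCAPS_NO_ALIGN_BUFSIZE)
/* Nova Lake inherits the LNL preset but drops the no-align quirk. */
#define DCAPS_NVL (DCAPS_LNL & ~DCAPS_NO_ALIGN_BUFSIZE)
```

Building one preset from another keeps future additions to the base preset automatically inherited.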
+20 -4
sound/soc/amd/acp/acp-sdw-legacy-mach.c
··· 99 99 .callback = soc_sdw_quirk_cb, 100 100 .matches = { 101 101 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 102 - DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "21YW"), 102 + DMI_MATCH(DMI_PRODUCT_SKU, "21YW"), 103 103 }, 104 - .driver_data = (void *)(ASOC_SDW_CODEC_SPKR), 104 + .driver_data = (void *)((ASOC_SDW_CODEC_SPKR) | (ASOC_SDW_ACP_DMIC)), 105 105 }, 106 106 { 107 107 .callback = soc_sdw_quirk_cb, 108 108 .matches = { 109 109 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 110 - DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "21YX"), 110 + DMI_MATCH(DMI_PRODUCT_SKU, "21YX"), 111 111 }, 112 - .driver_data = (void *)(ASOC_SDW_CODEC_SPKR), 112 + .driver_data = (void *)((ASOC_SDW_CODEC_SPKR) | (ASOC_SDW_ACP_DMIC)), 113 + }, 114 + { 115 + .callback = soc_sdw_quirk_cb, 116 + .matches = { /* Lenovo P16s G5 AMD */ 117 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 118 + DMI_MATCH(DMI_PRODUCT_SKU, "21XG"), 119 + }, 120 + .driver_data = (void *)(ASOC_SDW_ACP_DMIC), 121 + }, 122 + { 123 + .callback = soc_sdw_quirk_cb, 124 + .matches = { /* Lenovo P16s G5 AMD */ 125 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 126 + DMI_MATCH(DMI_PRODUCT_SKU, "21XH"), 127 + }, 128 + .driver_data = (void *)(ASOC_SDW_ACP_DMIC), 113 129 }, 114 130 { 115 131 .callback = soc_sdw_quirk_cb,
+8 -1
sound/soc/codecs/nau8325.c
··· 142 142 static bool nau8325_writeable_reg(struct device *dev, unsigned int reg) 143 143 { 144 144 switch (reg) { 145 - case NAU8325_R00_HARDWARE_RST: 145 + case NAU8325_R00_HARDWARE_RST ... NAU8325_R01_SOFTWARE_RST: 146 146 case NAU8325_R03_CLK_CTRL ... NAU8325_R06_INT_CLR_STATUS: 147 147 case NAU8325_R09_IRQOUT ... NAU8325_R13_DAC_VOLUME: 148 148 case NAU8325_R29_DAC_CTRL1 ... NAU8325_R2A_DAC_CTRL2: ··· 670 670 regmap_write(regmap, NAU8325_R00_HARDWARE_RST, 0x0000); 671 671 } 672 672 673 + static void nau8325_software_reset(struct regmap *regmap) 674 + { 675 + regmap_write(regmap, NAU8325_R01_SOFTWARE_RST, 0x0000); 676 + regmap_write(regmap, NAU8325_R01_SOFTWARE_RST, 0x0000); 677 + } 678 + 673 679 static void nau8325_init_regs(struct nau8325 *nau8325) 674 680 { 675 681 struct regmap *regmap = nau8325->regmap; ··· 862 856 nau8325_print_device_properties(nau8325); 863 857 864 858 nau8325_reset_chip(nau8325->regmap); 859 + nau8325_software_reset(nau8325->regmap); 865 860 ret = regmap_read(nau8325->regmap, NAU8325_R02_DEVICE_ID, &value); 866 861 if (ret) { 867 862 dev_dbg(dev, "Failed to read device id (%d)", ret);
+6 -3
sound/soc/intel/avs/board_selection.c
··· 520 520 if (num_elems > max_ssps) { 521 521 dev_err(adev->dev, "board supports only %d SSP, %d specified\n", 522 522 max_ssps, num_elems); 523 - return -EINVAL; 523 + ret = -EINVAL; 524 + goto exit; 524 525 } 525 526 526 527 for (ssp_port = 0; ssp_port < num_elems; ssp_port++) { ··· 529 528 for_each_set_bit(tdm_slot, &tdm_slots, 16) { 530 529 ret = avs_register_i2s_test_board(adev, ssp_port, tdm_slot); 531 530 if (ret) 532 - return ret; 531 + goto exit; 533 532 } 534 533 } 535 534 536 - return 0; 535 + exit: 536 + kfree(array); 537 + return ret; 537 538 } 538 539 539 540 static int avs_register_i2s_board(struct avs_dev *adev, struct snd_soc_acpi_mach *mach)
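The avs board_selection fix converts early `return`s into `goto exit` so the allocated `array` is freed on every path. A minimal sketch of that single-exit cleanup pattern; the function name, array size, and error values here are illustrative, not the driver's:

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of the leak fix: once the port array is allocated, every
 * failure path must go through a common exit label so the array is
 * freed exactly once. */
static int register_test_boards(int num_elems, int max_ssps)
{
	int *array = calloc(16, sizeof(*array));	/* parsed port list */
	int ret = 0;

	if (!array)
		return -1;

	if (num_elems > max_ssps) {
		ret = -22;	/* -EINVAL: previously leaked via early return */
		goto exit;
	}

	/* ... per-port/per-slot registration, each error doing 'goto exit' ... */

exit:
	free(array);
	return ret;
}
```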
+17
sound/soc/sdca/sdca_class_function.c
··· 198 198 return sdca_irq_populate(drv->function, component, core->irq_info); 199 199 } 200 200 201 + static void class_function_component_remove(struct snd_soc_component *component) 202 + { 203 + struct class_function_drv *drv = snd_soc_component_get_drvdata(component); 204 + struct sdca_class_drv *core = drv->core; 205 + 206 + sdca_irq_cleanup(component->dev, drv->function, core->irq_info); 207 + } 208 + 201 209 static int class_function_set_jack(struct snd_soc_component *component, 202 210 struct snd_soc_jack *jack, void *d) 203 211 { ··· 217 209 218 210 static const struct snd_soc_component_driver class_function_component_drv = { 219 211 .probe = class_function_component_probe, 212 + .remove = class_function_component_remove, 220 213 .endianness = 1, 221 214 }; 222 215 ··· 411 402 return 0; 412 403 } 413 404 405 + static void class_function_remove(struct auxiliary_device *auxdev) 406 + { 407 + struct class_function_drv *drv = auxiliary_get_drvdata(auxdev); 408 + 409 + sdca_irq_cleanup(drv->dev, drv->function, drv->core->irq_info); 410 + } 411 + 414 412 static int class_function_runtime_suspend(struct device *dev) 415 413 { 416 414 struct auxiliary_device *auxdev = to_auxiliary_dev(dev); ··· 566 550 }, 567 551 568 552 .probe = class_function_probe, 553 + .remove = class_function_remove, 569 554 .id_table = class_function_id_table 570 555 }; 571 556 module_auxiliary_driver(class_function_drv);
+74 -8
sound/soc/sdca/sdca_interrupts.c
··· 117 117 118 118 status = val; 119 119 for_each_set_bit(mask, &status, BITS_PER_BYTE) { 120 - mask = 1 << mask; 121 - 122 - switch (mask) { 120 + switch (BIT(mask)) { 123 121 case SDCA_CTL_ENTITY_0_FUNCTION_NEEDS_INITIALIZATION: 124 122 //FIXME: Add init writes 125 123 break; ··· 138 140 } 139 141 } 140 142 141 - ret = regmap_write(interrupt->function_regmap, reg, val); 143 + ret = regmap_write(interrupt->function_regmap, reg, val & 0x7F); 142 144 if (ret < 0) { 143 145 dev_err(dev, "failed to clear function status: %d\n", ret); 144 146 goto error; ··· 250 252 if (irq < 0) 251 253 return irq; 252 254 253 - ret = devm_request_threaded_irq(dev, irq, NULL, handler, 254 - IRQF_ONESHOT, name, data); 255 + ret = request_threaded_irq(irq, NULL, handler, IRQF_ONESHOT, name, data); 255 256 if (ret) 256 257 return ret; 257 258 ··· 259 262 dev_dbg(dev, "requested irq %d for %s\n", irq, name); 260 263 261 264 return 0; 265 + } 266 + 267 + static void sdca_irq_free_locked(struct device *dev, struct sdca_interrupt_info *info, 268 + int sdca_irq, const char *name, void *data) 269 + { 270 + int irq; 271 + 272 + irq = regmap_irq_get_virq(info->irq_data, sdca_irq); 273 + if (irq < 0) 274 + return; 275 + 276 + free_irq(irq, data); 277 + 278 + info->irqs[sdca_irq].irq = 0; 279 + 280 + dev_dbg(dev, "freed irq %d for %s\n", irq, name); 262 281 } 263 282 264 283 /** ··· 316 303 EXPORT_SYMBOL_NS_GPL(sdca_irq_request, "SND_SOC_SDCA"); 317 304 318 305 /** 306 + * sdca_irq_free - free an individual SDCA interrupt 307 + * @dev: Pointer to the struct device. 308 + * @info: Pointer to the interrupt information structure. 309 + * @sdca_irq: SDCA interrupt position. 310 + * @name: Name to be given to the IRQ. 311 + * @data: Private data pointer that will be passed to the handler. 
312 + * 313 + * Typically this is handled internally by sdca_irq_cleanup, however if 314 + * a device requires custom IRQ handling this can be called manually before 315 + * calling sdca_irq_cleanup, which will then skip that IRQ whilst processing. 316 + */ 317 + void sdca_irq_free(struct device *dev, struct sdca_interrupt_info *info, 318 + int sdca_irq, const char *name, void *data) 319 + { 320 + if (sdca_irq < 0 || sdca_irq >= SDCA_MAX_INTERRUPTS) 321 + return; 322 + 323 + guard(mutex)(&info->irq_lock); 324 + 325 + sdca_irq_free_locked(dev, info, sdca_irq, name, data); 326 + } 327 + EXPORT_SYMBOL_NS_GPL(sdca_irq_free, "SND_SOC_SDCA"); 328 + 329 + /** 319 330 * sdca_irq_data_populate - Populate common interrupt data 320 331 * @dev: Pointer to the Function device. 321 332 * @regmap: Pointer to the Function regmap. ··· 365 328 if (!dev) 366 329 return -ENODEV; 367 330 368 - name = devm_kasprintf(dev, GFP_KERNEL, "%s %s %s", function->desc->name, 369 - entity->label, control->label); 331 + name = kasprintf(GFP_KERNEL, "%s %s %s", function->desc->name, 332 + entity->label, control->label); 370 333 if (!name) 371 334 return -ENOMEM; 372 335 ··· 552 515 return 0; 553 516 } 554 517 EXPORT_SYMBOL_NS_GPL(sdca_irq_populate, "SND_SOC_SDCA"); 518 + 519 + /** 520 + * sdca_irq_cleanup - Free all the individual IRQs for an SDCA Function 521 + * @dev: Device pointer against which the sdca_interrupt_info was allocated. 522 + * @function: Pointer to the SDCA Function. 523 + * @info: Pointer to the SDCA interrupt info for this device. 524 + * 525 + * Typically this would be called from the driver for a single SDCA Function. 
526 + */ 527 + void sdca_irq_cleanup(struct device *dev, 528 + struct sdca_function_data *function, 529 + struct sdca_interrupt_info *info) 530 + { 531 + int i; 532 + 533 + guard(mutex)(&info->irq_lock); 534 + 535 + for (i = 0; i < SDCA_MAX_INTERRUPTS; i++) { 536 + struct sdca_interrupt *interrupt = &info->irqs[i]; 537 + 538 + if (interrupt->function != function || !interrupt->irq) 539 + continue; 540 + 541 + sdca_irq_free_locked(dev, info, i, interrupt->name, interrupt); 542 + 543 + kfree(interrupt->name); 544 + } 545 + } 546 + EXPORT_SYMBOL_NS_GPL(sdca_irq_cleanup, "SND_SOC_SDCA"); 555 547 556 548 /** 557 549 * sdca_irq_allocate - allocate an SDCA interrupt structure for a device
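The `sdca_irq_cleanup()` loop frees only the IRQs that belong to the given Function and are still live, so entries a driver already released via `sdca_irq_free()` are skipped. A self-contained sketch of that skip-while-freeing pattern (the table shape and `irq == 0` sentinel mirror the patch; the helper name is illustrative):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_IRQS 4

struct entry {
	void *owner;
	int   irq;	/* 0 means "not requested / already freed" */
};

/* Sketch of the cleanup loop: free only entries owned by this
 * function that are still live; 'irq = 0' stands in for free_irq(). */
static int cleanup(struct entry *e, size_t n, void *owner)
{
	int freed = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		if (e[i].owner != owner || !e[i].irq)
			continue;
		e[i].irq = 0;
		freed++;
	}
	return freed;
}
```

An entry already at 0 is left alone, which is what makes the manual `sdca_irq_free()` plus later `sdca_irq_cleanup()` sequence safe.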
+12 -2
sound/soc/sof/intel/hda-pcm.c
··· 219 219 int hda_dsp_pcm_open(struct snd_sof_dev *sdev, 220 220 struct snd_pcm_substream *substream) 221 221 { 222 + const struct sof_intel_dsp_desc *chip_info = get_chip_info(sdev->pdata); 222 223 struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream); 223 224 struct snd_pcm_runtime *runtime = substream->runtime; 224 225 struct snd_soc_component *scomp = sdev->component; ··· 269 268 return -ENODEV; 270 269 } 271 270 272 - /* minimum as per HDA spec */ 273 - snd_pcm_hw_constraint_step(substream->runtime, 0, SNDRV_PCM_HW_PARAM_PERIOD_BYTES, 4); 271 + /* 272 + * Set period size constraint to ensure BDLE buffer length and 273 + * start address alignment requirements are met. Align to 128 274 + * bytes for newer Intel platforms, with older ones using 4 byte alignment. 275 + */ 276 + if (chip_info->hw_ip_version >= SOF_INTEL_ACE_4_0) 277 + snd_pcm_hw_constraint_step(substream->runtime, 0, 278 + SNDRV_PCM_HW_PARAM_PERIOD_BYTES, 128); 279 + else 280 + snd_pcm_hw_constraint_step(substream->runtime, 0, 281 + SNDRV_PCM_HW_PARAM_PERIOD_BYTES, 4); 274 282 275 283 /* avoid circular buffer wrap in middle of period */ 276 284 snd_pcm_hw_constraint_integer(substream->runtime,
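The hda-pcm change tightens the period-bytes step constraint from 4 to 128 on ACE 4.0 and newer. What such a step constraint enforces is simply that the chosen period size is a multiple of the step; a one-line sketch of the rounding effect (the helper is illustrative, not the ALSA API):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of what a step constraint enforces: the period size in
 * bytes is rounded down to a multiple of the step (128 on newer
 * Intel platforms, 4 on older ones, per the patch). */
static uint32_t constrain_period_bytes(uint32_t requested, uint32_t step)
{
	return requested - (requested % step);
}
```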
+4 -6
sound/soc/sof/intel/hda.c
··· 1133 1133 1134 1134 #if IS_ENABLED(CONFIG_SND_SOC_SOF_INTEL_SOUNDWIRE) 1135 1135 1136 - static bool is_endpoint_present(struct sdw_slave *sdw_device, 1137 - struct asoc_sdw_codec_info *dai_info, int dai_type) 1136 + static bool is_endpoint_present(struct sdw_slave *sdw_device, int dai_type) 1138 1137 { 1139 1138 int i; 1140 1139 ··· 1144 1145 } 1145 1146 1146 1147 for (i = 0; i < sdw_device->sdca_data.num_functions; i++) { 1147 - if (dai_type == dai_info->dais[i].dai_type) 1148 + if (dai_type == asoc_sdw_get_dai_type(sdw_device->sdca_data.function[i].type)) 1148 1149 return true; 1149 1150 } 1150 1151 dev_dbg(&sdw_device->dev, "Endpoint DAI type %d not found\n", dai_type); ··· 1198 1199 } 1199 1200 for (j = 0; j < codec_info_list[i].dai_num; j++) { 1200 1201 /* Check if the endpoint is present by the SDCA DisCo table */ 1201 - if (!is_endpoint_present(sdw_device, &codec_info_list[i], 1202 - codec_info_list[i].dais[j].dai_type)) 1202 + if (!is_endpoint_present(sdw_device, codec_info_list[i].dais[j].dai_type)) 1203 1203 continue; 1204 1204 1205 - endpoints[ep_index].num = ep_index; 1205 + endpoints[ep_index].num = j; 1206 1206 if (codec_info_list[i].dais[j].dai_type == SOC_SDW_DAI_TYPE_AMP) { 1207 1207 /* Assume all amp are aggregated */ 1208 1208 endpoints[ep_index].aggregated = 1;
+3
sound/soc/stm/stm32_sai_sub.c
··· 802 802 break; 803 803 /* Left justified */ 804 804 case SND_SOC_DAIFMT_MSB: 805 + cr1 |= SAI_XCR1_CKSTR; 805 806 frcr |= SAI_XFRCR_FSPOL | SAI_XFRCR_FSDEF; 806 807 break; 807 808 /* Right justified */ ··· 810 809 frcr |= SAI_XFRCR_FSPOL | SAI_XFRCR_FSDEF; 811 810 break; 812 811 case SND_SOC_DAIFMT_DSP_A: 812 + cr1 |= SAI_XCR1_CKSTR; 813 813 frcr |= SAI_XFRCR_FSPOL | SAI_XFRCR_FSOFF; 814 814 break; 815 815 case SND_SOC_DAIFMT_DSP_B: 816 + cr1 |= SAI_XCR1_CKSTR; 816 817 frcr |= SAI_XFRCR_FSPOL; 817 818 break; 818 819 default:
+15 -21
tools/perf/trace/beauty/include/uapi/linux/prctl.h
··· 397 397 # define PR_RSEQ_SLICE_EXT_ENABLE 0x01 398 398 399 399 /* 400 - * Get the current indirect branch tracking configuration for the current 401 - * thread, this will be the value configured via PR_SET_INDIR_BR_LP_STATUS. 400 + * Get or set the control flow integrity (CFI) configuration for the 401 + * current thread. 402 + * 403 + * Some per-thread control flow integrity settings are not yet 404 + * controlled through this prctl(); see for example 405 + * PR_{GET,SET,LOCK}_SHADOW_STACK_STATUS 402 406 */ 403 - #define PR_GET_INDIR_BR_LP_STATUS 80 404 - 407 + #define PR_GET_CFI 80 408 + #define PR_SET_CFI 81 405 409 /* 406 - * Set the indirect branch tracking configuration. PR_INDIR_BR_LP_ENABLE will 407 - * enable cpu feature for user thread, to track all indirect branches and ensure 408 - * they land on arch defined landing pad instruction. 409 - * x86 - If enabled, an indirect branch must land on an ENDBRANCH instruction. 410 - * arch64 - If enabled, an indirect branch must land on a BTI instruction. 411 - * riscv - If enabled, an indirect branch must land on an lpad instruction. 412 - * PR_INDIR_BR_LP_DISABLE will disable feature for user thread and indirect 413 - * branches will no more be tracked by cpu to land on arch defined landing pad 414 - * instruction. 410 + * Forward-edge CFI variants (excluding ARM64 BTI, which has its own 411 + * prctl()s). 415 412 */ 416 - #define PR_SET_INDIR_BR_LP_STATUS 81 417 - # define PR_INDIR_BR_LP_ENABLE (1UL << 0) 413 + #define PR_CFI_BRANCH_LANDING_PADS 0 414 + /* Return and control values for PR_{GET,SET}_CFI */ 415 + # define PR_CFI_ENABLE _BITUL(0) 416 + # define PR_CFI_DISABLE _BITUL(1) 417 + # define PR_CFI_LOCK _BITUL(2) 418 418 419 - /* 420 - * Prevent further changes to the specified indirect branch tracking 421 - * configuration. All bits may be locked via this call, including 422 - * undefined bits. 423 - */ 424 - #define PR_LOCK_INDIR_BR_LP_STATUS 82 425 419 426 420 #endif /* _LINUX_PRCTL_H */
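The new uapi replaces the three `PR_*_INDIR_BR_LP_STATUS` prctls with a bitmask argument to `PR_{GET,SET}_CFI`. A toy model of the kernel-side enable/disable/lock semantics implied by those bits; the bit values come from the patched header, but the locking behavior shown (a locked state rejects any change) is an assumption for illustration, not the authoritative implementation:

```c
#include <assert.h>
#include <stdbool.h>

/* Bit values from the patched uapi header. */
#define PR_CFI_ENABLE	(1UL << 0)
#define PR_CFI_DISABLE	(1UL << 1)
#define PR_CFI_LOCK	(1UL << 2)

struct cfi_state {
	bool enabled;
	bool locked;
};

/* Assumed sketch of PR_SET_CFI handling for the landing-pad feature:
 * contradictory requests fail, and once locked the state cannot be
 * changed (though a no-op request still succeeds). */
static int set_cfi(struct cfi_state *s, unsigned long arg)
{
	bool target = s->enabled;

	if ((arg & PR_CFI_ENABLE) && (arg & PR_CFI_DISABLE))
		return -1;		/* contradictory request */
	if (arg & PR_CFI_ENABLE)
		target = true;
	else if (arg & PR_CFI_DISABLE)
		target = false;

	if (s->locked && target != s->enabled)
		return -1;		/* locked: no further changes */

	s->enabled = target;
	if (arg & PR_CFI_LOCK)
		s->locked = true;
	return 0;
}
```

Enabling and locking in one call, then attempting to disable, exercises the lock.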
+54 -46
tools/power/x86/turbostat/turbostat.c
··· 2409 2409 int max_l3_id; 2410 2410 int max_node_num; 2411 2411 int nodes_per_pkg; 2412 - int cores_per_node; 2412 + int cores_per_pkg; 2413 2413 int threads_per_core; 2414 2414 } topo; 2415 2415 ··· 2837 2837 UNUSED(type); 2838 2838 2839 2839 if (format == FORMAT_RAW && width >= 64) 2840 - return (sprintf(outp, "%s%-8s", (*printed++ ? delim : ""), name)); 2840 + return (sprintf(outp, "%s%-8s", ((*printed)++ ? delim : ""), name)); 2841 2841 else 2842 - return (sprintf(outp, "%s%s", (*printed++ ? delim : ""), name)); 2842 + return (sprintf(outp, "%s%s", ((*printed)++ ? delim : ""), name)); 2843 2843 } 2844 2844 2845 2845 static inline int print_hex_value(int width, int *printed, char *delim, unsigned long long value) 2846 2846 { 2847 2847 if (width <= 32) 2848 - return (sprintf(outp, "%s%08x", (*printed++ ? delim : ""), (unsigned int)value)); 2848 + return (sprintf(outp, "%s%08x", ((*printed)++ ? delim : ""), (unsigned int)value)); 2849 2849 else 2850 - return (sprintf(outp, "%s%016llx", (*printed++ ? delim : ""), value)); 2850 + return (sprintf(outp, "%s%016llx", ((*printed)++ ? delim : ""), value)); 2851 2851 } 2852 2852 2853 2853 static inline int print_decimal_value(int width, int *printed, char *delim, unsigned long long value) 2854 2854 { 2855 - if (width <= 32) 2856 - return (sprintf(outp, "%s%d", (*printed++ ? delim : ""), (unsigned int)value)); 2857 - else 2858 - return (sprintf(outp, "%s%-8lld", (*printed++ ? delim : ""), value)); 2855 + UNUSED(width); 2856 + 2857 + return (sprintf(outp, "%s%lld", ((*printed)++ ? delim : ""), value)); 2859 2858 } 2860 2859 2861 2860 static inline int print_float_value(int *printed, char *delim, double value) 2862 2861 { 2863 - return (sprintf(outp, "%s%0.2f", (*printed++ ? delim : ""), value)); 2862 + return (sprintf(outp, "%s%0.2f", ((*printed)++ ? 
delim : ""), value)); 2864 2863 } 2865 2864 2866 2865 void print_header(char *delim) ··· 3468 3469 for (i = 0, pp = sys.perf_tp; pp; ++i, pp = pp->next) { 3469 3470 if (pp->format == FORMAT_RAW) 3470 3471 outp += print_hex_value(pp->width, &printed, delim, t->perf_counter[i]); 3471 - else if (pp->format == FORMAT_DELTA || mp->format == FORMAT_AVERAGE) 3472 + else if (pp->format == FORMAT_DELTA || pp->format == FORMAT_AVERAGE) 3472 3473 outp += print_decimal_value(pp->width, &printed, delim, t->perf_counter[i]); 3473 3474 else if (pp->format == FORMAT_PERCENT) { 3474 3475 if (pp->type == COUNTER_USEC) ··· 3489 3490 3490 3491 case PMT_TYPE_XTAL_TIME: 3491 3492 value_converted = pct(value_raw / crystal_hz, interval_float); 3492 - outp += sprintf(outp, "%s%.2f", (printed++ ? delim : ""), value_converted); 3493 + outp += print_float_value(&printed, delim, value_converted); 3493 3494 break; 3494 3495 3495 3496 case PMT_TYPE_TCORE_CLOCK: 3496 3497 value_converted = pct(value_raw / tcore_clock_freq_hz, interval_float); 3497 - outp += sprintf(outp, "%s%.2f", (printed++ ? delim : ""), value_converted); 3498 + outp += print_float_value(&printed, delim, value_converted); 3498 3499 } 3499 3500 } 3500 3501 ··· 3538 3539 for (i = 0, pp = sys.perf_cp; pp; i++, pp = pp->next) { 3539 3540 if (pp->format == FORMAT_RAW) 3540 3541 outp += print_hex_value(pp->width, &printed, delim, c->perf_counter[i]); 3541 - else if (pp->format == FORMAT_DELTA || mp->format == FORMAT_AVERAGE) 3542 + else if (pp->format == FORMAT_DELTA || pp->format == FORMAT_AVERAGE) 3542 3543 outp += print_decimal_value(pp->width, &printed, delim, c->perf_counter[i]); 3543 3544 else if (pp->format == FORMAT_PERCENT) 3544 3545 outp += print_float_value(&printed, delim, pct(c->perf_counter[i], tsc)); ··· 3694 3695 outp += print_hex_value(pp->width, &printed, delim, p->perf_counter[i]); 3695 3696 else if (pp->type == COUNTER_K2M) 3696 3697 outp += sprintf(outp, "%s%d", (printed++ ? 
delim : ""), (unsigned int)p->perf_counter[i] / 1000); 3697 - else if (pp->format == FORMAT_DELTA || mp->format == FORMAT_AVERAGE) 3698 + else if (pp->format == FORMAT_DELTA || pp->format == FORMAT_AVERAGE) 3698 3699 outp += print_decimal_value(pp->width, &printed, delim, p->perf_counter[i]); 3699 3700 else if (pp->format == FORMAT_PERCENT) 3700 3701 outp += print_float_value(&printed, delim, pct(p->perf_counter[i], tsc)); ··· 9121 9122 cpuid_has_hv = ecx_flags & (1 << 31); 9122 9123 9123 9124 if (!no_msr) { 9124 - if (get_msr(sched_getcpu(), MSR_IA32_UCODE_REV, &ucode_patch)) 9125 + if (get_msr(sched_getcpu(), MSR_IA32_UCODE_REV, &ucode_patch)) { 9125 9126 warnx("get_msr(UCODE)"); 9126 - else 9127 + } else { 9127 9128 ucode_patch_valid = true; 9129 + if (!authentic_amd && !hygon_genuine) 9130 + ucode_patch >>= 32; 9131 + } 9128 9132 } 9129 9133 9130 9134 /* ··· 9141 9139 if (!quiet) { 9142 9140 fprintf(outf, "CPUID(1): family:model:stepping 0x%x:%x:%x (%d:%d:%d)", family, model, stepping, family, model, stepping); 9143 9141 if (ucode_patch_valid) 9144 - fprintf(outf, " microcode 0x%x", (unsigned int)((ucode_patch >> 32) & 0xFFFFFFFF)); 9142 + fprintf(outf, " microcode 0x%x", (unsigned int)ucode_patch); 9145 9143 fputc('\n', outf); 9146 9144 9147 9145 fprintf(outf, "CPUID(0x80000000): max_extended_levels: 0x%x\n", max_extended_level); ··· 9405 9403 if (!is_hybrid) { 9406 9404 fd_l2_percpu[cpu] = open_perf_counter(cpu, perf_pmu_types.uniform, perf_model_support->first.refs, -1, PERF_FORMAT_GROUP); 9407 9405 if (fd_l2_percpu[cpu] == -1) { 9408 - err(-1, "%s(cpu%d, 0x%x, 0x%llx) REFS", __func__, cpu, perf_pmu_types.uniform, perf_model_support->first.refs); 9406 + warnx("%s(cpu%d, 0x%x, 0x%llx) REFS", __func__, cpu, perf_pmu_types.uniform, perf_model_support->first.refs); 9409 9407 free_fd_l2_percpu(); 9410 9408 return; 9411 9409 } 9412 9410 retval = open_perf_counter(cpu, perf_pmu_types.uniform, perf_model_support->first.hits, fd_l2_percpu[cpu], PERF_FORMAT_GROUP); 
9413 9411 if (retval == -1) { 9414 - err(-1, "%s(cpu%d, 0x%x, 0x%llx) HITS", __func__, cpu, perf_pmu_types.uniform, perf_model_support->first.hits); 9412 + warnx("%s(cpu%d, 0x%x, 0x%llx) HITS", __func__, cpu, perf_pmu_types.uniform, perf_model_support->first.hits); 9415 9413 free_fd_l2_percpu(); 9416 9414 return; 9417 9415 } ··· 9420 9418 if (perf_pcore_set && CPU_ISSET_S(cpu, cpu_possible_setsize, perf_pcore_set)) { 9421 9419 fd_l2_percpu[cpu] = open_perf_counter(cpu, perf_pmu_types.pcore, perf_model_support->first.refs, -1, PERF_FORMAT_GROUP); 9422 9420 if (fd_l2_percpu[cpu] == -1) { 9423 - err(-1, "%s(cpu%d, 0x%x, 0x%llx) REFS", __func__, cpu, perf_pmu_types.pcore, perf_model_support->first.refs); 9421 + warnx("%s(cpu%d, 0x%x, 0x%llx) REFS", __func__, cpu, perf_pmu_types.pcore, perf_model_support->first.refs); 9424 9422 free_fd_l2_percpu(); 9425 9423 return; 9426 9424 } 9427 9425 retval = open_perf_counter(cpu, perf_pmu_types.pcore, perf_model_support->first.hits, fd_l2_percpu[cpu], PERF_FORMAT_GROUP); 9428 9426 if (retval == -1) { 9429 - err(-1, "%s(cpu%d, 0x%x, 0x%llx) HITS", __func__, cpu, perf_pmu_types.pcore, perf_model_support->first.hits); 9427 + warnx("%s(cpu%d, 0x%x, 0x%llx) HITS", __func__, cpu, perf_pmu_types.pcore, perf_model_support->first.hits); 9430 9428 free_fd_l2_percpu(); 9431 9429 return; 9432 9430 } 9433 9431 } else if (perf_ecore_set && CPU_ISSET_S(cpu, cpu_possible_setsize, perf_ecore_set)) { 9434 9432 fd_l2_percpu[cpu] = open_perf_counter(cpu, perf_pmu_types.ecore, perf_model_support->second.refs, -1, PERF_FORMAT_GROUP); 9435 9433 if (fd_l2_percpu[cpu] == -1) { 9436 - err(-1, "%s(cpu%d, 0x%x, 0x%llx) REFS", __func__, cpu, perf_pmu_types.pcore, perf_model_support->second.refs); 9434 + warnx("%s(cpu%d, 0x%x, 0x%llx) REFS", __func__, cpu, perf_pmu_types.ecore, perf_model_support->second.refs); 9437 9435 free_fd_l2_percpu(); 9438 9436 return; 9439 9437 } 9440 9438 retval = open_perf_counter(cpu, perf_pmu_types.ecore, 
perf_model_support->second.hits, fd_l2_percpu[cpu], PERF_FORMAT_GROUP); 9441 9439 if (retval == -1) { 9442 - err(-1, "%s(cpu%d, 0x%x, 0x%llx) HITS", __func__, cpu, perf_pmu_types.pcore, perf_model_support->second.hits); 9440 + warnx("%s(cpu%d, 0x%x, 0x%llx) HITS", __func__, cpu, perf_pmu_types.ecore, perf_model_support->second.hits); 9443 9441 free_fd_l2_percpu(); 9444 9442 return; 9445 9443 } 9446 9444 } else if (perf_lcore_set && CPU_ISSET_S(cpu, cpu_possible_setsize, perf_lcore_set)) { 9447 9445 fd_l2_percpu[cpu] = open_perf_counter(cpu, perf_pmu_types.lcore, perf_model_support->third.refs, -1, PERF_FORMAT_GROUP); 9448 9446 if (fd_l2_percpu[cpu] == -1) { 9449 - err(-1, "%s(cpu%d, 0x%x, 0x%llx) REFS", __func__, cpu, perf_pmu_types.pcore, perf_model_support->third.refs); 9447 + warnx("%s(cpu%d, 0x%x, 0x%llx) REFS", __func__, cpu, perf_pmu_types.lcore, perf_model_support->third.refs); 9450 9448 free_fd_l2_percpu(); 9451 9449 return; 9452 9450 } 9453 9451 retval = open_perf_counter(cpu, perf_pmu_types.lcore, perf_model_support->third.hits, fd_l2_percpu[cpu], PERF_FORMAT_GROUP); 9454 9452 if (retval == -1) { 9455 - err(-1, "%s(cpu%d, 0x%x, 0x%llx) HITS", __func__, cpu, perf_pmu_types.pcore, perf_model_support->third.hits); 9453 + warnx("%s(cpu%d, 0x%x, 0x%llx) HITS", __func__, cpu, perf_pmu_types.lcore, perf_model_support->third.hits); 9456 9454 free_fd_l2_percpu(); 9457 9455 return; 9458 9456 } ··· 9636 9634 topo.max_core_id = max_core_id; /* within a package */ 9637 9635 topo.max_package_id = max_package_id; 9638 9636 9639 - topo.cores_per_node = max_core_id + 1; 9637 + topo.cores_per_pkg = max_core_id + 1; 9640 9638 if (debug > 1) 9641 - fprintf(outf, "max_core_id %d, sizing for %d cores per package\n", max_core_id, topo.cores_per_node); 9639 + fprintf(outf, "max_core_id %d, sizing for %d cores per package\n", max_core_id, topo.cores_per_pkg); 9642 9640 if (!summary_only) 9643 9641 BIC_PRESENT(BIC_Core); 9644 9642 ··· 9703 9701 void allocate_counters(struct 
counters *counters) 9704 9702 { 9705 9703 int i; 9706 - int num_cores = topo.cores_per_node * topo.nodes_per_pkg * topo.num_packages; 9707 - int num_threads = topo.threads_per_core * num_cores; 9704 + int num_cores = topo.cores_per_pkg * topo.num_packages; 9708 9705 9709 - counters->threads = calloc(num_threads, sizeof(struct thread_data)); 9706 + counters->threads = calloc(topo.max_cpu_num + 1, sizeof(struct thread_data)); 9710 9707 if (counters->threads == NULL) 9711 9708 goto error; 9712 9709 9713 - for (i = 0; i < num_threads; i++) 9710 + for (i = 0; i < topo.max_cpu_num + 1; i++) 9714 9711 (counters->threads)[i].cpu_id = -1; 9715 9712 9716 9713 counters->cores = calloc(num_cores, sizeof(struct core_data)); ··· 11285 11284 } 11286 11285 } 11287 11286 11287 + static bool cpuidle_counter_wanted(char *name) 11288 + { 11289 + if (is_deferred_skip(name)) 11290 + return false; 11291 + 11292 + return DO_BIC(BIC_cpuidle) || is_deferred_add(name); 11293 + } 11294 + 11288 11295 void probe_cpuidle_counts(void) 11289 11296 { 11290 11297 char path[64]; ··· 11302 11293 int min_state = 1024, max_state = 0; 11303 11294 char *sp; 11304 11295 11305 - if (!DO_BIC(BIC_cpuidle)) 11296 + if (!DO_BIC(BIC_cpuidle) && !deferred_add_index) 11306 11297 return; 11307 11298 11308 11299 for (state = 10; state >= 0; --state) { ··· 11316 11307 fclose(input); 11317 11308 11318 11309 remove_underbar(name_buf); 11319 - 11320 - if (!DO_BIC(BIC_cpuidle) && !is_deferred_add(name_buf)) 11321 - continue; 11322 - 11323 - if (is_deferred_skip(name_buf)) 11324 - continue; 11325 11310 11326 11311 /* truncate "C1-HSW\n" to "C1", or truncate "C1\n" to "C1" */ 11327 11312 sp = strchr(name_buf, '-'); ··· 11331 11328 * Add 'C1+' for C1, and so on. The 'below' sysfs file always contains 0 for 11332 11329 * the last state, so do not add it. 
11333 11330 */ 11334 - 11335 11331 *sp = '+'; 11336 11332 *(sp + 1) = '\0'; 11337 - sprintf(path, "cpuidle/state%d/below", state); 11338 - add_counter(0, path, name_buf, 64, SCOPE_CPU, COUNTER_ITEMS, FORMAT_DELTA, SYSFS_PERCPU, 0); 11333 + if (cpuidle_counter_wanted(name_buf)) { 11334 + sprintf(path, "cpuidle/state%d/below", state); 11335 + add_counter(0, path, name_buf, 64, SCOPE_CPU, COUNTER_ITEMS, FORMAT_DELTA, SYSFS_PERCPU, 0); 11336 + } 11339 11337 } 11340 11338 11341 11339 *sp = '\0'; 11342 - sprintf(path, "cpuidle/state%d/usage", state); 11343 - add_counter(0, path, name_buf, 64, SCOPE_CPU, COUNTER_ITEMS, FORMAT_DELTA, SYSFS_PERCPU, 0); 11340 + if (cpuidle_counter_wanted(name_buf)) { 11341 + sprintf(path, "cpuidle/state%d/usage", state); 11342 + add_counter(0, path, name_buf, 64, SCOPE_CPU, COUNTER_ITEMS, FORMAT_DELTA, SYSFS_PERCPU, 0); 11343 + } 11344 11344 11345 11345 /* 11346 11346 * The 'above' sysfs file always contains 0 for the shallowest state (smallest ··· 11352 11346 if (state != min_state) { 11353 11347 *sp = '-'; 11354 11348 *(sp + 1) = '\0'; 11355 - sprintf(path, "cpuidle/state%d/above", state); 11356 - add_counter(0, path, name_buf, 64, SCOPE_CPU, COUNTER_ITEMS, FORMAT_DELTA, SYSFS_PERCPU, 0); 11349 + if (cpuidle_counter_wanted(name_buf)) { 11350 + sprintf(path, "cpuidle/state%d/above", state); 11351 + add_counter(0, path, name_buf, 64, SCOPE_CPU, COUNTER_ITEMS, FORMAT_DELTA, SYSFS_PERCPU, 0); 11352 + } 11357 11353 } 11358 11354 } 11359 11355 }
+26 -29
tools/testing/selftests/bpf/prog_tests/test_xsk.c
··· 179 179 return xsk_socket__create(&xsk->xsk, ifobject->ifindex, 0, umem->umem, rxr, txr, &cfg); 180 180 } 181 181 182 - #define MAX_SKB_FRAGS_PATH "/proc/sys/net/core/max_skb_frags" 183 - static unsigned int get_max_skb_frags(void) 184 - { 185 - unsigned int max_skb_frags = 0; 186 - FILE *file; 187 - 188 - file = fopen(MAX_SKB_FRAGS_PATH, "r"); 189 - if (!file) { 190 - ksft_print_msg("Error opening %s\n", MAX_SKB_FRAGS_PATH); 191 - return 0; 192 - } 193 - 194 - if (fscanf(file, "%u", &max_skb_frags) != 1) 195 - ksft_print_msg("Error reading %s\n", MAX_SKB_FRAGS_PATH); 196 - 197 - fclose(file); 198 - return max_skb_frags; 199 - } 200 - 201 182 static int set_ring_size(struct ifobject *ifobj) 202 183 { 203 184 int ret; ··· 1959 1978 1960 1979 int testapp_stats_rx_dropped(struct test_spec *test) 1961 1980 { 1981 + u32 umem_tr = test->ifobj_tx->umem_tailroom; 1982 + 1962 1983 if (test->mode == TEST_MODE_ZC) { 1963 1984 ksft_print_msg("Can not run RX_DROPPED test for ZC mode\n"); 1964 1985 return TEST_SKIP; 1965 1986 } 1966 1987 1967 - if (pkt_stream_replace_half(test, MIN_PKT_SIZE * 4, 0)) 1988 + if (pkt_stream_replace_half(test, (MIN_PKT_SIZE * 3) + umem_tr, 0)) 1968 1989 return TEST_FAILURE; 1969 1990 test->ifobj_rx->umem->frame_headroom = test->ifobj_rx->umem->frame_size - 1970 - XDP_PACKET_HEADROOM - MIN_PKT_SIZE * 3; 1991 + XDP_PACKET_HEADROOM - (MIN_PKT_SIZE * 2) - umem_tr; 1971 1992 if (pkt_stream_receive_half(test)) 1972 1993 return TEST_FAILURE; 1973 1994 test->ifobj_rx->validation_func = validate_rx_dropped; ··· 2225 2242 if (test->mode == TEST_MODE_ZC) { 2226 2243 max_frags = test->ifobj_tx->xdp_zc_max_segs; 2227 2244 } else { 2228 - max_frags = get_max_skb_frags(); 2229 - if (!max_frags) { 2230 - ksft_print_msg("Can't get MAX_SKB_FRAGS from system, using default (17)\n"); 2231 - max_frags = 17; 2232 - } 2245 + max_frags = test->ifobj_tx->max_skb_frags; 2233 2246 max_frags += 1; 2234 2247 } 2235 2248 ··· 2530 2551 2531 2552 int 
testapp_adjust_tail_grow(struct test_spec *test) 2532 2553 { 2554 + if (test->mode == TEST_MODE_SKB) 2555 + return TEST_SKIP; 2556 + 2533 2557 /* Grow by 4 bytes for testing purpose */ 2534 2558 return testapp_adjust_tail(test, 4, MIN_PKT_SIZE * 2); 2535 2559 } 2536 2560 2537 2561 int testapp_adjust_tail_grow_mb(struct test_spec *test) 2538 2562 { 2563 + u32 grow_size; 2564 + 2565 + if (test->mode == TEST_MODE_SKB) 2566 + return TEST_SKIP; 2567 + 2568 + /* worst case scenario is when underlying setup will work on 3k 2569 + * buffers, let us account for it; given that we will use 6k as 2570 + * pkt_len, expect that it will be broken down to 2 descs each 2571 + * with 3k payload; 2572 + * 2573 + * 4k is truesize, 3k payload, 256 HR, 320 TR; 2574 + */ 2575 + grow_size = XSK_UMEM__MAX_FRAME_SIZE - 2576 + XSK_UMEM__LARGE_FRAME_SIZE - 2577 + XDP_PACKET_HEADROOM - 2578 + test->ifobj_tx->umem_tailroom; 2539 2579 test->mtu = MAX_ETH_JUMBO_SIZE; 2540 - /* Grow by (frag_size - last_frag_Size) - 1 to stay inside the last fragment */ 2541 - return testapp_adjust_tail(test, (XSK_UMEM__MAX_FRAME_SIZE / 2) - 1, 2542 - XSK_UMEM__LARGE_FRAME_SIZE * 2); 2580 + 2581 + return testapp_adjust_tail(test, grow_size, XSK_UMEM__LARGE_FRAME_SIZE * 2); 2543 2582 } 2544 2583 2545 2584 int testapp_tx_queue_consumer(struct test_spec *test)
+23
tools/testing/selftests/bpf/prog_tests/test_xsk.h
··· 31 31 #define SOCK_RECONF_CTR 10 32 32 #define USLEEP_MAX 10000 33 33 34 + #define MAX_SKB_FRAGS_PATH "/proc/sys/net/core/max_skb_frags" 35 + #define SMP_CACHE_BYTES_PATH "/sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size" 36 + 34 37 extern bool opt_verbose; 35 38 #define print_verbose(x...) do { if (opt_verbose) ksft_print_msg(x); } while (0) 36 39 ··· 46 43 static inline u64 ceil_u64(u64 a, u64 b) 47 44 { 48 45 return (a + b - 1) / b; 46 + } 47 + 48 + static inline unsigned int read_procfs_val(const char *path) 49 + { 50 + unsigned int read_val = 0; 51 + FILE *file; 52 + 53 + file = fopen(path, "r"); 54 + if (!file) { 55 + ksft_print_msg("Error opening %s\n", path); 56 + return 0; 57 + } 58 + 59 + if (fscanf(file, "%u", &read_val) != 1) 60 + ksft_print_msg("Error reading %s\n", path); 61 + 62 + fclose(file); 63 + return read_val; 49 64 } 50 65 51 66 /* Simple test */ ··· 136 115 int mtu; 137 116 u32 bind_flags; 138 117 u32 xdp_zc_max_segs; 118 + u32 umem_tailroom; 119 + u32 max_skb_frags; 139 120 bool tx_on; 140 121 bool rx_on; 141 122 bool use_poll;
+19
tools/testing/selftests/bpf/prog_tests/xsk.c
··· 62 62 63 63 static void test_xsk(const struct test_spec *test_to_run, enum test_mode mode) 64 64 { 65 + u32 max_frags, umem_tailroom, cache_line_size; 65 66 struct ifobject *ifobj_tx, *ifobj_rx; 66 67 struct test_spec test; 67 68 int ret; ··· 84 83 ifobj_tx->set_ring.default_tx = ifobj_tx->ring.tx_pending; 85 84 ifobj_tx->set_ring.default_rx = ifobj_tx->ring.rx_pending; 86 85 } 86 + 87 + cache_line_size = read_procfs_val(SMP_CACHE_BYTES_PATH); 88 + if (!cache_line_size) 89 + cache_line_size = 64; 90 + 91 + max_frags = read_procfs_val(MAX_SKB_FRAGS_PATH); 92 + if (!max_frags) 93 + max_frags = 17; 94 + 95 + ifobj_tx->max_skb_frags = max_frags; 96 + ifobj_rx->max_skb_frags = max_frags; 97 + 98 + /* 48 bytes is a part of skb_shared_info w/o frags array; 99 + * 16 bytes is sizeof(skb_frag_t) 100 + */ 101 + umem_tailroom = ALIGN(48 + (max_frags * 16), cache_line_size); 102 + ifobj_tx->umem_tailroom = umem_tailroom; 103 + ifobj_rx->umem_tailroom = umem_tailroom; 87 104 88 105 if (!ASSERT_OK(init_iface(ifobj_rx, worker_testapp_validate_rx), "init RX")) 89 106 goto delete_rx;
+3 -1
tools/testing/selftests/bpf/progs/xsk_xdp_progs.c
··· 26 26 27 27 SEC("xdp.frags") int xsk_xdp_drop(struct xdp_md *xdp) 28 28 { 29 + static unsigned int drop_idx; 30 + 29 31 /* Drop every other packet */ 30 - if (idx++ % 2) 32 + if (drop_idx++ % 2) 31 33 return XDP_DROP; 32 34 33 35 return bpf_redirect_map(&xsk, 0, XDP_DROP);
+23
tools/testing/selftests/bpf/xskxceiver.c
··· 80 80 #include <linux/mman.h> 81 81 #include <linux/netdev.h> 82 82 #include <linux/ethtool.h> 83 + #include <linux/align.h> 83 84 #include <arpa/inet.h> 84 85 #include <net/if.h> 85 86 #include <locale.h> ··· 334 333 int main(int argc, char **argv) 335 334 { 336 335 const size_t total_tests = ARRAY_SIZE(tests) + ARRAY_SIZE(ci_skip_tests); 336 + u32 cache_line_size, max_frags, umem_tailroom; 337 337 struct pkt_stream *rx_pkt_stream_default; 338 338 struct pkt_stream *tx_pkt_stream_default; 339 339 struct ifobject *ifobj_tx, *ifobj_rx; ··· 355 353 exit_with_error(ENOMEM); 356 354 357 355 setlocale(LC_ALL, ""); 356 + 357 + cache_line_size = read_procfs_val(SMP_CACHE_BYTES_PATH); 358 + if (!cache_line_size) { 359 + ksft_print_msg("Can't get SMP_CACHE_BYTES from system, using default (64)\n"); 360 + cache_line_size = 64; 361 + } 362 + 363 + max_frags = read_procfs_val(MAX_SKB_FRAGS_PATH); 364 + if (!max_frags) { 365 + ksft_print_msg("Can't get MAX_SKB_FRAGS from system, using default (17)\n"); 366 + max_frags = 17; 367 + } 368 + ifobj_tx->max_skb_frags = max_frags; 369 + ifobj_rx->max_skb_frags = max_frags; 370 + 371 + /* 48 bytes is a part of skb_shared_info w/o frags array; 372 + * 16 bytes is sizeof(skb_frag_t) 373 + */ 374 + umem_tailroom = ALIGN(48 + (max_frags * 16), cache_line_size); 375 + ifobj_tx->umem_tailroom = umem_tailroom; 376 + ifobj_rx->umem_tailroom = umem_tailroom; 358 377 359 378 parse_command_line(ifobj_tx, ifobj_rx, argc, argv); 360 379
+1
tools/testing/selftests/net/Makefile
··· 89 89 srv6_end_x_next_csid_l3vpn_test.sh \ 90 90 srv6_hencap_red_l3vpn_test.sh \ 91 91 srv6_hl2encap_red_l2vpn_test.sh \ 92 + srv6_iptunnel_cache.sh \ 92 93 stress_reuseport_listen.sh \ 93 94 tcp_fastopen_backup_key.sh \ 94 95 test_bpf.sh \
+1
tools/testing/selftests/net/forwarding/bridge_vlan_mcast.sh
··· 414 414 bridge vlan add vid 10 dev br1 self pvid untagged 415 415 ip link set dev $h1 master br1 416 416 ip link set dev br1 up 417 + setup_wait_dev $h1 0 417 418 bridge vlan add vid 10 dev $h1 master 418 419 bridge vlan global set vid 10 dev br1 mcast_snooping 1 mcast_querier 1 419 420 sleep 2
+44 -6
tools/testing/selftests/net/netfilter/nf_queue.c
··· 19 19 bool count_packets; 20 20 bool gso_enabled; 21 21 bool failopen; 22 + bool out_of_order; 23 + bool bogus_verdict; 22 24 int verbose; 23 25 unsigned int queue_num; 24 26 unsigned int timeout; ··· 33 31 34 32 static void help(const char *p) 35 33 { 36 - printf("Usage: %s [-c|-v [-vv] ] [-o] [-t timeout] [-q queue_num] [-Qdst_queue ] [ -d ms_delay ] [-G]\n", p); 34 + printf("Usage: %s [-c|-v [-vv] ] [-o] [-O] [-b] [-t timeout] [-q queue_num] [-Qdst_queue ] [ -d ms_delay ] [-G]\n", p); 37 35 } 38 36 39 37 static int parse_attr_cb(const struct nlattr *attr, void *data) ··· 277 275 unsigned int buflen = 64 * 1024 + MNL_SOCKET_BUFFER_SIZE; 278 276 struct mnl_socket *nl; 279 277 struct nlmsghdr *nlh; 278 + uint32_t ooo_ids[16]; 280 279 unsigned int portid; 280 + int ooo_count = 0; 281 281 char *buf; 282 282 int ret; 283 283 ··· 312 308 313 309 ret = mnl_cb_run(buf, ret, 0, portid, queue_cb, NULL); 314 310 if (ret < 0) { 311 + /* bogus verdict mode will generate ENOENT error messages */ 312 + if (opts.bogus_verdict && errno == ENOENT) 313 + continue; 315 314 perror("mnl_cb_run"); 316 315 exit(EXIT_FAILURE); 317 316 } ··· 323 316 if (opts.delay_ms) 324 317 sleep_ms(opts.delay_ms); 325 318 326 - nlh = nfq_build_verdict(buf, id, opts.queue_num, opts.verdict); 327 - if (mnl_socket_sendto(nl, nlh, nlh->nlmsg_len) < 0) { 328 - perror("mnl_socket_sendto"); 329 - exit(EXIT_FAILURE); 319 + if (opts.bogus_verdict) { 320 + for (int i = 0; i < 50; i++) { 321 + nlh = nfq_build_verdict(buf, id + 0x7FFFFFFF + i, 322 + opts.queue_num, opts.verdict); 323 + mnl_socket_sendto(nl, nlh, nlh->nlmsg_len); 324 + } 325 + } 326 + 327 + if (opts.out_of_order) { 328 + ooo_ids[ooo_count] = id; 329 + if (ooo_count >= 15) { 330 + for (ooo_count; ooo_count >= 0; ooo_count--) { 331 + nlh = nfq_build_verdict(buf, ooo_ids[ooo_count], 332 + opts.queue_num, opts.verdict); 333 + if (mnl_socket_sendto(nl, nlh, nlh->nlmsg_len) < 0) { 334 + perror("mnl_socket_sendto"); 335 + exit(EXIT_FAILURE); 336 + } 
337 + } 338 + ooo_count = 0; 339 + } else { 340 + ooo_count++; 341 + } 342 + } else { 343 + nlh = nfq_build_verdict(buf, id, opts.queue_num, opts.verdict); 344 + if (mnl_socket_sendto(nl, nlh, nlh->nlmsg_len) < 0) { 345 + perror("mnl_socket_sendto"); 346 + exit(EXIT_FAILURE); 347 + } 330 348 } 331 349 } 332 350 ··· 364 332 { 365 333 int c; 366 334 367 - while ((c = getopt(argc, argv, "chvot:q:Q:d:G")) != -1) { 335 + while ((c = getopt(argc, argv, "chvoObt:q:Q:d:G")) != -1) { 368 336 switch (c) { 369 337 case 'c': 370 338 opts.count_packets = true; ··· 406 374 break; 407 375 case 'v': 408 376 opts.verbose++; 377 + break; 378 + case 'O': 379 + opts.out_of_order = true; 380 + break; 381 + case 'b': 382 + opts.bogus_verdict = true; 409 383 break; 410 384 } 411 385 }
+71 -12
tools/testing/selftests/net/netfilter/nft_queue.sh
··· 11 11 timeout=5 12 12 13 13 SCTP_TEST_TIMEOUT=60 14 + STRESS_TEST_TIMEOUT=30 14 15 15 16 cleanup() 16 17 { ··· 720 719 fi 721 720 } 722 721 722 + check_tainted() 723 + { 724 + local msg="$1" 725 + 726 + if [ "$tainted_then" -ne 0 ];then 727 + return 728 + fi 729 + 730 + read tainted_now < /proc/sys/kernel/tainted 731 + if [ "$tainted_now" -eq 0 ];then 732 + echo "PASS: $msg" 733 + else 734 + echo "TAINT: $msg" 735 + dmesg 736 + ret=1 737 + fi 738 + } 739 + 740 + test_queue_stress() 741 + { 742 + read tainted_then < /proc/sys/kernel/tainted 743 + local i 744 + 745 + ip netns exec "$nsrouter" nft -f /dev/stdin <<EOF 746 + flush ruleset 747 + table inet t { 748 + chain forward { 749 + type filter hook forward priority 0; policy accept; 750 + 751 + queue flags bypass to numgen random mod 8 752 + } 753 + } 754 + EOF 755 + timeout "$STRESS_TEST_TIMEOUT" ip netns exec "$ns2" \ 756 + socat -u UDP-LISTEN:12345,fork,pf=ipv4 STDOUT > /dev/null & 757 + 758 + timeout "$STRESS_TEST_TIMEOUT" ip netns exec "$ns3" \ 759 + socat -u UDP-LISTEN:12345,fork,pf=ipv4 STDOUT > /dev/null & 760 + 761 + for i in $(seq 0 7); do 762 + ip netns exec "$nsrouter" timeout "$STRESS_TEST_TIMEOUT" \ 763 + ./nf_queue -q $i -t 2 -O -b > /dev/null & 764 + done 765 + 766 + ip netns exec "$ns1" timeout "$STRESS_TEST_TIMEOUT" \ 767 + ping -q -f 10.0.2.99 > /dev/null 2>&1 & 768 + ip netns exec "$ns1" timeout "$STRESS_TEST_TIMEOUT" \ 769 + ping -q -f 10.0.3.99 > /dev/null 2>&1 & 770 + ip netns exec "$ns1" timeout "$STRESS_TEST_TIMEOUT" \ 771 + ping -q -f "dead:2::99" > /dev/null 2>&1 & 772 + ip netns exec "$ns1" timeout "$STRESS_TEST_TIMEOUT" \ 773 + ping -q -f "dead:3::99" > /dev/null 2>&1 & 774 + 775 + busywait "$BUSYWAIT_TIMEOUT" udp_listener_ready "$ns2" 12345 776 + busywait "$BUSYWAIT_TIMEOUT" udp_listener_ready "$ns3" 12345 777 + 778 + for i in $(seq 1 4);do 779 + ip netns exec "$ns1" timeout "$STRESS_TEST_TIMEOUT" \ 780 + socat -u STDIN UDP-DATAGRAM:10.0.2.99:12345 < /dev/zero > /dev/null & 781 + 
ip netns exec "$ns1" timeout "$STRESS_TEST_TIMEOUT" \ 782 + socat -u STDIN UDP-DATAGRAM:10.0.3.99:12345 < /dev/zero > /dev/null & 783 + done 784 + 785 + wait 786 + 787 + check_tainted "concurrent queueing" 788 + } 789 + 723 790 test_queue_removal() 724 791 { 725 792 read tainted_then < /proc/sys/kernel/tainted ··· 811 742 812 743 ip netns exec "$ns1" nft flush ruleset 813 744 814 - if [ "$tainted_then" -ne 0 ];then 815 - return 816 - fi 817 - 818 - read tainted_now < /proc/sys/kernel/tainted 819 - if [ "$tainted_now" -eq 0 ];then 820 - echo "PASS: queue program exiting while packets queued" 821 - else 822 - echo "TAINT: queue program exiting while packets queued" 823 - dmesg 824 - ret=1 825 - fi 745 + check_tainted "queue program exiting while packets queued" 826 746 } 827 747 828 748 ip netns exec "$nsrouter" sysctl net.ipv6.conf.all.forwarding=1 > /dev/null ··· 857 799 test_sctp_output 858 800 test_udp_nat_race 859 801 test_udp_gro_ct 802 + test_queue_stress 860 803 861 804 # should be last, adds vrf device in ns1 and changes routes 862 805 test_icmp_vrf
+197
tools/testing/selftests/net/srv6_iptunnel_cache.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # author: Andrea Mayer <andrea.mayer@uniroma2.it> 5 + 6 + # This test verifies that the seg6 lwtunnel does not share the dst_cache 7 + # between the input (forwarding) and output (locally generated) paths. 8 + # 9 + # A shared dst_cache allows a forwarded packet to populate the cache and a 10 + # subsequent locally generated packet to silently reuse that entry, bypassing 11 + # its own route lookup. To expose this, the SID is made reachable only for 12 + # forwarded traffic (via an ip rule matching iif) and blackholed for everything 13 + # else. A local ping on ns_router must always hit the blackhole; 14 + # if it succeeds after a forwarded packet has populated the 15 + # cache, the bug is confirmed. 16 + # 17 + # Both forwarded and local packets are pinned to the same CPU with taskset, 18 + # since dst_cache is per-cpu. 19 + # 20 + # 21 + # +--------------------+ +--------------------+ 22 + # | ns_src | | ns_dst | 23 + # | | | | 24 + # | veth-s0 | | veth-d0 | 25 + # | fd00::1/64 | | fd01::2/64 | 26 + # +-------+------------+ +----------+---------+ 27 + # | | 28 + # | +--------------------+ | 29 + # | | ns_router | | 30 + # | | | | 31 + # +------------+ veth-r0 veth-r1 +--------------+ 32 + # | fd00::2 fd01::1 | 33 + # +--------------------+ 34 + # 35 + # 36 + # ns_router: encap (main table) 37 + # +---------+---------------------------------------+ 38 + # | dst | action | 39 + # +---------+---------------------------------------+ 40 + # | cafe::1 | encap seg6 mode encap segs fc00::100 | 41 + # +---------+---------------------------------------+ 42 + # 43 + # ns_router: post-encap SID resolution 44 + # +-------+------------+----------------------------+ 45 + # | table | dst | action | 46 + # +-------+------------+----------------------------+ 47 + # | 100 | fc00::100 | via fd01::2 dev veth-r1 | 48 + # +-------+------------+----------------------------+ 49 + # | main | fc00::100 | blackhole | 50 + 
# +-------+------------+----------------------------+ 51 + # 52 + # ns_router: ip rule 53 + # +------------------+------------------------------+ 54 + # | match | action | 55 + # +------------------+------------------------------+ 56 + # | iif veth-r0 | lookup 100 | 57 + # +------------------+------------------------------+ 58 + # 59 + # ns_dst: SRv6 decap (main table) 60 + # +--------------+----------------------------------+ 61 + # | SID | action | 62 + # +--------------+----------------------------------+ 63 + # | fc00::100 | End.DT6 table 255 (local) | 64 + # +--------------+----------------------------------+ 65 + 66 + source lib.sh 67 + 68 + readonly SID="fc00::100" 69 + readonly DEST="cafe::1" 70 + 71 + readonly SRC_MAC="02:00:00:00:00:01" 72 + readonly RTR_R0_MAC="02:00:00:00:00:02" 73 + readonly RTR_R1_MAC="02:00:00:00:00:03" 74 + readonly DST_MAC="02:00:00:00:00:04" 75 + 76 + cleanup() 77 + { 78 + cleanup_ns "${NS_SRC}" "${NS_RTR}" "${NS_DST}" 79 + } 80 + 81 + check_prerequisites() 82 + { 83 + if ! command -v ip &>/dev/null; then 84 + echo "SKIP: ip tool not found" 85 + exit "${ksft_skip}" 86 + fi 87 + 88 + if ! command -v ping &>/dev/null; then 89 + echo "SKIP: ping not found" 90 + exit "${ksft_skip}" 91 + fi 92 + 93 + if ! command -v sysctl &>/dev/null; then 94 + echo "SKIP: sysctl not found" 95 + exit "${ksft_skip}" 96 + fi 97 + 98 + if ! 
command -v taskset &>/dev/null; then 99 + echo "SKIP: taskset not found" 100 + exit "${ksft_skip}" 101 + fi 102 + } 103 + 104 + setup() 105 + { 106 + setup_ns NS_SRC NS_RTR NS_DST 107 + 108 + ip link add veth-s0 netns "${NS_SRC}" type veth \ 109 + peer name veth-r0 netns "${NS_RTR}" 110 + ip link add veth-r1 netns "${NS_RTR}" type veth \ 111 + peer name veth-d0 netns "${NS_DST}" 112 + 113 + ip -n "${NS_SRC}" link set veth-s0 address "${SRC_MAC}" 114 + ip -n "${NS_RTR}" link set veth-r0 address "${RTR_R0_MAC}" 115 + ip -n "${NS_RTR}" link set veth-r1 address "${RTR_R1_MAC}" 116 + ip -n "${NS_DST}" link set veth-d0 address "${DST_MAC}" 117 + 118 + # ns_src 119 + ip -n "${NS_SRC}" link set veth-s0 up 120 + ip -n "${NS_SRC}" addr add fd00::1/64 dev veth-s0 nodad 121 + ip -n "${NS_SRC}" -6 route add "${DEST}"/128 via fd00::2 122 + 123 + # ns_router 124 + ip -n "${NS_RTR}" link set veth-r0 up 125 + ip -n "${NS_RTR}" addr add fd00::2/64 dev veth-r0 nodad 126 + ip -n "${NS_RTR}" link set veth-r1 up 127 + ip -n "${NS_RTR}" addr add fd01::1/64 dev veth-r1 nodad 128 + ip netns exec "${NS_RTR}" sysctl -qw net.ipv6.conf.all.forwarding=1 129 + 130 + ip -n "${NS_RTR}" -6 route add "${DEST}"/128 \ 131 + encap seg6 mode encap segs "${SID}" dev veth-r0 132 + ip -n "${NS_RTR}" -6 route add "${SID}"/128 table 100 \ 133 + via fd01::2 dev veth-r1 134 + ip -n "${NS_RTR}" -6 route add blackhole "${SID}"/128 135 + ip -n "${NS_RTR}" -6 rule add iif veth-r0 lookup 100 136 + 137 + # ns_dst 138 + ip -n "${NS_DST}" link set veth-d0 up 139 + ip -n "${NS_DST}" addr add fd01::2/64 dev veth-d0 nodad 140 + ip -n "${NS_DST}" addr add "${DEST}"/128 dev lo nodad 141 + ip -n "${NS_DST}" -6 route add "${SID}"/128 \ 142 + encap seg6local action End.DT6 table 255 dev veth-d0 143 + ip -n "${NS_DST}" -6 route add fd00::/64 via fd01::1 144 + 145 + # static neighbors 146 + ip -n "${NS_SRC}" -6 neigh add fd00::2 dev veth-s0 \ 147 + lladdr "${RTR_R0_MAC}" nud permanent 148 + ip -n "${NS_RTR}" -6 neigh add 
fd00::1 dev veth-r0 \ 149 + lladdr "${SRC_MAC}" nud permanent 150 + ip -n "${NS_RTR}" -6 neigh add fd01::2 dev veth-r1 \ 151 + lladdr "${DST_MAC}" nud permanent 152 + ip -n "${NS_DST}" -6 neigh add fd01::1 dev veth-d0 \ 153 + lladdr "${RTR_R1_MAC}" nud permanent 154 + } 155 + 156 + test_cache_isolation() 157 + { 158 + RET=0 159 + 160 + # local ping with empty cache: must fail (SID is blackholed) 161 + if ip netns exec "${NS_RTR}" taskset -c 0 \ 162 + ping -c 1 -W 2 "${DEST}" &>/dev/null; then 163 + echo "SKIP: local ping succeeded, topology broken" 164 + exit "${ksft_skip}" 165 + fi 166 + 167 + # forward from ns_src to populate the input cache 168 + if ! ip netns exec "${NS_SRC}" taskset -c 0 \ 169 + ping -c 1 -W 2 "${DEST}" &>/dev/null; then 170 + echo "SKIP: forwarded ping failed, topology broken" 171 + exit "${ksft_skip}" 172 + fi 173 + 174 + # local ping again: must still fail; if the output path reuses 175 + # the input cache, it bypasses the blackhole and the ping succeeds 176 + if ip netns exec "${NS_RTR}" taskset -c 0 \ 177 + ping -c 1 -W 2 "${DEST}" &>/dev/null; then 178 + echo "FAIL: output path used dst cached by input path" 179 + RET="${ksft_fail}" 180 + else 181 + echo "PASS: output path dst_cache is independent" 182 + fi 183 + 184 + return "${RET}" 185 + } 186 + 187 + if [ "$(id -u)" -ne 0 ]; then 188 + echo "SKIP: Need root privileges" 189 + exit "${ksft_skip}" 190 + fi 191 + 192 + trap cleanup EXIT 193 + 194 + check_prerequisites 195 + setup 196 + test_cache_isolation 197 + exit "${RET}"
+7 -6
tools/testing/selftests/riscv/cfi/cfitests.c
··· 94 94 } 95 95 96 96 switch (ptrace_test_num) { 97 - #define CFI_ENABLE_MASK (PTRACE_CFI_LP_EN_STATE | \ 98 - PTRACE_CFI_SS_EN_STATE | \ 99 - PTRACE_CFI_SS_PTR_STATE) 97 + #define CFI_ENABLE_MASK (PTRACE_CFI_BRANCH_LANDING_PAD_EN_STATE | \ 98 + PTRACE_CFI_SHADOW_STACK_EN_STATE | \ 99 + PTRACE_CFI_SHADOW_STACK_PTR_STATE) 100 100 case 0: 101 101 if ((cfi_reg.cfi_status.cfi_state & CFI_ENABLE_MASK) != CFI_ENABLE_MASK) 102 102 ksft_exit_fail_msg("%s: ptrace_getregset failed, %llu\n", __func__, ··· 106 106 __func__); 107 107 break; 108 108 case 1: 109 - if (!(cfi_reg.cfi_status.cfi_state & PTRACE_CFI_ELP_STATE)) 109 + if (!(cfi_reg.cfi_status.cfi_state & 110 + PTRACE_CFI_BRANCH_EXPECTED_LANDING_PAD_STATE)) 110 111 ksft_exit_fail_msg("%s: elp must have been set\n", __func__); 111 112 /* clear elp state. not interested in anything else */ 112 113 cfi_reg.cfi_status.cfi_state = 0; ··· 146 145 * pads for user mode except lighting up a bit in senvcfg via a prctl. 147 146 * Enable landing pad support throughout the execution of the test binary. 148 147 */ 149 - ret = my_syscall5(__NR_prctl, PR_GET_INDIR_BR_LP_STATUS, &lpad_status, 0, 0, 0); 148 + ret = my_syscall5(__NR_prctl, PR_GET_CFI, PR_CFI_BRANCH_LANDING_PADS, &lpad_status, 0, 0); 150 149 if (ret) 151 150 ksft_exit_fail_msg("Get landing pad status failed with %d\n", ret); 152 151 153 - if (!(lpad_status & PR_INDIR_BR_LP_ENABLE)) 152 + if (!(lpad_status & PR_CFI_ENABLE)) 154 153 ksft_exit_fail_msg("Landing pad is not enabled, should be enabled via glibc\n"); 155 154 156 155 ret = my_syscall5(__NR_prctl, PR_GET_SHADOW_STACK_STATUS, &ss_status, 0, 0, 0);
+6 -2
tools/testing/vsock/util.c
··· 344 344 ret = send(fd, buf + nwritten, len - nwritten, flags); 345 345 timeout_check("send"); 346 346 347 - if (ret == 0 || (ret < 0 && errno != EINTR)) 347 + if (ret < 0 && errno == EINTR) 348 + continue; 349 + if (ret <= 0) 348 350 break; 349 351 350 352 nwritten += ret; ··· 398 396 ret = recv(fd, buf + nread, len - nread, flags); 399 397 timeout_check("recv"); 400 398 401 - if (ret == 0 || (ret < 0 && errno != EINTR)) 399 + if (ret < 0 && errno == EINTR) 400 + continue; 401 + if (ret <= 0) 402 402 break; 403 403 404 404 nread += ret;