Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git


Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-7.0-rc4).

drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
db25c42c2e1f9 ("net/mlx5e: RX, Fix XDP multi-buf frag counting for striding RQ")
dff1c3164a692 ("net/mlx5e: SHAMPO, Always calculate page size")
https://lore.kernel.org/aa7ORohmf67EKihj@sirena.org.uk

drivers/net/ethernet/ti/am65-cpsw-nuss.c
840c9d13cb1ca ("net: ethernet: ti: am65-cpsw-nuss: Fix rx_filter value for PTP support")
a23c657e332f2 ("net: ethernet: ti: am65-cpsw: Use also port number to identify timestamps")
https://lore.kernel.org/abK3EkIXuVgMyGI7@sirena.org.uk

No adjacent changes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+5043 -2660
+3 -1
.mailmap
··· 219 219 Danilo Krummrich <dakr@kernel.org> <dakr@redhat.com> 220 220 David Brownell <david-b@pacbell.net> 221 221 David Collins <quic_collinsd@quicinc.com> <collinsd@codeaurora.org> 222 + David Gow <david@davidgow.net> <davidgow@google.com> 222 223 David Heidelberg <david@ixit.cz> <d.okias@gmail.com> 223 224 David Hildenbrand <david@kernel.org> <david@redhat.com> 224 225 David Rheinsberg <david@readahead.eu> <dh.herrmann@gmail.com> ··· 498 497 Loic Poulain <loic.poulain@oss.qualcomm.com> <loic.poulain@linaro.org> 499 498 Loic Poulain <loic.poulain@oss.qualcomm.com> <loic.poulain@intel.com> 500 499 Lorenzo Pieralisi <lpieralisi@kernel.org> <lorenzo.pieralisi@arm.com> 501 - Lorenzo Stoakes <lorenzo.stoakes@oracle.com> <lstoakes@gmail.com> 500 + Lorenzo Stoakes <ljs@kernel.org> <lstoakes@gmail.com> 501 + Lorenzo Stoakes <ljs@kernel.org> <lorenzo.stoakes@oracle.com> 502 502 Luca Ceresoli <luca.ceresoli@bootlin.com> <luca@lucaceresoli.net> 503 503 Luca Weiss <luca@lucaweiss.eu> <luca@z3ntu.xyz> 504 504 Lucas De Marchi <demarchi@kernel.org> <lucas.demarchi@intel.com>
+2 -2
Documentation/ABI/testing/sysfs-block-zram
··· 151 151 The algorithm_params file is write-only and is used to setup 152 152 compression algorithm parameters. 153 153 154 - What: /sys/block/zram<id>/writeback_compressed 154 + What: /sys/block/zram<id>/compressed_writeback 155 155 Date: Decemeber 2025 156 156 Contact: Richard Chang <richardycc@google.com> 157 157 Description: 158 - The writeback_compressed device atrribute toggles compressed 158 + The compressed_writeback device atrribute toggles compressed 159 159 writeback feature. 160 160 161 161 What: /sys/block/zram<id>/writeback_batch_size
+5 -5
Documentation/ABI/testing/sysfs-driver-uniwill-laptop
··· 1 - What: /sys/bus/platform/devices/INOU0000:XX/fn_lock_toggle_enable 1 + What: /sys/bus/platform/devices/INOU0000:XX/fn_lock 2 2 Date: November 2025 3 3 KernelVersion: 6.19 4 4 Contact: Armin Wolf <W_Armin@gmx.de> ··· 8 8 9 9 Reading this file returns the current enable status of the FN lock functionality. 10 10 11 - What: /sys/bus/platform/devices/INOU0000:XX/super_key_toggle_enable 11 + What: /sys/bus/platform/devices/INOU0000:XX/super_key_enable 12 12 Date: November 2025 13 13 KernelVersion: 6.19 14 14 Contact: Armin Wolf <W_Armin@gmx.de> 15 15 Description: 16 - Allows userspace applications to enable/disable the super key functionality 17 - of the integrated keyboard by writing "1"/"0" into this file. 16 + Allows userspace applications to enable/disable the super key of the integrated 17 + keyboard by writing "1"/"0" into this file. 18 18 19 - Reading this file returns the current enable status of the super key functionality. 19 + Reading this file returns the current enable status of the super key. 20 20 21 21 What: /sys/bus/platform/devices/INOU0000:XX/touchpad_toggle_enable 22 22 Date: November 2025
+3 -3
Documentation/admin-guide/blockdev/zram.rst
··· 216 216 writeback_limit_enable RW show and set writeback_limit feature 217 217 writeback_batch_size RW show and set maximum number of in-flight 218 218 writeback operations 219 - writeback_compressed RW show and set compressed writeback feature 219 + compressed_writeback RW show and set compressed writeback feature 220 220 comp_algorithm RW show and change the compression algorithm 221 221 algorithm_params WO setup compression algorithm parameters 222 222 compact WO trigger memory compaction ··· 439 439 By default zram stores written back pages in decompressed (raw) form, which 440 440 means that writeback operation involves decompression of the page before 441 441 writing it to the backing device. This behavior can be changed by enabling 442 - `writeback_compressed` feature, which causes zram to write compressed pages 442 + `compressed_writeback` feature, which causes zram to write compressed pages 443 443 to the backing device, thus avoiding decompression overhead. To enable 444 444 this feature, execute:: 445 445 446 - $ echo yes > /sys/block/zramX/writeback_compressed 446 + $ echo yes > /sys/block/zramX/compressed_writeback 447 447 448 448 Note that this feature should be configured before the `zramX` device is 449 449 initialized.
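The `compressed_writeback` knob documented above must be set before the device is initialized. A minimal sysfs sketch, assuming a `zram0` device and a placeholder `/dev/sdb1` backing partition (adjust both for your system):

```shell
# Sketch only: enable compressed writeback on zram0.
# Both backing_dev and compressed_writeback must be configured before
# disksize is written, i.e. before the device is initialized.
echo /dev/sdb1 > /sys/block/zram0/backing_dev
echo yes > /sys/block/zram0/compressed_writeback
echo 1G > /sys/block/zram0/disksize

# Later, write back idle pages (now stored compressed on the backing device):
echo idle > /sys/block/zram0/writeback
```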
+13
Documentation/admin-guide/kernel-parameters.txt
··· 74 74 TPM TPM drivers are enabled. 75 75 UMS USB Mass Storage support is enabled. 76 76 USB USB support is enabled. 77 + NVME NVMe support is enabled 77 78 USBHID USB Human Interface Device support is enabled. 78 79 V4L Video For Linux support is enabled. 79 80 VGA The VGA console has been enabled. ··· 4787 4786 'node', 'default' can be specified 4788 4787 This can be set from sysctl after boot. 4789 4788 See Documentation/admin-guide/sysctl/vm.rst for details. 4789 + 4790 + nvme.quirks= [NVME] A list of quirk entries to augment the built-in 4791 + nvme quirk list. List entries are separated by a 4792 + '-' character. 4793 + Each entry has the form VendorID:ProductID:quirk_names. 4794 + The IDs are 4-digits hex numbers and quirk_names is a 4795 + list of quirk names separated by commas. A quirk name 4796 + can be prefixed by '^', meaning that the specified 4797 + quirk must be disabled. 4798 + 4799 + Example: 4800 + nvme.quirks=7710:2267:bogus_nid,^identify_cns-9900:7711:broken_msi 4790 4801 4791 4802 ohci1394_dma=early [HW,EARLY] enable debugging via the ohci1394 driver. 4792 4803 See Documentation/core-api/debugging-via-ohci1394.rst for more
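The `nvme.quirks=` grammar described above (entries separated by `-`, each of the form VendorID:ProductID:quirk_names, with a `^` prefix disabling a quirk) can be illustrated with a small stand-alone parser. This is a sketch of the list format only, not the kernel's parsing code:

```python
# Illustrative parser for the nvme.quirks= list format (sketch only, not
# kernel code). Entries are '-'-separated; each entry is
# VendorID:ProductID:quirk_names with 4-digit hex IDs; quirk names are
# comma-separated, and a '^' prefix marks a quirk to be disabled.
def parse_nvme_quirks(arg):
    entries = []
    for entry in arg.split("-"):
        vid, pid, names = entry.split(":")
        # (name, enabled) pairs: '^' means the quirk must be disabled
        quirks = [(name.lstrip("^"), not name.startswith("^"))
                  for name in names.split(",")]
        entries.append({"vendor": int(vid, 16),
                        "device": int(pid, 16),
                        "quirks": quirks})
    return entries

# The example from the documentation above:
print(parse_nvme_quirks("7710:2267:bogus_nid,^identify_cns-9900:7711:broken_msi"))
```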
+1 -1
Documentation/admin-guide/laptops/uniwill-laptop.rst
··· 24 24 25 25 The ``uniwill-laptop`` driver allows the user to enable/disable: 26 26 27 - - the FN and super key lock functionality of the integrated keyboard 27 + - the FN lock and super key of the integrated keyboard 28 28 - the touchpad toggle functionality of the integrated touchpad 29 29 30 30 See Documentation/ABI/testing/sysfs-driver-uniwill-laptop for details.
-1
Documentation/devicetree/bindings/hwmon/kontron,sl28cpld-hwmon.yaml
··· 16 16 properties: 17 17 compatible: 18 18 enum: 19 - - kontron,sa67mcu-hwmon 20 19 - kontron,sl28cpld-fan 21 20 22 21 reg:
+93
Documentation/devicetree/bindings/powerpc/fsl/fsl,mpc83xx.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/powerpc/fsl/fsl,mpc83xx.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Freescale PowerQUICC II Pro (MPC83xx) platforms 8 + 9 + maintainers: 10 + - J. Neuschäfer <j.ne@posteo.net> 11 + 12 + properties: 13 + $nodename: 14 + const: '/' 15 + compatible: 16 + oneOf: 17 + - description: MPC83xx Reference Design Boards 18 + items: 19 + - enum: 20 + - fsl,mpc8308rdb 21 + - fsl,mpc8315erdb 22 + - fsl,mpc8360rdk 23 + - fsl,mpc8377rdb 24 + - fsl,mpc8377wlan 25 + - fsl,mpc8378rdb 26 + - fsl,mpc8379rdb 27 + 28 + - description: MPC8313E Reference Design Board 29 + items: 30 + - const: MPC8313ERDB 31 + - const: MPC831xRDB 32 + - const: MPC83xxRDB 33 + 34 + - description: MPC8323E Reference Design Board 35 + items: 36 + - const: MPC8323ERDB 37 + - const: MPC832xRDB 38 + - const: MPC83xxRDB 39 + 40 + - description: MPC8349E-mITX(-GP) Reference Design Platform 41 + items: 42 + - enum: 43 + - MPC8349EMITX 44 + - MPC8349EMITXGP 45 + - const: MPC834xMITX 46 + - const: MPC83xxMITX 47 + 48 + - description: Keymile KMETER1 board 49 + const: keymile,KMETER1 50 + 51 + - description: MPC8308 P1M board 52 + const: denx,mpc8308_p1m 53 + 54 + patternProperties: 55 + "^soc@.*$": 56 + type: object 57 + properties: 58 + compatible: 59 + oneOf: 60 + - items: 61 + - enum: 62 + - fsl,mpc8315-immr 63 + - fsl,mpc8308-immr 64 + - const: simple-bus 65 + - items: 66 + - const: fsl,mpc8360-immr 67 + - const: fsl,immr 68 + - const: fsl,soc 69 + - const: simple-bus 70 + - const: simple-bus 71 + 72 + additionalProperties: true 73 + 74 + examples: 75 + - | 76 + / { 77 + compatible = "fsl,mpc8315erdb"; 78 + model = "MPC8315E-RDB"; 79 + #address-cells = <1>; 80 + #size-cells = <1>; 81 + 82 + soc@e0000000 { 83 + compatible = "fsl,mpc8315-immr", "simple-bus"; 84 + reg = <0xe0000000 0x00000200>; 85 + #address-cells = <1>; 86 + #size-cells = <1>; 87 + 
device_type = "soc"; 88 + ranges = <0 0xe0000000 0x00100000>; 89 + bus-frequency = <0>; 90 + }; 91 + }; 92 + 93 + ...
+1
Documentation/devicetree/bindings/sound/nvidia,tegra-audio-graph-card.yaml
··· 23 23 enum: 24 24 - nvidia,tegra210-audio-graph-card 25 25 - nvidia,tegra186-audio-graph-card 26 + - nvidia,tegra238-audio-graph-card 26 27 - nvidia,tegra264-audio-graph-card 27 28 28 29 clocks:
+1
Documentation/devicetree/bindings/sound/renesas,rz-ssi.yaml
··· 20 20 - renesas,r9a07g044-ssi # RZ/G2{L,LC} 21 21 - renesas,r9a07g054-ssi # RZ/V2L 22 22 - renesas,r9a08g045-ssi # RZ/G3S 23 + - renesas,r9a08g046-ssi # RZ/G3L 23 24 - const: renesas,rz-ssi 24 25 25 26 reg:
+1 -1
Documentation/hwmon/emc1403.rst
··· 57 57 - https://ww1.microchip.com/downloads/en/DeviceDoc/EMC1438%20DS%20Rev.%201.0%20(04-29-10).pdf 58 58 59 59 Author: 60 - Kalhan Trisal <kalhan.trisal@intel.com 60 + Kalhan Trisal <kalhan.trisal@intel.com> 61 61 62 62 63 63 Description
-1
Documentation/hwmon/index.rst
··· 220 220 q54sj108a2 221 221 qnap-mcu-hwmon 222 222 raspberrypi-hwmon 223 - sa67 224 223 sbrmi 225 224 sbtsi_temp 226 225 sch5627
-41
Documentation/hwmon/sa67.rst
··· 1 - .. SPDX-License-Identifier: GPL-2.0-only 2 - 3 - Kernel driver sa67mcu 4 - ===================== 5 - 6 - Supported chips: 7 - 8 - * Kontron sa67mcu 9 - 10 - Prefix: 'sa67mcu' 11 - 12 - Datasheet: not available 13 - 14 - Authors: Michael Walle <mwalle@kernel.org> 15 - 16 - Description 17 - ----------- 18 - 19 - The sa67mcu is a board management controller which also exposes a hardware 20 - monitoring controller. 21 - 22 - The controller has two voltage and one temperature sensor. The values are 23 - hold in two 8 bit registers to form one 16 bit value. Reading the lower byte 24 - will also capture the high byte to make the access atomic. The unit of the 25 - volatge sensors are 1mV and the unit of the temperature sensor is 0.1degC. 26 - 27 - Sysfs entries 28 - ------------- 29 - 30 - The following attributes are supported. 31 - 32 - ======================= ======================================================== 33 - in0_label "VDDIN" 34 - in0_input Measured VDDIN voltage. 35 - 36 - in1_label "VDD_RTC" 37 - in1_input Measured VDD_RTC voltage. 38 - 39 - temp1_input MCU temperature. Roughly the board temperature. 40 - ======================= ======================================================== 41 -
+4
Documentation/sound/alsa-configuration.rst
··· 2372 2372 audible volume 2373 2373 * bit 25: ``mixer_capture_min_mute`` 2374 2374 Similar to bit 24 but for capture streams 2375 + * bit 26: ``skip_iface_setup`` 2376 + Skip the probe-time interface setup (usb_set_interface, 2377 + init_pitch, init_sample_rate); redundant with 2378 + snd_usb_endpoint_prepare() at stream-open time 2375 2379 2376 2380 This module supports multiple devices, autoprobe and hotplugging. 2377 2381
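The bits above belong to the snd-usb-audio `quirk_flags` bitmask documented in this section; the new bit 26 combines with the others like any other flag. A small sketch computing the values (the symbolic names here are illustrative, not kernel identifiers):

```python
# Sketch: composing snd-usb-audio quirk_flags bit values.
# Bit positions are taken from the documentation above.
MIXER_CAPTURE_MIN_MUTE = 1 << 25  # bit 25
SKIP_IFACE_SETUP = 1 << 26        # bit 26 (new in this section)

flags = MIXER_CAPTURE_MIN_MUTE | SKIP_IFACE_SETUP
# e.g. in modprobe.d: options snd-usb-audio quirk_flags=0x06000000
print(f"quirk_flags=0x{flags:08x}")
```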
+27 -19
MAINTAINERS
··· 13937 13937 13938 13938 KERNEL UNIT TESTING FRAMEWORK (KUnit) 13939 13939 M: Brendan Higgins <brendan.higgins@linux.dev> 13940 - M: David Gow <davidgow@google.com> 13940 + M: David Gow <david@davidgow.net> 13941 13941 R: Rae Moar <raemoar63@gmail.com> 13942 13942 L: linux-kselftest@vger.kernel.org 13943 13943 L: kunit-dev@googlegroups.com ··· 14757 14757 F: drivers/platform/x86/hp/hp_accel.c 14758 14758 14759 14759 LIST KUNIT TEST 14760 - M: David Gow <davidgow@google.com> 14760 + M: David Gow <david@davidgow.net> 14761 14761 L: linux-kselftest@vger.kernel.org 14762 14762 L: kunit-dev@googlegroups.com 14763 14763 S: Maintained ··· 16357 16357 16358 16358 MEDIATEK T7XX 5G WWAN MODEM DRIVER 16359 16359 M: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com> 16360 - R: Chiranjeevi Rapolu <chiranjeevi.rapolu@linux.intel.com> 16361 16360 R: Liu Haijun <haijun.liu@mediatek.com> 16362 16361 R: Ricardo Martinez <ricardo.martinez@linux.intel.com> 16363 16362 L: netdev@vger.kernel.org ··· 16641 16642 MEMORY MANAGEMENT - CORE 16642 16643 M: Andrew Morton <akpm@linux-foundation.org> 16643 16644 M: David Hildenbrand <david@kernel.org> 16644 - R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> 16645 + R: Lorenzo Stoakes <ljs@kernel.org> 16645 16646 R: Liam R. Howlett <Liam.Howlett@oracle.com> 16646 16647 R: Vlastimil Babka <vbabka@kernel.org> 16647 16648 R: Mike Rapoport <rppt@kernel.org> ··· 16771 16772 MEMORY MANAGEMENT - MISC 16772 16773 M: Andrew Morton <akpm@linux-foundation.org> 16773 16774 M: David Hildenbrand <david@kernel.org> 16774 - R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> 16775 + R: Lorenzo Stoakes <ljs@kernel.org> 16775 16776 R: Liam R. 
Howlett <Liam.Howlett@oracle.com> 16776 16777 R: Vlastimil Babka <vbabka@kernel.org> 16777 16778 R: Mike Rapoport <rppt@kernel.org> ··· 16862 16863 R: Michal Hocko <mhocko@kernel.org> 16863 16864 R: Qi Zheng <zhengqi.arch@bytedance.com> 16864 16865 R: Shakeel Butt <shakeel.butt@linux.dev> 16865 - R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> 16866 + R: Lorenzo Stoakes <ljs@kernel.org> 16866 16867 L: linux-mm@kvack.org 16867 16868 S: Maintained 16868 16869 F: mm/vmscan.c ··· 16871 16872 MEMORY MANAGEMENT - RMAP (REVERSE MAPPING) 16872 16873 M: Andrew Morton <akpm@linux-foundation.org> 16873 16874 M: David Hildenbrand <david@kernel.org> 16874 - M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> 16875 + M: Lorenzo Stoakes <ljs@kernel.org> 16875 16876 R: Rik van Riel <riel@surriel.com> 16876 16877 R: Liam R. Howlett <Liam.Howlett@oracle.com> 16877 16878 R: Vlastimil Babka <vbabka@kernel.org> ··· 16916 16917 MEMORY MANAGEMENT - THP (TRANSPARENT HUGE PAGE) 16917 16918 M: Andrew Morton <akpm@linux-foundation.org> 16918 16919 M: David Hildenbrand <david@kernel.org> 16919 - M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> 16920 + M: Lorenzo Stoakes <ljs@kernel.org> 16920 16921 R: Zi Yan <ziy@nvidia.com> 16921 16922 R: Baolin Wang <baolin.wang@linux.alibaba.com> 16922 16923 R: Liam R. Howlett <Liam.Howlett@oracle.com> ··· 16956 16957 16957 16958 MEMORY MANAGEMENT - RUST 16958 16959 M: Alice Ryhl <aliceryhl@google.com> 16959 - R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> 16960 + R: Lorenzo Stoakes <ljs@kernel.org> 16960 16961 R: Liam R. Howlett <Liam.Howlett@oracle.com> 16961 16962 L: linux-mm@kvack.org 16962 16963 L: rust-for-linux@vger.kernel.org ··· 16972 16973 MEMORY MAPPING 16973 16974 M: Andrew Morton <akpm@linux-foundation.org> 16974 16975 M: Liam R. 
Howlett <Liam.Howlett@oracle.com> 16975 - M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> 16976 + M: Lorenzo Stoakes <ljs@kernel.org> 16976 16977 R: Vlastimil Babka <vbabka@kernel.org> 16977 16978 R: Jann Horn <jannh@google.com> 16978 16979 R: Pedro Falcato <pfalcato@suse.de> ··· 17002 17003 M: Andrew Morton <akpm@linux-foundation.org> 17003 17004 M: Suren Baghdasaryan <surenb@google.com> 17004 17005 M: Liam R. Howlett <Liam.Howlett@oracle.com> 17005 - M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> 17006 + M: Lorenzo Stoakes <ljs@kernel.org> 17006 17007 R: Vlastimil Babka <vbabka@kernel.org> 17007 17008 R: Shakeel Butt <shakeel.butt@linux.dev> 17008 17009 L: linux-mm@kvack.org ··· 17017 17018 MEMORY MAPPING - MADVISE (MEMORY ADVICE) 17018 17019 M: Andrew Morton <akpm@linux-foundation.org> 17019 17020 M: Liam R. Howlett <Liam.Howlett@oracle.com> 17020 - M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> 17021 + M: Lorenzo Stoakes <ljs@kernel.org> 17021 17022 M: David Hildenbrand <david@kernel.org> 17022 17023 R: Vlastimil Babka <vbabka@kernel.org> 17023 17024 R: Jann Horn <jannh@google.com> ··· 20106 20107 F: drivers/pci/controller/pci-aardvark.c 20107 20108 20108 20109 PCI DRIVER FOR ALTERA PCIE IP 20109 - M: Joyce Ooi <joyce.ooi@intel.com> 20110 20110 L: linux-pci@vger.kernel.org 20111 - S: Supported 20111 + S: Orphan 20112 20112 F: Documentation/devicetree/bindings/pci/altr,pcie-root-port.yaml 20113 20113 F: drivers/pci/controller/pcie-altera.c 20114 20114 ··· 20352 20354 F: Documentation/PCI/pci-error-recovery.rst 20353 20355 20354 20356 PCI MSI DRIVER FOR ALTERA MSI IP 20355 - M: Joyce Ooi <joyce.ooi@intel.com> 20356 20357 L: linux-pci@vger.kernel.org 20357 - S: Supported 20358 + S: Orphan 20358 20359 F: Documentation/devicetree/bindings/interrupt-controller/altr,msi-controller.yaml 20359 20360 F: drivers/pci/controller/pcie-altera-msi.c 20360 20361 ··· 22265 22268 S: Orphan 22266 22269 F: drivers/net/wireless/rsi/ 22267 22270 22271 + RELAY 22272 + M: Andrew 
Morton <akpm@linux-foundation.org> 22273 + M: Jens Axboe <axboe@kernel.dk> 22274 + M: Jason Xing <kernelxing@tencent.com> 22275 + L: linux-kernel@vger.kernel.org 22276 + S: Maintained 22277 + F: Documentation/filesystems/relay.rst 22278 + F: include/linux/relay.h 22279 + F: kernel/relay.c 22280 + 22268 22281 REGISTER MAP ABSTRACTION 22269 22282 M: Mark Brown <broonie@kernel.org> 22270 22283 L: linux-kernel@vger.kernel.org ··· 23164 23157 23165 23158 RUST [ALLOC] 23166 23159 M: Danilo Krummrich <dakr@kernel.org> 23167 - R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> 23160 + R: Lorenzo Stoakes <ljs@kernel.org> 23168 23161 R: Vlastimil Babka <vbabka@kernel.org> 23169 23162 R: Liam R. Howlett <Liam.Howlett@oracle.com> 23170 23163 R: Uladzislau Rezki <urezki@gmail.com> ··· 24328 24321 F: Documentation/devicetree/bindings/pwm/kontron,sl28cpld-pwm.yaml 24329 24322 F: Documentation/devicetree/bindings/watchdog/kontron,sl28cpld-wdt.yaml 24330 24323 F: drivers/gpio/gpio-sl28cpld.c 24331 - F: drivers/hwmon/sa67mcu-hwmon.c 24332 24324 F: drivers/hwmon/sl28cpld-hwmon.c 24333 24325 F: drivers/irqchip/irq-sl28cpld.c 24334 24326 F: drivers/pwm/pwm-sl28cpld.c ··· 24341 24335 24342 24336 SLAB ALLOCATOR 24343 24337 M: Vlastimil Babka <vbabka@kernel.org> 24338 + M: Harry Yoo <harry.yoo@oracle.com> 24344 24339 M: Andrew Morton <akpm@linux-foundation.org> 24340 + R: Hao Li <hao.li@linux.dev> 24345 24341 R: Christoph Lameter <cl@gentwo.org> 24346 24342 R: David Rientjes <rientjes@google.com> 24347 24343 R: Roman Gushchin <roman.gushchin@linux.dev> 24348 - R: Harry Yoo <harry.yoo@oracle.com> 24349 24344 L: linux-mm@kvack.org 24350 24345 S: Maintained 24351 24346 T: git git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git ··· 25757 25750 F: include/net/pkt_sched.h 25758 25751 F: include/net/sch_priv.h 25759 25752 F: include/net/tc_act/ 25753 + F: include/net/tc_wrapper.h 25760 25754 F: include/uapi/linux/pkt_cls.h 25761 25755 F: include/uapi/linux/pkt_sched.h 25762 25756 F: 
include/uapi/linux/tc_act/
+5 -5
Makefile
··· 2 2 VERSION = 7 3 3 PATCHLEVEL = 0 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc2 5 + EXTRAVERSION = -rc3 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION* ··· 1497 1497 $(Q)$(MAKE) -sC $(srctree)/tools/bpf/resolve_btfids O=$(resolve_btfids_O) clean 1498 1498 endif 1499 1499 1500 - PHONY += objtool_clean 1500 + PHONY += objtool_clean objtool_mrproper 1501 1501 1502 1502 objtool_O = $(abspath $(objtree))/tools/objtool 1503 1503 1504 - objtool_clean: 1504 + objtool_clean objtool_mrproper: 1505 1505 ifneq ($(wildcard $(objtool_O)),) 1506 - $(Q)$(MAKE) -sC $(abs_srctree)/tools/objtool O=$(objtool_O) srctree=$(abs_srctree) clean 1506 + $(Q)$(MAKE) -sC $(abs_srctree)/tools/objtool O=$(objtool_O) srctree=$(abs_srctree) $(patsubst objtool_%,%,$@) 1507 1507 endif 1508 1508 1509 1509 tools/: FORCE ··· 1686 1686 $(mrproper-dirs): 1687 1687 $(Q)$(MAKE) $(clean)=$(patsubst _mrproper_%,%,$@) 1688 1688 1689 - mrproper: clean $(mrproper-dirs) 1689 + mrproper: clean objtool_mrproper $(mrproper-dirs) 1690 1690 $(call cmd,rmfiles) 1691 1691 @find . $(RCS_FIND_IGNORE) \ 1692 1692 \( -name '*.rmeta' \) \
+1
arch/alpha/kernel/vmlinux.lds.S
··· 71 71 72 72 STABS_DEBUG 73 73 DWARF_DEBUG 74 + MODINFO 74 75 ELF_DETAILS 75 76 76 77 DISCARDS
+1
arch/arc/kernel/vmlinux.lds.S
··· 123 123 _end = . ; 124 124 125 125 STABS_DEBUG 126 + MODINFO 126 127 ELF_DETAILS 127 128 DISCARDS 128 129
+1
arch/arm/boot/compressed/vmlinux.lds.S
··· 21 21 COMMON_DISCARDS 22 22 *(.ARM.exidx*) 23 23 *(.ARM.extab*) 24 + *(.modinfo) 24 25 *(.note.*) 25 26 *(.rel.*) 26 27 *(.printk_index)
+1
arch/arm/kernel/vmlinux-xip.lds.S
··· 154 154 155 155 STABS_DEBUG 156 156 DWARF_DEBUG 157 + MODINFO 157 158 ARM_DETAILS 158 159 159 160 ARM_ASSERTS
+1
arch/arm/kernel/vmlinux.lds.S
··· 153 153 154 154 STABS_DEBUG 155 155 DWARF_DEBUG 156 + MODINFO 156 157 ARM_DETAILS 157 158 158 159 ARM_ASSERTS
+7 -5
arch/arm64/include/asm/cmpxchg.h
··· 91 91 #define __xchg_wrapper(sfx, ptr, x) \ 92 92 ({ \ 93 93 __typeof__(*(ptr)) __ret; \ 94 - __ret = (__typeof__(*(ptr))) \ 95 - __arch_xchg##sfx((unsigned long)(x), (ptr), sizeof(*(ptr))); \ 94 + __ret = (__force __typeof__(*(ptr))) \ 95 + __arch_xchg##sfx((__force unsigned long)(x), (ptr), \ 96 + sizeof(*(ptr))); \ 96 97 __ret; \ 97 98 }) 98 99 ··· 176 175 #define __cmpxchg_wrapper(sfx, ptr, o, n) \ 177 176 ({ \ 178 177 __typeof__(*(ptr)) __ret; \ 179 - __ret = (__typeof__(*(ptr))) \ 180 - __cmpxchg##sfx((ptr), (unsigned long)(o), \ 181 - (unsigned long)(n), sizeof(*(ptr))); \ 178 + __ret = (__force __typeof__(*(ptr))) \ 179 + __cmpxchg##sfx((ptr), (__force unsigned long)(o), \ 180 + (__force unsigned long)(n), \ 181 + sizeof(*(ptr))); \ 182 182 __ret; \ 183 183 }) 184 184
+5 -5
arch/arm64/include/asm/pgtable-prot.h
··· 50 50 51 51 #define _PAGE_DEFAULT (_PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL)) 52 52 53 - #define _PAGE_KERNEL (PROT_NORMAL) 54 - #define _PAGE_KERNEL_RO ((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY) 55 - #define _PAGE_KERNEL_ROX ((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY) 56 - #define _PAGE_KERNEL_EXEC (PROT_NORMAL & ~PTE_PXN) 57 - #define _PAGE_KERNEL_EXEC_CONT ((PROT_NORMAL & ~PTE_PXN) | PTE_CONT) 53 + #define _PAGE_KERNEL (PROT_NORMAL | PTE_DIRTY) 54 + #define _PAGE_KERNEL_RO ((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY | PTE_DIRTY) 55 + #define _PAGE_KERNEL_ROX ((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY | PTE_DIRTY) 56 + #define _PAGE_KERNEL_EXEC ((PROT_NORMAL & ~PTE_PXN) | PTE_DIRTY) 57 + #define _PAGE_KERNEL_EXEC_CONT ((PROT_NORMAL & ~PTE_PXN) | PTE_CONT | PTE_DIRTY) 58 58 59 59 #define _PAGE_SHARED (_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE) 60 60 #define _PAGE_SHARED_EXEC (_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_WRITE)
+4
arch/arm64/include/asm/runtime-const.h
··· 2 2 #ifndef _ASM_RUNTIME_CONST_H 3 3 #define _ASM_RUNTIME_CONST_H 4 4 5 + #ifdef MODULE 6 + #error "Cannot use runtime-const infrastructure from modules" 7 + #endif 8 + 5 9 #include <asm/cacheflush.h> 6 10 7 11 /* Sigh. You can still run arm64 in BE mode */
+1
arch/arm64/kernel/vmlinux.lds.S
··· 349 349 350 350 STABS_DEBUG 351 351 DWARF_DEBUG 352 + MODINFO 352 353 ELF_DETAILS 353 354 354 355 HEAD_SYMBOLS
+49 -4
arch/arm64/mm/contpte.c
··· 599 599 } 600 600 EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes); 601 601 602 + static bool contpte_all_subptes_match_access_flags(pte_t *ptep, pte_t entry) 603 + { 604 + pte_t *cont_ptep = contpte_align_down(ptep); 605 + /* 606 + * PFNs differ per sub-PTE. Match only bits consumed by 607 + * __ptep_set_access_flags(): AF, DIRTY and write permission. 608 + */ 609 + const pteval_t cmp_mask = PTE_RDONLY | PTE_AF | PTE_WRITE | PTE_DIRTY; 610 + pteval_t entry_cmp = pte_val(entry) & cmp_mask; 611 + int i; 612 + 613 + for (i = 0; i < CONT_PTES; i++) { 614 + pteval_t pte_cmp = pte_val(__ptep_get(cont_ptep + i)) & cmp_mask; 615 + 616 + if (pte_cmp != entry_cmp) 617 + return false; 618 + } 619 + 620 + return true; 621 + } 622 + 602 623 int contpte_ptep_set_access_flags(struct vm_area_struct *vma, 603 624 unsigned long addr, pte_t *ptep, 604 625 pte_t entry, int dirty) ··· 629 608 int i; 630 609 631 610 /* 632 - * Gather the access/dirty bits for the contiguous range. If nothing has 633 - * changed, its a noop. 611 + * Check whether all sub-PTEs in the CONT block already match the 612 + * requested access flags/write permission, using raw per-PTE values 613 + * rather than the gathered ptep_get() view. 614 + * 615 + * __ptep_set_access_flags() can update AF, dirty and write 616 + * permission, but only to make the mapping more permissive. 617 + * 618 + * ptep_get() gathers AF/dirty state across the whole CONT block, 619 + * which is correct for a CPU with FEAT_HAFDBS. But page-table 620 + * walkers that evaluate each descriptor individually (e.g. a CPU 621 + * without DBM support, or an SMMU without HTTU, or with HA/HD 622 + * disabled in CD.TCR) can keep faulting on the target sub-PTE if 623 + * only a sibling has been updated. 
Gathering can therefore cause 624 + * false no-ops when only a sibling has been updated: 625 + * - write faults: target still has PTE_RDONLY (needs PTE_RDONLY cleared) 626 + * - read faults: target still lacks PTE_AF 627 + * 628 + * Per Arm ARM (DDI 0487) D8.7.1, any sub-PTE in a CONT range may 629 + * become the effective cached translation, so all entries must have 630 + * consistent attributes. Check the full CONT block before returning 631 + * no-op, and when any sub-PTE mismatches, proceed to update the whole 632 + * range. 634 633 */ 635 - orig_pte = pte_mknoncont(ptep_get(ptep)); 636 - if (pte_val(orig_pte) == pte_val(entry)) 634 + if (contpte_all_subptes_match_access_flags(ptep, entry)) 637 635 return 0; 636 + 637 + /* 638 + * Use raw target pte (not gathered) for write-bit unfold decision. 639 + */ 640 + orig_pte = pte_mknoncont(__ptep_get(ptep)); 638 641 639 642 /* 640 643 * We can fix up access/dirty bits without having to unfold the contig
+1
arch/csky/kernel/vmlinux.lds.S
··· 109 109 110 110 STABS_DEBUG 111 111 DWARF_DEBUG 112 + MODINFO 112 113 ELF_DETAILS 113 114 114 115 DISCARDS
+1
arch/hexagon/kernel/vmlinux.lds.S
··· 62 62 63 63 STABS_DEBUG 64 64 DWARF_DEBUG 65 + MODINFO 65 66 ELF_DETAILS 66 67 .hexagon.attributes 0 : { *(.hexagon.attributes) } 67 68
+1
arch/loongarch/kernel/vmlinux.lds.S
··· 147 147 148 148 STABS_DEBUG 149 149 DWARF_DEBUG 150 + MODINFO 150 151 ELF_DETAILS 151 152 152 153 #ifdef CONFIG_EFI_STUB
+1
arch/m68k/kernel/vmlinux-nommu.lds
··· 85 85 _end = .; 86 86 87 87 STABS_DEBUG 88 + MODINFO 88 89 ELF_DETAILS 89 90 90 91 /* Sections to be discarded */
+1
arch/m68k/kernel/vmlinux-std.lds
··· 58 58 _end = . ; 59 59 60 60 STABS_DEBUG 61 + MODINFO 61 62 ELF_DETAILS 62 63 63 64 /* Sections to be discarded */
+1
arch/m68k/kernel/vmlinux-sun3.lds
··· 51 51 _end = . ; 52 52 53 53 STABS_DEBUG 54 + MODINFO 54 55 ELF_DETAILS 55 56 56 57 /* Sections to be discarded */
+1
arch/mips/kernel/vmlinux.lds.S
··· 217 217 218 218 STABS_DEBUG 219 219 DWARF_DEBUG 220 + MODINFO 220 221 ELF_DETAILS 221 222 222 223 /* These must appear regardless of . */
+1
arch/nios2/kernel/vmlinux.lds.S
··· 57 57 58 58 STABS_DEBUG 59 59 DWARF_DEBUG 60 + MODINFO 60 61 ELF_DETAILS 61 62 62 63 DISCARDS
+1
arch/openrisc/kernel/vmlinux.lds.S
··· 101 101 /* Throw in the debugging sections */ 102 102 STABS_DEBUG 103 103 DWARF_DEBUG 104 + MODINFO 104 105 ELF_DETAILS 105 106 106 107 /* Sections to be discarded -- must be last */
+1
arch/parisc/boot/compressed/vmlinux.lds.S
··· 90 90 /* Sections to be discarded */ 91 91 DISCARDS 92 92 /DISCARD/ : { 93 + *(.modinfo) 93 94 #ifdef CONFIG_64BIT 94 95 /* temporary hack until binutils is fixed to not emit these 95 96 * for static binaries
+1 -1
arch/parisc/include/asm/pgtable.h
··· 85 85 printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, (unsigned long)pgd_val(e)) 86 86 87 87 /* This is the size of the initially mapped kernel memory */ 88 - #if defined(CONFIG_64BIT) 88 + #if defined(CONFIG_64BIT) || defined(CONFIG_KALLSYMS) 89 89 #define KERNEL_INITIAL_ORDER 26 /* 1<<26 = 64MB */ 90 90 #else 91 91 #define KERNEL_INITIAL_ORDER 25 /* 1<<25 = 32MB */
+6 -1
arch/parisc/kernel/head.S
··· 56 56 57 57 .import __bss_start,data 58 58 .import __bss_stop,data 59 + .import __end,data 59 60 60 61 load32 PA(__bss_start),%r3 61 62 load32 PA(__bss_stop),%r4 ··· 150 149 * everything ... it will get remapped correctly later */ 151 150 ldo 0+_PAGE_KERNEL_RWX(%r0),%r3 /* Hardwired 0 phys addr start */ 152 151 load32 (1<<(KERNEL_INITIAL_ORDER-PAGE_SHIFT)),%r11 /* PFN count */ 153 - load32 PA(pg0),%r1 152 + load32 PA(_end),%r1 153 + SHRREG %r1,PAGE_SHIFT,%r1 /* %r1 is PFN count for _end symbol */ 154 + cmpb,<<,n %r11,%r1,1f 155 + copy %r1,%r11 /* %r1 PFN count smaller than %r11 */ 156 + 1: load32 PA(pg0),%r1 154 157 155 158 $pgt_fill_loop: 156 159 STREGM %r3,ASM_PTE_ENTRY_SIZE(%r1)
+12 -8
arch/parisc/kernel/setup.c
··· 120 120 #endif 121 121 printk(KERN_CONT ".\n"); 122 122 123 - /* 124 - * Check if initial kernel page mappings are sufficient. 125 - * panic early if not, else we may access kernel functions 126 - * and variables which can't be reached. 127 - */ 128 - if (__pa((unsigned long) &_end) >= KERNEL_INITIAL_SIZE) 129 - panic("KERNEL_INITIAL_ORDER too small!"); 130 - 131 123 #ifdef CONFIG_64BIT 132 124 if(parisc_narrow_firmware) { 133 125 printk(KERN_INFO "Kernel is using PDC in 32-bit mode.\n"); ··· 270 278 { 271 279 int ret, cpunum; 272 280 struct pdc_coproc_cfg coproc_cfg; 281 + 282 + /* 283 + * Check if initial kernel page mapping is sufficient. 284 + * Print warning if not, because we may access kernel functions and 285 + * variables which can't be reached yet through the initial mappings. 286 + * Note that the panic() and printk() functions are not functional 287 + * yet, so we need to use direct iodc() firmware calls instead. 288 + */ 289 + const char warn1[] = "CRITICAL: Kernel may crash because " 290 + "KERNEL_INITIAL_ORDER is too small.\n"; 291 + if (__pa((unsigned long) &_end) >= KERNEL_INITIAL_SIZE) 292 + pdc_iodc_print(warn1, sizeof(warn1) - 1); 273 293 274 294 /* check QEMU/SeaBIOS marker in PAGE0 */ 275 295 running_on_qemu = (memcmp(&PAGE0->pad0, "SeaBIOS", 8) == 0);
+1
arch/parisc/kernel/vmlinux.lds.S
··· 165 165 _end = . ; 166 166 167 167 STABS_DEBUG 168 + MODINFO 168 169 ELF_DETAILS 169 170 .note 0 : { *(.note) } 170 171
+2 -2
arch/powerpc/Kconfig
··· 573 573 depends on FUNCTION_TRACER && (PPC32 || PPC64_ELF_ABI_V2) 574 574 depends on $(cc-option,-fpatchable-function-entry=2) 575 575 def_bool y if PPC32 576 - def_bool $(success,$(srctree)/arch/powerpc/tools/gcc-check-fpatchable-function-entry.sh $(CC) -mlittle-endian) if PPC64 && CPU_LITTLE_ENDIAN 577 - def_bool $(success,$(srctree)/arch/powerpc/tools/gcc-check-fpatchable-function-entry.sh $(CC) -mbig-endian) if PPC64 && CPU_BIG_ENDIAN 576 + def_bool $(success,$(srctree)/arch/powerpc/tools/check-fpatchable-function-entry.sh $(CC) $(CLANG_FLAGS) -mlittle-endian) if PPC64 && CPU_LITTLE_ENDIAN 577 + def_bool $(success,$(srctree)/arch/powerpc/tools/check-fpatchable-function-entry.sh $(CC) -mbig-endian) if PPC64 && CPU_BIG_ENDIAN 578 578 579 579 config PPC_FTRACE_OUT_OF_LINE 580 580 def_bool PPC64 && ARCH_USING_PATCHABLE_FUNCTION_ENTRY
+1 -1
arch/powerpc/boot/dts/asp834x-redboot.dts
··· 37 37 }; 38 38 }; 39 39 40 - memory { 40 + memory@0 { 41 41 device_type = "memory"; 42 42 reg = <0x00000000 0x8000000>; // 128MB at 0 43 43 };
-156
arch/powerpc/boot/dts/fsl/interlaken-lac-portals.dtsi
··· 1 - /* T4240 Interlaken LAC Portal device tree stub with 24 portals.
2 - *
3 - * Copyright 2012 Freescale Semiconductor Inc.
4 - *
5 - * Redistribution and use in source and binary forms, with or without
6 - * modification, are permitted provided that the following conditions are met:
7 - * * Redistributions of source code must retain the above copyright
8 - * notice, this list of conditions and the following disclaimer.
9 - * * Redistributions in binary form must reproduce the above copyright
10 - * notice, this list of conditions and the following disclaimer in the
11 - * documentation and/or other materials provided with the distribution.
12 - * * Neither the name of Freescale Semiconductor nor the
13 - * names of its contributors may be used to endorse or promote products
14 - * derived from this software without specific prior written permission.
15 - *
16 - *
17 - * ALTERNATIVELY, this software may be distributed under the terms of the
18 - * GNU General Public License ("GPL") as published by the Free Software
19 - * Foundation, either version 2 of that License or (at your option) any
20 - * later version.
21 - *
22 - * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY
23 - * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
24 - * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
25 - * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
26 - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
27 - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
28 - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
29 - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
30 - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
31 - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
32 - */
33 -
34 - #address-cells = <0x1>;
35 - #size-cells = <0x1>;
36 - compatible = "fsl,interlaken-lac-portals";
37 -
38 - lportal0: lac-portal@0 {
39 - compatible = "fsl,interlaken-lac-portal-v1.0";
40 - reg = <0x0 0x1000>;
41 - };
42 -
43 - lportal1: lac-portal@1000 {
44 - compatible = "fsl,interlaken-lac-portal-v1.0";
45 - reg = <0x1000 0x1000>;
46 - };
47 -
48 - lportal2: lac-portal@2000 {
49 - compatible = "fsl,interlaken-lac-portal-v1.0";
50 - reg = <0x2000 0x1000>;
51 - };
52 -
53 - lportal3: lac-portal@3000 {
54 - compatible = "fsl,interlaken-lac-portal-v1.0";
55 - reg = <0x3000 0x1000>;
56 - };
57 -
58 - lportal4: lac-portal@4000 {
59 - compatible = "fsl,interlaken-lac-portal-v1.0";
60 - reg = <0x4000 0x1000>;
61 - };
62 -
63 - lportal5: lac-portal@5000 {
64 - compatible = "fsl,interlaken-lac-portal-v1.0";
65 - reg = <0x5000 0x1000>;
66 - };
67 -
68 - lportal6: lac-portal@6000 {
69 - compatible = "fsl,interlaken-lac-portal-v1.0";
70 - reg = <0x6000 0x1000>;
71 - };
72 -
73 - lportal7: lac-portal@7000 {
74 - compatible = "fsl,interlaken-lac-portal-v1.0";
75 - reg = <0x7000 0x1000>;
76 - };
77 -
78 - lportal8: lac-portal@8000 {
79 - compatible = "fsl,interlaken-lac-portal-v1.0";
80 - reg = <0x8000 0x1000>;
81 - };
82 -
83 - lportal9: lac-portal@9000 {
84 - compatible = "fsl,interlaken-lac-portal-v1.0";
85 - reg = <0x9000 0x1000>;
86 - };
87 -
88 - lportal10: lac-portal@A000 {
89 - compatible = "fsl,interlaken-lac-portal-v1.0";
90 - reg = <0xA000 0x1000>;
91 - };
92 -
93 - lportal11: lac-portal@B000 {
94 - compatible = "fsl,interlaken-lac-portal-v1.0";
95 - reg = <0xB000 0x1000>;
96 - };
97 -
98 - lportal12: lac-portal@C000 {
99 - compatible = "fsl,interlaken-lac-portal-v1.0";
100 - reg = <0xC000 0x1000>;
101 - };
102 -
103 - lportal13: lac-portal@D000 {
104 - compatible = "fsl,interlaken-lac-portal-v1.0";
105 - reg = <0xD000 0x1000>;
106 - };
107 -
108 - lportal14: lac-portal@E000 {
109 - compatible = "fsl,interlaken-lac-portal-v1.0";
110 - reg = <0xE000 0x1000>;
111 - };
112 -
113 - lportal15: lac-portal@F000 {
114 - compatible = "fsl,interlaken-lac-portal-v1.0";
115 - reg = <0xF000 0x1000>;
116 - };
117 -
118 - lportal16: lac-portal@10000 {
119 - compatible = "fsl,interlaken-lac-portal-v1.0";
120 - reg = <0x10000 0x1000>;
121 - };
122 -
123 - lportal17: lac-portal@11000 {
124 - compatible = "fsl,interlaken-lac-portal-v1.0";
125 - reg = <0x11000 0x1000>;
126 - };
127 -
128 - lportal18: lac-portal@1200 {
129 - compatible = "fsl,interlaken-lac-portal-v1.0";
130 - reg = <0x12000 0x1000>;
131 - };
132 -
133 - lportal19: lac-portal@13000 {
134 - compatible = "fsl,interlaken-lac-portal-v1.0";
135 - reg = <0x13000 0x1000>;
136 - };
137 -
138 - lportal20: lac-portal@14000 {
139 - compatible = "fsl,interlaken-lac-portal-v1.0";
140 - reg = <0x14000 0x1000>;
141 - };
142 -
143 - lportal21: lac-portal@15000 {
144 - compatible = "fsl,interlaken-lac-portal-v1.0";
145 - reg = <0x15000 0x1000>;
146 - };
147 -
148 - lportal22: lac-portal@16000 {
149 - compatible = "fsl,interlaken-lac-portal-v1.0";
150 - reg = <0x16000 0x1000>;
151 - };
152 -
153 - lportal23: lac-portal@17000 {
154 - compatible = "fsl,interlaken-lac-portal-v1.0";
155 - reg = <0x17000 0x1000>;
156 - };
-45
arch/powerpc/boot/dts/fsl/interlaken-lac.dtsi
··· 1 - /*
2 - * T4 Interlaken Look-aside Controller (LAC) device tree stub
3 - *
4 - * Copyright 2012 Freescale Semiconductor Inc.
5 - *
6 - * Redistribution and use in source and binary forms, with or without
7 - * modification, are permitted provided that the following conditions are met:
8 - * * Redistributions of source code must retain the above copyright
9 - * notice, this list of conditions and the following disclaimer.
10 - * * Redistributions in binary form must reproduce the above copyright
11 - * notice, this list of conditions and the following disclaimer in the
12 - * documentation and/or other materials provided with the distribution.
13 - * * Neither the name of Freescale Semiconductor nor the
14 - * names of its contributors may be used to endorse or promote products
15 - * derived from this software without specific prior written permission.
16 - *
17 - *
18 - * ALTERNATIVELY, this software may be distributed under the terms of the
19 - * GNU General Public License ("GPL") as published by the Free Software
20 - * Foundation, either version 2 of that License or (at your option) any
21 - * later version.
22 - *
23 - * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY
24 - * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
25 - * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
26 - * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
27 - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
28 - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
29 - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
30 - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
31 - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
32 - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
33 - */
34 -
35 - lac: lac@229000 {
36 - compatible = "fsl,interlaken-lac";
37 - reg = <0x229000 0x1000>;
38 - interrupts = <16 2 1 18>;
39 - };
40 -
41 - lac-hv@228000 {
42 - compatible = "fsl,interlaken-lac-hv";
43 - reg = <0x228000 0x1000>;
44 - fsl,non-hv-node = <&lac>;
45 - };
-43
arch/powerpc/boot/dts/fsl/pq3-mpic-message-B.dtsi
··· 1 - /*
2 - * PQ3 MPIC Message (Group B) device tree stub [ controller @ offset 0x42400 ]
3 - *
4 - * Copyright 2012 Freescale Semiconductor Inc.
5 - *
6 - * Redistribution and use in source and binary forms, with or without
7 - * modification, are permitted provided that the following conditions are met:
8 - * * Redistributions of source code must retain the above copyright
9 - * notice, this list of conditions and the following disclaimer.
10 - * * Redistributions in binary form must reproduce the above copyright
11 - * notice, this list of conditions and the following disclaimer in the
12 - * documentation and/or other materials provided with the distribution.
13 - * * Neither the name of Freescale Semiconductor nor the
14 - * names of its contributors may be used to endorse or promote products
15 - * derived from this software without specific prior written permission.
16 - *
17 - *
18 - * ALTERNATIVELY, this software may be distributed under the terms of the
19 - * GNU General Public License ("GPL") as published by the Free Software
20 - * Foundation, either version 2 of that License or (at your option) any
21 - * later version.
22 - *
23 - * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
24 - * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
25 - * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
26 - * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
27 - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
28 - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
29 - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
30 - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
31 - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
32 - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
33 - */
34 -
35 - message@42400 {
36 - compatible = "fsl,mpic-v3.1-msgr";
37 - reg = <0x42400 0x200>;
38 - interrupts = <
39 - 0xb4 2 0 0
40 - 0xb5 2 0 0
41 - 0xb6 2 0 0
42 - 0xb7 2 0 0>;
43 - };
-80
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-1-best-effort.dtsi
··· 1 - /*
2 - * QorIQ FMan v3 1g port #1 device tree stub [ controller @ offset 0x400000 ]
3 - *
4 - * Copyright 2012 - 2015 Freescale Semiconductor Inc.
5 - *
6 - * Redistribution and use in source and binary forms, with or without
7 - * modification, are permitted provided that the following conditions are met:
8 - * * Redistributions of source code must retain the above copyright
9 - * notice, this list of conditions and the following disclaimer.
10 - * * Redistributions in binary form must reproduce the above copyright
11 - * notice, this list of conditions and the following disclaimer in the
12 - * documentation and/or other materials provided with the distribution.
13 - * * Neither the name of Freescale Semiconductor nor the
14 - * names of its contributors may be used to endorse or promote products
15 - * derived from this software without specific prior written permission.
16 - *
17 - *
18 - * ALTERNATIVELY, this software may be distributed under the terms of the
19 - * GNU General Public License ("GPL") as published by the Free Software
20 - * Foundation, either version 2 of that License or (at your option) any
21 - * later version.
22 - *
23 - * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
24 - * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
25 - * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
26 - * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
27 - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
28 - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
29 - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
30 - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
31 - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
32 - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
33 - */
34 -
35 - fman@400000 {
36 - fman0_rx_0x09: port@89000 {
37 - cell-index = <0x9>;
38 - compatible = "fsl,fman-v3-port-rx";
39 - reg = <0x89000 0x1000>;
40 - fsl,fman-10g-port;
41 - fsl,fman-best-effort-port;
42 - };
43 -
44 - fman0_tx_0x29: port@a9000 {
45 - cell-index = <0x29>;
46 - compatible = "fsl,fman-v3-port-tx";
47 - reg = <0xa9000 0x1000>;
48 - fsl,fman-10g-port;
49 - fsl,fman-best-effort-port;
50 - };
51 -
52 - ethernet@e2000 {
53 - cell-index = <1>;
54 - compatible = "fsl,fman-memac";
55 - reg = <0xe2000 0x1000>;
56 - fsl,fman-ports = <&fman0_rx_0x09 &fman0_tx_0x29>;
57 - ptp-timer = <&ptp_timer0>;
58 - pcsphy-handle = <&pcsphy1>, <&qsgmiia_pcs1>;
59 - pcs-handle-names = "sgmii", "qsgmii";
60 - };
61 -
62 - mdio@e1000 {
63 - qsgmiia_pcs1: ethernet-pcs@1 {
64 - compatible = "fsl,lynx-pcs";
65 - reg = <1>;
66 - };
67 - };
68 -
69 - mdio@e3000 {
70 - #address-cells = <1>;
71 - #size-cells = <0>;
72 - compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
73 - reg = <0xe3000 0x1000>;
74 - fsl,erratum-a011043; /* must ignore read errors */
75 -
76 - pcsphy1: ethernet-phy@0 {
77 - reg = <0x0>;
78 - };
79 - };
80 - };
+1 -1
arch/powerpc/boot/dts/mpc8308_p1m.dts
··· 37 37 };
38 38 };
39 39
40 - memory {
40 + memory@0 {
41 41 device_type = "memory";
42 42 reg = <0x00000000 0x08000000>; // 128MB at 0
43 43 };
+1 -1
arch/powerpc/boot/dts/mpc8308rdb.dts
··· 38 38 };
39 39 };
40 40
41 - memory {
41 + memory@0 {
42 42 device_type = "memory";
43 43 reg = <0x00000000 0x08000000>; // 128MB at 0
44 44 };
+35 -26
arch/powerpc/boot/dts/mpc8313erdb.dts
··· 6 6 */
7 7
8 8 /dts-v1/;
9 + #include <dt-bindings/interrupt-controller/irq.h>
9 10
10 11 / {
11 12 model = "MPC8313ERDB";
··· 39 38 };
40 39 };
41 40
42 - memory {
41 + memory@0 {
43 42 device_type = "memory";
44 43 reg = <0x00000000 0x08000000>; // 128MB at 0
45 44 };
··· 49 48 #size-cells = <1>;
50 49 compatible = "fsl,mpc8313-elbc", "fsl,elbc", "simple-bus";
51 50 reg = <0xe0005000 0x1000>;
52 - interrupts = <77 0x8>;
51 + interrupts = <77 IRQ_TYPE_LEVEL_LOW>;
53 52 interrupt-parent = <&ipic>;
54 53
55 54 // CS0 and CS1 are swapped when
··· 119 118 cell-index = <0>;
120 119 compatible = "fsl-i2c";
121 120 reg = <0x3000 0x100>;
122 - interrupts = <14 0x8>;
121 + interrupts = <14 IRQ_TYPE_LEVEL_LOW>;
123 122 interrupt-parent = <&ipic>;
124 123 dfsrr;
125 124 rtc@68 {
··· 132 131 compatible = "fsl,sec2.2", "fsl,sec2.1",
133 132 "fsl,sec2.0";
134 133 reg = <0x30000 0x10000>;
135 - interrupts = <11 0x8>;
134 + interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
136 135 interrupt-parent = <&ipic>;
137 136 fsl,num-channels = <1>;
138 137 fsl,channel-fifo-len = <24>;
··· 147 146 cell-index = <1>;
148 147 compatible = "fsl-i2c";
149 148 reg = <0x3100 0x100>;
150 - interrupts = <15 0x8>;
149 + interrupts = <15 IRQ_TYPE_LEVEL_LOW>;
151 150 interrupt-parent = <&ipic>;
152 151 dfsrr;
153 152 };
··· 156 155 cell-index = <0>;
157 156 compatible = "fsl,spi";
158 157 reg = <0x7000 0x1000>;
159 - interrupts = <16 0x8>;
158 + interrupts = <16 IRQ_TYPE_LEVEL_LOW>;
160 159 interrupt-parent = <&ipic>;
161 160 mode = "cpu";
162 161 };
··· 168 167 #address-cells = <1>;
169 168 #size-cells = <0>;
170 169 interrupt-parent = <&ipic>;
171 - interrupts = <38 0x8>;
170 + interrupts = <38 IRQ_TYPE_LEVEL_LOW>;
172 171 phy_type = "utmi_wide";
173 172 sleep = <&pmc 0x00300000>;
174 173 };
··· 176 175 ptp_clock@24E00 {
177 176 compatible = "fsl,etsec-ptp";
178 177 reg = <0x24E00 0xB0>;
179 - interrupts = <12 0x8 13 0x8>;
178 + interrupts = <12 IRQ_TYPE_LEVEL_LOW>,
179 + <13 IRQ_TYPE_LEVEL_LOW>;
180 180 interrupt-parent = < &ipic >;
181 181 fsl,tclk-period = <10>;
182 182 fsl,tmr-prsc = <100>;
··· 199 197 compatible = "gianfar";
200 198 reg = <0x24000 0x1000>;
201 199 local-mac-address = [ 00 00 00 00 00 00 ];
202 - interrupts = <37 0x8 36 0x8 35 0x8>;
200 + interrupts = <37 IRQ_TYPE_LEVEL_LOW>,
201 + <36 IRQ_TYPE_LEVEL_LOW>,
202 + <35 IRQ_TYPE_LEVEL_LOW>;
203 203 interrupt-parent = <&ipic>;
204 204 tbi-handle = < &tbi0 >;
205 205 /* Vitesse 7385 isn't on the MDIO bus */
··· 215 211 reg = <0x520 0x20>;
216 212 phy4: ethernet-phy@4 {
217 213 interrupt-parent = <&ipic>;
218 - interrupts = <20 0x8>;
214 + interrupts = <20 IRQ_TYPE_LEVEL_LOW>;
219 215 reg = <0x4>;
220 216 };
221 217 tbi0: tbi-phy@11 {
··· 235 231 reg = <0x25000 0x1000>;
236 232 ranges = <0x0 0x25000 0x1000>;
237 233 local-mac-address = [ 00 00 00 00 00 00 ];
238 - interrupts = <34 0x8 33 0x8 32 0x8>;
234 + interrupts = <34 IRQ_TYPE_LEVEL_LOW>,
235 + <33 IRQ_TYPE_LEVEL_LOW>,
236 + <32 IRQ_TYPE_LEVEL_LOW>;
239 237 interrupt-parent = <&ipic>;
240 238 tbi-handle = < &tbi1 >;
241 239 phy-handle = < &phy4 >;
··· 265 259 compatible = "fsl,ns16550", "ns16550";
266 260 reg = <0x4500 0x100>;
267 261 clock-frequency = <0>;
268 - interrupts = <9 0x8>;
262 + interrupts = <9 IRQ_TYPE_LEVEL_LOW>;
269 263 interrupt-parent = <&ipic>;
270 264 };
271 265
··· 275 269 compatible = "fsl,ns16550", "ns16550";
276 270 reg = <0x4600 0x100>;
277 271 clock-frequency = <0>;
278 - interrupts = <10 0x8>;
272 + interrupts = <10 IRQ_TYPE_LEVEL_LOW>;
279 273 interrupt-parent = <&ipic>;
280 274 };
281 275
282 276 /* IPIC
283 - * interrupts cell = <intr #, sense>
284 - * sense values match linux IORESOURCE_IRQ_* defines:
285 - * sense == 8: Level, low assertion
286 - * sense == 2: Edge, high-to-low change
277 + * interrupts cell = <intr #, type>
287 278 */
288 279 ipic: pic@700 {
289 280 interrupt-controller;
··· 293 290 pmc: power@b00 {
294 291 compatible = "fsl,mpc8313-pmc", "fsl,mpc8349-pmc";
295 292 reg = <0xb00 0x100 0xa00 0x100>;
296 - interrupts = <80 8>;
293 + interrupts = <80 IRQ_TYPE_LEVEL_LOW>;
297 294 interrupt-parent = <&ipic>;
298 295 fsl,mpc8313-wakeup-timer = <&gtm1>;
299 296
··· 309 306 gtm1: timer@500 {
310 307 compatible = "fsl,mpc8313-gtm", "fsl,gtm";
311 308 reg = <0x500 0x100>;
312 - interrupts = <90 8 78 8 84 8 72 8>;
309 + interrupts = <90 IRQ_TYPE_LEVEL_LOW>,
310 + <78 IRQ_TYPE_LEVEL_LOW>,
311 + <84 IRQ_TYPE_LEVEL_LOW>,
312 + <72 IRQ_TYPE_LEVEL_LOW>;
313 313 interrupt-parent = <&ipic>;
314 314 };
315 315
316 316 timer@600 {
317 317 compatible = "fsl,mpc8313-gtm", "fsl,gtm";
318 318 reg = <0x600 0x100>;
319 - interrupts = <91 8 79 8 85 8 73 8>;
319 + interrupts = <91 IRQ_TYPE_LEVEL_LOW>,
320 + <79 IRQ_TYPE_LEVEL_LOW>,
321 + <85 IRQ_TYPE_LEVEL_LOW>,
322 + <73 IRQ_TYPE_LEVEL_LOW>;
320 323 interrupt-parent = <&ipic>;
321 324 };
322 325 };
··· 350 341 0x7800 0x0 0x0 0x3 &ipic 17 0x8
351 342 0x7800 0x0 0x0 0x4 &ipic 18 0x8>;
352 343 interrupt-parent = <&ipic>;
353 - interrupts = <66 0x8>;
344 + interrupts = <66 IRQ_TYPE_LEVEL_LOW>;
354 345 bus-range = <0x0 0x0>;
355 346 ranges = <0x02000000 0x0 0x90000000 0x90000000 0x0 0x10000000
356 347 0x42000000 0x0 0x80000000 0x80000000 0x0 0x10000000
··· 372 363 reg = <0xe00082a8 4>;
373 364 ranges = <0 0xe0008100 0x1a8>;
374 365 interrupt-parent = <&ipic>;
375 - interrupts = <71 8>;
366 + interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
376 367
377 368 dma-channel@0 {
378 369 compatible = "fsl,mpc8313-dma-channel",
379 370 "fsl,elo-dma-channel";
380 371 reg = <0 0x28>;
381 372 interrupt-parent = <&ipic>;
382 - interrupts = <71 8>;
373 + interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
383 374 cell-index = <0>;
384 375 };
385 376
··· 388 379 "fsl,elo-dma-channel";
389 380 reg = <0x80 0x28>;
390 381 interrupt-parent = <&ipic>;
391 - interrupts = <71 8>;
382 + interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
392 383 cell-index = <1>;
393 384 };
394 385
··· 397 388 "fsl,elo-dma-channel";
398 389 reg = <0x100 0x28>;
399 390 interrupt-parent = <&ipic>;
400 - interrupts = <71 8>;
391 + interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
401 392 cell-index = <2>;
402 393 };
403 394
··· 406 397 "fsl,elo-dma-channel";
407 398 reg = <0x180 0x28>;
408 399 interrupt-parent = <&ipic>;
409 - interrupts = <71 8>;
400 + interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
410 401 cell-index = <3>;
411 402 };
412 403 };
+64 -55
arch/powerpc/boot/dts/mpc8315erdb.dts
··· 40 40 };
41 41 };
42 42
43 - memory {
43 + memory@0 {
44 44 device_type = "memory";
45 45 reg = <0x00000000 0x08000000>; // 128MB at 0
46 46 };
··· 50 50 #size-cells = <1>;
51 51 compatible = "fsl,mpc8315-elbc", "fsl,elbc", "simple-bus";
52 52 reg = <0xe0005000 0x1000>;
53 - interrupts = <77 0x8>;
53 + interrupts = <77 IRQ_TYPE_LEVEL_LOW>;
54 54 interrupt-parent = <&ipic>;
55 55
56 56 // CS0 and CS1 are swapped when
··· 112 112 cell-index = <0>;
113 113 compatible = "fsl-i2c";
114 114 reg = <0x3000 0x100>;
115 - interrupts = <14 0x8>;
115 + interrupts = <14 IRQ_TYPE_LEVEL_LOW>;
116 116 interrupt-parent = <&ipic>;
117 117 dfsrr;
118 118 rtc@68 {
··· 133 133 cell-index = <0>;
134 134 compatible = "fsl,spi";
135 135 reg = <0x7000 0x1000>;
136 - interrupts = <16 0x8>;
136 + interrupts = <16 IRQ_TYPE_LEVEL_LOW>;
137 137 interrupt-parent = <&ipic>;
138 + #address-cells = <1>;
139 + #size-cells = <0>;
138 140 mode = "cpu";
139 141 };
140 142
··· 147 145 reg = <0x82a8 4>;
148 146 ranges = <0 0x8100 0x1a8>;
149 147 interrupt-parent = <&ipic>;
150 - interrupts = <71 8>;
148 + interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
151 149 cell-index = <0>;
152 150 dma-channel@0 {
153 151 compatible = "fsl,mpc8315-dma-channel", "fsl,elo-dma-channel";
154 152 reg = <0 0x80>;
155 153 cell-index = <0>;
156 154 interrupt-parent = <&ipic>;
157 - interrupts = <71 8>;
155 + interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
158 156 };
159 157 dma-channel@80 {
160 158 compatible = "fsl,mpc8315-dma-channel", "fsl,elo-dma-channel";
161 159 reg = <0x80 0x80>;
162 160 cell-index = <1>;
163 161 interrupt-parent = <&ipic>;
164 - interrupts = <71 8>;
162 + interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
165 163 };
166 164 dma-channel@100 {
167 165 compatible = "fsl,mpc8315-dma-channel", "fsl,elo-dma-channel";
168 166 reg = <0x100 0x80>;
169 167 cell-index = <2>;
170 168 interrupt-parent = <&ipic>;
171 - interrupts = <71 8>;
169 + interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
172 170 };
173 171 dma-channel@180 {
174 172 compatible = "fsl,mpc8315-dma-channel", "fsl,elo-dma-channel";
175 173 reg = <0x180 0x28>;
176 174 cell-index = <3>;
177 175 interrupt-parent = <&ipic>;
178 - interrupts = <71 8>;
176 + interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
179 177 };
180 178 };
181 179
··· 185 183 #address-cells = <1>;
186 184 #size-cells = <0>;
187 185 interrupt-parent = <&ipic>;
188 - interrupts = <38 0x8>;
186 + interrupts = <38 IRQ_TYPE_LEVEL_LOW>;
189 187 phy_type = "utmi";
190 188 };
191 189
··· 199 197 reg = <0x24000 0x1000>;
200 198 ranges = <0x0 0x24000 0x1000>;
201 199 local-mac-address = [ 00 00 00 00 00 00 ];
202 - interrupts = <32 0x8 33 0x8 34 0x8>;
200 + interrupts = <32 IRQ_TYPE_LEVEL_LOW>,
201 + <33 IRQ_TYPE_LEVEL_LOW>,
202 + <34 IRQ_TYPE_LEVEL_LOW>;
203 203 interrupt-parent = <&ipic>;
204 204 tbi-handle = <&tbi0>;
205 205 phy-handle = < &phy0 >;
··· 242 238 reg = <0x25000 0x1000>;
243 239 ranges = <0x0 0x25000 0x1000>;
244 240 local-mac-address = [ 00 00 00 00 00 00 ];
245 - interrupts = <35 0x8 36 0x8 37 0x8>;
241 + interrupts = <35 IRQ_TYPE_LEVEL_LOW>,
242 + <36 IRQ_TYPE_LEVEL_LOW>,
243 + <37 IRQ_TYPE_LEVEL_LOW>;
246 244 interrupt-parent = <&ipic>;
247 245 tbi-handle = <&tbi1>;
248 246 phy-handle = < &phy1 >;
··· 269 263 compatible = "fsl,ns16550", "ns16550";
270 264 reg = <0x4500 0x100>;
271 265 clock-frequency = <133333333>;
272 - interrupts = <9 0x8>;
266 + interrupts = <9 IRQ_TYPE_LEVEL_LOW>;
273 267 interrupt-parent = <&ipic>;
274 268 };
275 269
··· 279 273 compatible = "fsl,ns16550", "ns16550";
280 274 reg = <0x4600 0x100>;
281 275 clock-frequency = <133333333>;
282 - interrupts = <10 0x8>;
276 + interrupts = <10 IRQ_TYPE_LEVEL_LOW>;
283 277 interrupt-parent = <&ipic>;
284 278 };
285 279
··· 288 282 "fsl,sec2.4", "fsl,sec2.2", "fsl,sec2.1",
289 283 "fsl,sec2.0";
290 284 reg = <0x30000 0x10000>;
291 - interrupts = <11 0x8>;
285 + interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
292 286 interrupt-parent = <&ipic>;
293 287 fsl,num-channels = <4>;
294 288 fsl,channel-fifo-len = <24>;
··· 300 294 compatible = "fsl,mpc8315-sata", "fsl,pq-sata";
301 295 reg = <0x18000 0x1000>;
302 296 cell-index = <1>;
303 - interrupts = <44 0x8>;
297 + interrupts = <44 IRQ_TYPE_LEVEL_LOW>;
304 298 interrupt-parent = <&ipic>;
305 299 };
306 300
··· 308 302 compatible = "fsl,mpc8315-sata", "fsl,pq-sata";
309 303 reg = <0x19000 0x1000>;
310 304 cell-index = <2>;
311 - interrupts = <45 0x8>;
305 + interrupts = <45 IRQ_TYPE_LEVEL_LOW>;
312 306 interrupt-parent = <&ipic>;
313 307 };
314 308
315 309 gtm1: timer@500 {
316 310 compatible = "fsl,mpc8315-gtm", "fsl,gtm";
317 311 reg = <0x500 0x100>;
318 - interrupts = <90 8 78 8 84 8 72 8>;
312 + interrupts = <90 IRQ_TYPE_LEVEL_LOW>,
313 + <78 IRQ_TYPE_LEVEL_LOW>,
314 + <84 IRQ_TYPE_LEVEL_LOW>,
315 + <72 IRQ_TYPE_LEVEL_LOW>;
319 316 interrupt-parent = <&ipic>;
320 317 clock-frequency = <133333333>;
321 318 };
··· 326 317 timer@600 {
327 318 compatible = "fsl,mpc8315-gtm", "fsl,gtm";
328 319 reg = <0x600 0x100>;
329 - interrupts = <91 8 79 8 85 8 73 8>;
320 + interrupts = <91 IRQ_TYPE_LEVEL_LOW>,
321 + <79 IRQ_TYPE_LEVEL_LOW>,
322 + <85 IRQ_TYPE_LEVEL_LOW>,
323 + <73 IRQ_TYPE_LEVEL_LOW>;
330 324 interrupt-parent = <&ipic>;
331 325 clock-frequency = <133333333>;
332 326 };
333 327
334 328 /* IPIC
335 - * interrupts cell = <intr #, sense>
336 - * sense values match linux IORESOURCE_IRQ_* defines:
337 - * sense == 8: Level, low assertion
338 - * sense == 2: Edge, high-to-low change
329 + * interrupts cell = <intr #, type>
339 330 */
340 331 ipic: interrupt-controller@700 {
341 332 interrupt-controller;
··· 349 340 compatible = "fsl,ipic-msi";
350 341 reg = <0x7c0 0x40>;
351 342 msi-available-ranges = <0 0x100>;
352 - interrupts = <0x43 0x8
353 - 0x4 0x8
354 - 0x51 0x8
355 - 0x52 0x8
356 - 0x56 0x8
357 - 0x57 0x8
358 - 0x58 0x8
359 - 0x59 0x8>;
343 + interrupts = <0x43 IRQ_TYPE_LEVEL_LOW
344 + 0x4 IRQ_TYPE_LEVEL_LOW
345 + 0x51 IRQ_TYPE_LEVEL_LOW
346 + 0x52 IRQ_TYPE_LEVEL_LOW
347 + 0x56 IRQ_TYPE_LEVEL_LOW
348 + 0x57 IRQ_TYPE_LEVEL_LOW
349 + 0x58 IRQ_TYPE_LEVEL_LOW
350 + 0x59 IRQ_TYPE_LEVEL_LOW>;
360 351 interrupt-parent = < &ipic >;
361 352 };
362 353
··· 364 355 compatible = "fsl,mpc8315-pmc", "fsl,mpc8313-pmc",
365 356 "fsl,mpc8349-pmc";
366 357 reg = <0xb00 0x100 0xa00 0x100>;
367 - interrupts = <80 8>;
358 + interrupts = <80 IRQ_TYPE_LEVEL_LOW>;
368 359 interrupt-parent = <&ipic>;
369 360 fsl,mpc8313-wakeup-timer = <&gtm1>;
370 361 };
··· 383 374 interrupt-map-mask = <0xf800 0x0 0x0 0x7>;
384 375 interrupt-map = <
385 376 /* IDSEL 0x0E -mini PCI */
386 - 0x7000 0x0 0x0 0x1 &ipic 18 0x8
387 - 0x7000 0x0 0x0 0x2 &ipic 18 0x8
388 - 0x7000 0x0 0x0 0x3 &ipic 18 0x8
389 - 0x7000 0x0 0x0 0x4 &ipic 18 0x8
377 + 0x7000 0x0 0x0 0x1 &ipic 18 IRQ_TYPE_LEVEL_LOW
378 + 0x7000 0x0 0x0 0x2 &ipic 18 IRQ_TYPE_LEVEL_LOW
379 + 0x7000 0x0 0x0 0x3 &ipic 18 IRQ_TYPE_LEVEL_LOW
380 + 0x7000 0x0 0x0 0x4 &ipic 18 IRQ_TYPE_LEVEL_LOW
390 381
391 382 /* IDSEL 0x0F -mini PCI */
392 - 0x7800 0x0 0x0 0x1 &ipic 17 0x8
393 - 0x7800 0x0 0x0 0x2 &ipic 17 0x8
394 - 0x7800 0x0 0x0 0x3 &ipic 17 0x8
395 - 0x7800 0x0 0x0 0x4 &ipic 17 0x8
383 + 0x7800 0x0 0x0 0x1 &ipic 17 IRQ_TYPE_LEVEL_LOW
384 + 0x7800 0x0 0x0 0x2 &ipic 17 IRQ_TYPE_LEVEL_LOW
385 + 0x7800 0x0 0x0 0x3 &ipic 17 IRQ_TYPE_LEVEL_LOW
386 + 0x7800 0x0 0x0 0x4 &ipic 17 IRQ_TYPE_LEVEL_LOW
396 387
397 388 /* IDSEL 0x10 - PCI slot */
398 - 0x8000 0x0 0x0 0x1 &ipic 48 0x8
399 - 0x8000 0x0 0x0 0x2 &ipic 17 0x8
400 - 0x8000 0x0 0x0 0x3 &ipic 48 0x8
401 - 0x8000 0x0 0x0 0x4 &ipic 17 0x8>;
389 + 0x8000 0x0 0x0 0x1 &ipic 48 IRQ_TYPE_LEVEL_LOW
390 + 0x8000 0x0 0x0 0x2 &ipic 17 IRQ_TYPE_LEVEL_LOW
391 + 0x8000 0x0 0x0 0x3 &ipic 48 IRQ_TYPE_LEVEL_LOW
392 + 0x8000 0x0 0x0 0x4 &ipic 17 IRQ_TYPE_LEVEL_LOW>;
402 393 interrupt-parent = <&ipic>;
403 - interrupts = <66 0x8>;
394 + interrupts = <66 IRQ_TYPE_LEVEL_LOW>;
404 395 bus-range = <0x0 0x0>;
405 396 ranges = <0x02000000 0 0x90000000 0x90000000 0 0x10000000
406 397 0x42000000 0 0x80000000 0x80000000 0 0x10000000
··· 426 417 0x01000000 0 0x00000000 0xb1000000 0 0x00800000>;
427 418 bus-range = <0 255>;
428 419 interrupt-map-mask = <0xf800 0 0 7>;
429 - interrupt-map = <0 0 0 1 &ipic 1 8
430 - 0 0 0 2 &ipic 1 8
431 - 0 0 0 3 &ipic 1 8
432 - 0 0 0 4 &ipic 1 8>;
420 + interrupt-map = <0 0 0 1 &ipic 1 IRQ_TYPE_LEVEL_LOW
421 + 0 0 0 2 &ipic 1 IRQ_TYPE_LEVEL_LOW
422 + 0 0 0 3 &ipic 1 IRQ_TYPE_LEVEL_LOW
423 + 0 0 0 4 &ipic 1 IRQ_TYPE_LEVEL_LOW>;
433 424 clock-frequency = <0>;
434 425
435 426 pcie@0 {
··· 457 448 0x01000000 0 0x00000000 0xd1000000 0 0x00800000>;
458 449 bus-range = <0 255>;
459 450 interrupt-map-mask = <0xf800 0 0 7>;
460 - interrupt-map = <0 0 0 1 &ipic 2 8
461 - 0 0 0 2 &ipic 2 8
462 - 0 0 0 3 &ipic 2 8
463 - 0 0 0 4 &ipic 2 8>;
451 + interrupt-map = <0 0 0 1 &ipic 2 IRQ_TYPE_LEVEL_LOW
452 + 0 0 0 2 &ipic 2 IRQ_TYPE_LEVEL_LOW
453 + 0 0 0 3 &ipic 2 IRQ_TYPE_LEVEL_LOW
454 + 0 0 0 4 &ipic 2 IRQ_TYPE_LEVEL_LOW>;
464 455 clock-frequency = <0>;
465 456
466 457 pcie@0 {
··· 480 471 leds {
481 472 compatible = "gpio-leds";
482 473
483 - pwr {
474 + led-pwr {
484 475 gpios = <&mcu_pio 0 0>;
485 476 default-state = "on";
486 477 };
487 478
488 - hdd {
479 + led-hdd {
489 480 gpios = <&mcu_pio 1 0>;
490 481 linux,default-trigger = "disk-activity";
491 482 };
+1 -1
arch/powerpc/boot/dts/mpc832x_rdb.dts
··· 38 38 };
39 39 };
40 40
41 - memory {
41 + memory@0 {
42 42 device_type = "memory";
43 43 reg = <0x00000000 0x04000000>;
44 44 };
+1 -1
arch/powerpc/boot/dts/mpc8349emitx.dts
··· 39 39 };
40 40 };
41 41
42 - memory {
42 + memory@0 {
43 43 device_type = "memory";
44 44 reg = <0x00000000 0x10000000>;
45 45 };
+1 -1
arch/powerpc/boot/dts/mpc8349emitxgp.dts
··· 37 37 };
38 38 };
39 39
40 - memory {
40 + memory@0 {
41 41 device_type = "memory";
42 42 reg = <0x00000000 0x10000000>;
43 43 };
+1 -1
arch/powerpc/boot/dts/mpc8377_rdb.dts
··· 39 39 };
40 40 };
41 41
42 - memory {
42 + memory@0 {
43 43 device_type = "memory";
44 44 reg = <0x00000000 0x10000000>; // 256MB at 0
45 45 };
+1 -1
arch/powerpc/boot/dts/mpc8377_wlan.dts
··· 40 40 };
41 41 };
42 42
43 - memory {
43 + memory@0 {
44 44 device_type = "memory";
45 45 reg = <0x00000000 0x20000000>; // 512MB at 0
46 46 };
+1 -1
arch/powerpc/boot/dts/mpc8378_rdb.dts
··· 39 39 };
40 40 };
41 41
42 - memory {
42 + memory@0 {
43 43 device_type = "memory";
44 44 reg = <0x00000000 0x10000000>; // 256MB at 0
45 45 };
+1 -1
arch/powerpc/boot/dts/mpc8379_rdb.dts
··· 37 37 };
38 38 };
39 39
40 - memory {
40 + memory@0 {
41 41 device_type = "memory";
42 42 reg = <0x00000000 0x10000000>; // 256MB at 0
43 43 };
+1 -3
arch/powerpc/include/asm/nohash/32/pgtable.h
··· 120 120
121 121 #if defined(CONFIG_44x)
122 122 #include <asm/nohash/32/pte-44x.h>
123 - #elif defined(CONFIG_PPC_85xx) && defined(CONFIG_PTE_64BIT)
124 - #include <asm/nohash/pte-e500.h>
125 123 #elif defined(CONFIG_PPC_85xx)
126 - #include <asm/nohash/32/pte-85xx.h>
124 + #include <asm/nohash/pte-e500.h>
127 125 #elif defined(CONFIG_PPC_8xx)
128 126 #include <asm/nohash/32/pte-8xx.h>
129 127 #endif
-59
arch/powerpc/include/asm/nohash/32/pte-85xx.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */
2 - #ifndef _ASM_POWERPC_NOHASH_32_PTE_85xx_H
3 - #define _ASM_POWERPC_NOHASH_32_PTE_85xx_H
4 - #ifdef __KERNEL__
5 -
6 - /* PTE bit definitions for Freescale BookE SW loaded TLB MMU based
7 - * processors
8 - *
9 - MMU Assist Register 3:
10 -
11 - 32 33 34 35 36 ... 50 51 52 53 54 55 56 57 58 59 60 61 62 63
12 - RPN...................... 0 0 U0 U1 U2 U3 UX SX UW SW UR SR
13 -
14 - - PRESENT *must* be in the bottom two bits because swap PTEs use
15 - the top 30 bits.
16 -
17 - */
18 -
19 - /* Definitions for FSL Book-E Cores */
20 - #define _PAGE_READ 0x00001 /* H: Read permission (SR) */
21 - #define _PAGE_PRESENT 0x00002 /* S: PTE contains a translation */
22 - #define _PAGE_WRITE 0x00004 /* S: Write permission (SW) */
23 - #define _PAGE_DIRTY 0x00008 /* S: Page dirty */
24 - #define _PAGE_EXEC 0x00010 /* H: SX permission */
25 - #define _PAGE_ACCESSED 0x00020 /* S: Page referenced */
26 -
27 - #define _PAGE_ENDIAN 0x00040 /* H: E bit */
28 - #define _PAGE_GUARDED 0x00080 /* H: G bit */
29 - #define _PAGE_COHERENT 0x00100 /* H: M bit */
30 - #define _PAGE_NO_CACHE 0x00200 /* H: I bit */
31 - #define _PAGE_WRITETHRU 0x00400 /* H: W bit */
32 - #define _PAGE_SPECIAL 0x00800 /* S: Special page */
33 -
34 - #define _PMD_PRESENT 0
35 - #define _PMD_PRESENT_MASK (PAGE_MASK)
36 - #define _PMD_BAD (~PAGE_MASK)
37 - #define _PMD_USER 0
38 -
39 - #define _PTE_NONE_MASK 0
40 -
41 - #define PTE_WIMGE_SHIFT (6)
42 -
43 - /*
44 - * We define 2 sets of base prot bits, one for basic pages (ie,
45 - * cacheable kernel and user pages) and one for non cacheable
46 - * pages. We always set _PAGE_COHERENT when SMP is enabled or
47 - * the processor might need it for DMA coherency.
48 - */
49 - #define _PAGE_BASE_NC (_PAGE_PRESENT | _PAGE_ACCESSED)
50 - #if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
51 - #define _PAGE_BASE (_PAGE_BASE_NC | _PAGE_COHERENT)
52 - #else
53 - #define _PAGE_BASE (_PAGE_BASE_NC)
54 - #endif
55 -
56 - #include <asm/pgtable-masks.h>
57 -
58 - #endif /* __KERNEL__ */
59 - #endif /* _ASM_POWERPC_NOHASH_32_PTE_FSL_85xx_H */
+1 -1
arch/powerpc/include/asm/pgtable-types.h
··· 49 49 #endif /* CONFIG_PPC64 */
50 50
51 51 /* PGD level */
52 - #if defined(CONFIG_PPC_85xx) && defined(CONFIG_PTE_64BIT)
52 + #if defined(CONFIG_PPC_85xx)
53 53 typedef struct { unsigned long long pgd; } pgd_t;
54 54
55 55 static inline unsigned long long pgd_val(pgd_t x)
+1 -1
arch/powerpc/include/asm/uaccess.h
··· 255 255 ".section .fixup,\"ax\"\n" \
256 256 "4: li %0,%3\n" \
257 257 " li %1,0\n" \
258 - " li %1+1,0\n" \
258 + " li %L1,0\n" \
259 259 " b 3b\n" \
260 260 ".previous\n" \
261 261 EX_TABLE(1b, 4b) \
+1 -45
arch/powerpc/kernel/head_85xx.S
··· 305 305 * r12 is pointer to the pte
306 306 * r10 is the pshift from the PGD, if we're a hugepage
307 307 */
308 - #ifdef CONFIG_PTE_64BIT
309 308 #ifdef CONFIG_HUGETLB_PAGE
310 309 #define FIND_PTE \
311 310 rlwinm r12, r13, 14, 18, 28; /* Compute pgdir/pmd offset */ \
··· 328 329 rlwimi r12, r13, 23, 20, 28; /* Compute pte address */ \
329 330 lwz r11, 4(r12); /* Get pte entry */
330 331 #endif /* HUGEPAGE */
331 - #else /* !PTE_64BIT */
332 - #define FIND_PTE \
333 - rlwimi r11, r13, 12, 20, 29; /* Create L1 (pgdir/pmd) address */ \
334 - lwz r11, 0(r11); /* Get L1 entry */ \
335 - rlwinm. r12, r11, 0, 0, 19; /* Extract L2 (pte) base address */ \
336 - beq 2f; /* Bail if no table */ \
337 - rlwimi r12, r13, 22, 20, 29; /* Compute PTE address */ \
338 - lwz r11, 0(r12); /* Get Linux PTE */
339 - #endif
340 332
341 333 /*
342 334 * Interrupt vector entry code
··· 463 473 4:
464 474 FIND_PTE
465 475
466 - #ifdef CONFIG_PTE_64BIT
467 476 li r13,_PAGE_PRESENT|_PAGE_BAP_SR
468 477 oris r13,r13,_PAGE_ACCESSED@h
469 - #else
470 - li r13,_PAGE_PRESENT|_PAGE_READ|_PAGE_ACCESSED
471 - #endif
472 478 andc. r13,r13,r11 /* Check permission */
473 479
474 - #ifdef CONFIG_PTE_64BIT
475 480 #ifdef CONFIG_SMP
476 481 subf r13,r11,r12 /* create false data dep */
477 482 lwzx r13,r11,r13 /* Get upper pte bits */
478 483 #else
479 484 lwz r13,0(r12) /* Get upper pte bits */
480 - #endif
481 485 #endif
482 486
483 487 bne 2f /* Bail if permission/valid mismatch */
··· 536 552
537 553 FIND_PTE
538 554 /* Make up the required permissions for kernel code */
539 - #ifdef CONFIG_PTE_64BIT
540 555 li r13,_PAGE_PRESENT | _PAGE_BAP_SX
541 556 oris r13,r13,_PAGE_ACCESSED@h
542 - #else
543 - li r13,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
544 - #endif
545 557 b 4f
546 558
547 559 /* Get the PGD for the current thread */
··· 553 573
554 574 FIND_PTE
555 575 /* Make up the required permissions for user code */
556 - #ifdef CONFIG_PTE_64BIT
557 576 li r13,_PAGE_PRESENT | _PAGE_BAP_UX
558 577 oris r13,r13,_PAGE_ACCESSED@h
559 - #else
560 - li r13,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
561 - #endif
562 578
563 579 4:
564 580 andc. r13,r13,r11 /* Check permission */
565 581
566 - #ifdef CONFIG_PTE_64BIT
567 582 #ifdef CONFIG_SMP
568 583 subf r13,r11,r12 /* create false data dep */
569 584 lwzx r13,r11,r13 /* Get upper pte bits */
570 585 #else
571 586 lwz r13,0(r12) /* Get upper pte bits */
572 - #endif
573 587 #endif
574 588
575 589 bne 2f /* Bail if permission mismatch */
··· 657 683 * r10 - tsize encoding (if HUGETLB_PAGE) or available to use
658 684 * r11 - TLB (info from Linux PTE)
659 685 * r12 - available to use
660 - * r13 - upper bits of PTE (if PTE_64BIT) or available to use
686 + * r13 - upper bits of PTE
661 687 * CR5 - results of addr >= PAGE_OFFSET
662 688 * MAS0, MAS1 - loaded with proper value when we get here
663 689 * MAS2, MAS3 - will need additional info from Linux PTE
··· 725 751 * here we (properly should) assume have the appropriate value.
726 752 */
727 753 finish_tlb_load_cont:
728 - #ifdef CONFIG_PTE_64BIT
729 754 rlwinm r12, r11, 32-2, 26, 31 /* Move in perm bits */
730 755 andi. r10, r11, _PAGE_DIRTY
731 756 bne 1f
··· 737 764 srwi r10, r13, 12 /* grab RPN[12:31] */
738 765 mtspr SPRN_MAS7, r10
739 766 END_MMU_FTR_SECTION_IFSET(MMU_FTR_BIG_PHYS)
740 - #else
741 - li r10, (_PAGE_EXEC | _PAGE_READ)
742 - mr r13, r11
743 - rlwimi r10, r11, 31, 29, 29 /* extract _PAGE_DIRTY into SW */
744 - and r12, r11, r10
745 - mcrf cr0, cr5 /* Test for user page */
746 - slwi r10, r12, 1
747 - or r10, r10, r12
748 - rlwinm r10, r10, 0, ~_PAGE_EXEC /* Clear SX on user pages */
749 - isellt r12, r10, r12
750 - rlwimi r13, r12, 0, 20, 31 /* Get RPN from PTE, merge w/ perms */
751 - mtspr SPRN_MAS3, r13
752 - #endif
753 767
754 768 mfspr r12, SPRN_MAS2
755 - #ifdef CONFIG_PTE_64BIT
756 769 rlwimi r12, r11, 32-19, 27, 31 /* extract WIMGE from pte */
757 - #else
758 - rlwimi r12, r11, 26, 27, 31 /* extract WIMGE from pte */
759 - #endif
760 770 #ifdef CONFIG_HUGETLB_PAGE
761 771 beq 6, 3f /* don't mask if page isn't huge */
762 772 li r13, 1
+7
arch/powerpc/kernel/pci_of_scan.c
···
212 212 dev->error_state = pci_channel_io_normal;
213 213 dev->dma_mask = 0xffffffff;
214 214
215 + /*
216 + * Assume 64-bit addresses for MSI initially. Will be changed to 32-bit
217 + * if MSI (rather than MSI-X) capability does not have
218 + * PCI_MSI_FLAGS_64BIT. Can also be overridden by driver.
219 + */
220 + dev->msi_addr_mask = DMA_BIT_MASK(64);
221 +
215 222 /* Early fixups, before probing the BARs */
216 223 pci_fixup_device(pci_fixup_early, dev);
217 224
+2 -1
arch/powerpc/kernel/prom_init.c
···
2893 2893 for (node = 0; prom_next_node(&node); ) {
2894 2894 type[0] = '\0';
2895 2895 prom_getprop(node, "device_type", type, sizeof(type));
2896 - if (prom_strcmp(type, "escc") && prom_strcmp(type, "i2s"))
2896 + if (prom_strcmp(type, "escc") && prom_strcmp(type, "i2s") &&
2897 + prom_strcmp(type, "media-bay"))
2897 2898 continue;
2898 2899
2899 2900 if (prom_getproplen(node, "#size-cells") != PROM_ERROR)
+22 -4
arch/powerpc/kernel/trace/ftrace.c
···
37 37 if (addr >= (unsigned long)__exittext_begin && addr < (unsigned long)__exittext_end)
38 38 return 0;
39 39
40 - if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY) &&
41 - !IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {
42 - addr += MCOUNT_INSN_SIZE;
43 - if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS))
40 + if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY)) {
41 + if (!IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {
44 42 addr += MCOUNT_INSN_SIZE;
43 + if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS))
44 + addr += MCOUNT_INSN_SIZE;
45 + } else if (IS_ENABLED(CONFIG_CC_IS_CLANG) && IS_ENABLED(CONFIG_PPC64)) {
46 + /*
47 + * addr points to global entry point though the NOP was emitted at local
48 + * entry point due to https://github.com/llvm/llvm-project/issues/163706
49 + * Handle that here with ppc_function_entry() for kernel symbols while
50 + * adjusting module addresses in the else case, by looking for the below
51 + * module global entry point sequence:
52 + * ld r2, -8(r12)
53 + * add r2, r2, r12
54 + */
55 + if (is_kernel_text(addr) || is_kernel_inittext(addr))
56 + addr = ppc_function_entry((void *)addr);
57 + else if ((ppc_inst_val(ppc_inst_read((u32 *)addr)) ==
58 + PPC_RAW_LD(_R2, _R12, -8)) &&
59 + (ppc_inst_val(ppc_inst_read((u32 *)(addr+4))) ==
60 + PPC_RAW_ADD(_R2, _R2, _R12)))
61 + addr += 8;
62 + }
45 63 }
46 64
47 65 return addr;
+1
arch/powerpc/kernel/vmlinux.lds.S
···
397 397 _end = . ;
398 398
399 399 DWARF_DEBUG
400 + MODINFO
400 401 ELF_DETAILS
401 402
402 403 DISCARDS
+9 -8
arch/powerpc/kexec/core.c
···
23 23 #include <asm/firmware.h>
24 24
25 25 #define cpu_to_be_ulong __PASTE(cpu_to_be, BITS_PER_LONG)
26 + #define __be_word __PASTE(__be, BITS_PER_LONG)
26 27
27 28 #ifdef CONFIG_CRASH_DUMP
28 29 void machine_crash_shutdown(struct pt_regs *regs)
···
147 146 }
148 147
149 148 /* Values we need to export to the second kernel via the device tree. */
150 - static phys_addr_t crashk_base;
151 - static phys_addr_t crashk_size;
152 - static unsigned long long mem_limit;
149 + static __be_word crashk_base;
150 + static __be_word crashk_size;
151 + static __be_word mem_limit;
153 152
154 153 static struct property crashk_base_prop = {
155 154 .name = "linux,crashkernel-base",
156 - .length = sizeof(phys_addr_t),
155 + .length = sizeof(__be_word),
157 156 .value = &crashk_base
158 157 };
159 158
160 159 static struct property crashk_size_prop = {
161 160 .name = "linux,crashkernel-size",
162 - .length = sizeof(phys_addr_t),
161 + .length = sizeof(__be_word),
163 162 .value = &crashk_size,
164 163 };
165 164
166 165 static struct property memory_limit_prop = {
167 166 .name = "linux,memory-limit",
168 - .length = sizeof(unsigned long long),
167 + .length = sizeof(__be_word),
169 168 .value = &mem_limit,
170 169 };
171 170
···
194 193 }
195 194 #endif /* CONFIG_CRASH_RESERVE */
196 195
197 - static phys_addr_t kernel_end;
196 + static __be_word kernel_end;
198 197
199 198 static struct property kernel_end_prop = {
200 199 .name = "linux,kernel-end",
201 - .length = sizeof(phys_addr_t),
200 + .length = sizeof(__be_word),
202 201 .value = &kernel_end,
203 202 };
204 203
+13 -1
arch/powerpc/kexec/file_load_64.c
···
450 450 kbuf->buffer = headers;
451 451 kbuf->mem = KEXEC_BUF_MEM_UNKNOWN;
452 452 kbuf->bufsz = headers_sz;
453 +
454 + /*
455 + * Account for extra space required to accommodate additional memory
456 + * ranges in elfcorehdr due to memory hotplug events.
457 + */
453 458 kbuf->memsz = headers_sz + kdump_extra_elfcorehdr_size(cmem);
454 459 kbuf->top_down = false;
455 460
···
465 460 }
466 461
467 462 image->elf_load_addr = kbuf->mem;
468 - image->elf_headers_sz = headers_sz;
463 +
464 + /*
465 + * If CONFIG_CRASH_HOTPLUG is enabled, the elfcorehdr kexec segment
466 + * memsz can be larger than bufsz. Always initialize elf_headers_sz
467 + * with memsz. This ensures the correct size is reserved for elfcorehdr
468 + * memory in the FDT prepared for kdump.
469 + */
470 + image->elf_headers_sz = kbuf->memsz;
469 471 image->elf_headers = headers;
470 472 out:
471 473 kfree(cmem);
-5
arch/powerpc/net/bpf_jit.h
···
81 81
82 82 #ifdef CONFIG_PPC64
83 83
84 - /* for gpr non volatile registers BPG_REG_6 to 10 */
85 - #define BPF_PPC_STACK_SAVE (6 * 8)
86 -
87 84 /* If dummy pass (!image), account for maximum possible instructions */
88 85 #define PPC_LI64(d, i) do { \
89 86 if (!image) \
···
216 219 int bpf_add_extable_entry(struct bpf_prog *fp, u32 *image, u32 *fimage, int pass,
217 220 struct codegen_context *ctx, int insn_idx,
218 221 int jmp_off, int dst_reg, u32 code);
219 -
220 - int bpf_jit_stack_tailcallinfo_offset(struct codegen_context *ctx);
221 222 #endif
222 223
223 224 #endif
+56 -71
arch/powerpc/net/bpf_jit_comp.c
··· 450 450 451 451 bool bpf_jit_supports_kfunc_call(void) 452 452 { 453 - return true; 453 + return IS_ENABLED(CONFIG_PPC64); 454 454 } 455 455 456 456 bool bpf_jit_supports_arena(void) ··· 638 638 * for the traced function (BPF subprog/callee) to fetch it. 639 639 */ 640 640 static void bpf_trampoline_setup_tail_call_info(u32 *image, struct codegen_context *ctx, 641 - int func_frame_offset, 642 - int bpf_dummy_frame_size, int r4_off) 641 + int bpf_frame_size, int r4_off) 643 642 { 644 643 if (IS_ENABLED(CONFIG_PPC64)) { 645 - /* See Generated stack layout */ 646 - int tailcallinfo_offset = BPF_PPC_TAILCALL; 647 - 648 - /* 649 - * func_frame_offset = ...(1) 650 - * bpf_dummy_frame_size + trampoline_frame_size 651 - */ 652 - EMIT(PPC_RAW_LD(_R4, _R1, func_frame_offset)); 653 - EMIT(PPC_RAW_LD(_R3, _R4, -tailcallinfo_offset)); 644 + EMIT(PPC_RAW_LD(_R4, _R1, bpf_frame_size)); 645 + /* Refer to trampoline's Generated stack layout */ 646 + EMIT(PPC_RAW_LD(_R3, _R4, -BPF_PPC_TAILCALL)); 654 647 655 648 /* 656 649 * Setting the tail_call_info in trampoline's frame ··· 651 658 */ 652 659 EMIT(PPC_RAW_CMPLWI(_R3, MAX_TAIL_CALL_CNT)); 653 660 PPC_BCC_CONST_SHORT(COND_GT, 8); 654 - EMIT(PPC_RAW_ADDI(_R3, _R4, bpf_jit_stack_tailcallinfo_offset(ctx))); 661 + EMIT(PPC_RAW_ADDI(_R3, _R4, -BPF_PPC_TAILCALL)); 662 + 655 663 /* 656 - * From ...(1) above: 657 - * trampoline_frame_bottom = ...(2) 658 - * func_frame_offset - bpf_dummy_frame_size 659 - * 660 - * Using ...(2) derived above: 661 - * trampoline_tail_call_info_offset = ...(3) 662 - * trampoline_frame_bottom - tailcallinfo_offset 663 - * 664 - * From ...(3): 665 - * Use trampoline_tail_call_info_offset to write reference of main's 666 - * tail_call_info in trampoline frame. 664 + * Trampoline's tail_call_info is at the same offset, as that of 665 + * any bpf program, with reference to previous frame. Update the 666 + * address of main's tail_call_info in trampoline frame. 
667 667 */ 668 - EMIT(PPC_RAW_STL(_R3, _R1, (func_frame_offset - bpf_dummy_frame_size) 669 - - tailcallinfo_offset)); 668 + EMIT(PPC_RAW_STL(_R3, _R1, bpf_frame_size - BPF_PPC_TAILCALL)); 670 669 } else { 671 670 /* See bpf_jit_stack_offsetof() and BPF_PPC_TC */ 672 671 EMIT(PPC_RAW_LL(_R4, _R1, r4_off)); ··· 666 681 } 667 682 668 683 static void bpf_trampoline_restore_tail_call_cnt(u32 *image, struct codegen_context *ctx, 669 - int func_frame_offset, int r4_off) 684 + int bpf_frame_size, int r4_off) 670 685 { 671 686 if (IS_ENABLED(CONFIG_PPC32)) { 672 687 /* ··· 677 692 } 678 693 } 679 694 680 - static void bpf_trampoline_save_args(u32 *image, struct codegen_context *ctx, int func_frame_offset, 681 - int nr_regs, int regs_off) 695 + static void bpf_trampoline_save_args(u32 *image, struct codegen_context *ctx, 696 + int bpf_frame_size, int nr_regs, int regs_off) 682 697 { 683 698 int param_save_area_offset; 684 699 685 - param_save_area_offset = func_frame_offset; /* the two frames we alloted */ 700 + param_save_area_offset = bpf_frame_size; 686 701 param_save_area_offset += STACK_FRAME_MIN_SIZE; /* param save area is past frame header */ 687 702 688 703 for (int i = 0; i < nr_regs; i++) { ··· 705 720 706 721 /* Used when we call into the traced function. 
Replicate parameter save area */ 707 722 static void bpf_trampoline_restore_args_stack(u32 *image, struct codegen_context *ctx, 708 - int func_frame_offset, int nr_regs, int regs_off) 723 + int bpf_frame_size, int nr_regs, int regs_off) 709 724 { 710 725 int param_save_area_offset; 711 726 712 - param_save_area_offset = func_frame_offset; /* the two frames we alloted */ 727 + param_save_area_offset = bpf_frame_size; 713 728 param_save_area_offset += STACK_FRAME_MIN_SIZE; /* param save area is past frame header */ 714 729 715 730 for (int i = 8; i < nr_regs; i++) { ··· 726 741 void *func_addr) 727 742 { 728 743 int regs_off, nregs_off, ip_off, run_ctx_off, retval_off, nvr_off, alt_lr_off, r4_off = 0; 729 - int i, ret, nr_regs, bpf_frame_size = 0, bpf_dummy_frame_size = 0, func_frame_offset; 730 744 struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN]; 731 745 struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY]; 732 746 struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT]; 747 + int i, ret, nr_regs, retaddr_off, bpf_frame_size = 0; 733 748 struct codegen_context codegen_ctx, *ctx; 734 749 u32 *image = (u32 *)rw_image; 735 750 ppc_inst_t branch_insn; ··· 755 770 * Generated stack layout: 756 771 * 757 772 * func prev back chain [ back chain ] 758 - * [ ] 759 - * bpf prog redzone/tailcallcnt [ ... ] 64 bytes (64-bit powerpc) 760 - * [ ] -- 761 - * LR save area [ r0 save (64-bit) ] | header 762 - * [ r0 save (32-bit) ] | 763 - * dummy frame for unwind [ back chain 1 ] -- 764 773 * [ tail_call_info ] optional - 64-bit powerpc 765 774 * [ padding ] align stack frame 766 775 * r4_off [ r4 (tailcallcnt) ] optional - 32-bit powerpc 767 776 * alt_lr_off [ real lr (ool stub)] optional - actual lr 777 + * retaddr_off [ return address ] 768 778 * [ r26 ] 769 779 * nvr_off [ r25 ] nvr save area 770 780 * retval_off [ return value ] 771 781 * [ reg argN ] 772 782 * [ ... 
] 773 - * regs_off [ reg_arg1 ] prog ctx context 774 - * nregs_off [ args count ] 775 - * ip_off [ traced function ] 783 + * regs_off [ reg_arg1 ] prog_ctx 784 + * nregs_off [ args count ] ((u64 *)prog_ctx)[-1] 785 + * ip_off [ traced function ] ((u64 *)prog_ctx)[-2] 776 786 * [ ... ] 777 787 * run_ctx_off [ bpf_tramp_run_ctx ] 778 788 * [ reg argN ] ··· 823 843 nvr_off = bpf_frame_size; 824 844 bpf_frame_size += 2 * SZL; 825 845 846 + /* Save area for return address */ 847 + retaddr_off = bpf_frame_size; 848 + bpf_frame_size += SZL; 849 + 826 850 /* Optional save area for actual LR in case of ool ftrace */ 827 851 if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) { 828 852 alt_lr_off = bpf_frame_size; ··· 853 869 /* Padding to align stack frame, if any */ 854 870 bpf_frame_size = round_up(bpf_frame_size, SZL * 2); 855 871 856 - /* Dummy frame size for proper unwind - includes 64-bytes red zone for 64-bit powerpc */ 857 - bpf_dummy_frame_size = STACK_FRAME_MIN_SIZE + 64; 858 - 859 - /* Offset to the traced function's stack frame */ 860 - func_frame_offset = bpf_dummy_frame_size + bpf_frame_size; 861 - 862 - /* Create dummy frame for unwind, store original return value */ 872 + /* Store original return value */ 863 873 EMIT(PPC_RAW_STL(_R0, _R1, PPC_LR_STKOFF)); 864 - /* Protect red zone where tail call count goes */ 865 - EMIT(PPC_RAW_STLU(_R1, _R1, -bpf_dummy_frame_size)); 866 874 867 875 /* Create our stack frame */ 868 876 EMIT(PPC_RAW_STLU(_R1, _R1, -bpf_frame_size)); ··· 869 893 if (IS_ENABLED(CONFIG_PPC32) && nr_regs < 2) 870 894 EMIT(PPC_RAW_STL(_R4, _R1, r4_off)); 871 895 872 - bpf_trampoline_save_args(image, ctx, func_frame_offset, nr_regs, regs_off); 896 + bpf_trampoline_save_args(image, ctx, bpf_frame_size, nr_regs, regs_off); 873 897 874 - /* Save our return address */ 898 + /* Save our LR/return address */ 875 899 EMIT(PPC_RAW_MFLR(_R3)); 876 900 if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) 877 901 EMIT(PPC_RAW_STL(_R3, _R1, alt_lr_off)); 878 902 else 
879 - EMIT(PPC_RAW_STL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF)); 903 + EMIT(PPC_RAW_STL(_R3, _R1, retaddr_off)); 880 904 881 905 /* 882 - * Save ip address of the traced function. 883 - * We could recover this from LR, but we will need to address for OOL trampoline, 884 - * and optional GEP area. 906 + * Derive IP address of the traced function. 907 + * In case of CONFIG_PPC_FTRACE_OUT_OF_LINE or BPF program, LR points to the instruction 908 + * after the 'bl' instruction in the OOL stub. Refer to ftrace_init_ool_stub() and 909 + * bpf_arch_text_poke() for OOL stub of kernel functions and bpf programs respectively. 910 + * Relevant stub sequence: 911 + * 912 + * bl <tramp> 913 + * LR (R3) => mtlr r0 914 + * b <func_addr+4> 915 + * 916 + * Recover kernel function/bpf program address from the unconditional 917 + * branch instruction at the end of OOL stub. 885 918 */ 886 919 if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE) || flags & BPF_TRAMP_F_IP_ARG) { 887 920 EMIT(PPC_RAW_LWZ(_R4, _R3, 4)); 888 921 EMIT(PPC_RAW_SLWI(_R4, _R4, 6)); 889 922 EMIT(PPC_RAW_SRAWI(_R4, _R4, 6)); 890 923 EMIT(PPC_RAW_ADD(_R3, _R3, _R4)); 891 - EMIT(PPC_RAW_ADDI(_R3, _R3, 4)); 892 924 } 893 925 894 926 if (flags & BPF_TRAMP_F_IP_ARG) 895 927 EMIT(PPC_RAW_STL(_R3, _R1, ip_off)); 896 928 897 - if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) 898 - /* Fake our LR for unwind */ 899 - EMIT(PPC_RAW_STL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF)); 929 + if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) { 930 + /* Fake our LR for BPF_TRAMP_F_CALL_ORIG case */ 931 + EMIT(PPC_RAW_ADDI(_R3, _R3, 4)); 932 + EMIT(PPC_RAW_STL(_R3, _R1, retaddr_off)); 933 + } 900 934 901 935 /* Save function arg count -- see bpf_get_func_arg_cnt() */ 902 936 EMIT(PPC_RAW_LI(_R3, nr_regs)); ··· 944 958 /* Call the traced function */ 945 959 if (flags & BPF_TRAMP_F_CALL_ORIG) { 946 960 /* 947 - * The address in LR save area points to the correct point in the original function 961 + * retaddr on trampoline stack points to the 
correct point in the original function 948 962 * with both PPC_FTRACE_OUT_OF_LINE as well as with traditional ftrace instruction 949 963 * sequence 950 964 */ 951 - EMIT(PPC_RAW_LL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF)); 965 + EMIT(PPC_RAW_LL(_R3, _R1, retaddr_off)); 952 966 EMIT(PPC_RAW_MTCTR(_R3)); 953 967 954 968 /* Replicate tail_call_cnt before calling the original BPF prog */ 955 969 if (flags & BPF_TRAMP_F_TAIL_CALL_CTX) 956 - bpf_trampoline_setup_tail_call_info(image, ctx, func_frame_offset, 957 - bpf_dummy_frame_size, r4_off); 970 + bpf_trampoline_setup_tail_call_info(image, ctx, bpf_frame_size, r4_off); 958 971 959 972 /* Restore args */ 960 - bpf_trampoline_restore_args_stack(image, ctx, func_frame_offset, nr_regs, regs_off); 973 + bpf_trampoline_restore_args_stack(image, ctx, bpf_frame_size, nr_regs, regs_off); 961 974 962 975 /* Restore TOC for 64-bit */ 963 976 if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2) && !IS_ENABLED(CONFIG_PPC_KERNEL_PCREL)) ··· 970 985 971 986 /* Restore updated tail_call_cnt */ 972 987 if (flags & BPF_TRAMP_F_TAIL_CALL_CTX) 973 - bpf_trampoline_restore_tail_call_cnt(image, ctx, func_frame_offset, r4_off); 988 + bpf_trampoline_restore_tail_call_cnt(image, ctx, bpf_frame_size, r4_off); 974 989 975 990 /* Reserve space to patch branch instruction to skip fexit progs */ 976 991 if (ro_image) /* image is NULL for dummy pass */ ··· 1022 1037 EMIT(PPC_RAW_LD(_R2, _R1, 24)); 1023 1038 if (flags & BPF_TRAMP_F_SKIP_FRAME) { 1024 1039 /* Skip the traced function and return to parent */ 1025 - EMIT(PPC_RAW_ADDI(_R1, _R1, func_frame_offset)); 1040 + EMIT(PPC_RAW_ADDI(_R1, _R1, bpf_frame_size)); 1026 1041 EMIT(PPC_RAW_LL(_R0, _R1, PPC_LR_STKOFF)); 1027 1042 EMIT(PPC_RAW_MTLR(_R0)); 1028 1043 EMIT(PPC_RAW_BLR()); ··· 1030 1045 if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) { 1031 1046 EMIT(PPC_RAW_LL(_R0, _R1, alt_lr_off)); 1032 1047 EMIT(PPC_RAW_MTLR(_R0)); 1033 - EMIT(PPC_RAW_ADDI(_R1, _R1, func_frame_offset)); 1048 + EMIT(PPC_RAW_ADDI(_R1, 
_R1, bpf_frame_size)); 1034 1049 EMIT(PPC_RAW_LL(_R0, _R1, PPC_LR_STKOFF)); 1035 1050 EMIT(PPC_RAW_BLR()); 1036 1051 } else { 1037 - EMIT(PPC_RAW_LL(_R0, _R1, bpf_frame_size + PPC_LR_STKOFF)); 1052 + EMIT(PPC_RAW_LL(_R0, _R1, retaddr_off)); 1038 1053 EMIT(PPC_RAW_MTCTR(_R0)); 1039 - EMIT(PPC_RAW_ADDI(_R1, _R1, func_frame_offset)); 1054 + EMIT(PPC_RAW_ADDI(_R1, _R1, bpf_frame_size)); 1040 1055 EMIT(PPC_RAW_LL(_R0, _R1, PPC_LR_STKOFF)); 1041 1056 EMIT(PPC_RAW_MTLR(_R0)); 1042 1057 EMIT(PPC_RAW_BCTR());
+143 -38
arch/powerpc/net/bpf_jit_comp64.c
··· 32 32 * 33 33 * [ prev sp ] <------------- 34 34 * [ tail_call_info ] 8 | 35 - * [ nv gpr save area ] 6*8 + (12*8) | 35 + * [ nv gpr save area ] (6 * 8) | 36 + * [ addl. nv gpr save area] (12 * 8) | <--- exception boundary/callback program 36 37 * [ local_tmp_var ] 24 | 37 38 * fp (r31) --> [ ebpf stack space ] upto 512 | 38 39 * [ frame header ] 32/112 | 39 40 * sp (r1) ---> [ stack pointer ] -------------- 40 41 * 41 - * Additional (12*8) in 'nv gpr save area' only in case of 42 - * exception boundary. 42 + * Additional (12 * 8) in 'nv gpr save area' only in case of 43 + * exception boundary/callback. 43 44 */ 45 + 46 + /* BPF non-volatile registers save area size */ 47 + #define BPF_PPC_STACK_SAVE (6 * 8) 44 48 45 49 /* for bpf JIT code internal usage */ 46 50 #define BPF_PPC_STACK_LOCALS 24 ··· 52 48 * for additional non volatile registers(r14-r25) to be saved 53 49 * at exception boundary 54 50 */ 55 - #define BPF_PPC_EXC_STACK_SAVE (12*8) 51 + #define BPF_PPC_EXC_STACK_SAVE (12 * 8) 56 52 57 53 /* stack frame excluding BPF stack, ensure this is quadword aligned */ 58 54 #define BPF_PPC_STACKFRAME (STACK_FRAME_MIN_SIZE + \ ··· 129 125 * [ ... ] | 130 126 * sp (r1) ---> [ stack pointer ] -------------- 131 127 * [ tail_call_info ] 8 132 - * [ nv gpr save area ] 6*8 + (12*8) 128 + * [ nv gpr save area ] (6 * 8) 129 + * [ addl. nv gpr save area] (12 * 8) <--- exception boundary/callback program 133 130 * [ local_tmp_var ] 24 134 131 * [ unused red zone ] 224 135 132 * 136 - * Additional (12*8) in 'nv gpr save area' only in case of 137 - * exception boundary. 133 + * Additional (12 * 8) in 'nv gpr save area' only in case of 134 + * exception boundary/callback. 
138 135 */ 139 136 static int bpf_jit_stack_local(struct codegen_context *ctx) 140 137 { ··· 153 148 } 154 149 } 155 150 156 - int bpf_jit_stack_tailcallinfo_offset(struct codegen_context *ctx) 151 + static int bpf_jit_stack_tailcallinfo_offset(struct codegen_context *ctx) 157 152 { 158 153 return bpf_jit_stack_local(ctx) + BPF_PPC_STACK_LOCALS + BPF_PPC_STACK_SAVE; 159 154 } ··· 242 237 243 238 if (bpf_has_stack_frame(ctx) && !ctx->exception_cb) { 244 239 /* 245 - * exception_cb uses boundary frame after stack walk. 246 - * It can simply use redzone, this optimization reduces 247 - * stack walk loop by one level. 248 - * 249 240 * We need a stack frame, but we don't necessarily need to 250 241 * save/restore LR unless we call other functions 251 242 */ ··· 285 284 * program(main prog) as third arg 286 285 */ 287 286 EMIT(PPC_RAW_MR(_R1, _R5)); 287 + /* 288 + * Exception callback reuses the stack frame of exception boundary. 289 + * But BPF stack depth of exception callback and exception boundary 290 + * don't have to be same. If BPF stack depth is different, adjust the 291 + * stack frame size considering BPF stack depth of exception callback. 292 + * The non-volatile register save area remains unchanged. These non- 293 + * volatile registers are restored in exception callback's epilogue. 
294 + */ 295 + EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_1), _R5, 0)); 296 + EMIT(PPC_RAW_SUB(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_1), _R1)); 297 + EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2), 298 + -BPF_PPC_EXC_STACKFRAME)); 299 + EMIT(PPC_RAW_CMPLDI(bpf_to_ppc(TMP_REG_2), ctx->stack_size)); 300 + PPC_BCC_CONST_SHORT(COND_EQ, 12); 301 + EMIT(PPC_RAW_MR(_R1, bpf_to_ppc(TMP_REG_1))); 302 + EMIT(PPC_RAW_STDU(_R1, _R1, -(BPF_PPC_EXC_STACKFRAME + ctx->stack_size))); 288 303 } 289 304 290 305 /* ··· 499 482 return 0; 500 483 } 501 484 485 + static int zero_extend(u32 *image, struct codegen_context *ctx, u32 src_reg, u32 dst_reg, u32 size) 486 + { 487 + switch (size) { 488 + case 1: 489 + /* zero-extend 8 bits into 64 bits */ 490 + EMIT(PPC_RAW_RLDICL(dst_reg, src_reg, 0, 56)); 491 + return 0; 492 + case 2: 493 + /* zero-extend 16 bits into 64 bits */ 494 + EMIT(PPC_RAW_RLDICL(dst_reg, src_reg, 0, 48)); 495 + return 0; 496 + case 4: 497 + /* zero-extend 32 bits into 64 bits */ 498 + EMIT(PPC_RAW_RLDICL(dst_reg, src_reg, 0, 32)); 499 + fallthrough; 500 + case 8: 501 + /* Nothing to do */ 502 + return 0; 503 + default: 504 + return -1; 505 + } 506 + } 507 + 508 + static int sign_extend(u32 *image, struct codegen_context *ctx, u32 src_reg, u32 dst_reg, u32 size) 509 + { 510 + switch (size) { 511 + case 1: 512 + /* sign-extend 8 bits into 64 bits */ 513 + EMIT(PPC_RAW_EXTSB(dst_reg, src_reg)); 514 + return 0; 515 + case 2: 516 + /* sign-extend 16 bits into 64 bits */ 517 + EMIT(PPC_RAW_EXTSH(dst_reg, src_reg)); 518 + return 0; 519 + case 4: 520 + /* sign-extend 32 bits into 64 bits */ 521 + EMIT(PPC_RAW_EXTSW(dst_reg, src_reg)); 522 + fallthrough; 523 + case 8: 524 + /* Nothing to do */ 525 + return 0; 526 + default: 527 + return -1; 528 + } 529 + } 530 + 531 + /* 532 + * Handle powerpc ABI expectations from caller: 533 + * - Unsigned arguments are zero-extended. 534 + * - Signed arguments are sign-extended. 
535 + */ 536 + static int prepare_for_kfunc_call(const struct bpf_prog *fp, u32 *image, 537 + struct codegen_context *ctx, 538 + const struct bpf_insn *insn) 539 + { 540 + const struct btf_func_model *m = bpf_jit_find_kfunc_model(fp, insn); 541 + int i; 542 + 543 + if (!m) 544 + return -1; 545 + 546 + for (i = 0; i < m->nr_args; i++) { 547 + /* Note that BPF ABI only allows up to 5 args for kfuncs */ 548 + u32 reg = bpf_to_ppc(BPF_REG_1 + i), size = m->arg_size[i]; 549 + 550 + if (!(m->arg_flags[i] & BTF_FMODEL_SIGNED_ARG)) { 551 + if (zero_extend(image, ctx, reg, reg, size)) 552 + return -1; 553 + } else { 554 + if (sign_extend(image, ctx, reg, reg, size)) 555 + return -1; 556 + } 557 + } 558 + 559 + return 0; 560 + } 561 + 502 562 static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out) 503 563 { 504 564 /* ··· 616 522 617 523 /* 618 524 * tail_call_info++; <- Actual value of tcc here 525 + * Writeback this updated value only if tailcall succeeds. 619 526 */ 620 527 EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1), 1)); 528 + 529 + /* prog = array->ptrs[index]; */ 530 + EMIT(PPC_RAW_MULI(bpf_to_ppc(TMP_REG_2), b2p_index, 8)); 531 + EMIT(PPC_RAW_ADD(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2), b2p_bpf_array)); 532 + EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2), 533 + offsetof(struct bpf_array, ptrs))); 534 + 535 + /* 536 + * if (prog == NULL) 537 + * goto out; 538 + */ 539 + EMIT(PPC_RAW_CMPLDI(bpf_to_ppc(TMP_REG_2), 0)); 540 + PPC_BCC_SHORT(COND_EQ, out); 541 + 542 + /* goto *(prog->bpf_func + prologue_size); */ 543 + EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2), 544 + offsetof(struct bpf_prog, bpf_func))); 545 + EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2), 546 + FUNCTION_DESCR_SIZE + bpf_tailcall_prologue_size)); 547 + EMIT(PPC_RAW_MTCTR(bpf_to_ppc(TMP_REG_2))); 621 548 622 549 /* 623 550 * Before writing updated tail_call_info, distinguish if current frame ··· 653 538 
EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_2), _R1, bpf_jit_stack_tailcallinfo_offset(ctx))); 654 539 /* Writeback updated value to tail_call_info */ 655 540 EMIT(PPC_RAW_STD(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_2), 0)); 656 - 657 - /* prog = array->ptrs[index]; */ 658 - EMIT(PPC_RAW_MULI(bpf_to_ppc(TMP_REG_1), b2p_index, 8)); 659 - EMIT(PPC_RAW_ADD(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1), b2p_bpf_array)); 660 - EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1), offsetof(struct bpf_array, ptrs))); 661 - 662 - /* 663 - * if (prog == NULL) 664 - * goto out; 665 - */ 666 - EMIT(PPC_RAW_CMPLDI(bpf_to_ppc(TMP_REG_1), 0)); 667 - PPC_BCC_SHORT(COND_EQ, out); 668 - 669 - /* goto *(prog->bpf_func + prologue_size); */ 670 - EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1), offsetof(struct bpf_prog, bpf_func))); 671 - EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1), 672 - FUNCTION_DESCR_SIZE + bpf_tailcall_prologue_size)); 673 - EMIT(PPC_RAW_MTCTR(bpf_to_ppc(TMP_REG_1))); 674 541 675 542 /* tear down stack, restore NVRs, ... 
*/ 676 543 bpf_jit_emit_common_epilogue(image, ctx); ··· 1220 1123 /* special mov32 for zext */ 1221 1124 EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 0, 0, 31)); 1222 1125 break; 1223 - } else if (off == 8) { 1224 - EMIT(PPC_RAW_EXTSB(dst_reg, src_reg)); 1225 - } else if (off == 16) { 1226 - EMIT(PPC_RAW_EXTSH(dst_reg, src_reg)); 1227 - } else if (off == 32) { 1228 - EMIT(PPC_RAW_EXTSW(dst_reg, src_reg)); 1229 - } else if (dst_reg != src_reg) 1230 - EMIT(PPC_RAW_MR(dst_reg, src_reg)); 1126 + } 1127 + if (off == 0) { 1128 + /* MOV */ 1129 + if (dst_reg != src_reg) 1130 + EMIT(PPC_RAW_MR(dst_reg, src_reg)); 1131 + } else { 1132 + /* MOVSX: dst = (s8,s16,s32)src (off = 8,16,32) */ 1133 + if (sign_extend(image, ctx, src_reg, dst_reg, off / 8)) 1134 + return -1; 1135 + } 1231 1136 goto bpf_alu32_trunc; 1232 1137 case BPF_ALU | BPF_MOV | BPF_K: /* (u32) dst = imm */ 1233 1138 case BPF_ALU64 | BPF_MOV | BPF_K: /* dst = (s64) imm */ ··· 1696 1597 &func_addr, &func_addr_fixed); 1697 1598 if (ret < 0) 1698 1599 return ret; 1600 + 1601 + /* Take care of powerpc ABI requirements before kfunc call */ 1602 + if (insn[i].src_reg == BPF_PSEUDO_KFUNC_CALL) { 1603 + if (prepare_for_kfunc_call(fp, image, ctx, &insn[i])) 1604 + return -1; 1605 + } 1699 1606 1700 1607 ret = bpf_jit_emit_func_call_rel(image, fimage, ctx, func_addr); 1701 1608 if (ret)
+2 -2
arch/powerpc/platforms/83xx/km83xx.c
···
155 155
156 156 /* list of the supported boards */
157 157 static char *board[] __initdata = {
158 - "Keymile,KMETER1",
159 - "Keymile,kmpbec8321",
158 + "keymile,KMETER1",
159 + "keymile,kmpbec8321",
160 160 NULL
161 161 };
162 162
+2 -2
arch/powerpc/platforms/Kconfig.cputype
···
276 276 config PPC_E500
277 277 select FSL_EMB_PERFMON
278 278 bool
279 - select ARCH_SUPPORTS_HUGETLBFS if PHYS_64BIT || PPC64
279 + select ARCH_SUPPORTS_HUGETLBFS
280 280 select PPC_SMP_MUXED_IPI
281 281 select PPC_DOORBELL
282 282 select PPC_KUEP
···
337 337 config PTE_64BIT
338 338 bool
339 339 depends on 44x || PPC_E500 || PPC_86xx
340 - default y if PHYS_64BIT
340 + default y if PPC_E500 || PHYS_64BIT
341 341
342 342 config PHYS_64BIT
343 343 bool 'Large physical address support' if PPC_E500 || PPC_86xx
+1 -1
arch/powerpc/platforms/pseries/msi.c
···
605 605 &pseries_msi_irq_chip, pseries_dev);
606 606 }
607 607
608 - pseries_dev->msi_used++;
608 + pseries_dev->msi_used += nr_irqs;
609 609 return 0;
610 610
611 611 out:
+2 -2
arch/powerpc/tools/ftrace-gen-ool-stubs.sh
···
15 15 RELOCATION=R_PPC_ADDR32
16 16 fi
17 17
18 - num_ool_stubs_total=$($objdump -r -j __patchable_function_entries "$vmlinux_o" |
18 + num_ool_stubs_total=$($objdump -r -j __patchable_function_entries -d "$vmlinux_o" |
19 19 grep -c "$RELOCATION")
20 - num_ool_stubs_inittext=$($objdump -r -j __patchable_function_entries "$vmlinux_o" |
20 + num_ool_stubs_inittext=$($objdump -r -j __patchable_function_entries -d "$vmlinux_o" |
21 21 grep -e ".init.text" -e ".text.startup" | grep -c "$RELOCATION")
22 22 num_ool_stubs_text=$((num_ool_stubs_total - num_ool_stubs_inittext))
23 23
arch/powerpc/tools/gcc-check-fpatchable-function-entry.sh → arch/powerpc/tools/check-fpatchable-function-entry.sh
+1
arch/riscv/kernel/vmlinux.lds.S
···
170 170
171 171 STABS_DEBUG
172 172 DWARF_DEBUG
173 + MODINFO
173 174 ELF_DETAILS
174 175 .riscv.attributes 0 : { *(.riscv.attributes) }
175 176
+1 -1
arch/s390/include/asm/processor.h
···
159 159 " j 4f\n"
160 160 "3: mvc 8(1,%[addr]),0(%[addr])\n"
161 161 "4:"
162 - : [addr] "+&a" (erase_low), [count] "+&d" (count), [tmp] "=&a" (tmp)
162 + : [addr] "+&a" (erase_low), [count] "+&a" (count), [tmp] "=&a" (tmp)
163 163 : [poison] "d" (poison)
164 164 : "memory", "cc"
165 165 );
+1
arch/s390/kernel/vmlinux.lds.S
··· 221 221 /* Debugging sections. */ 222 222 STABS_DEBUG 223 223 DWARF_DEBUG 224 + MODINFO 224 225 ELF_DETAILS 225 226 226 227 /*
+5 -6
arch/s390/lib/xor.c
··· 28 28 " j 3f\n" 29 29 "2: xc 0(1,%1),0(%2)\n" 30 30 "3:" 31 - : : "d" (bytes), "a" (p1), "a" (p2) 32 - : "0", "cc", "memory"); 31 + : "+a" (bytes), "+a" (p1), "+a" (p2) 32 + : : "0", "cc", "memory"); 33 33 } 34 34 35 35 static void xor_xc_3(unsigned long bytes, unsigned long * __restrict p1, ··· 54 54 "2: xc 0(1,%1),0(%2)\n" 55 55 "3: xc 0(1,%1),0(%3)\n" 56 56 "4:" 57 - : "+d" (bytes), "+a" (p1), "+a" (p2), "+a" (p3) 57 + : "+a" (bytes), "+a" (p1), "+a" (p2), "+a" (p3) 58 58 : : "0", "cc", "memory"); 59 59 } 60 60 ··· 85 85 "3: xc 0(1,%1),0(%3)\n" 86 86 "4: xc 0(1,%1),0(%4)\n" 87 87 "5:" 88 - : "+d" (bytes), "+a" (p1), "+a" (p2), "+a" (p3), "+a" (p4) 88 + : "+a" (bytes), "+a" (p1), "+a" (p2), "+a" (p3), "+a" (p4) 89 89 : : "0", "cc", "memory"); 90 90 } 91 91 ··· 96 96 const unsigned long * __restrict p5) 97 97 { 98 98 asm volatile( 99 - " larl 1,2f\n" 100 99 " aghi %0,-1\n" 101 100 " jm 6f\n" 102 101 " srlg 0,%0,8\n" ··· 121 122 "4: xc 0(1,%1),0(%4)\n" 122 123 "5: xc 0(1,%1),0(%5)\n" 123 124 "6:" 124 - : "+d" (bytes), "+a" (p1), "+a" (p2), "+a" (p3), "+a" (p4), 125 + : "+a" (bytes), "+a" (p1), "+a" (p2), "+a" (p3), "+a" (p4), 125 126 "+a" (p5) 126 127 : : "0", "cc", "memory"); 127 128 }
+1
arch/sh/kernel/vmlinux.lds.S
··· 89 89 90 90 STABS_DEBUG 91 91 DWARF_DEBUG 92 + MODINFO 92 93 ELF_DETAILS 93 94 94 95 DISCARDS
+7
arch/sparc/kernel/pci.c
··· 355 355 dev->error_state = pci_channel_io_normal; 356 356 dev->dma_mask = 0xffffffff; 357 357 358 + /* 359 + * Assume 64-bit addresses for MSI initially. Will be changed to 32-bit 360 + * if MSI (rather than MSI-X) capability does not have 361 + * PCI_MSI_FLAGS_64BIT. Can also be overridden by driver. 362 + */ 363 + dev->msi_addr_mask = DMA_BIT_MASK(64); 364 + 358 365 if (of_node_name_eq(node, "pci")) { 359 366 /* a PCI-PCI bridge */ 360 367 dev->hdr_type = PCI_HEADER_TYPE_BRIDGE;
+1
arch/sparc/kernel/vmlinux.lds.S
··· 191 191 192 192 STABS_DEBUG 193 193 DWARF_DEBUG 194 + MODINFO 194 195 ELF_DETAILS 195 196 196 197 DISCARDS
+1
arch/um/kernel/dyn.lds.S
··· 172 172 173 173 STABS_DEBUG 174 174 DWARF_DEBUG 175 + MODINFO 175 176 ELF_DETAILS 176 177 177 178 DISCARDS
+1
arch/um/kernel/uml.lds.S
··· 113 113 114 114 STABS_DEBUG 115 115 DWARF_DEBUG 116 + MODINFO 116 117 ELF_DETAILS 117 118 118 119 DISCARDS
+1
arch/x86/boot/compressed/Makefile
··· 113 113 114 114 ifdef CONFIG_EFI_SBAT 115 115 $(obj)/sbat.o: $(CONFIG_EFI_SBAT_FILE) 116 + AFLAGS_sbat.o += -I $(srctree) 116 117 endif 117 118 118 119 $(obj)/vmlinux: $(vmlinux-objs-y) $(vmlinux-libs-y) FORCE
+5 -4
arch/x86/boot/compressed/sev.c
··· 28 28 #include "sev.h" 29 29 30 30 static struct ghcb boot_ghcb_page __aligned(PAGE_SIZE); 31 - struct ghcb *boot_ghcb; 31 + struct ghcb *boot_ghcb __section(".data"); 32 32 33 33 #undef __init 34 34 #define __init 35 35 36 36 #define __BOOT_COMPRESSED 37 37 38 - u8 snp_vmpl; 39 - u16 ghcb_version; 38 + u8 snp_vmpl __section(".data"); 39 + u16 ghcb_version __section(".data"); 40 40 41 - u64 boot_svsm_caa_pa; 41 + u64 boot_svsm_caa_pa __section(".data"); 42 42 43 43 /* Include code for early handlers */ 44 44 #include "../../boot/startup/sev-shared.c" ··· 188 188 MSR_AMD64_SNP_RESERVED_BIT13 | \ 189 189 MSR_AMD64_SNP_RESERVED_BIT15 | \ 190 190 MSR_AMD64_SNP_SECURE_AVIC | \ 191 + MSR_AMD64_SNP_RESERVED_BITS19_22 | \ 191 192 MSR_AMD64_SNP_RESERVED_MASK) 192 193 193 194 #ifdef CONFIG_AMD_SECURE_AVIC
+1 -1
arch/x86/boot/compressed/vmlinux.lds.S
··· 88 88 /DISCARD/ : { 89 89 *(.dynamic) *(.dynsym) *(.dynstr) *(.dynbss) 90 90 *(.hash) *(.gnu.hash) 91 - *(.note.*) 91 + *(.note.*) *(.modinfo) 92 92 } 93 93 94 94 .got.plt (INFO) : {
+1 -1
arch/x86/boot/startup/sev-shared.c
··· 31 31 static u32 cpuid_hyp_range_max __ro_after_init; 32 32 static u32 cpuid_ext_range_max __ro_after_init; 33 33 34 - bool sev_snp_needs_sfw; 34 + bool sev_snp_needs_sfw __section(".data"); 35 35 36 36 void __noreturn 37 37 sev_es_terminate(unsigned int set, unsigned int reason)
+1
arch/x86/coco/sev/core.c
··· 89 89 [MSR_AMD64_SNP_VMSA_REG_PROT_BIT] = "VMSARegProt", 90 90 [MSR_AMD64_SNP_SMT_PROT_BIT] = "SMTProt", 91 91 [MSR_AMD64_SNP_SECURE_AVIC_BIT] = "SecureAVIC", 92 + [MSR_AMD64_SNP_IBPB_ON_ENTRY_BIT] = "IBPBOnEntry", 92 93 }; 93 94 94 95 /*
+30
arch/x86/entry/vdso/vdso32/sigreturn.S
··· 35 35 #endif
36 36 .endm
37 37
38 + /*
39 + * WARNING:
40 + *
41 + * A bug in the libgcc unwinder as of at least gcc 15.2 (2026) means that
42 + * the unwinder fails to recognize the signal frame flag.
43 + *
44 + * There is a hacky legacy fallback path in libgcc which ends up
45 + * getting invoked instead. It happens to work as long as BOTH of the
46 + * following conditions are true:
47 + *
48 + * 1. There is at least one byte before each of the sigreturn
49 + * functions which falls outside any function. This is enforced by
50 + * an explicit nop instruction before the ALIGN.
51 + * 2. The code sequences from the entry point up to and including
52 + * the int $0x80 below need to match EXACTLY. Do not change them
53 + * in any way. The exact byte sequences are:
54 + *
55 + * __kernel_sigreturn:
56 + * 0: 58 pop %eax
57 + * 1: b8 77 00 00 00 mov $0x77,%eax
58 + * 6: cd 80 int $0x80
59 + *
60 + * __kernel_rt_sigreturn:
61 + * 0: b8 ad 00 00 00 mov $0xad,%eax
62 + * 5: cd 80 int $0x80
63 + *
64 + * For details, see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=124050
65 + */
38 66 .text
39 67 .globl __kernel_sigreturn
40 68 .type __kernel_sigreturn,@function
69 + nop /* libgcc hack: see comment above */
41 70 ALIGN
42 71 __kernel_sigreturn:
43 72 STARTPROC_SIGNAL_FRAME IA32_SIGFRAME_sigcontext
··· 81 52
82 53 .globl __kernel_rt_sigreturn
83 54 .type __kernel_rt_sigreturn,@function
55 + nop /* libgcc hack: see comment above */
84 56 ALIGN
85 57 __kernel_rt_sigreturn:
86 58 STARTPROC_SIGNAL_FRAME IA32_RT_SIGFRAME_sigcontext
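The byte sequences quoted in the comment above can be cross-checked from the i386 encodings they are assembled from (`pop %eax` is `58`, `mov $imm32,%eax` is `b8` plus a little-endian immediate, `int $0x80` is `cd 80`); `0x77` and `0xad` are the i386 `__NR_sigreturn`/`__NR_rt_sigreturn` syscall numbers. A small Python sketch, not kernel code:

```python
def mov_eax_imm32(imm):
    # b8 <imm32, little-endian>: mov $imm,%eax
    return bytes([0xb8]) + imm.to_bytes(4, "little")

POP_EAX = bytes([0x58])
INT_0X80 = bytes([0xcd, 0x80])

NR_SIGRETURN = 0x77       # i386 __NR_sigreturn
NR_RT_SIGRETURN = 0xad    # i386 __NR_rt_sigreturn

kernel_sigreturn = POP_EAX + mov_eax_imm32(NR_SIGRETURN) + INT_0X80
kernel_rt_sigreturn = mov_eax_imm32(NR_RT_SIGRETURN) + INT_0X80

# Matches the disassembly in the comment byte for byte.
assert kernel_sigreturn.hex() == "58b877000000cd80"
assert kernel_rt_sigreturn.hex() == "b8ad000000cd80"
```

This is also why the sequences must match EXACTLY: the fallback matcher in libgcc compares raw bytes at the return address, not symbols.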
+1 -1
arch/x86/include/asm/efi.h
··· 138 138 extern int __init efi_reuse_config(u64 tables, int nr_tables); 139 139 extern void efi_delete_dummy_variable(void); 140 140 extern void efi_crash_gracefully_on_page_fault(unsigned long phys_addr); 141 - extern void efi_free_boot_services(void); 141 + extern void efi_unmap_boot_services(void); 142 142 143 143 void arch_efi_call_virt_setup(void); 144 144 void arch_efi_call_virt_teardown(void);
+4 -1
arch/x86/include/asm/msr-index.h
··· 740 740 #define MSR_AMD64_SNP_SMT_PROT BIT_ULL(MSR_AMD64_SNP_SMT_PROT_BIT) 741 741 #define MSR_AMD64_SNP_SECURE_AVIC_BIT 18 742 742 #define MSR_AMD64_SNP_SECURE_AVIC BIT_ULL(MSR_AMD64_SNP_SECURE_AVIC_BIT) 743 - #define MSR_AMD64_SNP_RESV_BIT 19 743 + #define MSR_AMD64_SNP_RESERVED_BITS19_22 GENMASK_ULL(22, 19) 744 + #define MSR_AMD64_SNP_IBPB_ON_ENTRY_BIT 23 745 + #define MSR_AMD64_SNP_IBPB_ON_ENTRY BIT_ULL(MSR_AMD64_SNP_IBPB_ON_ENTRY_BIT) 746 + #define MSR_AMD64_SNP_RESV_BIT 24 744 747 #define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, MSR_AMD64_SNP_RESV_BIT) 745 748 #define MSR_AMD64_SAVIC_CONTROL 0xc0010138 746 749 #define MSR_AMD64_SAVIC_EN_BIT 0
+6
arch/x86/include/asm/numa.h
··· 22 22 */ 23 23 extern s16 __apicid_to_node[MAX_LOCAL_APIC]; 24 24 extern nodemask_t numa_nodes_parsed __initdata; 25 + extern nodemask_t numa_phys_nodes_parsed __initdata; 25 26 26 27 static inline void set_apicid_to_node(int apicid, s16 node) 27 28 { ··· 49 48 extern void numa_add_cpu(unsigned int cpu); 50 49 extern void numa_remove_cpu(unsigned int cpu); 51 50 extern void init_gi_nodes(void); 51 + extern int num_phys_nodes(void); 52 52 #else /* CONFIG_NUMA */ 53 53 static inline void numa_set_node(int cpu, int node) { } 54 54 static inline void numa_clear_node(int cpu) { } ··· 57 55 static inline void numa_add_cpu(unsigned int cpu) { } 58 56 static inline void numa_remove_cpu(unsigned int cpu) { } 59 57 static inline void init_gi_nodes(void) { } 58 + static inline int num_phys_nodes(void) 59 + { 60 + return 1; 61 + } 60 62 #endif /* CONFIG_NUMA */ 61 63 62 64 #ifdef CONFIG_DEBUG_PER_CPU_MAPS
-2
arch/x86/include/asm/pgtable_64.h
··· 19 19 extern p4d_t level4_kernel_pgt[512]; 20 20 extern p4d_t level4_ident_pgt[512]; 21 21 extern pud_t level3_kernel_pgt[512]; 22 - extern pud_t level3_ident_pgt[512]; 23 22 extern pmd_t level2_kernel_pgt[512]; 24 23 extern pmd_t level2_fixmap_pgt[512]; 25 - extern pmd_t level2_ident_pgt[512]; 26 24 extern pte_t level1_fixmap_pgt[512 * FIXMAP_PMD_NUM]; 27 25 extern pgd_t init_top_pgt[]; 28 26
+6
arch/x86/include/asm/topology.h
··· 155 155 extern unsigned int __max_threads_per_core; 156 156 extern unsigned int __num_threads_per_package; 157 157 extern unsigned int __num_cores_per_package; 158 + extern unsigned int __num_nodes_per_package; 158 159 159 160 const char *get_topology_cpu_type_name(struct cpuinfo_x86 *c); 160 161 enum x86_topology_cpu_type get_topology_cpu_type(struct cpuinfo_x86 *c); ··· 178 177 static inline unsigned int topology_num_threads_per_package(void) 179 178 { 180 179 return __num_threads_per_package; 180 + } 181 + 182 + static inline unsigned int topology_num_nodes_per_package(void) 183 + { 184 + return __num_nodes_per_package; 181 185 } 182 186 183 187 #ifdef CONFIG_X86_LOCAL_APIC
+3
arch/x86/kernel/cpu/common.c
··· 95 95 unsigned int __max_logical_packages __ro_after_init = 1; 96 96 EXPORT_SYMBOL(__max_logical_packages); 97 97 98 + unsigned int __num_nodes_per_package __ro_after_init = 1; 99 + EXPORT_SYMBOL(__num_nodes_per_package); 100 + 98 101 unsigned int __num_cores_per_package __ro_after_init = 1; 99 102 EXPORT_SYMBOL(__num_cores_per_package); 100 103
+5 -31
arch/x86/kernel/cpu/resctrl/monitor.c
··· 364 364 msr_clear_bit(MSR_RMID_SNC_CONFIG, 0); 365 365 } 366 366 367 - /* CPU models that support MSR_RMID_SNC_CONFIG */ 367 + /* CPU models that support SNC and MSR_RMID_SNC_CONFIG */ 368 368 static const struct x86_cpu_id snc_cpu_ids[] __initconst = { 369 369 X86_MATCH_VFM(INTEL_ICELAKE_X, 0), 370 370 X86_MATCH_VFM(INTEL_SAPPHIRERAPIDS_X, 0), ··· 375 375 {} 376 376 }; 377 377 378 - /* 379 - * There isn't a simple hardware bit that indicates whether a CPU is running 380 - * in Sub-NUMA Cluster (SNC) mode. Infer the state by comparing the 381 - * number of CPUs sharing the L3 cache with CPU0 to the number of CPUs in 382 - * the same NUMA node as CPU0. 383 - * It is not possible to accurately determine SNC state if the system is 384 - * booted with a maxcpus=N parameter. That distorts the ratio of SNC nodes 385 - * to L3 caches. It will be OK if system is booted with hyperthreading 386 - * disabled (since this doesn't affect the ratio). 387 - */ 388 378 static __init int snc_get_config(void) 389 379 { 390 - struct cacheinfo *ci = get_cpu_cacheinfo_level(0, RESCTRL_L3_CACHE); 391 - const cpumask_t *node0_cpumask; 392 - int cpus_per_node, cpus_per_l3; 393 - int ret; 380 + int ret = topology_num_nodes_per_package(); 394 381 395 - if (!x86_match_cpu(snc_cpu_ids) || !ci) 382 + if (ret > 1 && !x86_match_cpu(snc_cpu_ids)) { 383 + pr_warn("CoD enabled system? Resctrl not supported\n"); 396 384 return 1; 397 - 398 - cpus_read_lock(); 399 - if (num_online_cpus() != num_present_cpus()) 400 - pr_warn("Some CPUs offline, SNC detection may be incorrect\n"); 401 - cpus_read_unlock(); 402 - 403 - node0_cpumask = cpumask_of_node(cpu_to_node(0)); 404 - 405 - cpus_per_node = cpumask_weight(node0_cpumask); 406 - cpus_per_l3 = cpumask_weight(&ci->shared_cpu_map); 407 - 408 - if (!cpus_per_node || !cpus_per_l3) 409 - return 1; 410 - 411 - ret = cpus_per_l3 / cpus_per_node; 385 + } 412 386 413 387 /* sanity check: Only valid results are 1, 2, 3, 4, 6 */ 414 388 switch (ret) {
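The heuristic being removed here inferred the SNC node count from the ratio of CPUs sharing an L3 to CPUs in a NUMA node, which is exactly what `maxcpus=N` could distort. An illustrative Python sketch of that old inference (the CPU counts are made up for the example):

```python
VALID_SNC_RATIOS = {1, 2, 3, 4, 6}   # only sane SNC configurations

def snc_nodes_per_l3(cpus_per_l3, cpus_per_node):
    # Old heuristic: infer SNC mode from CPU-count ratios.
    if not cpus_per_l3 or not cpus_per_node:
        return 1
    ratio = cpus_per_l3 // cpus_per_node
    return ratio if ratio in VALID_SNC_RATIOS else 1

# All CPUs online: 96 CPUs share the L3, 48 per SNC node -> SNC-2.
assert snc_nodes_per_l3(96, 48) == 2
# maxcpus= leaves only 24 CPUs of node 0 online: the ratio is
# distorted to 4, which happens to look "valid" -- a wrong answer.
assert snc_nodes_per_l3(96, 24) == 4
```

Reading `topology_num_nodes_per_package()` instead removes the dependency on which CPUs happen to be online.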
+11 -2
arch/x86/kernel/cpu/topology.c
··· 31 31 #include <asm/mpspec.h> 32 32 #include <asm/msr.h> 33 33 #include <asm/smp.h> 34 + #include <asm/numa.h> 34 35 35 36 #include "cpu.h" 36 37 ··· 493 492 set_nr_cpu_ids(allowed); 494 493 495 494 cnta = domain_weight(TOPO_PKG_DOMAIN); 496 - cntb = domain_weight(TOPO_DIE_DOMAIN); 497 495 __max_logical_packages = cnta; 496 + 497 + pr_info("Max. logical packages: %3u\n", __max_logical_packages); 498 + 499 + cntb = num_phys_nodes(); 500 + __num_nodes_per_package = DIV_ROUND_UP(cntb, cnta); 501 + 502 + pr_info("Max. logical nodes: %3u\n", cntb); 503 + pr_info("Num. nodes per package:%3u\n", __num_nodes_per_package); 504 + 505 + cntb = domain_weight(TOPO_DIE_DOMAIN); 498 506 __max_dies_per_package = 1U << (get_count_order(cntb) - get_count_order(cnta)); 499 507 500 - pr_info("Max. logical packages: %3u\n", cnta); 501 508 pr_info("Max. logical dies: %3u\n", cntb); 502 509 pr_info("Max. dies per package: %3u\n", __max_dies_per_package); 503 510
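`__num_nodes_per_package` above is derived with `DIV_ROUND_UP()` from the physical-node and package counts; a quick Python rendering of that arithmetic (node and package counts are hypothetical):

```python
def div_round_up(n, d):
    # Same result as the kernel's DIV_ROUND_UP(n, d) for positive ints.
    return (n + d - 1) // d

# 6 physical NUMA nodes over 2 packages -> SNC-3.
assert div_round_up(6, 2) == 3
# One node per package stays 1 even on a 2-package box.
assert div_round_up(2, 2) == 1
```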
-28
arch/x86/kernel/head_64.S
··· 616 616 617 617 .data 618 618 619 - #if defined(CONFIG_XEN_PV) || defined(CONFIG_PVH) 620 - SYM_DATA_START_PTI_ALIGNED(init_top_pgt) 621 - .quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC 622 - .org init_top_pgt + L4_PAGE_OFFSET*8, 0 623 - .quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC 624 - .org init_top_pgt + L4_START_KERNEL*8, 0 625 - /* (2^48-(2*1024*1024*1024))/(2^39) = 511 */ 626 - .quad level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC 627 - .fill PTI_USER_PGD_FILL,8,0 628 - SYM_DATA_END(init_top_pgt) 629 - 630 - SYM_DATA_START_PAGE_ALIGNED(level3_ident_pgt) 631 - .quad level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC 632 - .fill 511, 8, 0 633 - SYM_DATA_END(level3_ident_pgt) 634 - SYM_DATA_START_PAGE_ALIGNED(level2_ident_pgt) 635 - /* 636 - * Since I easily can, map the first 1G. 637 - * Don't set NX because code runs from these pages. 638 - * 639 - * Note: This sets _PAGE_GLOBAL despite whether 640 - * the CPU supports it or it is enabled. But, 641 - * the CPU should ignore the bit. 642 - */ 643 - PMDS(0, __PAGE_KERNEL_IDENT_LARGE_EXEC, PTRS_PER_PMD) 644 - SYM_DATA_END(level2_ident_pgt) 645 - #else 646 619 SYM_DATA_START_PTI_ALIGNED(init_top_pgt) 647 620 .fill 512,8,0 648 621 .fill PTI_USER_PGD_FILL,8,0 649 622 SYM_DATA_END(init_top_pgt) 650 - #endif 651 623 652 624 SYM_DATA_START_PAGE_ALIGNED(level4_kernel_pgt) 653 625 .fill 511,8,0
+144 -55
arch/x86/kernel/smpboot.c
··· 468 468 }
469 469 #endif
470 470
471 - /*
472 - * Set if a package/die has multiple NUMA nodes inside.
473 - * AMD Magny-Cours, Intel Cluster-on-Die, and Intel
474 - * Sub-NUMA Clustering have this.
475 - */
476 - static bool x86_has_numa_in_package;
477 -
478 471 static struct sched_domain_topology_level x86_topology[] = {
479 472 SDTL_INIT(tl_smt_mask, cpu_smt_flags, SMT),
480 473 #ifdef CONFIG_SCHED_CLUSTER
··· 489 496 * PKG domain since the NUMA domains will auto-magically create the
490 497 * right spanning domains based on the SLIT.
491 498 */
492 - if (x86_has_numa_in_package) {
499 + if (topology_num_nodes_per_package() > 1) {
493 500 unsigned int pkgdom = ARRAY_SIZE(x86_topology) - 2;
494 501
495 502 memset(&x86_topology[pkgdom], 0, sizeof(x86_topology[pkgdom]));
··· 506 513 }
507 514
508 515 #ifdef CONFIG_NUMA
509 - static int sched_avg_remote_distance;
510 - static int avg_remote_numa_distance(void)
516 + /*
517 + * Test if the on-trace cluster at (N,N) is symmetric.
518 + * Uses upper triangle iteration to avoid obvious duplicates.
519 + */
520 + static bool slit_cluster_symmetric(int N)
511 521 {
512 - int i, j;
513 - int distance, nr_remote, total_distance;
522 + int u = topology_num_nodes_per_package();
514 523
515 - if (sched_avg_remote_distance > 0)
516 - return sched_avg_remote_distance;
517 -
518 - nr_remote = 0;
519 - total_distance = 0;
520 - for_each_node_state(i, N_CPU) {
521 - for_each_node_state(j, N_CPU) {
522 - distance = node_distance(i, j);
523 -
524 - if (distance >= REMOTE_DISTANCE) {
525 - nr_remote++;
526 - total_distance += distance;
527 - }
524 + for (int k = 0; k < u; k++) {
525 + for (int l = k; l < u; l++) {
526 + if (node_distance(N + k, N + l) !=
527 + node_distance(N + l, N + k))
528 + return false;
528 529 }
529 530 }
530 - if (nr_remote)
531 - sched_avg_remote_distance = total_distance / nr_remote;
532 - else
533 - sched_avg_remote_distance = REMOTE_DISTANCE;
534 531
535 - return sched_avg_remote_distance;
532 + return true;
533 + }
534 +
535 + /*
536 + * Return the package-id of the cluster, or ~0 if indeterminate.
537 + * Each node in the on-trace cluster should have the same package-id.
538 + */
539 + static u32 slit_cluster_package(int N)
540 + {
541 + int u = topology_num_nodes_per_package();
542 + u32 pkg_id = ~0;
543 +
544 + for (int n = 0; n < u; n++) {
545 + const struct cpumask *cpus = cpumask_of_node(N + n);
546 + int cpu;
547 +
548 + for_each_cpu(cpu, cpus) {
549 + u32 id = topology_logical_package_id(cpu);
550 +
551 + if (pkg_id == ~0)
552 + pkg_id = id;
553 + if (pkg_id != id)
554 + return ~0;
555 + }
556 + }
557 +
558 + return pkg_id;
559 + }
560 +
561 + /*
562 + * Validate the SLIT table is of the form expected for SNC, specifically:
563 + *
564 + * - each on-trace cluster should be symmetric,
565 + * - each on-trace cluster should have a unique package-id.
566 + *
567 + * If you NUMA_EMU on top of SNC, you get to keep the pieces.
568 + */
569 + static bool slit_validate(void)
570 + {
571 + int u = topology_num_nodes_per_package();
572 + u32 pkg_id, prev_pkg_id = ~0;
573 +
574 + for (int pkg = 0; pkg < topology_max_packages(); pkg++) {
575 + int n = pkg * u;
576 +
577 + /*
578 + * Ensure the on-trace cluster is symmetric and each cluster
579 + * has a different package id.
580 + */
581 + if (!slit_cluster_symmetric(n))
582 + return false;
583 + pkg_id = slit_cluster_package(n);
584 + if (pkg_id == ~0)
585 + return false;
586 + if (pkg && pkg_id == prev_pkg_id)
587 + return false;
588 +
589 + prev_pkg_id = pkg_id;
590 + }
591 +
592 + return true;
593 + }
594 +
595 + /*
596 + * Compute a sanitized SLIT table for SNC; notably SNC-3 can end up with
597 + * asymmetric off-trace clusters, reflecting physical asymmetries. However
598 + * this leads to 'unfortunate' sched_domain configurations.
599 + *
600 + * For example dual socket GNR with SNC-3:
601 + *
602 + * node distances:
603 + * node 0 1 2 3 4 5
604 + * 0: 10 15 17 21 28 26
605 + * 1: 15 10 15 23 26 23
606 + * 2: 17 15 10 26 23 21
607 + * 3: 21 28 26 10 15 17
608 + * 4: 23 26 23 15 10 15
609 + * 5: 26 23 21 17 15 10
610 + *
611 + * Fix things up by averaging out the off-trace clusters; resulting in:
612 + *
613 + * node 0 1 2 3 4 5
614 + * 0: 10 15 17 24 24 24
615 + * 1: 15 10 15 24 24 24
616 + * 2: 17 15 10 24 24 24
617 + * 3: 24 24 24 10 15 17
618 + * 4: 24 24 24 15 10 15
619 + * 5: 24 24 24 17 15 10
620 + */
621 + static int slit_cluster_distance(int i, int j)
622 + {
623 + static int slit_valid = -1;
624 + int u = topology_num_nodes_per_package();
625 + long d = 0;
626 + int x, y;
627 +
628 + if (slit_valid < 0) {
629 + slit_valid = slit_validate();
630 + if (!slit_valid)
631 + pr_err(FW_BUG "SLIT table doesn't have the expected form for SNC -- fixup disabled!\n");
632 + else
633 + pr_info("Fixing up SNC SLIT table.\n");
634 + }
635 +
636 + /*
637 + * Is this a unit cluster on the trace?
638 + */
639 + if ((i / u) == (j / u) || !slit_valid)
640 + return node_distance(i, j);
641 +
642 + /*
643 + * Off-trace cluster.
644 + *
645 + * Notably average out the symmetric pair of off-trace clusters to
646 + * ensure the resulting SLIT table is symmetric.
647 + */
648 + x = i - (i % u);
649 + y = j - (j % u);
650 +
651 + for (i = x; i < x + u; i++) {
652 + for (j = y; j < y + u; j++) {
653 + d += node_distance(i, j);
654 + d += node_distance(j, i);
655 + }
656 + }
657 +
658 + return d / (2*u*u);
536 659 }
537 660
538 661 int arch_sched_node_distance(int from, int to)
··· 658 549 switch (boot_cpu_data.x86_vfm) {
659 550 case INTEL_GRANITERAPIDS_X:
660 551 case INTEL_ATOM_DARKMONT_X:
661 -
662 - if (!x86_has_numa_in_package || topology_max_packages() == 1 ||
663 - d < REMOTE_DISTANCE)
552 + if (topology_max_packages() == 1 ||
553 + topology_num_nodes_per_package() < 3)
664 554 return d;
665 555
666 556 /*
667 - * With SNC enabled, there could be too many levels of remote
668 - * NUMA node distances, creating NUMA domain levels
669 - * including local nodes and partial remote nodes.
670 - *
671 - * Trim finer distance tuning for NUMA nodes in remote package
672 - * for the purpose of building sched domains. Group NUMA nodes
673 - * in the remote package in the same sched group.
674 - * Simplify NUMA domains and avoid extra NUMA levels including
675 - * different remote NUMA nodes and local nodes.
676 - *
677 - * GNR and CWF don't expect systems with more than 2 packages
678 - * and more than 2 hops between packages. Single average remote
679 - * distance won't be appropriate if there are more than 2
680 - * packages as average distance to different remote packages
681 - * could be different.
557 + * Handle SNC-3 asymmetries.
682 558 */ 683 - WARN_ONCE(topology_max_packages() > 2, 684 - "sched: Expect only up to 2 packages for GNR or CWF, " 685 - "but saw %d packages when building sched domains.", 686 - topology_max_packages()); 687 - 688 - d = avg_remote_numa_distance(); 559 + return slit_cluster_distance(from, to); 689 560 } 690 561 return d; 691 562 } ··· 695 606 o = &cpu_data(i); 696 607 697 608 if (match_pkg(c, o) && !topology_same_node(c, o)) 698 - x86_has_numa_in_package = true; 609 + WARN_ON_ONCE(topology_num_nodes_per_package() == 1); 699 610 700 611 if ((i == cpu) || (has_smt && match_smt(c, o))) 701 612 link_mask(topology_sibling_cpumask, cpu, i);
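The averaging described in the `slit_cluster_distance()` comment can be replayed on the dual-socket GNR SNC-3 table quoted there. A Python sketch (not the kernel implementation) that reproduces the sanitized table, including the uniform off-trace distance of 24:

```python
SLIT = [                       # asymmetric SNC-3 table from the comment
    [10, 15, 17, 21, 28, 26],
    [15, 10, 15, 23, 26, 23],
    [17, 15, 10, 26, 23, 21],
    [21, 28, 26, 10, 15, 17],
    [23, 26, 23, 15, 10, 15],
    [26, 23, 21, 17, 15, 10],
]
U = 3                          # nodes per package (SNC-3)

def cluster_distance(i, j):
    if i // U == j // U:       # on-trace: keep the raw distance
        return SLIT[i][j]
    x, y = i - i % U, j - j % U
    # Average the symmetric pair of off-trace clusters so the
    # sanitized table is symmetric (integer division, as in C).
    d = sum(SLIT[a][b] + SLIT[b][a]
            for a in range(x, x + U) for b in range(y, y + U))
    return d // (2 * U * U)

fixed = [[cluster_distance(i, j) for j in range(6)] for i in range(6)]
assert all(fixed[i][j] == 24 for i in range(3) for j in range(3, 6))
assert fixed[3][0] == 24 and fixed[0][1] == 15 and fixed[4][5] == 15
```

Summing both directions over the 3x3 off-trace blocks gives 434, and 434 // 18 = 24, matching the sanitized table in the comment.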
+1
arch/x86/kernel/vmlinux.lds.S
··· 427 427 .llvm_bb_addr_map : { *(.llvm_bb_addr_map) } 428 428 #endif 429 429 430 + MODINFO 430 431 ELF_DETAILS 431 432 432 433 DISCARDS
+8
arch/x86/mm/numa.c
··· 48 48 [0 ... MAX_LOCAL_APIC-1] = NUMA_NO_NODE 49 49 }; 50 50 51 + nodemask_t numa_phys_nodes_parsed __initdata; 52 + 51 53 int numa_cpu_node(int cpu) 52 54 { 53 55 u32 apicid = early_per_cpu(x86_cpu_to_apicid, cpu); ··· 57 55 if (apicid != BAD_APICID) 58 56 return __apicid_to_node[apicid]; 59 57 return NUMA_NO_NODE; 58 + } 59 + 60 + int __init num_phys_nodes(void) 61 + { 62 + return bitmap_weight(numa_phys_nodes_parsed.bits, MAX_NUMNODES); 60 63 } 61 64 62 65 cpumask_var_t node_to_cpumask_map[MAX_NUMNODES]; ··· 217 210 0LLU, PFN_PHYS(max_pfn) - 1); 218 211 219 212 node_set(0, numa_nodes_parsed); 213 + node_set(0, numa_phys_nodes_parsed); 220 214 numa_add_memblk(0, 0, PFN_PHYS(max_pfn)); 221 215 222 216 return 0;
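`num_phys_nodes()` is just a population count over the `numa_phys_nodes_parsed` bitmap; modelled in Python with an integer as the bitmap (the node numbers are illustrative):

```python
def num_phys_nodes(parsed):
    # bitmap_weight(): count the set bits in the parsed-nodes mask
    return bin(parsed).count("1")

mask = 0
for node in (0, 1, 2, 3):      # e.g. two packages with SNC-2
    mask |= 1 << node
assert num_phys_nodes(mask) == 4
assert num_phys_nodes(0) == 0
```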
+2
arch/x86/mm/srat.c
··· 57 57 } 58 58 set_apicid_to_node(apic_id, node); 59 59 node_set(node, numa_nodes_parsed); 60 + node_set(node, numa_phys_nodes_parsed); 60 61 pr_debug("SRAT: PXM %u -> APIC 0x%04x -> Node %u\n", pxm, apic_id, node); 61 62 } 62 63 ··· 98 97 99 98 set_apicid_to_node(apic_id, node); 100 99 node_set(node, numa_nodes_parsed); 100 + node_set(node, numa_phys_nodes_parsed); 101 101 pr_debug("SRAT: PXM %u -> APIC 0x%02x -> Node %u\n", pxm, apic_id, node); 102 102 } 103 103
+1 -1
arch/x86/platform/efi/efi.c
··· 836 836 } 837 837 838 838 efi_check_for_embedded_firmwares(); 839 - efi_free_boot_services(); 839 + efi_unmap_boot_services(); 840 840 841 841 if (!efi_is_mixed()) 842 842 efi_native_runtime_setup();
+52 -3
arch/x86/platform/efi/quirks.c
··· 341 341
342 342 /*
343 343 * Because the following memblock_reserve() is paired
344 - * with memblock_free_late() for this region in
344 + * with free_reserved_area() for this region in
345 345 * efi_free_boot_services(), we must be extremely
346 346 * careful not to reserve, and subsequently free,
347 347 * critical regions of memory (like the kernel image) or
··· 404 404 pr_err("Failed to unmap VA mapping for 0x%llx\n", va);
405 405 }
406 406
407 - void __init efi_free_boot_services(void)
407 + struct efi_freeable_range {
408 + u64 start;
409 + u64 end;
410 + };
411 +
412 + static struct efi_freeable_range *ranges_to_free;
413 +
414 + void __init efi_unmap_boot_services(void)
408 415 {
409 416 struct efi_memory_map_data data = { 0 };
410 417 efi_memory_desc_t *md;
411 418 int num_entries = 0;
419 + int idx = 0;
420 + size_t sz;
412 421 void *new, *new_md;
413 422
414 423 /* Keep all regions for /sys/kernel/debug/efi */
415 424 if (efi_enabled(EFI_DBG))
416 425 return;
426 +
427 + sz = sizeof(*ranges_to_free) * (efi.memmap.nr_map + 1);
428 + ranges_to_free = kzalloc(sz, GFP_KERNEL);
429 + if (!ranges_to_free) {
430 + pr_err("Failed to allocate storage for freeable EFI regions\n");
431 + return;
432 + }
417 433
418 434 for_each_efi_memory_desc(md) {
419 435 unsigned long long start = md->phys_addr;
··· 487 471 start = SZ_1M;
488 472 }
489 473
490 - memblock_free_late(start, size);
474 + /*
475 + * With CONFIG_DEFERRED_STRUCT_PAGE_INIT parts of the memory
476 + * map are still not initialized and we can't reliably free
477 + * memory here.
478 + * Queue the ranges to free at a later point.
479 + */ 480 + ranges_to_free[idx].start = start; 481 + ranges_to_free[idx].end = start + size; 482 + idx++; 491 483 } 492 484 493 485 if (!num_entries) ··· 535 511 return; 536 512 } 537 513 } 514 + 515 + static int __init efi_free_boot_services(void) 516 + { 517 + struct efi_freeable_range *range = ranges_to_free; 518 + unsigned long freed = 0; 519 + 520 + if (!ranges_to_free) 521 + return 0; 522 + 523 + while (range->start) { 524 + void *start = phys_to_virt(range->start); 525 + void *end = phys_to_virt(range->end); 526 + 527 + free_reserved_area(start, end, -1, NULL); 528 + freed += (end - start); 529 + range++; 530 + } 531 + kfree(ranges_to_free); 532 + 533 + if (freed) 534 + pr_info("Freeing EFI boot services memory: %ldK\n", freed / SZ_1K); 535 + 536 + return 0; 537 + } 538 + arch_initcall(efi_free_boot_services); 538 539 539 540 /* 540 541 * A number of config table entries get remapped to virtual addresses
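The split introduced here, record freeable ranges early and hand the actual freeing to a later `arch_initcall()`, is a record-now/free-later pattern. A small Python model of the two phases (the addresses are made up; the kernel version walks a zero-`start` sentinel-terminated array rather than a list):

```python
ranges_to_free = []

def queue_range(start, end):
    # Phase 1 (efi_unmap_boot_services): too early to free, just record.
    ranges_to_free.append((start, end))

def efi_free_boot_services():
    # Phase 2 (arch_initcall): memory map is fully initialized, free now.
    freed = sum(end - start for start, end in ranges_to_free)
    ranges_to_free.clear()
    return freed

queue_range(0x1000, 0x3000)
queue_range(0x8000, 0x9000)
assert efi_free_boot_services() == 0x3000
assert efi_free_boot_services() == 0    # nothing is freed twice
```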
+1 -6
arch/x86/platform/pvh/enlighten.c
··· 25 25 26 26 const unsigned int __initconst pvh_start_info_sz = sizeof(pvh_start_info); 27 27 28 - static u64 __init pvh_get_root_pointer(void) 29 - { 30 - return pvh_start_info.rsdp_paddr; 31 - } 32 - 33 28 /* 34 29 * Xen guests are able to obtain the memory map from the hypervisor via the 35 30 * HYPERVISOR_memory_op hypercall. ··· 90 95 pvh_bootparams.hdr.version = (2 << 8) | 12; 91 96 pvh_bootparams.hdr.type_of_loader = ((xen_guest ? 0x9 : 0xb) << 4) | 0; 92 97 93 - x86_init.acpi.get_root_pointer = pvh_get_root_pointer; 98 + pvh_bootparams.acpi_rsdp_addr = pvh_start_info.rsdp_paddr; 94 99 } 95 100 96 101 /*
+1 -1
arch/x86/xen/enlighten_pv.c
··· 392 392 393 393 /* 394 394 * Xen PV would need some work to support PCID: CR3 handling as well 395 - * as xen_flush_tlb_others() would need updating. 395 + * as xen_flush_tlb_multi() would need updating. 396 396 */ 397 397 setup_clear_cpu_cap(X86_FEATURE_PCID); 398 398
+9
arch/x86/xen/mmu_pv.c
··· 105 105 static pud_t level3_user_vsyscall[PTRS_PER_PUD] __page_aligned_bss; 106 106 #endif 107 107 108 + static pud_t level3_ident_pgt[PTRS_PER_PUD] __page_aligned_bss; 109 + static pmd_t level2_ident_pgt[PTRS_PER_PMD] __page_aligned_bss; 110 + 108 111 /* 109 112 * Protects atomic reservation decrease/increase against concurrent increases. 110 113 * Also protects non-atomic updates of current_pages and balloon lists. ··· 1779 1776 1780 1777 /* Zap identity mapping */ 1781 1778 init_top_pgt[0] = __pgd(0); 1779 + 1780 + init_top_pgt[pgd_index(__PAGE_OFFSET_BASE_L4)].pgd = 1781 + __pa_symbol(level3_ident_pgt) + _KERNPG_TABLE_NOENC; 1782 + init_top_pgt[pgd_index(__START_KERNEL_map)].pgd = 1783 + __pa_symbol(level3_kernel_pgt) + _PAGE_TABLE_NOENC; 1784 + level3_ident_pgt[0].pud = __pa_symbol(level2_ident_pgt) + _KERNPG_TABLE_NOENC; 1782 1785 1783 1786 /* Pre-constructed entries are in pfn, so convert to mfn */ 1784 1787 /* L4[273] -> level3_ident_pgt */
+1 -2
block/blk-map.c
··· 398 398 if (op_is_write(op)) 399 399 memcpy(page_address(page), p, bytes); 400 400 401 - if (bio_add_page(bio, page, bytes, 0) < bytes) 402 - break; 401 + __bio_add_page(bio, page, bytes, 0); 403 402 404 403 len -= bytes; 405 404 p += bytes;
+30 -15
block/blk-mq.c
··· 4793 4793 }
4794 4794 }
4795 4795
4796 - static int blk_mq_realloc_tag_set_tags(struct blk_mq_tag_set *set,
4797 - int new_nr_hw_queues)
4796 + static struct blk_mq_tags **blk_mq_prealloc_tag_set_tags(
4797 + struct blk_mq_tag_set *set,
4798 + int new_nr_hw_queues)
4798 4799 {
4799 4800 struct blk_mq_tags **new_tags;
4800 4801 int i;
4801 4802
4802 4803 if (set->nr_hw_queues >= new_nr_hw_queues)
4803 - goto done;
4804 + return NULL;
4804 4805
4805 4806 new_tags = kcalloc_node(new_nr_hw_queues, sizeof(struct blk_mq_tags *),
4806 4807 GFP_KERNEL, set->numa_node);
4807 4808 if (!new_tags)
4808 - return -ENOMEM;
4809 + return ERR_PTR(-ENOMEM);
4809 4810
4810 4811 if (set->tags)
4811 4812 memcpy(new_tags, set->tags, set->nr_hw_queues *
4812 4813 sizeof(*set->tags));
4813 - kfree(set->tags);
4814 - set->tags = new_tags;
4815 4814
4816 4815 for (i = set->nr_hw_queues; i < new_nr_hw_queues; i++) {
4817 - if (!__blk_mq_alloc_map_and_rqs(set, i)) {
4818 - while (--i >= set->nr_hw_queues)
4819 - __blk_mq_free_map_and_rqs(set, i);
4820 - return -ENOMEM;
4816 + if (blk_mq_is_shared_tags(set->flags)) {
4817 + new_tags[i] = set->shared_tags;
4818 + } else {
4819 + new_tags[i] = blk_mq_alloc_map_and_rqs(set, i,
4820 + set->queue_depth);
4821 + if (!new_tags[i])
4822 + goto out_unwind;
4821 4823 }
4822 4824 cond_resched();
4823 4825 }
4824 4826
4825 - done:
4826 - set->nr_hw_queues = new_nr_hw_queues;
4827 - return 0;
4827 + return new_tags;
4828 + out_unwind:
4829 + while (--i >= set->nr_hw_queues) {
4830 + if (!blk_mq_is_shared_tags(set->flags))
4831 + blk_mq_free_map_and_rqs(set, new_tags[i], i);
4832 + }
4833 + kfree(new_tags);
4834 + return ERR_PTR(-ENOMEM);
4828 4835 }
4829 4836
4830 4837 /*
··· 5120 5113 unsigned int memflags;
5121 5114 int i;
5122 5115 struct xarray elv_tbl;
5116 + struct blk_mq_tags **new_tags;
5123 5117 bool queues_frozen = false;
5124 5118
5125 5119 lockdep_assert_held(&set->tag_list_lock);
··· 5155 5147 if (blk_mq_elv_switch_none(q, &elv_tbl))
5156
5148 goto switch_back; 5157 5149 5150 + new_tags = blk_mq_prealloc_tag_set_tags(set, nr_hw_queues); 5151 + if (IS_ERR(new_tags)) 5152 + goto switch_back; 5153 + 5158 5154 list_for_each_entry(q, &set->tag_list, tag_set_list) 5159 5155 blk_mq_freeze_queue_nomemsave(q); 5160 5156 queues_frozen = true; 5161 - if (blk_mq_realloc_tag_set_tags(set, nr_hw_queues) < 0) 5162 - goto switch_back; 5157 + if (new_tags) { 5158 + kfree(set->tags); 5159 + set->tags = new_tags; 5160 + } 5161 + set->nr_hw_queues = nr_hw_queues; 5163 5162 5164 5163 fallback: 5165 5164 blk_mq_update_queue_map(set);
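The refactor above moves tag allocation out from under the queue freeze: allocate and copy into a new array first, then swap it in while the queues are frozen. The shape of that prealloc-then-swap pattern, sketched in Python (the names mirror the C, but this is not the kernel code):

```python
def prealloc_tags(old_tags, new_nr, alloc):
    """Grow outside the freeze: copy existing entries, allocate the tail."""
    if len(old_tags) >= new_nr:
        return None                      # nothing to grow
    new_tags = list(old_tags)            # memcpy() of the old pointers
    for i in range(len(old_tags), new_nr):
        new_tags.append(alloc(i))        # may sleep; queues still live
    return new_tags

tags = ["t0", "t1"]
new = prealloc_tags(tags, 4, lambda i: f"t{i}")
# ... queues get frozen here in the kernel, then the swap is cheap:
if new is not None:
    tags = new
assert tags == ["t0", "t1", "t2", "t3"]
assert prealloc_tags(tags, 4, lambda i: f"t{i}") is None
```

Keeping the slow, sleeping allocations outside the freeze shortens the window in which I/O is stalled.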
+7 -1
block/blk-sysfs.c
··· 78 78 /* 79 79 * Serialize updating nr_requests with concurrent queue_requests_store() 80 80 * and switching elevator. 81 + * 82 + * Use trylock to avoid circular lock dependency with kernfs active 83 + * reference during concurrent disk deletion: 84 + * update_nr_hwq_lock -> kn->active (via del_gendisk -> kobject_del) 85 + * kn->active -> update_nr_hwq_lock (via this sysfs write path) 81 86 */ 82 - down_write(&set->update_nr_hwq_lock); 87 + if (!down_write_trylock(&set->update_nr_hwq_lock)) 88 + return -EBUSY; 83 89 84 90 if (nr == q->nr_requests) 85 91 goto unlock;
+11 -1
block/elevator.c
··· 807 807 elv_iosched_load_module(ctx.name); 808 808 ctx.type = elevator_find_get(ctx.name); 809 809 810 - down_read(&set->update_nr_hwq_lock); 810 + /* 811 + * Use trylock to avoid circular lock dependency with kernfs active 812 + * reference during concurrent disk deletion: 813 + * update_nr_hwq_lock -> kn->active (via del_gendisk -> kobject_del) 814 + * kn->active -> update_nr_hwq_lock (via this sysfs write path) 815 + */ 816 + if (!down_read_trylock(&set->update_nr_hwq_lock)) { 817 + ret = -EBUSY; 818 + goto out; 819 + } 811 820 if (!blk_queue_no_elv_switch(q)) { 812 821 ret = elevator_change(q, &ctx); 813 822 if (!ret) ··· 826 817 } 827 818 up_read(&set->update_nr_hwq_lock); 828 819 820 + out: 829 821 if (ctx.type) 830 822 elevator_put(ctx.type); 831 823 return ret;
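Both this hunk and the `queue_requests_store()` one use the same trick: take the rwsem with a trylock from the sysfs side, so the `update_nr_hwq_lock -> kn->active` versus `kn->active -> update_nr_hwq_lock` cycle can never close, failing the write with `-EBUSY` instead of deadlocking. A toy Python rendition of the sysfs side (the plain `Lock` and the error value stand in for the kernel's rwsem and errno):

```python
import threading

update_nr_hwq_lock = threading.Lock()
EBUSY = 16

def elv_iosched_store():
    # sysfs write path: kernfs already holds kn->active here, so a
    # blocking acquire could deadlock against del_gendisk(), which
    # holds update_nr_hwq_lock while tearing down kn->active.
    if not update_nr_hwq_lock.acquire(blocking=False):
        return -EBUSY
    try:
        return 0                      # switch the elevator
    finally:
        update_nr_hwq_lock.release()

assert elv_iosched_store() == 0
update_nr_hwq_lock.acquire()          # simulate del_gendisk holding it
assert elv_iosched_store() == -EBUSY
update_nr_hwq_lock.release()
```

Breaking one edge of the cycle with a trylock is a standard way to resolve an ABBA ordering that cannot be fixed by reordering the locks themselves.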
-9
crypto/Kconfig
··· 876 876 - blake2b-384 877 877 - blake2b-512 878 878 879 - Used by the btrfs filesystem. 880 - 881 879 See https://blake2.net for further information. 882 880 883 881 config CRYPTO_CMAC ··· 963 965 10118-3), including HMAC support. 964 966 965 967 This is required for IPsec AH (XFRM_AH) and IPsec ESP (XFRM_ESP). 966 - Used by the btrfs filesystem, Ceph, NFS, and SMB. 967 968 968 969 config CRYPTO_SHA512 969 970 tristate "SHA-384 and SHA-512" ··· 1036 1039 1037 1040 Extremely fast, working at speeds close to RAM limits. 1038 1041 1039 - Used by the btrfs filesystem. 1040 - 1041 1042 endmenu 1042 1043 1043 1044 menu "CRCs (cyclic redundancy checks)" ··· 1053 1058 on Communications, Vol. 41, No. 6, June 1993, selected for use with 1054 1059 iSCSI. 1055 1060 1056 - Used by btrfs, ext4, jbd2, NVMeoF/TCP, and iSCSI. 1057 - 1058 1061 config CRYPTO_CRC32 1059 1062 tristate "CRC32" 1060 1063 select CRYPTO_HASH 1061 1064 select CRC32 1062 1065 help 1063 1066 CRC32 CRC algorithm (IEEE 802.3) 1064 - 1065 - Used by RoCEv2 and f2fs. 1066 1067 1067 1068 endmenu 1068 1069
+2 -2
crypto/testmgr.c
··· 4132 4132 .fips_allowed = 1, 4133 4133 }, { 4134 4134 .alg = "authenc(hmac(sha224),cbc(aes))", 4135 - .generic_driver = "authenc(hmac-sha224-lib,cbc(aes-generic))", 4135 + .generic_driver = "authenc(hmac-sha224-lib,cbc(aes-lib))", 4136 4136 .test = alg_test_aead, 4137 4137 .suite = { 4138 4138 .aead = __VECS(hmac_sha224_aes_cbc_tv_temp) ··· 4194 4194 .fips_allowed = 1, 4195 4195 }, { 4196 4196 .alg = "authenc(hmac(sha384),cbc(aes))", 4197 - .generic_driver = "authenc(hmac-sha384-lib,cbc(aes-generic))", 4197 + .generic_driver = "authenc(hmac-sha384-lib,cbc(aes-lib))", 4198 4198 .test = alg_test_aead, 4199 4199 .suite = { 4200 4200 .aead = __VECS(hmac_sha384_aes_cbc_tv_temp)
+8 -15
drivers/accel/amdxdna/aie2_ctx.c
··· 186 186 cmd_abo = job->cmd_bo; 187 187 188 188 if (unlikely(job->job_timeout)) { 189 - amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_TIMEOUT); 189 + amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_TIMEOUT); 190 190 ret = -EINVAL; 191 191 goto out; 192 192 } 193 193 194 194 if (unlikely(!data) || unlikely(size != sizeof(u32))) { 195 - amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ABORT); 195 + amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_ABORT); 196 196 ret = -EINVAL; 197 197 goto out; 198 198 } ··· 202 202 if (status == AIE2_STATUS_SUCCESS) 203 203 amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_COMPLETED); 204 204 else 205 - amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ERROR); 205 + amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_ERROR); 206 206 207 207 out: 208 208 aie2_sched_notify(job); ··· 244 244 cmd_abo = job->cmd_bo; 245 245 246 246 if (unlikely(job->job_timeout)) { 247 - amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_TIMEOUT); 247 + amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_TIMEOUT); 248 248 ret = -EINVAL; 249 249 goto out; 250 250 } 251 251 252 252 if (unlikely(!data) || unlikely(size != sizeof(u32) * 3)) { 253 - amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ABORT); 253 + amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_ABORT); 254 254 ret = -EINVAL; 255 255 goto out; 256 256 } ··· 270 270 fail_cmd_idx, fail_cmd_status); 271 271 272 272 if (fail_cmd_status == AIE2_STATUS_SUCCESS) { 273 - amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ABORT); 273 + amdxdna_cmd_set_error(cmd_abo, job, fail_cmd_idx, ERT_CMD_STATE_ABORT); 274 274 ret = -EINVAL; 275 - goto out; 275 + } else { 276 + amdxdna_cmd_set_error(cmd_abo, job, fail_cmd_idx, ERT_CMD_STATE_ERROR); 276 277 } 277 - amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ERROR); 278 278 279 - if (amdxdna_cmd_get_op(cmd_abo) == ERT_CMD_CHAIN) { 280 - struct amdxdna_cmd_chain *cc = amdxdna_cmd_get_payload(cmd_abo, NULL); 281 - 282 - cc->error_index = fail_cmd_idx; 283 - if (cc->error_index >= cc->command_count) 284 - cc->error_index = 0; 285 - } 286 279 out: 287 280 aie2_sched_notify(job); 288 281 return ret;
+28 -8
drivers/accel/amdxdna/aie2_message.c
··· 40 40 return -ENODEV; 41 41 42 42 ret = xdna_send_msg_wait(xdna, ndev->mgmt_chann, msg); 43 - if (ret == -ETIME) { 44 - xdna_mailbox_stop_channel(ndev->mgmt_chann); 45 - xdna_mailbox_destroy_channel(ndev->mgmt_chann); 46 - ndev->mgmt_chann = NULL; 47 - } 43 + if (ret == -ETIME) 44 + aie2_destroy_mgmt_chann(ndev); 48 45 49 46 if (!ret && *hdl->status != AIE2_STATUS_SUCCESS) { 50 47 XDNA_ERR(xdna, "command opcode 0x%x failed, status 0x%x", ··· 293 296 } 294 297 295 298 intr_reg = i2x.mb_head_ptr_reg + 4; 296 - hwctx->priv->mbox_chann = xdna_mailbox_create_channel(ndev->mbox, &x2i, &i2x, 297 - intr_reg, ret); 299 + hwctx->priv->mbox_chann = xdna_mailbox_alloc_channel(ndev->mbox); 298 300 if (!hwctx->priv->mbox_chann) { 299 301 XDNA_ERR(xdna, "Not able to create channel"); 300 302 ret = -EINVAL; 301 303 goto del_ctx_req; 304 + } 305 + 306 + ret = xdna_mailbox_start_channel(hwctx->priv->mbox_chann, &x2i, &i2x, 307 + intr_reg, ret); 308 + if (ret) { 309 + XDNA_ERR(xdna, "Not able to create channel"); 310 + ret = -EINVAL; 311 + goto free_channel; 302 312 } 303 313 ndev->hwctx_num++; 304 314 ··· 314 310 315 311 return 0; 316 312 313 + free_channel: 314 + xdna_mailbox_free_channel(hwctx->priv->mbox_chann); 317 315 del_ctx_req: 318 316 aie2_destroy_context_req(ndev, hwctx->fw_ctx_id); 319 317 return ret; ··· 331 325 332 326 xdna_mailbox_stop_channel(hwctx->priv->mbox_chann); 333 327 ret = aie2_destroy_context_req(ndev, hwctx->fw_ctx_id); 334 - xdna_mailbox_destroy_channel(hwctx->priv->mbox_chann); 328 + xdna_mailbox_free_channel(hwctx->priv->mbox_chann); 335 329 XDNA_DBG(xdna, "Destroyed fw ctx %d", hwctx->fw_ctx_id); 336 330 hwctx->priv->mbox_chann = NULL; 337 331 hwctx->fw_ctx_id = -1; ··· 918 912 ndev->exec_msg_ops = &npu_exec_message_ops; 919 913 else 920 914 ndev->exec_msg_ops = &legacy_exec_message_ops; 915 + } 916 + 917 + void aie2_destroy_mgmt_chann(struct amdxdna_dev_hdl *ndev) 918 + { 919 + struct amdxdna_dev *xdna = ndev->xdna; 920 + 921 + drm_WARN_ON(&xdna->ddev, !mutex_is_locked(&xdna->dev_lock)); 922 + 923 + if (!ndev->mgmt_chann) 924 + return; 925 + 926 + xdna_mailbox_stop_channel(ndev->mgmt_chann); 927 + xdna_mailbox_free_channel(ndev->mgmt_chann); 928 + ndev->mgmt_chann = NULL; 921 929 } 922 930 923 931 static inline struct amdxdna_gem_obj *
+37 -29
drivers/accel/amdxdna/aie2_pci.c
··· 330 330 331 331 aie2_runtime_cfg(ndev, AIE2_RT_CFG_CLK_GATING, NULL); 332 332 aie2_mgmt_fw_fini(ndev); 333 - xdna_mailbox_stop_channel(ndev->mgmt_chann); 334 - xdna_mailbox_destroy_channel(ndev->mgmt_chann); 335 - ndev->mgmt_chann = NULL; 333 + aie2_destroy_mgmt_chann(ndev); 336 334 drmm_kfree(&xdna->ddev, ndev->mbox); 337 335 ndev->mbox = NULL; 338 336 aie2_psp_stop(ndev->psp_hdl); ··· 361 363 } 362 364 pci_set_master(pdev); 363 365 366 + mbox_res.ringbuf_base = ndev->sram_base; 367 + mbox_res.ringbuf_size = pci_resource_len(pdev, xdna->dev_info->sram_bar); 368 + mbox_res.mbox_base = ndev->mbox_base; 369 + mbox_res.mbox_size = MBOX_SIZE(ndev); 370 + mbox_res.name = "xdna_mailbox"; 371 + ndev->mbox = xdnam_mailbox_create(&xdna->ddev, &mbox_res); 372 + if (!ndev->mbox) { 373 + XDNA_ERR(xdna, "failed to create mailbox device"); 374 + ret = -ENODEV; 375 + goto disable_dev; 376 + } 377 + 378 + ndev->mgmt_chann = xdna_mailbox_alloc_channel(ndev->mbox); 379 + if (!ndev->mgmt_chann) { 380 + XDNA_ERR(xdna, "failed to alloc channel"); 381 + ret = -ENODEV; 382 + goto disable_dev; 383 + } 384 + 364 385 ret = aie2_smu_init(ndev); 365 386 if (ret) { 366 387 XDNA_ERR(xdna, "failed to init smu, ret %d", ret); 367 - goto disable_dev; 388 + goto free_channel; 368 389 } 369 390 370 391 ret = aie2_psp_start(ndev->psp_hdl); ··· 398 381 goto stop_psp; 399 382 } 400 383 401 - mbox_res.ringbuf_base = ndev->sram_base; 402 - mbox_res.ringbuf_size = pci_resource_len(pdev, xdna->dev_info->sram_bar); 403 - mbox_res.mbox_base = ndev->mbox_base; 404 - mbox_res.mbox_size = MBOX_SIZE(ndev); 405 - mbox_res.name = "xdna_mailbox"; 406 - ndev->mbox = xdnam_mailbox_create(&xdna->ddev, &mbox_res); 407 - if (!ndev->mbox) { 408 - XDNA_ERR(xdna, "failed to create mailbox device"); 409 - ret = -ENODEV; 410 - goto stop_psp; 411 - } 412 - 413 384 mgmt_mb_irq = pci_irq_vector(pdev, ndev->mgmt_chan_idx); 414 385 if (mgmt_mb_irq < 0) { 415 386 ret = mgmt_mb_irq; ··· 406 401 } 407 402 408 403 xdna_mailbox_intr_reg = ndev->mgmt_i2x.mb_head_ptr_reg + 4; 409 - ndev->mgmt_chann = xdna_mailbox_create_channel(ndev->mbox, 410 - &ndev->mgmt_x2i, 411 - &ndev->mgmt_i2x, 412 - xdna_mailbox_intr_reg, 413 - mgmt_mb_irq); 414 - if (!ndev->mgmt_chann) { 415 - XDNA_ERR(xdna, "failed to create management mailbox channel"); 404 + ret = xdna_mailbox_start_channel(ndev->mgmt_chann, 405 + &ndev->mgmt_x2i, 406 + &ndev->mgmt_i2x, 407 + xdna_mailbox_intr_reg, 408 + mgmt_mb_irq); 409 + if (ret) { 410 + XDNA_ERR(xdna, "failed to start management mailbox channel"); 416 411 ret = -EINVAL; 417 412 goto stop_psp; 418 413 } ··· 420 415 ret = aie2_mgmt_fw_init(ndev); 421 416 if (ret) { 422 417 XDNA_ERR(xdna, "initial mgmt firmware failed, ret %d", ret); 423 - goto destroy_mgmt_chann; 418 + goto stop_fw; 424 419 } 425 420 426 421 ret = aie2_pm_init(ndev); 427 422 if (ret) { 428 423 XDNA_ERR(xdna, "failed to init pm, ret %d", ret); 429 - goto destroy_mgmt_chann; 424 + goto stop_fw; 430 425 } 431 426 432 427 ret = aie2_mgmt_fw_query(ndev); 433 428 if (ret) { 434 429 XDNA_ERR(xdna, "failed to query fw, ret %d", ret); 435 - goto destroy_mgmt_chann; 430 + goto stop_fw; 436 431 } 437 432 438 433 ret = aie2_error_async_events_alloc(ndev); 439 434 if (ret) { 440 435 XDNA_ERR(xdna, "Allocate async events failed, ret %d", ret); 441 - goto destroy_mgmt_chann; 436 + goto stop_fw; 442 437 } 443 438 444 439 ndev->dev_status = AIE2_DEV_START; 445 440 446 441 return 0; 447 442 448 - destroy_mgmt_chann: 443 + stop_fw: 444 + aie2_suspend_fw(ndev); 449 445 xdna_mailbox_stop_channel(ndev->mgmt_chann); 450 - xdna_mailbox_destroy_channel(ndev->mgmt_chann); 451 446 stop_psp: 452 447 aie2_psp_stop(ndev->psp_hdl); 453 448 fini_smu: 454 449 aie2_smu_fini(ndev); 450 + free_channel: 451 + xdna_mailbox_free_channel(ndev->mgmt_chann); 452 + ndev->mgmt_chann = NULL; 455 453 disable_dev: 456 454 pci_disable_device(pdev); 457 455
+1
drivers/accel/amdxdna/aie2_pci.h
··· 303 303 304 304 /* aie2_message.c */ 305 305 void aie2_msg_init(struct amdxdna_dev_hdl *ndev); 306 + void aie2_destroy_mgmt_chann(struct amdxdna_dev_hdl *ndev); 306 307 int aie2_suspend_fw(struct amdxdna_dev_hdl *ndev); 307 308 int aie2_resume_fw(struct amdxdna_dev_hdl *ndev); 308 309 int aie2_set_runtime_cfg(struct amdxdna_dev_hdl *ndev, u32 type, u64 value);
+27
drivers/accel/amdxdna/amdxdna_ctx.c
··· 135 135 return INVALID_CU_IDX; 136 136 } 137 137 138 + int amdxdna_cmd_set_error(struct amdxdna_gem_obj *abo, 139 + struct amdxdna_sched_job *job, u32 cmd_idx, 140 + enum ert_cmd_state error_state) 141 + { 142 + struct amdxdna_client *client = job->hwctx->client; 143 + struct amdxdna_cmd *cmd = abo->mem.kva; 144 + struct amdxdna_cmd_chain *cc = NULL; 145 + 146 + cmd->header &= ~AMDXDNA_CMD_STATE; 147 + cmd->header |= FIELD_PREP(AMDXDNA_CMD_STATE, error_state); 148 + 149 + if (amdxdna_cmd_get_op(abo) == ERT_CMD_CHAIN) { 150 + cc = amdxdna_cmd_get_payload(abo, NULL); 151 + cc->error_index = (cmd_idx < cc->command_count) ? cmd_idx : 0; 152 + abo = amdxdna_gem_get_obj(client, cc->data[0], AMDXDNA_BO_CMD); 153 + if (!abo) 154 + return -EINVAL; 155 + cmd = abo->mem.kva; 156 + } 157 + 158 + memset(cmd->data, 0xff, abo->mem.size - sizeof(*cmd)); 159 + if (cc) 160 + amdxdna_gem_put_obj(abo); 161 + 162 + return 0; 163 + } 164 + 138 165 /* 139 166 * This should be called in close() and remove(). DO NOT call in other syscalls. 140 167 * This guarantee that when hwctx and resources will be released, if user
+3
drivers/accel/amdxdna/amdxdna_ctx.h
··· 167 167 168 168 void *amdxdna_cmd_get_payload(struct amdxdna_gem_obj *abo, u32 *size); 169 169 u32 amdxdna_cmd_get_cu_idx(struct amdxdna_gem_obj *abo); 170 + int amdxdna_cmd_set_error(struct amdxdna_gem_obj *abo, 171 + struct amdxdna_sched_job *job, u32 cmd_idx, 172 + enum ert_cmd_state error_state); 170 173 171 174 void amdxdna_sched_job_cleanup(struct amdxdna_sched_job *job); 172 175 void amdxdna_hwctx_remove_all(struct amdxdna_client *client);
+49 -50
drivers/accel/amdxdna/amdxdna_mailbox.c
··· 460 460 return ret; 461 461 } 462 462 463 - struct mailbox_channel * 464 - xdna_mailbox_create_channel(struct mailbox *mb, 465 - const struct xdna_mailbox_chann_res *x2i, 466 - const struct xdna_mailbox_chann_res *i2x, 467 - u32 iohub_int_addr, 468 - int mb_irq) 463 + struct mailbox_channel *xdna_mailbox_alloc_channel(struct mailbox *mb) 469 464 { 470 465 struct mailbox_channel *mb_chann; 471 - int ret; 472 - 473 - if (!is_power_of_2(x2i->rb_size) || !is_power_of_2(i2x->rb_size)) { 474 - pr_err("Ring buf size must be power of 2"); 475 - return NULL; 476 - } 477 466 478 467 mb_chann = kzalloc_obj(*mb_chann); 479 468 if (!mb_chann) 480 469 return NULL; 481 470 471 + INIT_WORK(&mb_chann->rx_work, mailbox_rx_worker); 472 + mb_chann->work_q = create_singlethread_workqueue(MAILBOX_NAME); 473 + if (!mb_chann->work_q) { 474 + MB_ERR(mb_chann, "Create workqueue failed"); 475 + goto free_chann; 476 + } 482 477 mb_chann->mb = mb; 478 + 479 + return mb_chann; 480 + 481 + free_chann: 482 + kfree(mb_chann); 483 + return NULL; 484 + } 485 + 486 + void xdna_mailbox_free_channel(struct mailbox_channel *mb_chann) 487 + { 488 + destroy_workqueue(mb_chann->work_q); 489 + kfree(mb_chann); 490 + } 491 + 492 + int 493 + xdna_mailbox_start_channel(struct mailbox_channel *mb_chann, 494 + const struct xdna_mailbox_chann_res *x2i, 495 + const struct xdna_mailbox_chann_res *i2x, 496 + u32 iohub_int_addr, 497 + int mb_irq) 498 + { 499 + int ret; 500 + 501 + if (!is_power_of_2(x2i->rb_size) || !is_power_of_2(i2x->rb_size)) { 502 + pr_err("Ring buf size must be power of 2"); 503 + return -EINVAL; 504 + } 505 + 483 506 mb_chann->msix_irq = mb_irq; 484 507 mb_chann->iohub_int_addr = iohub_int_addr; 485 508 memcpy(&mb_chann->res[CHAN_RES_X2I], x2i, sizeof(*x2i)); ··· 512 489 mb_chann->x2i_tail = mailbox_get_tailptr(mb_chann, CHAN_RES_X2I); 513 490 mb_chann->i2x_head = mailbox_get_headptr(mb_chann, CHAN_RES_I2X); 514 491 515 - INIT_WORK(&mb_chann->rx_work, mailbox_rx_worker); 516 - mb_chann->work_q = create_singlethread_workqueue(MAILBOX_NAME); 517 - if (!mb_chann->work_q) { 518 - MB_ERR(mb_chann, "Create workqueue failed"); 519 - goto free_and_out; 520 - } 521 - 522 492 /* Everything look good. Time to enable irq handler */ 523 493 ret = request_irq(mb_irq, mailbox_irq_handler, 0, MAILBOX_NAME, mb_chann); 524 494 if (ret) { 525 495 MB_ERR(mb_chann, "Failed to request irq %d ret %d", mb_irq, ret); 526 - goto destroy_wq; 496 + return ret; 527 497 } 528 498 529 499 mb_chann->bad_state = false; 530 500 mailbox_reg_write(mb_chann, mb_chann->iohub_int_addr, 0); 531 501 532 - MB_DBG(mb_chann, "Mailbox channel created (irq: %d)", mb_chann->msix_irq); 533 - return mb_chann; 534 - 535 - destroy_wq: 536 - destroy_workqueue(mb_chann->work_q); 537 - free_and_out: 538 - kfree(mb_chann); 539 - return NULL; 540 - } 541 - 542 - int xdna_mailbox_destroy_channel(struct mailbox_channel *mb_chann) 543 - { 544 - struct mailbox_msg *mb_msg; 545 - unsigned long msg_id; 546 - 547 - MB_DBG(mb_chann, "IRQ disabled and RX work cancelled"); 548 - free_irq(mb_chann->msix_irq, mb_chann); 549 - destroy_workqueue(mb_chann->work_q); 550 - /* We can clean up and release resources */ 551 - 552 - xa_for_each(&mb_chann->chan_xa, msg_id, mb_msg) 553 - mailbox_release_msg(mb_chann, mb_msg); 554 - 555 - xa_destroy(&mb_chann->chan_xa); 556 - 557 - MB_DBG(mb_chann, "Mailbox channel destroyed, irq: %d", mb_chann->msix_irq); 558 - kfree(mb_chann); 502 + MB_DBG(mb_chann, "Mailbox channel started (irq: %d)", mb_chann->msix_irq); 559 503 return 0; 560 504 } 561 505 562 506 void xdna_mailbox_stop_channel(struct mailbox_channel *mb_chann) 563 507 { 508 + struct mailbox_msg *mb_msg; 509 + unsigned long msg_id; 510 + 564 511 /* Disable an irq and wait. This might sleep. */ 565 - disable_irq(mb_chann->msix_irq); 512 + free_irq(mb_chann->msix_irq, mb_chann); 566 513 567 514 /* Cancel RX work and wait for it to finish */ 568 - cancel_work_sync(&mb_chann->rx_work); 569 - MB_DBG(mb_chann, "IRQ disabled and RX work cancelled"); 515 + drain_workqueue(mb_chann->work_q); 516 + 517 + /* We can clean up and release resources */ 518 + xa_for_each(&mb_chann->chan_xa, msg_id, mb_msg) 519 + mailbox_release_msg(mb_chann, mb_msg); 520 + xa_destroy(&mb_chann->chan_xa); 521 + 522 + MB_DBG(mb_chann, "Mailbox channel stopped, irq: %d", mb_chann->msix_irq); 570 523 } 571 524 572 525 struct mailbox *xdnam_mailbox_create(struct drm_device *ddev,
+17 -14
drivers/accel/amdxdna/amdxdna_mailbox.h
··· 74 74 const struct xdna_mailbox_res *res); 75 75 76 76 /* 77 - * xdna_mailbox_create_channel() -- Create a mailbox channel instance 77 + * xdna_mailbox_alloc_channel() -- alloc a mailbox channel 78 78 * 79 - * @mailbox: the handle return from xdna_mailbox_create() 79 + * @mb: mailbox handle 80 + */ 81 + struct mailbox_channel *xdna_mailbox_alloc_channel(struct mailbox *mb); 82 + 83 + /* 84 + * xdna_mailbox_start_channel() -- start a mailbox channel instance 85 + * 86 + * @mb_chann: the handle return from xdna_mailbox_alloc_channel() 80 87 * @x2i: host to firmware mailbox resources 81 88 * @i2x: firmware to host mailbox resources 82 89 * @xdna_mailbox_intr_reg: register addr of MSI-X interrupt ··· 91 84 * 92 85 * Return: If success, return a handle of mailbox channel. Otherwise, return NULL. 93 86 */ 94 - struct mailbox_channel * 95 - xdna_mailbox_create_channel(struct mailbox *mailbox, 96 - const struct xdna_mailbox_chann_res *x2i, 97 - const struct xdna_mailbox_chann_res *i2x, 98 - u32 xdna_mailbox_intr_reg, 99 - int mb_irq); 87 + int 88 + xdna_mailbox_start_channel(struct mailbox_channel *mb_chann, 89 + const struct xdna_mailbox_chann_res *x2i, 90 + const struct xdna_mailbox_chann_res *i2x, 91 + u32 xdna_mailbox_intr_reg, 92 + int mb_irq); 100 93 101 94 /* 102 - * xdna_mailbox_destroy_channel() -- destroy mailbox channel 95 + * xdna_mailbox_free_channel() -- free mailbox channel 103 96 * 104 97 * @mailbox_chann: the handle return from xdna_mailbox_create_channel() 105 98 * 106 - * 107 - * Return: if success, return 0. otherwise return error code 108 - int xdna_mailbox_destroy_channel(struct mailbox_channel *mailbox_chann); 99 + void xdna_mailbox_free_channel(struct mailbox_channel *mailbox_chann); 109 100 110 101 /* 111 102 * xdna_mailbox_stop_channel() -- stop mailbox channel 112 103 * 113 104 * @mailbox_chann: the handle return from xdna_mailbox_create_channel() 114 105 * 115 - * Return: if success, return 0. otherwise return error code 116 105 */ 117 106 void xdna_mailbox_stop_channel(struct mailbox_channel *mailbox_chann); 118 107
+1 -1
drivers/accel/amdxdna/npu1_regs.c
··· 67 67 68 68 static const struct aie2_fw_feature_tbl npu1_fw_feature_table[] = { 69 69 { .major = 5, .min_minor = 7 }, 70 - { .features = BIT_U64(AIE2_NPU_COMMAND), .min_minor = 8 }, 70 + { .features = BIT_U64(AIE2_NPU_COMMAND), .major = 5, .min_minor = 8 }, 71 71 { 0 } 72 72 }; 73 73
+9 -3
drivers/accel/ethosu/ethosu_gem.c
··· 245 245 ((st->ifm.stride_kernel >> 1) & 0x1) + 1; 246 246 u32 stride_x = ((st->ifm.stride_kernel >> 5) & 0x2) + 247 247 (st->ifm.stride_kernel & 0x1) + 1; 248 - u32 ifm_height = st->ofm.height[2] * stride_y + 248 + s32 ifm_height = st->ofm.height[2] * stride_y + 249 249 st->ifm.height[2] - (st->ifm.pad_top + st->ifm.pad_bottom); 250 - u32 ifm_width = st->ofm.width * stride_x + 250 + s32 ifm_width = st->ofm.width * stride_x + 251 251 st->ifm.width - (st->ifm.pad_left + st->ifm.pad_right); 252 + 253 + if (ifm_height < 0 || ifm_width < 0) 254 + return -EINVAL; 252 255 253 256 len = feat_matrix_length(info, &st->ifm, ifm_width, 254 257 ifm_height, st->ifm.depth); ··· 420 417 return ret; 421 418 break; 422 419 case NPU_OP_ELEMENTWISE: 423 - use_ifm2 = !((st.ifm2.broadcast == 8) || (param == 5) || 420 + use_scale = ethosu_is_u65(edev) ? 421 + (st.ifm2.broadcast & 0x80) : 422 + (st.ifm2.broadcast == 8); 423 + use_ifm2 = !(use_scale || (param == 5) || 424 424 (param == 6) || (param == 7) || (param == 0x24)); 425 425 use_ifm = st.ifm.broadcast != 8; 426 426 ret = calc_sizes_elemwise(ddev, info, cmd, &st, use_ifm, use_ifm2);
+19 -9
drivers/accel/ethosu/ethosu_job.c
··· 143 143 return ret; 144 144 } 145 145 146 - static void ethosu_job_cleanup(struct kref *ref) 146 + static void ethosu_job_err_cleanup(struct ethosu_job *job) 147 147 { 148 - struct ethosu_job *job = container_of(ref, struct ethosu_job, 149 - refcount); 150 148 unsigned int i; 151 - 152 - pm_runtime_put_autosuspend(job->dev->base.dev); 153 - 154 - dma_fence_put(job->done_fence); 155 - dma_fence_put(job->inference_done_fence); 156 149 157 150 for (i = 0; i < job->region_cnt; i++) 158 151 drm_gem_object_put(job->region_bo[i]); ··· 153 160 drm_gem_object_put(job->cmd_bo); 154 161 155 162 kfree(job); 163 + } 164 + 165 + static void ethosu_job_cleanup(struct kref *ref) 166 + { 167 + struct ethosu_job *job = container_of(ref, struct ethosu_job, 168 + refcount); 169 + 170 + pm_runtime_put_autosuspend(job->dev->base.dev); 171 + 172 + dma_fence_put(job->done_fence); 173 + dma_fence_put(job->inference_done_fence); 174 + 175 + ethosu_job_err_cleanup(job); 156 176 } 157 177 158 178 static void ethosu_job_put(struct ethosu_job *job) ··· 460 454 } 461 455 } 462 456 ret = ethosu_job_push(ejob); 457 + if (!ret) { 458 + ethosu_job_put(ejob); 459 + return 0; 460 + } 463 461 464 462 out_cleanup_job: 465 463 if (ret) 466 464 drm_sched_job_cleanup(&ejob->base); 467 465 out_put_job: 468 - ethosu_job_put(ejob); 466 + ethosu_job_err_cleanup(ejob); 469 467 470 468 return ret; 471 469 }
+3 -2
drivers/acpi/acpica/acpredef.h
··· 379 379 380 380 {{"_CPC", METHOD_0ARGS, 381 381 METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Variable-length (Ints/Bufs) */ 382 - PACKAGE_INFO(ACPI_PTYPE1_VAR, ACPI_RTYPE_INTEGER | ACPI_RTYPE_BUFFER, 0, 383 - 0, 0, 0), 382 + PACKAGE_INFO(ACPI_PTYPE1_VAR, 383 + ACPI_RTYPE_INTEGER | ACPI_RTYPE_BUFFER | 384 + ACPI_RTYPE_PACKAGE, 0, 0, 0, 0), 384 385 385 386 {{"_CR3", METHOD_0ARGS, /* ACPI 6.0 */ 386 387 METHOD_RETURNS(ACPI_RTYPE_INTEGER)}},
-9
drivers/acpi/device_pm.c
··· 1457 1457 return 0; 1458 1458 1459 1459 /* 1460 - * Skip devices whose ACPI companions don't support power management and 1461 - * don't have a wakeup GPE. 1462 - */ 1463 - if (!acpi_device_power_manageable(adev) && !acpi_device_can_wakeup(adev)) { 1464 - dev_dbg(dev, "No ACPI power management or wakeup GPE\n"); 1465 - return 0; 1466 - } 1467 - 1468 - /* 1469 1460 * Only attach the power domain to the first device if the 1470 1461 * companion is shared by multiple. This is to prevent doing power 1471 1462 * management twice.
+2
drivers/ata/libata-core.c
··· 4189 4189 ATA_QUIRK_FIRMWARE_WARN }, 4190 4190 4191 4191 /* Seagate disks with LPM issues */ 4192 + { "ST1000DM010-2EP102", NULL, ATA_QUIRK_NOLPM }, 4192 4193 { "ST2000DM008-2FR102", NULL, ATA_QUIRK_NOLPM }, 4193 4194 4194 4195 /* drives which fail FPDMA_AA activation (some may freeze afterwards) ··· 4232 4231 /* Devices that do not need bridging limits applied */ 4233 4232 { "MTRON MSP-SATA*", NULL, ATA_QUIRK_BRIDGE_OK }, 4234 4233 { "BUFFALO HD-QSU2/R5", NULL, ATA_QUIRK_BRIDGE_OK }, 4234 + { "QEMU HARDDISK", "2.5+", ATA_QUIRK_BRIDGE_OK }, 4235 4235 4236 4236 /* Devices which aren't very happy with higher link speeds */ 4237 4237 { "WD My Book", NULL, ATA_QUIRK_1_5_GBPS },
+2 -1
drivers/ata/libata-eh.c
··· 647 647 break; 648 648 } 649 649 650 - if (qc == ap->deferred_qc) { 650 + if (i < ATA_MAX_QUEUE && qc == ap->deferred_qc) { 651 651 /* 652 652 * This is a deferred command that timed out while 653 653 * waiting for the command queue to drain. Since the qc ··· 659 659 */ 660 660 WARN_ON_ONCE(qc->flags & ATA_QCFLAG_ACTIVE); 661 661 ap->deferred_qc = NULL; 662 + cancel_work(&ap->deferred_qc_work); 662 663 set_host_byte(scmd, DID_TIME_OUT); 663 664 scsi_eh_finish_cmd(scmd, &ap->eh_done_q); 664 665 } else if (i < ATA_MAX_QUEUE) {
+1
drivers/ata/libata-scsi.c
··· 1699 1699 1700 1700 scmd = qc->scsicmd; 1701 1701 ap->deferred_qc = NULL; 1702 + cancel_work(&ap->deferred_qc_work); 1702 1703 ata_qc_free(qc); 1703 1704 scmd->result = (DID_SOFT_ERROR << 16); 1704 1705 scsi_done(scmd);
+1 -10
drivers/base/base.h
··· 179 179 void driver_detach(const struct device_driver *drv); 180 180 void driver_deferred_probe_del(struct device *dev); 181 181 void device_set_deferred_probe_reason(const struct device *dev, struct va_format *vaf); 182 - static inline int driver_match_device_locked(const struct device_driver *drv, 183 - struct device *dev) 184 - { 185 - device_lock_assert(dev); 186 - 187 - return drv->bus->match ? drv->bus->match(dev, drv) : 1; 188 - } 189 - 190 182 static inline int driver_match_device(const struct device_driver *drv, 191 183 struct device *dev) 192 184 { 193 - guard(device)(dev); 194 - return driver_match_device_locked(drv, dev); 185 + return drv->bus->match ? drv->bus->match(dev, drv) : 1; 195 186 } 196 187 197 188 static inline void dev_sync_state(struct device *dev)
+1 -1
drivers/base/dd.c
··· 928 928 bool async_allowed; 929 929 int ret; 930 930 931 - ret = driver_match_device_locked(drv, dev); 931 + ret = driver_match_device(drv, dev); 932 932 if (ret == 0) { 933 933 /* no match */ 934 934 return 0;
+12 -12
drivers/block/zram/zram_drv.c
··· 549 549 return ret; 550 550 } 551 551 552 - static ssize_t writeback_compressed_store(struct device *dev, 552 + static ssize_t compressed_writeback_store(struct device *dev, 553 553 struct device_attribute *attr, 554 554 const char *buf, size_t len) 555 555 { ··· 564 564 return -EBUSY; 565 565 } 566 566 567 - zram->wb_compressed = val; 567 + zram->compressed_wb = val; 568 568 569 569 return len; 570 570 } 571 571 572 - static ssize_t writeback_compressed_show(struct device *dev, 572 + static ssize_t compressed_writeback_show(struct device *dev, 573 573 struct device_attribute *attr, 574 574 char *buf) 575 575 { ··· 577 577 struct zram *zram = dev_to_zram(dev); 578 578 579 579 guard(rwsem_read)(&zram->dev_lock); 580 - val = zram->wb_compressed; 580 + val = zram->compressed_wb; 581 581 582 582 return sysfs_emit(buf, "%d\n", val); 583 583 } ··· 946 946 goto out; 947 947 } 948 948 949 - if (zram->wb_compressed) { 949 + if (zram->compressed_wb) { 950 950 /* 951 951 * ZRAM_WB slots get freed, we need to preserve data required 952 952 * for read decompression. ··· 960 960 set_slot_flag(zram, index, ZRAM_WB); 961 961 set_slot_handle(zram, index, req->blk_idx); 962 962 963 - if (zram->wb_compressed) { 963 + if (zram->compressed_wb) { 964 964 if (huge) 965 965 set_slot_flag(zram, index, ZRAM_HUGE); 966 966 set_slot_size(zram, index, size); ··· 1100 1100 */ 1101 1101 if (!test_slot_flag(zram, index, ZRAM_PP_SLOT)) 1102 1102 goto next; 1103 - if (zram->wb_compressed) 1103 + if (zram->compressed_wb) 1104 1104 err = read_from_zspool_raw(zram, req->page, index); 1105 1105 else 1106 1106 err = read_from_zspool(zram, req->page, index); ··· 1429 1429 * 1430 1430 * Keep the existing behavior for now. 1431 1431 */ 1432 - if (zram->wb_compressed == false) { 1432 + if (zram->compressed_wb == false) { 1433 1433 /* No decompression needed, complete the parent IO */ 1434 1434 bio_endio(req->parent); 1435 1435 bio_put(bio); ··· 1508 1508 flush_work(&req.work); 1509 1509 destroy_work_on_stack(&req.work); 1510 1510 1511 - if (req.error || zram->wb_compressed == false) 1511 + if (req.error || zram->compressed_wb == false) 1512 1512 return req.error; 1513 1513 1514 1514 return decompress_bdev_page(zram, page, index); ··· 3007 3007 static DEVICE_ATTR_RW(writeback_limit); 3008 3008 static DEVICE_ATTR_RW(writeback_limit_enable); 3009 3009 static DEVICE_ATTR_RW(writeback_batch_size); 3010 - static DEVICE_ATTR_RW(writeback_compressed); 3010 + static DEVICE_ATTR_RW(compressed_writeback); 3011 3011 #endif 3012 3012 #ifdef CONFIG_ZRAM_MULTI_COMP 3013 3013 static DEVICE_ATTR_RW(recomp_algorithm); ··· 3031 3031 &dev_attr_writeback_limit.attr, 3032 3032 &dev_attr_writeback_limit_enable.attr, 3033 3033 &dev_attr_writeback_batch_size.attr, 3034 - &dev_attr_writeback_compressed.attr, 3034 + &dev_attr_compressed_writeback.attr, 3035 3035 #endif 3036 3036 &dev_attr_io_stat.attr, 3037 3037 &dev_attr_mm_stat.attr, ··· 3091 3091 init_rwsem(&zram->dev_lock); 3092 3092 #ifdef CONFIG_ZRAM_WRITEBACK 3093 3093 zram->wb_batch_size = 32; 3094 - zram->wb_compressed = false; 3094 + zram->compressed_wb = false; 3095 3095 #endif 3096 3096 3097 3097 /* gendisk structure */
+1 -1
drivers/block/zram/zram_drv.h
··· 133 133 #ifdef CONFIG_ZRAM_WRITEBACK 134 134 struct file *backing_dev; 135 135 bool wb_limit_enable; 136 - bool wb_compressed; 136 + bool compressed_wb; 137 137 u32 wb_batch_size; 138 138 u64 bd_wb_limit; 139 139 struct block_device *bdev;
+3 -2
drivers/crypto/atmel-sha204a.c
··· 52 52 rng->priv = 0; 53 53 } else { 54 54 work_data = kmalloc_obj(*work_data, GFP_ATOMIC); 55 - if (!work_data) 55 + if (!work_data) { 56 + atomic_dec(&i2c_priv->tfm_count); 56 57 return -ENOMEM; 57 - 58 + } 58 59 work_data->ctx = i2c_priv; 59 60 work_data->client = i2c_priv->client; 60 61
+1 -1
drivers/crypto/ccp/sev-dev-tsm.c
··· 378 378 return; 379 379 380 380 error_exit: 381 - kfree(t); 382 381 pr_err("Failed to enable SEV-TIO: ret=%d en=%d initdone=%d SEV=%d\n", 383 382 ret, t->tio_en, t->tio_init_done, boot_cpu_has(X86_FEATURE_SEV)); 383 + kfree(t); 384 384 } 385 385 386 386 void sev_tsm_uninit(struct sev_device *sev)
+4 -6
drivers/crypto/ccp/sev-dev.c
··· 1105 1105 { 1106 1106 struct psp_device *psp_master = psp_get_master_device(); 1107 1107 struct snp_hv_fixed_pages_entry *entry; 1108 - struct sev_device *sev; 1109 1108 unsigned int order; 1110 1109 struct page *page; 1111 1110 1112 - if (!psp_master || !psp_master->sev_data) 1111 + if (!psp_master) 1113 1112 return NULL; 1114 - 1115 - sev = psp_master->sev_data; 1116 1113 1117 1114 order = get_order(PMD_SIZE * num_2mb_pages); 1118 1115 ··· 1123 1126 * This API uses SNP_INIT_EX to transition allocated pages to HV_Fixed 1124 1127 * page state, fail if SNP is already initialized. 1125 1128 */ 1126 - if (sev->snp_initialized) 1129 + if (psp_master->sev_data && 1130 + ((struct sev_device *)psp_master->sev_data)->snp_initialized) 1127 1131 return NULL; 1128 1132 1129 1133 /* Re-use freed pages that match the request */ ··· 1160 1162 struct psp_device *psp_master = psp_get_master_device(); 1161 1163 struct snp_hv_fixed_pages_entry *entry, *nentry; 1162 1164 1163 - if (!psp_master || !psp_master->sev_data) 1165 + if (!psp_master) 1164 1166 return; 1165 1167 1166 1168 /*
+1 -1
drivers/firmware/efi/mokvar-table.c
··· 85 85 * as an alternative to ordinary EFI variables, due to platform-dependent 86 86 * limitations. The memory occupied by this table is marked as reserved. 87 87 * 88 - * This routine must be called before efi_free_boot_services() in order 88 + * This routine must be called before efi_unmap_boot_services() in order 89 89 * to guarantee that it can mark the table as reserved. 90 90 * 91 91 * Implicit inputs:
+5 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
··· 1439 1439 *process_info = info; 1440 1440 } 1441 1441 1442 - vm->process_info = *process_info; 1442 + if (cmpxchg(&vm->process_info, NULL, *process_info) != NULL) { 1443 + ret = -EINVAL; 1444 + goto already_acquired; 1445 + } 1443 1446 1444 1447 /* Validate page directory and attach eviction fence */ 1445 1448 ret = amdgpu_bo_reserve(vm->root.bo, true); ··· 1482 1479 amdgpu_bo_unreserve(vm->root.bo); 1483 1480 reserve_pd_fail: 1484 1481 vm->process_info = NULL; 1482 + already_acquired: 1485 1483 if (info) { 1486 1484 dma_fence_put(&info->eviction_fence->base); 1487 1485 *process_info = NULL;
+81 -35
drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c
··· 446 446 return ret; 447 447 } 448 448 449 - static void amdgpu_userq_cleanup(struct amdgpu_usermode_queue *queue, 450 - int queue_id) 449 + static void amdgpu_userq_cleanup(struct amdgpu_usermode_queue *queue) 451 450 { 452 451 struct amdgpu_userq_mgr *uq_mgr = queue->userq_mgr; 453 452 struct amdgpu_device *adev = uq_mgr->adev; ··· 460 461 uq_funcs->mqd_destroy(queue); 461 462 amdgpu_userq_fence_driver_free(queue); 462 463 /* Use interrupt-safe locking since IRQ handlers may access these XArrays */ 463 - xa_erase_irq(&uq_mgr->userq_xa, (unsigned long)queue_id); 464 464 xa_erase_irq(&adev->userq_doorbell_xa, queue->doorbell_index); 465 465 queue->userq_mgr = NULL; 466 466 list_del(&queue->userq_va_list); 467 467 kfree(queue); 468 468 469 469 up_read(&adev->reset_domain->sem); 470 - } 471 - 472 - static struct amdgpu_usermode_queue * 473 - amdgpu_userq_find(struct amdgpu_userq_mgr *uq_mgr, int qid) 474 - { 475 - return xa_load(&uq_mgr->userq_xa, qid); 476 470 } 477 471 478 472 void ··· 617 625 } 618 626 619 627 static int 620 - amdgpu_userq_destroy(struct drm_file *filp, int queue_id) 628 + amdgpu_userq_destroy(struct amdgpu_userq_mgr *uq_mgr, struct amdgpu_usermode_queue *queue) 621 629 { 622 - struct amdgpu_fpriv *fpriv = filp->driver_priv; 623 - struct amdgpu_userq_mgr *uq_mgr = &fpriv->userq_mgr; 624 630 struct amdgpu_device *adev = uq_mgr->adev; 625 - struct amdgpu_usermode_queue *queue; 626 631 int r = 0; 627 632 628 633 cancel_delayed_work_sync(&uq_mgr->resume_work); 629 634 mutex_lock(&uq_mgr->userq_mutex); 630 - queue = amdgpu_userq_find(uq_mgr, queue_id); 631 - if (!queue) { 632 - drm_dbg_driver(adev_to_drm(uq_mgr->adev), "Invalid queue id to destroy\n"); 633 - mutex_unlock(&uq_mgr->userq_mutex); 634 - return -EINVAL; 635 - } 636 635 amdgpu_userq_wait_for_last_fence(queue); 637 636 /* Cancel any pending hang detection work and cleanup */ 638 637 if (queue->hang_detect_fence) { ··· 655 672 drm_warn(adev_to_drm(uq_mgr->adev), "trying to destroy a HW mapping userq\n"); 656 673 queue->state = AMDGPU_USERQ_STATE_HUNG; 657 674 } 658 - amdgpu_userq_cleanup(queue, queue_id); 675 + amdgpu_userq_cleanup(queue); 659 676 mutex_unlock(&uq_mgr->userq_mutex); 660 677 661 678 pm_runtime_put_autosuspend(adev_to_drm(adev)->dev); 662 679 663 680 return r; 681 + } 682 + 683 + static void amdgpu_userq_kref_destroy(struct kref *kref) 684 + { 685 + int r; 686 + struct amdgpu_usermode_queue *queue = 687 + container_of(kref, struct amdgpu_usermode_queue, refcount); 688 + struct amdgpu_userq_mgr *uq_mgr = queue->userq_mgr; 689 + 690 + r = amdgpu_userq_destroy(uq_mgr, queue); 691 + if (r) 692 + drm_file_err(uq_mgr->file, "Failed to destroy usermode queue %d\n", r); 693 + } 694 + 695 + struct amdgpu_usermode_queue *amdgpu_userq_get(struct amdgpu_userq_mgr *uq_mgr, u32 qid) 696 + { 697 + struct amdgpu_usermode_queue *queue; 698 + 699 + xa_lock(&uq_mgr->userq_xa); 700 + queue = xa_load(&uq_mgr->userq_xa, qid); 701 + if (queue) 702 + kref_get(&queue->refcount); 703 + xa_unlock(&uq_mgr->userq_xa); 704 + 705 + return queue; 706 + } 707 + 708 + void amdgpu_userq_put(struct amdgpu_usermode_queue *queue) 709 + { 710 + if (queue) 711 + kref_put(&queue->refcount, amdgpu_userq_kref_destroy); 664 712 } 665 713 666 714 static int amdgpu_userq_priority_permit(struct drm_file *filp, ··· 848 834 goto unlock; 849 835 } 850 836 837 + /* drop this refcount during queue destroy */ 838 + kref_init(&queue->refcount); 839 + 851 840 /* Wait for mode-1 reset to complete */ 852 841 down_read(&adev->reset_domain->sem); 853 842 r = xa_err(xa_store_irq(&adev->userq_doorbell_xa, index, queue, GFP_KERNEL)); ··· 1002 985 struct drm_file *filp) 1003 986 { 1004 987 union drm_amdgpu_userq *args = data; 1005 - int r; 988 + struct amdgpu_fpriv *fpriv = filp->driver_priv; 989 + struct amdgpu_usermode_queue *queue; 990 + int r = 0; 1006 991 1007 992 if (!amdgpu_userq_enabled(dev)) 1008 993 return -ENOTSUPP; ··· 1019 1000 drm_file_err(filp, "Failed to create usermode queue\n"); 1020 1001 break; 1021 1002 1022 - case AMDGPU_USERQ_OP_FREE: 1023 - r = amdgpu_userq_destroy(filp, args->in.queue_id); 1024 - if (r) 1025 - drm_file_err(filp, "Failed to destroy usermode queue\n"); 1003 + case AMDGPU_USERQ_OP_FREE: { 1004 + xa_lock(&fpriv->userq_mgr.userq_xa); 1005 + queue = __xa_erase(&fpriv->userq_mgr.userq_xa, args->in.queue_id); 1006 + xa_unlock(&fpriv->userq_mgr.userq_xa); 1007 + if (!queue) 1008 + return -ENOENT; 1009 + 1010 + amdgpu_userq_put(queue); 1026 1011 break; 1012 + } 1027 1013 1028 1014 default: 1029 1015 drm_dbg_driver(dev, "Invalid user queue op specified: %d\n", args->in.op); ··· 1047 1023 1048 1024 /* Resume all the queues for this process */ 1049 1025 xa_for_each(&uq_mgr->userq_xa, queue_id, queue) { 1026 + queue = amdgpu_userq_get(uq_mgr, queue_id); 1027 + if (!queue) 1028 + continue; 1029 + 1050 1030 if (!amdgpu_userq_buffer_vas_mapped(queue)) { 1051 1031 drm_file_err(uq_mgr->file, 1052 1032 "trying restore queue without va mapping\n"); 1053 1033 queue->state = AMDGPU_USERQ_STATE_INVALID_VA; 1034 + amdgpu_userq_put(queue); 1054 1035 continue; 1055 1036 } 1056 1037 1057 1038 r = amdgpu_userq_restore_helper(queue); 1058 1039 if (r) 1059 1040 ret = r; 1041 + 1042 + amdgpu_userq_put(queue); 1060 1043 } 1061 1044 1062 1045 if (ret) ··· 1297 1266 amdgpu_userq_detect_and_reset_queues(uq_mgr); 1298 1267 /* Try to unmap all the queues in this process ctx */ 1299 1268 xa_for_each(&uq_mgr->userq_xa, queue_id, queue) { 1269 + queue = amdgpu_userq_get(uq_mgr, queue_id); 1270 + if (!queue) 1271 + continue; 1300 1272 r = amdgpu_userq_preempt_helper(queue); 1301 1273 if (r) 1302 1274 ret = r; 1275 + amdgpu_userq_put(queue); 1303 1276 } 1304 1277 1305 1278 if (ret) ··· 1336 1301 int ret; 1337 1302 1338 1303 xa_for_each(&uq_mgr->userq_xa, queue_id, queue) { 1304 + queue = amdgpu_userq_get(uq_mgr, queue_id); 1305 + if (!queue) 1306 + continue; 1307 + 1339 1308 struct dma_fence *f = queue->last_fence; 1340 1309 1341 - if (!f ||
dma_fence_is_signaled(f)) 1310 + if (!f || dma_fence_is_signaled(f)) { 1311 + amdgpu_userq_put(queue); 1342 1312 continue; 1313 + } 1343 1314 ret = dma_fence_wait_timeout(f, true, msecs_to_jiffies(100)); 1344 1315 if (ret <= 0) { 1345 1316 drm_file_err(uq_mgr->file, "Timed out waiting for fence=%llu:%llu\n", 1346 1317 f->context, f->seqno); 1318 + amdgpu_userq_put(queue); 1347 1319 return -ETIMEDOUT; 1348 1320 } 1321 + amdgpu_userq_put(queue); 1349 1322 } 1350 1323 1351 1324 return 0; ··· 1404 1361 void amdgpu_userq_mgr_fini(struct amdgpu_userq_mgr *userq_mgr) 1405 1362 { 1406 1363 struct amdgpu_usermode_queue *queue; 1407 - unsigned long queue_id; 1364 + unsigned long queue_id = 0; 1408 1365 1409 - cancel_delayed_work_sync(&userq_mgr->resume_work); 1366 + for (;;) { 1367 + xa_lock(&userq_mgr->userq_xa); 1368 + queue = xa_find(&userq_mgr->userq_xa, &queue_id, ULONG_MAX, 1369 + XA_PRESENT); 1370 + if (queue) 1371 + __xa_erase(&userq_mgr->userq_xa, queue_id); 1372 + xa_unlock(&userq_mgr->userq_xa); 1410 1373 1411 - mutex_lock(&userq_mgr->userq_mutex); 1412 - amdgpu_userq_detect_and_reset_queues(userq_mgr); 1413 - xa_for_each(&userq_mgr->userq_xa, queue_id, queue) { 1414 - amdgpu_userq_wait_for_last_fence(queue); 1415 - amdgpu_userq_unmap_helper(queue); 1416 - amdgpu_userq_cleanup(queue, queue_id); 1374 + if (!queue) 1375 + break; 1376 + 1377 + amdgpu_userq_put(queue); 1417 1378 } 1418 1379 1419 1380 xa_destroy(&userq_mgr->userq_xa); 1420 - mutex_unlock(&userq_mgr->userq_mutex); 1421 1381 mutex_destroy(&userq_mgr->userq_mutex); 1422 1382 } 1423 1383
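The amdgpu_userq diff above replaces direct `xa_load()` lookups with `amdgpu_userq_get()`/`amdgpu_userq_put()` so a queue cannot be destroyed while another path still holds it; the last put runs the destroy callback. A minimal userspace sketch of that pattern, with hypothetical names and plain C11 atomics standing in for `struct kref`:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct queue {
    atomic_int refcount;
    int destroyed;          /* set by the release callback */
};

static void queue_release(struct queue *q)
{
    q->destroyed = 1;       /* the kernel code unmaps and kfree()s here */
}

/* Lookup side: only hand out a pointer after taking a reference. */
static struct queue *queue_get(struct queue *q)
{
    if (!q)
        return NULL;
    atomic_fetch_add(&q->refcount, 1);
    return q;
}

/* Drop a reference; the final put runs the release callback exactly once. */
static void queue_put(struct queue *q)
{
    if (q && atomic_fetch_sub(&q->refcount, 1) == 1)
        queue_release(q);
}
```

This mirrors the ioctl flow in the diff: `AMDGPU_USERQ_OP_FREE` first erases the XArray entry (so new lookups fail), then drops the initial reference taken by `kref_init()`, and teardown happens only after every concurrent user has called put.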
+4
drivers/gpu/drm/amd/amdgpu/amdgpu_userq.h
··· 74 74 struct dentry *debugfs_queue; 75 75 struct delayed_work hang_detect_work; 76 76 struct dma_fence *hang_detect_fence; 77 + struct kref refcount; 77 78 78 79 struct list_head userq_va_list; 79 80 }; ··· 112 111 uint32_t doorbell_offset; 113 112 struct amdgpu_userq_obj *db_obj; 114 113 }; 114 + 115 + struct amdgpu_usermode_queue *amdgpu_userq_get(struct amdgpu_userq_mgr *uq_mgr, u32 qid); 116 + void amdgpu_userq_put(struct amdgpu_usermode_queue *queue); 115 117 116 118 int amdgpu_userq_ioctl(struct drm_device *dev, void *data, struct drm_file *filp); 117 119
+15 -27
drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
··· 466 466 struct drm_amdgpu_userq_signal *args = data; 467 467 struct drm_gem_object **gobj_write = NULL; 468 468 struct drm_gem_object **gobj_read = NULL; 469 - struct amdgpu_usermode_queue *queue; 469 + struct amdgpu_usermode_queue *queue = NULL; 470 470 struct amdgpu_userq_fence *userq_fence; 471 471 struct drm_syncobj **syncobj = NULL; 472 472 u32 *bo_handles_write, num_write_bo_handles; ··· 553 553 } 554 554 555 555 /* Retrieve the user queue */ 556 - queue = xa_load(&userq_mgr->userq_xa, args->queue_id); 556 + queue = amdgpu_userq_get(userq_mgr, args->queue_id); 557 557 if (!queue) { 558 558 r = -ENOENT; 559 559 goto put_gobj_write; ··· 648 648 free_syncobj_handles: 649 649 kfree(syncobj_handles); 650 650 651 + if (queue) 652 + amdgpu_userq_put(queue); 653 + 651 654 return r; 652 655 } 653 656 ··· 663 660 struct drm_amdgpu_userq_wait *wait_info = data; 664 661 struct amdgpu_fpriv *fpriv = filp->driver_priv; 665 662 struct amdgpu_userq_mgr *userq_mgr = &fpriv->userq_mgr; 666 - struct amdgpu_usermode_queue *waitq; 663 + struct amdgpu_usermode_queue *waitq = NULL; 667 664 struct drm_gem_object **gobj_write; 668 665 struct drm_gem_object **gobj_read; 669 666 struct dma_fence **fences = NULL; ··· 929 926 */ 930 927 num_fences = dma_fence_dedup_array(fences, num_fences); 931 928 932 - waitq = xa_load(&userq_mgr->userq_xa, wait_info->waitq_id); 929 + waitq = amdgpu_userq_get(userq_mgr, wait_info->waitq_id); 933 930 if (!waitq) { 934 931 r = -EINVAL; 935 932 goto free_fences; ··· 986 983 r = -EFAULT; 987 984 goto free_fences; 988 985 } 989 - 990 - kfree(fences); 991 - kfree(fence_info); 992 986 } 993 987 994 - drm_exec_fini(&exec); 995 - for (i = 0; i < num_read_bo_handles; i++) 996 - drm_gem_object_put(gobj_read[i]); 997 - kfree(gobj_read); 998 - 999 - for (i = 0; i < num_write_bo_handles; i++) 1000 - drm_gem_object_put(gobj_write[i]); 1001 - kfree(gobj_write); 1002 - 1003 - kfree(timeline_points); 1004 - kfree(timeline_handles); 1005 - kfree(syncobj_handles); 
1006 - kfree(bo_handles_write); 1007 - kfree(bo_handles_read); 1008 - 1009 - return 0; 1010 - 1011 988 free_fences: 1012 - while (num_fences-- > 0) 1013 - dma_fence_put(fences[num_fences]); 1014 - kfree(fences); 989 + if (fences) { 990 + while (num_fences-- > 0) 991 + dma_fence_put(fences[num_fences]); 992 + kfree(fences); 993 + } 1015 994 free_fence_info: 1016 995 kfree(fence_info); 1017 996 exec_fini: ··· 1016 1031 kfree(bo_handles_write); 1017 1032 free_bo_handles_read: 1018 1033 kfree(bo_handles_read); 1034 + 1035 + if (waitq) 1036 + amdgpu_userq_put(waitq); 1019 1037 1020 1038 return r; 1021 1039 }
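The amdgpu_userq_fence hunk above deletes a duplicated success-path teardown in the wait ioctl and lets both success and failure fall through one label ladder, so each resource is released in exactly one place. A self-contained sketch of that single-exit style (function and counter are hypothetical, used only to make the cleanup observable):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Allocate two buffers, do some work, and release everything through a
 * shared label ladder on both the success and the error path. */
static int do_work(int fail_second_alloc, int *frees)
{
    int ret = 0;
    char *a, *b;

    a = malloc(32);
    if (!a)
        return -1;

    b = fail_second_alloc ? NULL : malloc(32);
    if (!b) {
        ret = -1;
        goto free_a;
    }
    memset(b, 0, 32);       /* the real work would go here */

    /* Success falls through the same ladder the error paths use. */
    free(b);
    (*frees)++;
free_a:
    free(a);
    (*frees)++;
    return ret;
}
```

Keeping one exit path is what lets the diff drop the second copy of the `kfree()`/`drm_gem_object_put()` sequence without changing what gets freed.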
+10 -10
drivers/gpu/drm/amd/amdgpu/psp_v15_0.c
··· 69 69 0x80000000, 0x80000000, false); 70 70 } else { 71 71 /* Write the ring destroy command*/ 72 - WREG32_SOC15(MP0, 0, regMPASP_SMN_C2PMSG_64, 72 + WREG32_SOC15(MP0, 0, regMPASP_PCRU1_MPASP_C2PMSG_64, 73 73 GFX_CTRL_CMD_ID_DESTROY_RINGS); 74 74 /* there might be handshake issue with hardware which needs delay */ 75 75 mdelay(20); 76 76 /* Wait for response flag (bit 31) */ 77 - ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64), 77 + ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_PCRU1_MPASP_C2PMSG_64), 78 78 0x80000000, 0x80000000, false); 79 79 } 80 80 ··· 116 116 117 117 } else { 118 118 /* Wait for sOS ready for ring creation */ 119 - ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64), 119 + ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_PCRU1_MPASP_C2PMSG_64), 120 120 0x80000000, 0x80000000, false); 121 121 if (ret) { 122 122 DRM_ERROR("Failed to wait for trust OS ready for ring creation\n"); ··· 125 125 126 126 /* Write low address of the ring to C2PMSG_69 */ 127 127 psp_ring_reg = lower_32_bits(ring->ring_mem_mc_addr); 128 - WREG32_SOC15(MP0, 0, regMPASP_SMN_C2PMSG_69, psp_ring_reg); 128 + WREG32_SOC15(MP0, 0, regMPASP_PCRU1_MPASP_C2PMSG_69, psp_ring_reg); 129 129 /* Write high address of the ring to C2PMSG_70 */ 130 130 psp_ring_reg = upper_32_bits(ring->ring_mem_mc_addr); 131 - WREG32_SOC15(MP0, 0, regMPASP_SMN_C2PMSG_70, psp_ring_reg); 131 + WREG32_SOC15(MP0, 0, regMPASP_PCRU1_MPASP_C2PMSG_70, psp_ring_reg); 132 132 /* Write size of ring to C2PMSG_71 */ 133 133 psp_ring_reg = ring->ring_size; 134 - WREG32_SOC15(MP0, 0, regMPASP_SMN_C2PMSG_71, psp_ring_reg); 134 + WREG32_SOC15(MP0, 0, regMPASP_PCRU1_MPASP_C2PMSG_71, psp_ring_reg); 135 135 /* Write the ring initialization command to C2PMSG_64 */ 136 136 psp_ring_reg = ring_type; 137 137 psp_ring_reg = psp_ring_reg << 16; 138 - WREG32_SOC15(MP0, 0, regMPASP_SMN_C2PMSG_64, psp_ring_reg); 138 + WREG32_SOC15(MP0, 0, 
regMPASP_PCRU1_MPASP_C2PMSG_64, psp_ring_reg); 139 139 140 140 /* there might be handshake issue with hardware which needs delay */ 141 141 mdelay(20); 142 142 143 143 /* Wait for response flag (bit 31) in C2PMSG_64 */ 144 - ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64), 144 + ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_PCRU1_MPASP_C2PMSG_64), 145 145 0x80000000, 0x8000FFFF, false); 146 146 } 147 147 ··· 174 174 if (amdgpu_sriov_vf(adev)) 175 175 data = RREG32_SOC15(MP0, 0, regMPASP_SMN_C2PMSG_102); 176 176 else 177 - data = RREG32_SOC15(MP0, 0, regMPASP_SMN_C2PMSG_67); 177 + data = RREG32_SOC15(MP0, 0, regMPASP_PCRU1_MPASP_C2PMSG_67); 178 178 179 179 return data; 180 180 } ··· 188 188 WREG32_SOC15(MP0, 0, regMPASP_SMN_C2PMSG_101, 189 189 GFX_CTRL_CMD_ID_CONSUME_CMD); 190 190 } else 191 - WREG32_SOC15(MP0, 0, regMPASP_SMN_C2PMSG_67, value); 191 + WREG32_SOC15(MP0, 0, regMPASP_PCRU1_MPASP_C2PMSG_67, value); 192 192 } 193 193 194 194 static const struct psp_funcs psp_v15_0_0_funcs = {
+3 -1
drivers/gpu/drm/amd/amdgpu/soc21.c
··· 858 858 AMD_CG_SUPPORT_IH_CG | 859 859 AMD_CG_SUPPORT_BIF_MGCG | 860 860 AMD_CG_SUPPORT_BIF_LS; 861 - adev->pg_flags = AMD_PG_SUPPORT_VCN | 861 + adev->pg_flags = AMD_PG_SUPPORT_VCN_DPG | 862 + AMD_PG_SUPPORT_VCN | 863 + AMD_PG_SUPPORT_JPEG_DPG | 862 864 AMD_PG_SUPPORT_JPEG | 863 865 AMD_PG_SUPPORT_GFX_PG; 864 866 adev->external_rev_id = adev->rev_id + 0x1;
+4 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_color.c
··· 1706 1706 struct dc_transfer_func *tf = &dc_plane_state->in_shaper_func; 1707 1707 struct drm_atomic_state *state = plane_state->state; 1708 1708 const struct amdgpu_device *adev = drm_to_adev(colorop->dev); 1709 + bool has_3dlut = adev->dm.dc->caps.color.dpp.hw_3d_lut || adev->dm.dc->caps.color.mpc.preblend; 1709 1710 const struct drm_device *dev = colorop->dev; 1710 1711 const struct drm_color_lut32 *lut3d; 1711 1712 uint32_t lut3d_size; ··· 1723 1722 } 1724 1723 1725 1724 if (colorop_state && !colorop_state->bypass && colorop->type == DRM_COLOROP_3D_LUT) { 1726 - if (!adev->dm.dc->caps.color.dpp.hw_3d_lut) { 1725 + if (!has_3dlut) { 1727 1726 drm_dbg(dev, "3D LUT is not supported by hardware\n"); 1728 1727 return -EINVAL; 1729 1728 } ··· 1876 1875 struct drm_colorop *colorop = plane_state->color_pipeline; 1877 1876 struct drm_device *dev = plane_state->plane->dev; 1878 1877 struct amdgpu_device *adev = drm_to_adev(dev); 1878 + bool has_3dlut = adev->dm.dc->caps.color.dpp.hw_3d_lut || adev->dm.dc->caps.color.mpc.preblend; 1879 1879 int ret; 1880 1880 1881 1881 /* 1D Curve - DEGAM TF */ ··· 1909 1907 if (ret) 1910 1908 return ret; 1911 1909 1912 - if (adev->dm.dc->caps.color.dpp.hw_3d_lut) { 1910 + if (has_3dlut) { 1913 1911 /* 1D Curve & LUT - SHAPER TF & LUT */ 1914 1912 colorop = colorop->next; 1915 1913 if (!colorop) {
+2 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_colorop.c
··· 60 60 struct drm_colorop *ops[MAX_COLOR_PIPELINE_OPS]; 61 61 struct drm_device *dev = plane->dev; 62 62 struct amdgpu_device *adev = drm_to_adev(dev); 63 + bool has_3dlut = adev->dm.dc->caps.color.dpp.hw_3d_lut || adev->dm.dc->caps.color.mpc.preblend; 63 64 int ret; 64 65 int i = 0; 65 66 ··· 113 112 114 113 i++; 115 114 116 - if (adev->dm.dc->caps.color.dpp.hw_3d_lut) { 115 + if (has_3dlut) { 117 116 /* 1D curve - SHAPER TF */ 118 117 ops[i] = kzalloc_obj(*ops[0]); 119 118 if (!ops[i]) {
+8 -8
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
··· 765 765 dm->adev->mode_info.crtcs[crtc_index] = acrtc; 766 766 767 767 /* Don't enable DRM CRTC degamma property for 768 - * 1. Degamma is replaced by color pipeline. 769 - * 2. DCE since it doesn't support programmable degamma anywhere. 770 - * 3. DCN401 since pre-blending degamma LUT doesn't apply to cursor. 768 + * 1. DCE since it doesn't support programmable degamma anywhere. 769 + * 2. DCN401 since pre-blending degamma LUT doesn't apply to cursor. 770 + * Note: DEGAMMA properties are created even if the primary plane has the 771 + * COLOR_PIPELINE property. User space can use either the DEGAMMA properties 772 + * or the COLOR_PIPELINE property. An atomic commit which attempts to enable 773 + * both is rejected. 771 774 */ 772 - if (plane->color_pipeline_property) 773 - has_degamma = false; 774 - else 775 - has_degamma = dm->adev->dm.dc->caps.color.dpp.dcn_arch && 776 - dm->adev->dm.dc->ctx->dce_version != DCN_VERSION_4_01; 775 + has_degamma = dm->adev->dm.dc->caps.color.dpp.dcn_arch && 776 + dm->adev->dm.dc->ctx->dce_version != DCN_VERSION_4_01; 777 777 778 778 drm_crtc_enable_color_mgmt(&acrtc->base, has_degamma ? MAX_COLOR_LUT_ENTRIES : 0, 779 779 true, MAX_COLOR_LUT_ENTRIES);
+8
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
··· 1256 1256 if (ret) 1257 1257 return ret; 1258 1258 1259 + /* Reject commits that attempt to use both COLOR_PIPELINE and CRTC DEGAMMA_LUT */ 1260 + if (new_plane_state->color_pipeline && new_crtc_state->degamma_lut) { 1261 + drm_dbg_atomic(plane->dev, 1262 + "[PLANE:%d:%s] COLOR_PIPELINE and CRTC DEGAMMA_LUT cannot be enabled simultaneously\n", 1263 + plane->base.id, plane->name); 1264 + return -EINVAL; 1265 + } 1266 + 1259 1267 ret = amdgpu_dm_plane_fill_dc_scaling_info(adev, new_plane_state, &scaling_info); 1260 1268 if (ret) 1261 1269 return ret;
+5 -1
drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
··· 72 72 * audio corruption. Read current DISPCLK from DENTIST and request the same 73 73 * freq to ensure that the timing is valid and unchanged. 74 74 */ 75 - clocks->dispclk_khz = dc->clk_mgr->funcs->get_dispclk_from_dentist(dc->clk_mgr); 75 + if (dc->clk_mgr->funcs->get_dispclk_from_dentist) { 76 + clocks->dispclk_khz = dc->clk_mgr->funcs->get_dispclk_from_dentist(dc->clk_mgr); 77 + } else { 78 + clocks->dispclk_khz = dc->clk_mgr->boot_snapshot.dispclk * 1000; 79 + } 76 80 } 77 81 clocks->ref_dtbclk_khz = dc->clk_mgr->bw_params->clk_table.entries[0].dtbclk_mhz * 1000; 78 82 clocks->fclk_p_state_change_support = true;
+18
drivers/gpu/drm/amd/include/asic_reg/mp/mp_15_0_0_offset.h
··· 82 82 #define regMPASP_SMN_IH_SW_INT_CTRL 0x0142 83 83 #define regMPASP_SMN_IH_SW_INT_CTRL_BASE_IDX 0 84 84 85 + // addressBlock: mp_SmuMpASPPub_PcruDec 86 + // base address: 0x3800000 87 + #define regMPASP_PCRU1_MPASP_C2PMSG_64 0x4280 88 + #define regMPASP_PCRU1_MPASP_C2PMSG_64_BASE_IDX 3 89 + #define regMPASP_PCRU1_MPASP_C2PMSG_65 0x4281 90 + #define regMPASP_PCRU1_MPASP_C2PMSG_65_BASE_IDX 3 91 + #define regMPASP_PCRU1_MPASP_C2PMSG_66 0x4282 92 + #define regMPASP_PCRU1_MPASP_C2PMSG_66_BASE_IDX 3 93 + #define regMPASP_PCRU1_MPASP_C2PMSG_67 0x4283 94 + #define regMPASP_PCRU1_MPASP_C2PMSG_67_BASE_IDX 3 95 + #define regMPASP_PCRU1_MPASP_C2PMSG_68 0x4284 96 + #define regMPASP_PCRU1_MPASP_C2PMSG_68_BASE_IDX 3 97 + #define regMPASP_PCRU1_MPASP_C2PMSG_69 0x4285 98 + #define regMPASP_PCRU1_MPASP_C2PMSG_69_BASE_IDX 3 99 + #define regMPASP_PCRU1_MPASP_C2PMSG_70 0x4286 100 + #define regMPASP_PCRU1_MPASP_C2PMSG_70_BASE_IDX 3 101 + #define regMPASP_PCRU1_MPASP_C2PMSG_71 0x4287 102 + #define regMPASP_PCRU1_MPASP_C2PMSG_71_BASE_IDX 3 85 103 86 104 // addressBlock: mp_SmuMp1_SmnDec 87 105 // base address: 0x0
+7 -1
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
··· 2034 2034 smu, SMU_DRIVER_TABLE_GPU_METRICS); 2035 2035 SmuMetricsExternal_t metrics_ext; 2036 2036 SmuMetrics_t *metrics = &metrics_ext.SmuMetrics; 2037 + uint32_t mp1_ver = amdgpu_ip_version(smu->adev, MP1_HWIP, 0); 2037 2038 int ret = 0; 2038 2039 2039 2040 ret = smu_cmn_get_metrics_table(smu, ··· 2059 2058 metrics->Vcn1ActivityPercentage); 2060 2059 2061 2060 gpu_metrics->average_socket_power = metrics->AverageSocketPower; 2062 - gpu_metrics->energy_accumulator = metrics->EnergyAccumulator; 2061 + 2062 + if ((mp1_ver == IP_VERSION(13, 0, 0) && smu->smc_fw_version <= 0x004e1e00) || 2063 + (mp1_ver == IP_VERSION(13, 0, 10) && smu->smc_fw_version <= 0x00500800)) 2064 + gpu_metrics->energy_accumulator = metrics->EnergyAccumulator; 2065 + else 2066 + gpu_metrics->energy_accumulator = UINT_MAX; 2063 2067 2064 2068 if (metrics->AverageGfxActivity <= SMU_13_0_0_BUSY_THRESHOLD) 2065 2069 gpu_metrics->average_gfxclk_frequency = metrics->AverageGfxclkFrequencyPostDs;
+2 -1
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
··· 2065 2065 metrics->Vcn1ActivityPercentage); 2066 2066 2067 2067 gpu_metrics->average_socket_power = metrics->AverageSocketPower; 2068 - gpu_metrics->energy_accumulator = metrics->EnergyAccumulator; 2068 + gpu_metrics->energy_accumulator = smu->smc_fw_version <= 0x00521400 ? 2069 + metrics->EnergyAccumulator : UINT_MAX; 2069 2070 2070 2071 if (metrics->AverageGfxActivity <= SMU_13_0_7_BUSY_THRESHOLD) 2071 2072 gpu_metrics->average_gfxclk_frequency = metrics->AverageGfxclkFrequencyPostDs;
+2 -12
drivers/gpu/drm/drm_pagemap.c
··· 480 480 .start = start, 481 481 .end = end, 482 482 .pgmap_owner = pagemap->owner, 483 - /* 484 - * FIXME: MIGRATE_VMA_SELECT_DEVICE_PRIVATE intermittently 485 - * causes 'xe_exec_system_allocator --r *race*no*' to trigger aa 486 - * engine reset and a hard hang due to getting stuck on a folio 487 - * lock. This should work and needs to be root-caused. The only 488 - * downside of not selecting MIGRATE_VMA_SELECT_DEVICE_PRIVATE 489 - * is that device-to-device migrations won’t work; instead, 490 - * memory will bounce through system memory. This path should be 491 - * rare and only occur when the madvise attributes of memory are 492 - * changed or atomics are being used. 493 - */ 494 - .flags = MIGRATE_VMA_SELECT_SYSTEM | MIGRATE_VMA_SELECT_DEVICE_COHERENT, 483 + .flags = MIGRATE_VMA_SELECT_SYSTEM | MIGRATE_VMA_SELECT_DEVICE_COHERENT | 484 + MIGRATE_VMA_SELECT_DEVICE_PRIVATE, 495 485 }; 496 486 unsigned long i, npages = npages_in_range(start, end); 497 487 unsigned long own_pages = 0, migrated_pages = 0;
+8 -3
drivers/gpu/drm/i915/display/intel_psr.c
··· 1307 1307 u16 sink_y_granularity = crtc_state->has_panel_replay ? 1308 1308 connector->dp.panel_replay_caps.su_y_granularity : 1309 1309 connector->dp.psr_caps.su_y_granularity; 1310 - u16 sink_w_granularity = crtc_state->has_panel_replay ? 1311 - connector->dp.panel_replay_caps.su_w_granularity : 1312 - connector->dp.psr_caps.su_w_granularity; 1310 + u16 sink_w_granularity; 1311 + 1312 + if (crtc_state->has_panel_replay) 1313 + sink_w_granularity = connector->dp.panel_replay_caps.su_w_granularity == 1314 + DP_PANEL_REPLAY_FULL_LINE_GRANULARITY ? 1315 + crtc_hdisplay : connector->dp.panel_replay_caps.su_w_granularity; 1316 + else 1317 + sink_w_granularity = connector->dp.psr_caps.su_w_granularity; 1313 1318 1314 1319 /* PSR2 HW only send full lines so we only need to validate the width */ 1315 1320 if (crtc_hdisplay % sink_w_granularity)
+3
drivers/gpu/drm/nouveau/nouveau_connector.c
··· 1230 1230 u8 size = msg->size; 1231 1231 int ret; 1232 1232 1233 + if (pm_runtime_suspended(nv_connector->base.dev->dev)) 1234 + return -EBUSY; 1235 + 1233 1236 nv_encoder = find_encoder(&nv_connector->base, DCB_OUTPUT_DP); 1234 1237 if (!nv_encoder) 1235 1238 return -ENODEV;
+5 -4
drivers/gpu/drm/panthor/panthor_sched.c
··· 893 893 894 894 out_sync: 895 895 /* Make sure the CPU caches are invalidated before the seqno is read. 896 - * drm_gem_shmem_sync() is a NOP if map_wc=true, so no need to check 896 + * panthor_gem_sync() is a NOP if map_wc=true, so no need to check 897 897 * it here. 898 898 */ 899 - panthor_gem_sync(&bo->base.base, queue->syncwait.offset, 899 + panthor_gem_sync(&bo->base.base, 900 + DRM_PANTHOR_BO_SYNC_CPU_CACHE_FLUSH_AND_INVALIDATE, 901 + queue->syncwait.offset, 900 902 queue->syncwait.sync64 ? 901 903 sizeof(struct panthor_syncobj_64b) : 902 - sizeof(struct panthor_syncobj_32b), 903 - DRM_PANTHOR_BO_SYNC_CPU_CACHE_FLUSH_AND_INVALIDATE); 904 + sizeof(struct panthor_syncobj_32b)); 904 905 905 906 return queue->syncwait.kmap + queue->syncwait.offset; 906 907
+15 -1
drivers/gpu/drm/renesas/rz-du/rzg2l_mipi_dsi.c
··· 1122 1122 struct mipi_dsi_device *device) 1123 1123 { 1124 1124 struct rzg2l_mipi_dsi *dsi = host_to_rzg2l_mipi_dsi(host); 1125 + int bpp; 1125 1126 int ret; 1126 1127 1127 1128 if (device->lanes > dsi->num_data_lanes) { ··· 1132 1131 return -EINVAL; 1133 1132 } 1134 1133 1135 - switch (mipi_dsi_pixel_format_to_bpp(device->format)) { 1134 + bpp = mipi_dsi_pixel_format_to_bpp(device->format); 1135 + switch (bpp) { 1136 1136 case 24: 1137 1137 break; 1138 1138 case 18: ··· 1163 1161 } 1164 1162 1165 1163 drm_bridge_add(&dsi->bridge); 1164 + 1165 + /* 1166 + * Report the required division ratio setting for the MIPI clock dividers. 1167 + * 1168 + * vclk * bpp = hsclk * 8 * num_lanes 1169 + * 1170 + * vclk * DSI_AB_divider = hsclk * 16 1171 + * 1172 + * which simplifies to... 1173 + * DSI_AB_divider = bpp * 2 / num_lanes 1174 + */ 1175 + rzg2l_cpg_dsi_div_set_divider(bpp * 2 / dsi->lanes, PLL5_TARGET_DSI); 1166 1176 1167 1177 return 0; 1168 1178 }
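The divider comment added in the rzg2l_mipi_dsi hunk can be checked numerically: eliminating hsclk and vclk from `vclk * bpp = hsclk * 8 * num_lanes` and `vclk * DSI_AB_divider = hsclk * 16` gives `DSI_AB_divider = bpp * 2 / num_lanes`. A quick sketch of the arithmetic (helper name hypothetical):

```c
#include <assert.h>

/* DSI_AB_divider = bpp * 2 / num_lanes, per the derivation in the driver. */
static unsigned int dsi_ab_divider(unsigned int bpp, unsigned int num_lanes)
{
    return bpp * 2 / num_lanes;
}
```

For the RGB888 case in the diff (24 bpp over 4 lanes) this yields a divider of 12; the 18 bpp and 16 bpp cases scale accordingly.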
+1
drivers/gpu/drm/scheduler/sched_main.c
··· 361 361 /** 362 362 * drm_sched_job_done - complete a job 363 363 * @s_job: pointer to the job which is done 364 + * @result: 0 on success, -ERRNO on error 364 365 * 365 366 * Finish the job's fence and resubmit the work items. 366 367 */
+2 -4
drivers/gpu/drm/solomon/ssd130x.c
··· 737 737 unsigned int height = drm_rect_height(rect); 738 738 unsigned int line_length = DIV_ROUND_UP(width, 8); 739 739 unsigned int page_height = SSD130X_PAGE_HEIGHT; 740 + u8 page_start = ssd130x->page_offset + y / page_height; 740 741 unsigned int pages = DIV_ROUND_UP(height, page_height); 741 742 struct drm_device *drm = &ssd130x->drm; 742 743 u32 array_idx = 0; ··· 775 774 */ 776 775 777 776 if (!ssd130x->page_address_mode) { 778 - u8 page_start; 779 - 780 777 /* Set address range for horizontal addressing mode */ 781 778 ret = ssd130x_set_col_range(ssd130x, ssd130x->col_offset + x, width); 782 779 if (ret < 0) 783 780 return ret; 784 781 785 - page_start = ssd130x->page_offset + y / page_height; 786 782 ret = ssd130x_set_page_range(ssd130x, page_start, pages); 787 783 if (ret < 0) 788 784 return ret; ··· 811 813 */ 812 814 if (ssd130x->page_address_mode) { 813 815 ret = ssd130x_set_page_pos(ssd130x, 814 - ssd130x->page_offset + i, 816 + page_start + i, 815 817 ssd130x->col_offset + x); 816 818 if (ret < 0) 817 819 return ret;
+2 -2
drivers/gpu/drm/ttm/tests/ttm_bo_test.c
··· 222 222 KUNIT_FAIL(test, "Couldn't create ttm bo reserve task\n"); 223 223 224 224 /* Take a lock so the threaded reserve has to wait */ 225 - mutex_lock(&bo->base.resv->lock.base); 225 + dma_resv_lock(bo->base.resv, NULL); 226 226 227 227 wake_up_process(task); 228 228 msleep(20); 229 229 err = kthread_stop(task); 230 230 231 - mutex_unlock(&bo->base.resv->lock.base); 231 + dma_resv_unlock(bo->base.resv); 232 232 233 233 KUNIT_ASSERT_EQ(test, err, -ERESTARTSYS); 234 234 }
+5 -6
drivers/gpu/drm/ttm/ttm_bo.c
··· 1107 1107 static s64 1108 1108 ttm_bo_swapout_cb(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo) 1109 1109 { 1110 - struct ttm_resource *res = bo->resource; 1111 - struct ttm_place place = { .mem_type = res->mem_type }; 1110 + struct ttm_place place = { .mem_type = bo->resource->mem_type }; 1112 1111 struct ttm_bo_swapout_walk *swapout_walk = 1113 1112 container_of(walk, typeof(*swapout_walk), walk); 1114 1113 struct ttm_operation_ctx *ctx = walk->arg.ctx; ··· 1147 1148 /* 1148 1149 * Move to system cached 1149 1150 */ 1150 - if (res->mem_type != TTM_PL_SYSTEM) { 1151 + if (bo->resource->mem_type != TTM_PL_SYSTEM) { 1151 1152 struct ttm_resource *evict_mem; 1152 1153 struct ttm_place hop; 1153 1154 ··· 1179 1180 1180 1181 if (ttm_tt_is_populated(tt)) { 1181 1182 spin_lock(&bdev->lru_lock); 1182 - ttm_resource_del_bulk_move(res, bo); 1183 + ttm_resource_del_bulk_move(bo->resource, bo); 1183 1184 spin_unlock(&bdev->lru_lock); 1184 1185 1185 1186 ret = ttm_tt_swapout(bdev, tt, swapout_walk->gfp_flags); 1186 1187 1187 1188 spin_lock(&bdev->lru_lock); 1188 1189 if (ret) 1189 - ttm_resource_add_bulk_move(res, bo); 1190 - ttm_resource_move_to_lru_tail(res); 1190 + ttm_resource_add_bulk_move(bo->resource, bo); 1191 + ttm_resource_move_to_lru_tail(bo->resource); 1191 1192 spin_unlock(&bdev->lru_lock); 1192 1193 } 1193 1194
+1 -1
drivers/gpu/drm/ttm/ttm_pool_internal.h
··· 17 17 return pool->alloc_flags & TTM_ALLOCATION_POOL_USE_DMA32; 18 18 } 19 19 20 - static inline bool ttm_pool_beneficial_order(struct ttm_pool *pool) 20 + static inline unsigned int ttm_pool_beneficial_order(struct ttm_pool *pool) 21 21 { 22 22 return pool->alloc_flags & 0xff; 23 23 }
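The ttm_pool_internal.h hunk is a one-word return-type fix with real consequences: `pool->alloc_flags & 0xff` carries an allocation order, and declaring the helper `bool` makes C collapse any non-zero order to 1. A standalone illustration of the pitfall (mask and function names hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

#define POOL_ORDER_MASK 0xffu

/* Buggy variant: conversion to bool turns any non-zero order into 1. */
static bool order_as_bool(unsigned int alloc_flags)
{
    return alloc_flags & POOL_ORDER_MASK;
}

/* Fixed variant: the masked order value survives intact. */
static unsigned int order_as_uint(unsigned int alloc_flags)
{
    return alloc_flags & POOL_ORDER_MASK;
}
```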
+1
drivers/gpu/drm/xe/xe_configfs.c
··· 830 830 831 831 mutex_destroy(&dev->lock); 832 832 833 + kfree(dev->config.ctx_restore_mid_bb[0].cs); 833 834 kfree(dev->config.ctx_restore_post_bb[0].cs); 834 835 kfree(dev); 835 836 }
+11 -12
drivers/gpu/drm/xe/xe_exec_queue.c
··· 266 266 return q; 267 267 } 268 268 269 + static void __xe_exec_queue_fini(struct xe_exec_queue *q) 270 + { 271 + int i; 272 + 273 + q->ops->fini(q); 274 + 275 + for (i = 0; i < q->width; ++i) 276 + xe_lrc_put(q->lrc[i]); 277 + } 278 + 269 279 static int __xe_exec_queue_init(struct xe_exec_queue *q, u32 exec_queue_flags) 270 280 { 271 281 int i, err; ··· 330 320 return 0; 331 321 332 322 err_lrc: 333 - for (i = i - 1; i >= 0; --i) 334 - xe_lrc_put(q->lrc[i]); 323 + __xe_exec_queue_fini(q); 335 324 return err; 336 - } 337 - 338 - static void __xe_exec_queue_fini(struct xe_exec_queue *q) 339 - { 340 - int i; 341 - 342 - q->ops->fini(q); 343 - 344 - for (i = 0; i < q->width; ++i) 345 - xe_lrc_put(q->lrc[i]); 346 325 } 347 326 348 327 struct xe_exec_queue *xe_exec_queue_create(struct xe_device *xe, struct xe_vm *vm,
+35 -8
drivers/gpu/drm/xe/xe_gsc_proxy.c
··· 435 435 return 0; 436 436 } 437 437 438 - static void xe_gsc_proxy_remove(void *arg) 438 + static void xe_gsc_proxy_stop(struct xe_gsc *gsc) 439 439 { 440 - struct xe_gsc *gsc = arg; 441 440 struct xe_gt *gt = gsc_to_gt(gsc); 442 441 struct xe_device *xe = gt_to_xe(gt); 443 - 444 - if (!gsc->proxy.component_added) 445 - return; 446 442 447 443 /* disable HECI2 IRQs */ 448 444 scoped_guard(xe_pm_runtime, xe) { ··· 451 455 } 452 456 453 457 xe_gsc_wait_for_worker_completion(gsc); 458 + gsc->proxy.started = false; 459 + } 460 + 461 + static void xe_gsc_proxy_remove(void *arg) 462 + { 463 + struct xe_gsc *gsc = arg; 464 + struct xe_gt *gt = gsc_to_gt(gsc); 465 + struct xe_device *xe = gt_to_xe(gt); 466 + 467 + if (!gsc->proxy.component_added) 468 + return; 469 + 470 + /* 471 + * GSC proxy start is an async process that can be ongoing during 472 + * Xe module load/unload. Using devm managed action to register 473 + * xe_gsc_proxy_stop could cause issues if Xe module unload has 474 + * already started when the action is registered, potentially leading 475 + * to the cleanup being called at the wrong time. Therefore, instead 476 + * of registering a separate devm action to undo what is done in 477 + * proxy start, we call it from here, but only if the start has 478 + * completed successfully (tracked with the 'started' flag). 
479 + */ 480 + if (gsc->proxy.started) 481 + xe_gsc_proxy_stop(gsc); 454 482 455 483 component_del(xe->drm.dev, &xe_gsc_proxy_component_ops); 456 484 gsc->proxy.component_added = false; ··· 530 510 */ 531 511 int xe_gsc_proxy_start(struct xe_gsc *gsc) 532 512 { 513 + struct xe_gt *gt = gsc_to_gt(gsc); 533 514 int err; 534 515 535 516 /* enable the proxy interrupt in the GSC shim layer */ ··· 542 521 */ 543 522 err = xe_gsc_proxy_request_handler(gsc); 544 523 if (err) 545 - return err; 524 + goto err_irq_disable; 546 525 547 526 if (!xe_gsc_proxy_init_done(gsc)) { 548 - xe_gt_err(gsc_to_gt(gsc), "GSC FW reports proxy init not completed\n"); 549 - return -EIO; 527 + xe_gt_err(gt, "GSC FW reports proxy init not completed\n"); 528 + err = -EIO; 529 + goto err_irq_disable; 550 530 } 551 531 532 + gsc->proxy.started = true; 552 533 return 0; 534 + 535 + err_irq_disable: 536 + gsc_proxy_irq_toggle(gsc, false); 537 + return err; 553 538 }
+2
drivers/gpu/drm/xe/xe_gsc_types.h
··· 58 58 struct mutex mutex; 59 59 /** @proxy.component_added: whether the component has been added */ 60 60 bool component_added; 61 + /** @proxy.started: whether the proxy has been started */ 62 + bool started; 61 63 /** @proxy.bo: object to store message to and from the GSC */ 62 64 struct xe_bo *bo; 63 65 /** @proxy.to_gsc: map of the memory used to send messages to the GSC */
+2 -1
drivers/gpu/drm/xe/xe_lrc.h
··· 75 75 */ 76 76 static inline void xe_lrc_put(struct xe_lrc *lrc) 77 77 { 78 - kref_put(&lrc->refcount, xe_lrc_destroy); 78 + if (lrc) 79 + kref_put(&lrc->refcount, xe_lrc_destroy); 79 80 } 80 81 81 82 /**
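Making `xe_lrc_put()` accept NULL, as the xe_lrc.h hunk does, gives it the same semantics as `kfree()` and `dma_fence_put()`: error paths can unwind partially initialized arrays unconditionally (which is what lets the exec-queue diff above call `__xe_exec_queue_fini()` from its error path). A userspace sketch of the idiom, with hypothetical names:

```c
#include <assert.h>
#include <stdlib.h>

struct lrc {
    int refcount;
    int *freed;             /* points at a flag set on destruction */
};

static void lrc_destroy(struct lrc *l)
{
    *l->freed = 1;
    free(l);
}

/* NULL-tolerant put: callers may unwind arrays with unset slots. */
static void lrc_put(struct lrc *l)
{
    if (l && --l->refcount == 0)
        lrc_destroy(l);
}
```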
+3 -1
drivers/gpu/drm/xe/xe_reg_sr.c
··· 98 98 *pentry = *e; 99 99 ret = xa_err(xa_store(&sr->xa, idx, pentry, GFP_KERNEL)); 100 100 if (ret) 101 - goto fail; 101 + goto fail_free; 102 102 103 103 return 0; 104 104 105 + fail_free: 106 + kfree(pentry); 105 107 fail: 106 108 xe_gt_err(gt, 107 109 "discarding save-restore reg %04lx (clear: %08x, set: %08x, masked: %s, mcr: %s): ret=%d\n",
+9
drivers/gpu/drm/xe/xe_ring_ops.c
··· 280 280 281 281 i = emit_bb_start(batch_addr, ppgtt_flag, dw, i); 282 282 283 + /* Don't preempt fence signaling */ 284 + dw[i++] = MI_ARB_ON_OFF | MI_ARB_DISABLE; 285 + 283 286 if (job->user_fence.used) { 284 287 i = emit_flush_dw(dw, i); 285 288 i = emit_store_imm_ppgtt_posted(job->user_fence.addr, ··· 348 345 349 346 i = emit_bb_start(batch_addr, ppgtt_flag, dw, i); 350 347 348 + /* Don't preempt fence signaling */ 349 + dw[i++] = MI_ARB_ON_OFF | MI_ARB_DISABLE; 350 + 351 351 if (job->user_fence.used) { 352 352 i = emit_flush_dw(dw, i); 353 353 i = emit_store_imm_ppgtt_posted(job->user_fence.addr, ··· 402 396 seqno, dw, i); 403 397 404 398 i = emit_bb_start(batch_addr, ppgtt_flag, dw, i); 399 + 400 + /* Don't preempt fence signaling */ 401 + dw[i++] = MI_ARB_ON_OFF | MI_ARB_DISABLE; 405 402 406 403 i = emit_render_cache_flush(job, dw, i); 407 404
+2 -1
drivers/gpu/drm/xe/xe_vm_madvise.c
··· 453 453 madvise_range.num_vmas, 454 454 args->atomic.val)) { 455 455 err = -EINVAL; 456 - goto madv_fini; 456 + goto free_vmas; 457 457 } 458 458 } 459 459 ··· 490 490 err_fini: 491 491 if (madvise_range.has_bo_vmas) 492 492 drm_exec_fini(&exec); 493 + free_vmas: 493 494 kfree(madvise_range.vmas); 494 495 madvise_range.vmas = NULL; 495 496 madv_fini:
+7 -6
drivers/gpu/drm/xe/xe_wa.c
··· 241 241 242 242 { XE_RTP_NAME("16025250150"), 243 243 XE_RTP_RULES(GRAPHICS_VERSION(2001)), 244 - XE_RTP_ACTIONS(SET(LSN_VC_REG2, 245 - LSN_LNI_WGT(1) | 246 - LSN_LNE_WGT(1) | 247 - LSN_DIM_X_WGT(1) | 248 - LSN_DIM_Y_WGT(1) | 249 - LSN_DIM_Z_WGT(1))) 244 + XE_RTP_ACTIONS(FIELD_SET(LSN_VC_REG2, 245 + LSN_LNI_WGT_MASK | LSN_LNE_WGT_MASK | 246 + LSN_DIM_X_WGT_MASK | LSN_DIM_Y_WGT_MASK | 247 + LSN_DIM_Z_WGT_MASK, 248 + LSN_LNI_WGT(1) | LSN_LNE_WGT(1) | 249 + LSN_DIM_X_WGT(1) | LSN_DIM_Y_WGT(1) | 250 + LSN_DIM_Z_WGT(1))) 250 251 }, 251 252 252 253 /* Xe2_HPM */
+4 -3
drivers/hid/hid-apple.c
··· 365 365 { "A3R" }, 366 366 { "hfd.cn" }, 367 367 { "WKB603" }, 368 + { "TH87" }, /* EPOMAKER TH87 BT mode */ 369 + { "HFD Epomaker TH87" }, /* EPOMAKER TH87 USB mode */ 370 + { "2.4G Wireless Receiver" }, /* EPOMAKER TH87 dongle */ 368 371 }; 369 372 370 373 static bool apple_is_non_apple_keyboard(struct hid_device *hdev) ··· 689 686 hid_info(hdev, 690 687 "fixing up Magic Keyboard battery report descriptor\n"); 691 688 *rsize = *rsize - 1; 692 - rdesc = kmemdup(rdesc + 1, *rsize, GFP_KERNEL); 693 - if (!rdesc) 694 - return NULL; 689 + rdesc = rdesc + 1; 695 690 696 691 rdesc[0] = 0x05; 697 692 rdesc[1] = 0x01;
+14 -4
drivers/hid/hid-asus.c
··· 1399 1399 */ 1400 1400 if (*rsize == rsize_orig && 1401 1401 rdesc[offs] == 0x09 && rdesc[offs + 1] == 0x76) { 1402 - *rsize = rsize_orig + 1; 1403 - rdesc = kmemdup(rdesc, *rsize, GFP_KERNEL); 1404 - if (!rdesc) 1405 - return NULL; 1402 + __u8 *new_rdesc; 1403 + 1404 + new_rdesc = devm_kzalloc(&hdev->dev, rsize_orig + 1, 1405 + GFP_KERNEL); 1406 + if (!new_rdesc) 1407 + return rdesc; 1406 1408 1407 1409 hid_info(hdev, "Fixing up %s keyb report descriptor\n", 1408 1410 drvdata->quirks & QUIRK_T100CHI ? 1409 1411 "T100CHI" : "T90CHI"); 1412 + 1413 + memcpy(new_rdesc, rdesc, rsize_orig); 1414 + *rsize = rsize_orig + 1; 1415 + rdesc = new_rdesc; 1416 + 1410 1417 memmove(rdesc + offs + 4, rdesc + offs + 2, 12); 1411 1418 rdesc[offs] = 0x19; 1412 1419 rdesc[offs + 1] = 0x00; ··· 1497 1490 { HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK, 1498 1491 USB_DEVICE_ID_ASUSTEK_ROG_NKEY_ALLY_X), 1499 1492 QUIRK_USE_KBD_BACKLIGHT | QUIRK_ROG_NKEY_KEYBOARD | QUIRK_ROG_ALLY_XPAD }, 1493 + { HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK, 1494 + USB_DEVICE_ID_ASUSTEK_XGM_2023), 1495 + }, 1500 1496 { HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK, 1501 1497 USB_DEVICE_ID_ASUSTEK_ROG_CLAYMORE_II_KEYBOARD), 1502 1498 QUIRK_ROG_CLAYMORE_II_KEYBOARD },
+1 -1
drivers/hid/hid-cmedia.c
··· 99 99 { 100 100 struct cmhid *cm = hid_get_drvdata(hid); 101 101 102 - if (len != CM6533_JD_RAWEV_LEN) 102 + if (len != CM6533_JD_RAWEV_LEN || !(hid->claimed & HID_CLAIMED_INPUT)) 103 103 goto out; 104 104 if (memcmp(data+CM6533_JD_SFX_OFFSET, ji_sfx, sizeof(ji_sfx))) 105 105 goto out;
+1 -1
drivers/hid/hid-creative-sb0540.c
··· 153 153 u64 code, main_code; 154 154 int key; 155 155 156 - if (len != 6) 156 + if (len != 6 || !(hid->claimed & HID_CLAIMED_INPUT)) 157 157 return 0; 158 158 159 159 /* From daemons/hw_hiddev.c sb0540_rec() in lirc */
+1
drivers/hid/hid-ids.h
··· 229 229 #define USB_DEVICE_ID_ASUSTEK_ROG_NKEY_ALLY_X 0x1b4c 230 230 #define USB_DEVICE_ID_ASUSTEK_ROG_CLAYMORE_II_KEYBOARD 0x196b 231 231 #define USB_DEVICE_ID_ASUSTEK_FX503VD_KEYBOARD 0x1869 232 + #define USB_DEVICE_ID_ASUSTEK_XGM_2023 0x1a9a 232 233 233 234 #define USB_VENDOR_ID_ATEN 0x0557 234 235 #define USB_DEVICE_ID_ATEN_UC100KM 0x2004
+2 -4
drivers/hid/hid-magicmouse.c
··· 990 990 */ 991 991 if ((is_usb_magicmouse2(hdev->vendor, hdev->product) || 992 992 is_usb_magictrackpad2(hdev->vendor, hdev->product)) && 993 - *rsize == 83 && rdesc[46] == 0x84 && rdesc[58] == 0x85) { 993 + *rsize >= 83 && rdesc[46] == 0x84 && rdesc[58] == 0x85) { 994 994 hid_info(hdev, 995 995 "fixing up magicmouse battery report descriptor\n"); 996 996 *rsize = *rsize - 1; 997 - rdesc = kmemdup(rdesc + 1, *rsize, GFP_KERNEL); 998 - if (!rdesc) 999 - return NULL; 997 + rdesc = rdesc + 1; 1000 998 1001 999 rdesc[0] = 0x05; 1002 1000 rdesc[1] = 0x01;
+2
drivers/hid/hid-mcp2221.c
··· 353 353 usleep_range(90, 100); 354 354 retries++; 355 355 } else { 356 + usleep_range(980, 1000); 357 + mcp_cancel_last_cmd(mcp); 356 358 return ret; 357 359 } 358 360 } else {
+38 -5
drivers/hid/hid-multitouch.c
··· 77 77 #define MT_QUIRK_ORIENTATION_INVERT BIT(22) 78 78 #define MT_QUIRK_APPLE_TOUCHBAR BIT(23) 79 79 #define MT_QUIRK_YOGABOOK9I BIT(24) 80 + #define MT_QUIRK_KEEP_LATENCY_ON_CLOSE BIT(25) 80 81 81 82 #define MT_INPUTMODE_TOUCHSCREEN 0x02 82 83 #define MT_INPUTMODE_TOUCHPAD 0x03 ··· 215 214 #define MT_CLS_WIN_8_DISABLE_WAKEUP 0x0016 216 215 #define MT_CLS_WIN_8_NO_STICKY_FINGERS 0x0017 217 216 #define MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU 0x0018 217 + #define MT_CLS_WIN_8_KEEP_LATENCY_ON_CLOSE 0x0019 218 218 219 219 /* vendor specific classes */ 220 220 #define MT_CLS_3M 0x0101 ··· 235 233 #define MT_CLS_SMART_TECH 0x0113 236 234 #define MT_CLS_APPLE_TOUCHBAR 0x0114 237 235 #define MT_CLS_YOGABOOK9I 0x0115 236 + #define MT_CLS_EGALAX_P80H84 0x0116 238 237 #define MT_CLS_SIS 0x0457 239 238 240 239 #define MT_DEFAULT_MAXCONTACT 10 ··· 336 333 MT_QUIRK_HOVERING | 337 334 MT_QUIRK_CONTACT_CNT_ACCURATE | 338 335 MT_QUIRK_WIN8_PTP_BUTTONS, 336 + .export_all_inputs = true }, 337 + { .name = MT_CLS_WIN_8_KEEP_LATENCY_ON_CLOSE, 338 + .quirks = MT_QUIRK_ALWAYS_VALID | 339 + MT_QUIRK_IGNORE_DUPLICATES | 340 + MT_QUIRK_HOVERING | 341 + MT_QUIRK_CONTACT_CNT_ACCURATE | 342 + MT_QUIRK_STICKY_FINGERS | 343 + MT_QUIRK_WIN8_PTP_BUTTONS | 344 + MT_QUIRK_KEEP_LATENCY_ON_CLOSE, 339 345 .export_all_inputs = true }, 340 346 341 347 /* ··· 449 437 MT_QUIRK_HOVERING | 450 438 MT_QUIRK_YOGABOOK9I, 451 439 .export_all_inputs = true 440 + }, 441 + { .name = MT_CLS_EGALAX_P80H84, 442 + .quirks = MT_QUIRK_ALWAYS_VALID | 443 + MT_QUIRK_IGNORE_DUPLICATES | 444 + MT_QUIRK_CONTACT_CNT_ACCURATE, 452 445 }, 453 446 { } 454 447 }; ··· 866 849 if ((cls->name == MT_CLS_WIN_8 || 867 850 cls->name == MT_CLS_WIN_8_FORCE_MULTI_INPUT || 868 851 cls->name == MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU || 869 - cls->name == MT_CLS_WIN_8_DISABLE_WAKEUP) && 852 + cls->name == MT_CLS_WIN_8_DISABLE_WAKEUP || 853 + cls->name == MT_CLS_WIN_8_KEEP_LATENCY_ON_CLOSE) && 870 854 (field->application == HID_DG_TOUCHPAD || 871 
855 field->application == HID_DG_TOUCHSCREEN)) 872 856 app->quirks |= MT_QUIRK_CONFIDENCE; ··· 1780 1762 int ret; 1781 1763 1782 1764 if (td->is_haptic_touchpad && (td->mtclass.name == MT_CLS_WIN_8 || 1783 - td->mtclass.name == MT_CLS_WIN_8_FORCE_MULTI_INPUT)) { 1765 + td->mtclass.name == MT_CLS_WIN_8_FORCE_MULTI_INPUT || 1766 + td->mtclass.name == MT_CLS_WIN_8_KEEP_LATENCY_ON_CLOSE)) { 1784 1767 if (hid_haptic_input_configured(hdev, td->haptic, hi) == 0) 1785 1768 td->is_haptic_touchpad = false; 1786 1769 } else { ··· 2094 2075 2095 2076 static void mt_on_hid_hw_close(struct hid_device *hdev) 2096 2077 { 2097 - mt_set_modes(hdev, HID_LATENCY_HIGH, TOUCHPAD_REPORT_NONE); 2078 + struct mt_device *td = hid_get_drvdata(hdev); 2079 + 2080 + if (td->mtclass.quirks & MT_QUIRK_KEEP_LATENCY_ON_CLOSE) 2081 + mt_set_modes(hdev, HID_LATENCY_NORMAL, TOUCHPAD_REPORT_NONE); 2082 + else 2083 + mt_set_modes(hdev, HID_LATENCY_HIGH, TOUCHPAD_REPORT_NONE); 2098 2084 } 2099 2085 2100 2086 /* ··· 2239 2215 { .driver_data = MT_CLS_EGALAX_SERIAL, 2240 2216 MT_USB_DEVICE(USB_VENDOR_ID_DWAV, 2241 2217 USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C000) }, 2242 - { .driver_data = MT_CLS_EGALAX, 2243 - MT_USB_DEVICE(USB_VENDOR_ID_DWAV, 2218 + { .driver_data = MT_CLS_EGALAX_P80H84, 2219 + HID_DEVICE(HID_BUS_ANY, HID_GROUP_MULTITOUCH_WIN_8, 2220 + USB_VENDOR_ID_DWAV, 2244 2221 USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C002) }, 2245 2222 2246 2223 /* Elan devices */ ··· 2485 2460 { .driver_data = MT_CLS_NSMU, 2486 2461 MT_USB_DEVICE(USB_VENDOR_ID_UNITEC, 2487 2462 USB_DEVICE_ID_UNITEC_USB_TOUCH_0A19) }, 2463 + 2464 + /* Uniwill touchpads */ 2465 + { .driver_data = MT_CLS_WIN_8_KEEP_LATENCY_ON_CLOSE, 2466 + HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, 2467 + USB_VENDOR_ID_PIXART, 0x0255) }, 2468 + { .driver_data = MT_CLS_WIN_8_KEEP_LATENCY_ON_CLOSE, 2469 + HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, 2470 + USB_VENDOR_ID_PIXART, 0x0274) }, 2488 2471 2489 2472 /* VTL panels */ 2490 2473 { .driver_data = MT_CLS_VTL,
+1 -1
drivers/hid/hid-zydacron.c
··· 114 114 unsigned key; 115 115 unsigned short index; 116 116 117 - if (report->id == data[0]) { 117 + if (report->id == data[0] && (hdev->claimed & HID_CLAIMED_INPUT)) { 118 118 119 119 /* break keys */ 120 120 for (index = 0; index < 4; index++) {
+2
drivers/hid/intel-ish-hid/ipc/hw-ish.h
··· 39 39 #define PCI_DEVICE_ID_INTEL_ISH_PTL_H 0xE345 40 40 #define PCI_DEVICE_ID_INTEL_ISH_PTL_P 0xE445 41 41 #define PCI_DEVICE_ID_INTEL_ISH_WCL 0x4D45 42 + #define PCI_DEVICE_ID_INTEL_ISH_NVL_H 0xD354 43 + #define PCI_DEVICE_ID_INTEL_ISH_NVL_S 0x6E78 42 44 43 45 #define REVISION_ID_CHT_A0 0x6 44 46 #define REVISION_ID_CHT_Ax_SI 0x0
+12
drivers/hid/intel-ish-hid/ipc/pci-ish.c
··· 28 28 ISHTP_DRIVER_DATA_LNL_M, 29 29 ISHTP_DRIVER_DATA_PTL, 30 30 ISHTP_DRIVER_DATA_WCL, 31 + ISHTP_DRIVER_DATA_NVL_H, 32 + ISHTP_DRIVER_DATA_NVL_S, 31 33 }; 32 34 33 35 #define ISH_FW_GEN_LNL_M "lnlm" 34 36 #define ISH_FW_GEN_PTL "ptl" 35 37 #define ISH_FW_GEN_WCL "wcl" 38 + #define ISH_FW_GEN_NVL_H "nvlh" 39 + #define ISH_FW_GEN_NVL_S "nvls" 36 40 37 41 #define ISH_FIRMWARE_PATH(gen) "intel/ish/ish_" gen ".bin" 38 42 #define ISH_FIRMWARE_PATH_ALL "intel/ish/ish_*.bin" ··· 50 46 }, 51 47 [ISHTP_DRIVER_DATA_WCL] = { 52 48 .fw_generation = ISH_FW_GEN_WCL, 49 + }, 50 + [ISHTP_DRIVER_DATA_NVL_H] = { 51 + .fw_generation = ISH_FW_GEN_NVL_H, 52 + }, 53 + [ISHTP_DRIVER_DATA_NVL_S] = { 54 + .fw_generation = ISH_FW_GEN_NVL_S, 53 55 }, 54 56 }; 55 57 ··· 86 76 {PCI_DEVICE_DATA(INTEL, ISH_PTL_H, ISHTP_DRIVER_DATA_PTL)}, 87 77 {PCI_DEVICE_DATA(INTEL, ISH_PTL_P, ISHTP_DRIVER_DATA_PTL)}, 88 78 {PCI_DEVICE_DATA(INTEL, ISH_WCL, ISHTP_DRIVER_DATA_WCL)}, 79 + {PCI_DEVICE_DATA(INTEL, ISH_NVL_H, ISHTP_DRIVER_DATA_NVL_H)}, 80 + {PCI_DEVICE_DATA(INTEL, ISH_NVL_S, ISHTP_DRIVER_DATA_NVL_S)}, 89 81 {} 90 82 }; 91 83 MODULE_DEVICE_TABLE(pci, ish_pci_tbl);
+7 -4
drivers/hid/usbhid/hid-pidff.c
··· 1452 1452 hid_warn(pidff->hid, "unknown ramp effect layout\n"); 1453 1453 1454 1454 if (PIDFF_FIND_FIELDS(set_condition, PID_SET_CONDITION, 1)) { 1455 - if (test_and_clear_bit(FF_SPRING, dev->ffbit) || 1456 - test_and_clear_bit(FF_DAMPER, dev->ffbit) || 1457 - test_and_clear_bit(FF_FRICTION, dev->ffbit) || 1458 - test_and_clear_bit(FF_INERTIA, dev->ffbit)) 1455 + bool test = false; 1456 + 1457 + test |= test_and_clear_bit(FF_SPRING, dev->ffbit); 1458 + test |= test_and_clear_bit(FF_DAMPER, dev->ffbit); 1459 + test |= test_and_clear_bit(FF_FRICTION, dev->ffbit); 1460 + test |= test_and_clear_bit(FF_INERTIA, dev->ffbit); 1461 + if (test) 1459 1462 hid_warn(pidff->hid, "unknown condition effect layout\n"); 1460 1463 } 1461 1464
-10
drivers/hwmon/Kconfig
··· 1927 1927 This driver can also be built as a module. If so, the module 1928 1928 will be called raspberrypi-hwmon. 1929 1929 1930 - config SENSORS_SA67MCU 1931 - tristate "Kontron sa67mcu hardware monitoring driver" 1932 - depends on MFD_SL28CPLD || COMPILE_TEST 1933 - help 1934 - If you say yes here you get support for the voltage and temperature 1935 - monitor of the sa67 board management controller. 1936 - 1937 - This driver can also be built as a module. If so, the module 1938 - will be called sa67mcu-hwmon. 1939 - 1940 1930 config SENSORS_SL28CPLD 1941 1931 tristate "Kontron sl28cpld hardware monitoring driver" 1942 1932 depends on MFD_SL28CPLD || COMPILE_TEST
-1
drivers/hwmon/Makefile
··· 199 199 obj-$(CONFIG_SENSORS_PWM_FAN) += pwm-fan.o 200 200 obj-$(CONFIG_SENSORS_QNAP_MCU_HWMON) += qnap-mcu-hwmon.o 201 201 obj-$(CONFIG_SENSORS_RASPBERRYPI_HWMON) += raspberrypi-hwmon.o 202 - obj-$(CONFIG_SENSORS_SA67MCU) += sa67mcu-hwmon.o 203 202 obj-$(CONFIG_SENSORS_SBTSI) += sbtsi_temp.o 204 203 obj-$(CONFIG_SENSORS_SBRMI) += sbrmi.o 205 204 obj-$(CONFIG_SENSORS_SCH56XX_COMMON)+= sch56xx-common.o
+4 -2
drivers/hwmon/aht10.c
··· 37 37 #define AHT10_CMD_MEAS 0b10101100 38 38 #define AHT10_CMD_RST 0b10111010 39 39 40 - #define DHT20_CMD_INIT 0x71 40 + #define AHT20_CMD_INIT 0b10111110 41 + 42 + #define DHT20_CMD_INIT 0b01110001 41 43 42 44 /* 43 45 * Flags in the answer byte/command ··· 343 341 data->meas_size = AHT20_MEAS_SIZE; 344 342 data->crc8 = true; 345 343 crc8_populate_msb(crc8_table, AHT20_CRC8_POLY); 346 - data->init_cmd = AHT10_CMD_INIT; 344 + data->init_cmd = AHT20_CMD_INIT; 347 345 break; 348 346 case dht20: 349 347 data->meas_size = AHT20_MEAS_SIZE;
+4 -1
drivers/hwmon/it87.c
··· 3590 3590 { 3591 3591 struct platform_device *pdev = to_platform_device(dev); 3592 3592 struct it87_data *data = dev_get_drvdata(dev); 3593 + int err; 3593 3594 3594 3595 it87_resume_sio(pdev); 3595 3596 3596 - it87_lock(data); 3597 + err = it87_lock(data); 3598 + if (err) 3599 + return err; 3597 3600 3598 3601 it87_check_pwm(dev); 3599 3602 it87_check_limit_regs(data);
+26 -25
drivers/hwmon/macsmc-hwmon.c
··· 22 22 23 23 #include <linux/bitfield.h> 24 24 #include <linux/hwmon.h> 25 + #include <linux/math64.h> 25 26 #include <linux/mfd/macsmc.h> 26 27 #include <linux/module.h> 27 28 #include <linux/of.h> ··· 131 130 if (ret < 0) 132 131 return ret; 133 132 134 - *p = mult_frac(val, scale, 65536); 133 + *p = mul_u64_u32_div(val, scale, 65536); 135 134 136 135 return 0; 137 136 } ··· 141 140 * them. 142 141 */ 143 142 static int macsmc_hwmon_read_f32_scaled(struct apple_smc *smc, smc_key key, 144 - int *p, int scale) 143 + long *p, int scale) 145 144 { 146 145 u32 fval; 147 146 u64 val; ··· 163 162 val = 0; 164 163 else if (exp < 0) 165 164 val >>= -exp; 166 - else if (exp != 0 && (val & ~((1UL << (64 - exp)) - 1))) /* overflow */ 165 + else if (exp != 0 && (val & ~((1ULL << (64 - exp)) - 1))) /* overflow */ 167 166 val = U64_MAX; 168 167 else 169 168 val <<= exp; 170 169 171 170 if (fval & FLT_SIGN_MASK) { 172 - if (val > (-(s64)INT_MIN)) 173 - *p = INT_MIN; 171 + if (val > (u64)LONG_MAX + 1) 172 + *p = LONG_MIN; 174 173 else 175 - *p = -val; 174 + *p = -(long)val; 176 175 } else { 177 - if (val > INT_MAX) 178 - *p = INT_MAX; 176 + if (val > (u64)LONG_MAX) 177 + *p = LONG_MAX; 179 178 else 180 - *p = val; 179 + *p = (long)val; 181 180 } 182 181 183 182 return 0; ··· 196 195 switch (sensor->info.type_code) { 197 196 /* 32-bit IEEE 754 float */ 198 197 case __SMC_KEY('f', 'l', 't', ' '): { 199 - u32 flt_ = 0; 198 + long flt_ = 0; 200 199 201 200 ret = macsmc_hwmon_read_f32_scaled(smc, sensor->macsmc_key, 202 201 &flt_, scale); ··· 215 214 if (ret) 216 215 return ret; 217 216 218 - *val = (long)ioft; 217 + if (ioft > LONG_MAX) 218 + *val = LONG_MAX; 219 + else 220 + *val = (long)ioft; 219 221 break; 220 222 } 221 223 default: ··· 228 224 return 0; 229 225 } 230 226 231 - static int macsmc_hwmon_write_f32(struct apple_smc *smc, smc_key key, int value) 227 + static int macsmc_hwmon_write_f32(struct apple_smc *smc, smc_key key, long value) 232 228 { 233 229 u64 val; 234 230 
u32 fval = 0; 235 - int exp = 0, neg; 231 + int exp, neg; 236 232 233 + neg = value < 0; 237 234 val = abs(value); 238 - neg = val != value; 239 235 240 236 if (val) { 241 - int msb = __fls(val) - exp; 237 + exp = __fls(val); 242 238 243 - if (msb > 23) { 244 - val >>= msb - FLT_MANT_BIAS; 245 - exp -= msb - FLT_MANT_BIAS; 246 - } else if (msb < 23) { 247 - val <<= FLT_MANT_BIAS - msb; 248 - exp += msb; 249 - } 239 + if (exp > 23) 240 + val >>= exp - 23; 241 + else 242 + val <<= 23 - exp; 250 243 251 244 fval = FIELD_PREP(FLT_SIGN_MASK, neg) | 252 245 FIELD_PREP(FLT_EXP_MASK, exp + FLT_EXP_BIAS) | 253 - FIELD_PREP(FLT_MANT_MASK, val); 246 + FIELD_PREP(FLT_MANT_MASK, val & FLT_MANT_MASK); 254 247 } 255 248 256 249 return apple_smc_write_u32(smc, key, fval); ··· 664 663 if (!hwmon->volt.sensors) 665 664 return -ENOMEM; 666 665 667 - for_each_child_of_node_with_prefix(hwmon_node, key_node, "volt-") { 668 - sensor = &hwmon->temp.sensors[hwmon->temp.count]; 666 + for_each_child_of_node_with_prefix(hwmon_node, key_node, "voltage-") { 667 + sensor = &hwmon->volt.sensors[hwmon->volt.count]; 669 668 if (!macsmc_hwmon_create_sensor(hwmon->dev, hwmon->smc, key_node, sensor)) { 670 669 sensor->attrs = HWMON_I_INPUT; 671 670
+1 -1
drivers/hwmon/max6639.c
··· 607 607 return err; 608 608 609 609 /* Fans PWM polarity high by default */ 610 - err = regmap_write(data->regmap, MAX6639_REG_FAN_CONFIG2a(i), 0x00); 610 + err = regmap_write(data->regmap, MAX6639_REG_FAN_CONFIG2a(i), 0x02); 611 611 if (err) 612 612 return err; 613 613
+10 -9
drivers/hwmon/pmbus/q54sj108a2.c
··· 79 79 int idx = *idxp; 80 80 struct q54sj108a2_data *psu = to_psu(idxp, idx); 81 81 char data[I2C_SMBUS_BLOCK_MAX + 2] = { 0 }; 82 - char data_char[I2C_SMBUS_BLOCK_MAX + 2] = { 0 }; 82 + char data_char[I2C_SMBUS_BLOCK_MAX * 2 + 2] = { 0 }; 83 + char *out = data; 83 84 char *res; 84 85 85 86 switch (idx) { ··· 151 150 if (rc < 0) 152 151 return rc; 153 152 154 - res = bin2hex(data, data_char, 32); 155 - rc = res - data; 156 - 153 + res = bin2hex(data_char, data, rc); 154 + rc = res - data_char; 155 + out = data_char; 157 156 break; 158 157 case Q54SJ108A2_DEBUGFS_FLASH_KEY: 159 158 rc = i2c_smbus_read_block_data(psu->client, PMBUS_FLASH_KEY_WRITE, data); 160 159 if (rc < 0) 161 160 return rc; 162 161 163 - res = bin2hex(data, data_char, 4); 164 - rc = res - data; 165 - 162 + res = bin2hex(data_char, data, rc); 163 + rc = res - data_char; 164 + out = data_char; 166 165 break; 167 166 default: 168 167 return -EINVAL; 169 168 } 170 169 171 - data[rc] = '\n'; 170 + out[rc] = '\n'; 172 171 rc += 2; 173 172 174 - return simple_read_from_buffer(buf, count, ppos, data, rc); 173 + return simple_read_from_buffer(buf, count, ppos, out, rc); 175 174 } 176 175 177 176 static ssize_t q54sj108a2_debugfs_write(struct file *file, const char __user *buf,
-161
drivers/hwmon/sa67mcu-hwmon.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * sl67mcu hardware monitoring driver 4 - * 5 - * Copyright 2025 Kontron Europe GmbH 6 - */ 7 - 8 - #include <linux/bitfield.h> 9 - #include <linux/hwmon.h> 10 - #include <linux/kernel.h> 11 - #include <linux/mod_devicetable.h> 12 - #include <linux/module.h> 13 - #include <linux/platform_device.h> 14 - #include <linux/property.h> 15 - #include <linux/regmap.h> 16 - 17 - #define SA67MCU_VOLTAGE(n) (0x00 + ((n) * 2)) 18 - #define SA67MCU_TEMP(n) (0x04 + ((n) * 2)) 19 - 20 - struct sa67mcu_hwmon { 21 - struct regmap *regmap; 22 - u32 offset; 23 - }; 24 - 25 - static int sa67mcu_hwmon_read(struct device *dev, 26 - enum hwmon_sensor_types type, u32 attr, 27 - int channel, long *input) 28 - { 29 - struct sa67mcu_hwmon *hwmon = dev_get_drvdata(dev); 30 - unsigned int offset; 31 - u8 reg[2]; 32 - int ret; 33 - 34 - switch (type) { 35 - case hwmon_in: 36 - switch (attr) { 37 - case hwmon_in_input: 38 - offset = hwmon->offset + SA67MCU_VOLTAGE(channel); 39 - break; 40 - default: 41 - return -EOPNOTSUPP; 42 - } 43 - break; 44 - case hwmon_temp: 45 - switch (attr) { 46 - case hwmon_temp_input: 47 - offset = hwmon->offset + SA67MCU_TEMP(channel); 48 - break; 49 - default: 50 - return -EOPNOTSUPP; 51 - } 52 - break; 53 - default: 54 - return -EOPNOTSUPP; 55 - } 56 - 57 - /* Reading the low byte will capture the value */ 58 - ret = regmap_bulk_read(hwmon->regmap, offset, reg, ARRAY_SIZE(reg)); 59 - if (ret) 60 - return ret; 61 - 62 - *input = reg[1] << 8 | reg[0]; 63 - 64 - /* Temperatures are s16 and in 0.1degC steps. 
*/ 65 - if (type == hwmon_temp) 66 - *input = sign_extend32(*input, 15) * 100; 67 - 68 - return 0; 69 - } 70 - 71 - static const struct hwmon_channel_info * const sa67mcu_hwmon_info[] = { 72 - HWMON_CHANNEL_INFO(in, 73 - HWMON_I_INPUT | HWMON_I_LABEL, 74 - HWMON_I_INPUT | HWMON_I_LABEL), 75 - HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT), 76 - NULL 77 - }; 78 - 79 - static const char *const sa67mcu_hwmon_in_labels[] = { 80 - "VDDIN", 81 - "VDD_RTC", 82 - }; 83 - 84 - static int sa67mcu_hwmon_read_string(struct device *dev, 85 - enum hwmon_sensor_types type, u32 attr, 86 - int channel, const char **str) 87 - { 88 - switch (type) { 89 - case hwmon_in: 90 - switch (attr) { 91 - case hwmon_in_label: 92 - *str = sa67mcu_hwmon_in_labels[channel]; 93 - return 0; 94 - default: 95 - return -EOPNOTSUPP; 96 - } 97 - default: 98 - return -EOPNOTSUPP; 99 - } 100 - } 101 - 102 - static const struct hwmon_ops sa67mcu_hwmon_ops = { 103 - .visible = 0444, 104 - .read = sa67mcu_hwmon_read, 105 - .read_string = sa67mcu_hwmon_read_string, 106 - }; 107 - 108 - static const struct hwmon_chip_info sa67mcu_hwmon_chip_info = { 109 - .ops = &sa67mcu_hwmon_ops, 110 - .info = sa67mcu_hwmon_info, 111 - }; 112 - 113 - static int sa67mcu_hwmon_probe(struct platform_device *pdev) 114 - { 115 - struct sa67mcu_hwmon *hwmon; 116 - struct device *hwmon_dev; 117 - int ret; 118 - 119 - if (!pdev->dev.parent) 120 - return -ENODEV; 121 - 122 - hwmon = devm_kzalloc(&pdev->dev, sizeof(*hwmon), GFP_KERNEL); 123 - if (!hwmon) 124 - return -ENOMEM; 125 - 126 - hwmon->regmap = dev_get_regmap(pdev->dev.parent, NULL); 127 - if (!hwmon->regmap) 128 - return -ENODEV; 129 - 130 - ret = device_property_read_u32(&pdev->dev, "reg", &hwmon->offset); 131 - if (ret) 132 - return -EINVAL; 133 - 134 - hwmon_dev = devm_hwmon_device_register_with_info(&pdev->dev, 135 - "sa67mcu_hwmon", hwmon, 136 - &sa67mcu_hwmon_chip_info, 137 - NULL); 138 - if (IS_ERR(hwmon_dev)) 139 - dev_err(&pdev->dev, "failed to register as hwmon device"); 
140 - 141 - return PTR_ERR_OR_ZERO(hwmon_dev); 142 - } 143 - 144 - static const struct of_device_id sa67mcu_hwmon_of_match[] = { 145 - { .compatible = "kontron,sa67mcu-hwmon", }, 146 - {} 147 - }; 148 - MODULE_DEVICE_TABLE(of, sa67mcu_hwmon_of_match); 149 - 150 - static struct platform_driver sa67mcu_hwmon_driver = { 151 - .probe = sa67mcu_hwmon_probe, 152 - .driver = { 153 - .name = "sa67mcu-hwmon", 154 - .of_match_table = sa67mcu_hwmon_of_match, 155 - }, 156 - }; 157 - module_platform_driver(sa67mcu_hwmon_driver); 158 - 159 - MODULE_DESCRIPTION("sa67mcu Hardware Monitoring Driver"); 160 - MODULE_AUTHOR("Michael Walle <mwalle@kernel.org>"); 161 - MODULE_LICENSE("GPL");
+10 -4
drivers/i2c/busses/i2c-i801.c
··· 310 310 311 311 /* 312 312 * If set to true the host controller registers are reserved for 313 - * ACPI AML use. 313 + * ACPI AML use. Needs extra protection by acpi_lock. 314 314 */ 315 315 bool acpi_reserved; 316 + struct mutex acpi_lock; 316 317 }; 317 318 318 319 #define FEATURE_SMBUS_PEC BIT(0) ··· 895 894 int hwpec, ret; 896 895 struct i801_priv *priv = i2c_get_adapdata(adap); 897 896 898 - if (priv->acpi_reserved) 897 + mutex_lock(&priv->acpi_lock); 898 + if (priv->acpi_reserved) { 899 + mutex_unlock(&priv->acpi_lock); 899 900 return -EBUSY; 901 + } 900 902 901 903 pm_runtime_get_sync(&priv->pci_dev->dev); 902 904 ··· 939 935 iowrite8(SMBHSTSTS_INUSE_STS | STATUS_FLAGS, SMBHSTSTS(priv)); 940 936 941 937 pm_runtime_put_autosuspend(&priv->pci_dev->dev); 938 + mutex_unlock(&priv->acpi_lock); 942 939 return ret; 943 940 } 944 941 ··· 1470 1465 * further access from the driver itself. This device is now owned 1471 1466 * by the system firmware. 1472 1467 */ 1473 - i2c_lock_bus(&priv->adapter, I2C_LOCK_SEGMENT); 1468 + mutex_lock(&priv->acpi_lock); 1474 1469 1475 1470 if (!priv->acpi_reserved && i801_acpi_is_smbus_ioport(priv, address)) { 1476 1471 priv->acpi_reserved = true; ··· 1490 1485 else 1491 1486 status = acpi_os_write_port(address, (u32)*value, bits); 1492 1487 1493 - i2c_unlock_bus(&priv->adapter, I2C_LOCK_SEGMENT); 1488 + mutex_unlock(&priv->acpi_lock); 1494 1489 1495 1490 return status; 1496 1491 } ··· 1550 1545 priv->adapter.dev.parent = &dev->dev; 1551 1546 acpi_use_parent_companion(&priv->adapter.dev); 1552 1547 priv->adapter.retries = 3; 1548 + mutex_init(&priv->acpi_lock); 1553 1549 1554 1550 priv->pci_dev = dev; 1555 1551 priv->features = id->driver_data;
+63 -7
drivers/net/bonding/bond_main.c
··· 1509 1509 return features; 1510 1510 } 1511 1511 1512 + static int bond_header_create(struct sk_buff *skb, struct net_device *bond_dev, 1513 + unsigned short type, const void *daddr, 1514 + const void *saddr, unsigned int len) 1515 + { 1516 + struct bonding *bond = netdev_priv(bond_dev); 1517 + const struct header_ops *slave_ops; 1518 + struct slave *slave; 1519 + int ret = 0; 1520 + 1521 + rcu_read_lock(); 1522 + slave = rcu_dereference(bond->curr_active_slave); 1523 + if (slave) { 1524 + slave_ops = READ_ONCE(slave->dev->header_ops); 1525 + if (slave_ops && slave_ops->create) 1526 + ret = slave_ops->create(skb, slave->dev, 1527 + type, daddr, saddr, len); 1528 + } 1529 + rcu_read_unlock(); 1530 + return ret; 1531 + } 1532 + 1533 + static int bond_header_parse(const struct sk_buff *skb, unsigned char *haddr) 1534 + { 1535 + struct bonding *bond = netdev_priv(skb->dev); 1536 + const struct header_ops *slave_ops; 1537 + struct slave *slave; 1538 + int ret = 0; 1539 + 1540 + rcu_read_lock(); 1541 + slave = rcu_dereference(bond->curr_active_slave); 1542 + if (slave) { 1543 + slave_ops = READ_ONCE(slave->dev->header_ops); 1544 + if (slave_ops && slave_ops->parse) 1545 + ret = slave_ops->parse(skb, haddr); 1546 + } 1547 + rcu_read_unlock(); 1548 + return ret; 1549 + } 1550 + 1551 + static const struct header_ops bond_header_ops = { 1552 + .create = bond_header_create, 1553 + .parse = bond_header_parse, 1554 + }; 1555 + 1512 1556 static void bond_setup_by_slave(struct net_device *bond_dev, 1513 1557 struct net_device *slave_dev) 1514 1558 { ··· 1560 1516 1561 1517 dev_close(bond_dev); 1562 1518 1563 - bond_dev->header_ops = slave_dev->header_ops; 1519 + bond_dev->header_ops = slave_dev->header_ops ? 
1520 + &bond_header_ops : NULL; 1564 1521 1565 1522 bond_dev->type = slave_dev->type; 1566 1523 bond_dev->hard_header_len = slave_dev->hard_header_len; ··· 2846 2801 2847 2802 continue; 2848 2803 2804 + case BOND_LINK_FAIL: 2805 + case BOND_LINK_BACK: 2806 + slave_dbg(bond->dev, slave->dev, "link_new_state %d on slave\n", 2807 + slave->link_new_state); 2808 + continue; 2809 + 2849 2810 default: 2850 - slave_err(bond->dev, slave->dev, "invalid new link %d on slave\n", 2811 + slave_err(bond->dev, slave->dev, "invalid link_new_state %d on slave\n", 2851 2812 slave->link_new_state); 2852 2813 bond_propose_link_state(slave, BOND_LINK_NOCHANGE); 2853 2814 ··· 3428 3377 } else if (is_arp) { 3429 3378 return bond_arp_rcv(skb, bond, slave); 3430 3379 #if IS_ENABLED(CONFIG_IPV6) 3431 - } else if (is_ipv6) { 3380 + } else if (is_ipv6 && likely(ipv6_mod_enabled())) { 3432 3381 return bond_na_rcv(skb, bond, slave); 3433 3382 #endif 3434 3383 } else { ··· 5120 5069 { 5121 5070 struct bond_up_slave *usable, *all; 5122 5071 5123 - usable = rtnl_dereference(bond->usable_slaves); 5124 - rcu_assign_pointer(bond->usable_slaves, usable_slaves); 5125 - kfree_rcu(usable, rcu); 5126 - 5127 5072 all = rtnl_dereference(bond->all_slaves); 5128 5073 rcu_assign_pointer(bond->all_slaves, all_slaves); 5129 5074 kfree_rcu(all, rcu); 5075 + 5076 + if (BOND_MODE(bond) == BOND_MODE_BROADCAST) { 5077 + kfree_rcu(usable_slaves, rcu); 5078 + return; 5079 + } 5080 + 5081 + usable = rtnl_dereference(bond->usable_slaves); 5082 + rcu_assign_pointer(bond->usable_slaves, usable_slaves); 5083 + kfree_rcu(usable, rcu); 5130 5084 } 5131 5085 5132 5086 static void bond_reset_slave_arr(struct bonding *bond)
+3
drivers/net/caif/caif_serial.c
··· 297 297 dev_close(ser->dev); 298 298 unregister_netdevice(ser->dev); 299 299 debugfs_deinit(ser); 300 + tty_kref_put(tty->link); 300 301 tty_kref_put(tty); 301 302 } 302 303 rtnl_unlock(); ··· 332 331 333 332 ser = netdev_priv(dev); 334 333 ser->tty = tty_kref_get(tty); 334 + tty_kref_get(tty->link); 335 335 ser->dev = dev; 336 336 debugfs_init(ser, tty); 337 337 tty->receive_room = 4096; ··· 341 339 rtnl_lock(); 342 340 result = register_netdevice(dev); 343 341 if (result) { 342 + tty_kref_put(tty->link); 344 343 tty_kref_put(tty); 345 344 rtnl_unlock(); 346 345 free_netdev(dev);
+1 -1
drivers/net/can/dev/calc_bittiming.c
··· 8 8 #include <linux/units.h> 9 9 #include <linux/can/dev.h> 10 10 11 - #define CAN_CALC_MAX_ERROR 50 /* in one-tenth of a percent */ 11 + #define CAN_CALC_MAX_ERROR 500 /* max error 5% */ 12 12 13 13 /* CiA recommended sample points for Non Return to Zero encoding. */ 14 14 static int can_calc_sample_point_nrz(const struct can_bittiming *bt)
+4 -1
drivers/net/can/spi/hi311x.c
··· 755 755 return ret; 756 756 757 757 mutex_lock(&priv->hi3110_lock); 758 - hi3110_power_enable(priv->transceiver, 1); 758 + ret = hi3110_power_enable(priv->transceiver, 1); 759 + if (ret) 760 + goto out_close_candev; 759 761 760 762 priv->force_quit = 0; 761 763 priv->tx_skb = NULL; ··· 792 790 hi3110_hw_sleep(spi); 793 791 out_close: 794 792 hi3110_power_enable(priv->transceiver, 0); 793 + out_close_candev: 795 794 close_candev(net); 796 795 mutex_unlock(&priv->hi3110_lock); 797 796 return ret;
+8 -3
drivers/net/dsa/microchip/ksz_ptp.c
··· 1108 1108 const struct ksz_dev_ops *ops = port->ksz_dev->dev_ops; 1109 1109 struct ksz_irq *ptpirq = &port->ptpirq; 1110 1110 struct ksz_ptp_irq *ptpmsg_irq; 1111 + int ret; 1111 1112 1112 1113 ptpmsg_irq = &port->ptpmsg_irq[n]; 1113 1114 ptpmsg_irq->num = irq_create_mapping(ptpirq->domain, n); ··· 1120 1119 1121 1120 strscpy(ptpmsg_irq->name, name[n]); 1122 1121 1123 - return request_threaded_irq(ptpmsg_irq->num, NULL, 1124 - ksz_ptp_msg_thread_fn, IRQF_ONESHOT, 1125 - ptpmsg_irq->name, ptpmsg_irq); 1122 + ret = request_threaded_irq(ptpmsg_irq->num, NULL, 1123 + ksz_ptp_msg_thread_fn, IRQF_ONESHOT, 1124 + ptpmsg_irq->name, ptpmsg_irq); 1125 + if (ret) 1126 + irq_dispose_mapping(ptpmsg_irq->num); 1127 + 1128 + return ret; 1126 1129 } 1127 1130 1128 1131 int ksz_ptp_irq_setup(struct dsa_switch *ds, u8 p)
-1
drivers/net/dsa/mxl862xx/mxl862xx.c
··· 149 149 return -ENOMEM; 150 150 151 151 bus->priv = priv; 152 - ds->user_mii_bus = bus; 153 152 bus->name = KBUILD_MODNAME "-mii"; 154 153 snprintf(bus->id, MII_BUS_ID_SIZE, "%s-mii", dev_name(dev)); 155 154 bus->read_c45 = mxl862xx_phy_read_c45_mii_bus;
+1 -2
drivers/net/dsa/realtek/rtl8365mb.c
··· 1480 1480 1481 1481 stats->rx_packets = cnt[RTL8365MB_MIB_ifInUcastPkts] + 1482 1482 cnt[RTL8365MB_MIB_ifInMulticastPkts] + 1483 - cnt[RTL8365MB_MIB_ifInBroadcastPkts] - 1484 - cnt[RTL8365MB_MIB_ifOutDiscards]; 1483 + cnt[RTL8365MB_MIB_ifInBroadcastPkts]; 1485 1484 1486 1485 stats->tx_packets = cnt[RTL8365MB_MIB_ifOutUcastPkts] + 1487 1486 cnt[RTL8365MB_MIB_ifOutMulticastPkts] +
+3 -3
drivers/net/dsa/realtek/rtl8366rb-leds.c
··· 12 12 case 0: 13 13 return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port)); 14 14 case 1: 15 - return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port)); 15 + return FIELD_PREP(RTL8366RB_LED_X_1_CTRL_MASK, BIT(port)); 16 16 case 2: 17 - return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port)); 17 + return FIELD_PREP(RTL8366RB_LED_2_X_CTRL_MASK, BIT(port)); 18 18 case 3: 19 - return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port)); 19 + return FIELD_PREP(RTL8366RB_LED_X_3_CTRL_MASK, BIT(port)); 20 20 default: 21 21 return 0; 22 22 }
+3 -4
drivers/net/dsa/sja1105/sja1105_main.c
··· 2339 2339 goto out; 2340 2340 } 2341 2341 2342 + rc = sja1105_reload_cbs(priv); 2343 + 2344 + out: 2342 2345 dsa_switch_for_each_available_port(dp, ds) 2343 2346 if (dp->pl) 2344 2347 phylink_replay_link_end(dp->pl); 2345 2348 2346 - rc = sja1105_reload_cbs(priv); 2347 - if (rc < 0) 2348 - goto out; 2349 - out: 2350 2349 mutex_unlock(&priv->mgmt_lock); 2351 2350 mutex_unlock(&priv->fdb_lock); 2352 2351
+10 -9
drivers/net/ethernet/amd/xgbe/xgbe-drv.c
··· 1264 1264 if (ret) 1265 1265 goto err_napi; 1266 1266 1267 + /* Reset the phy settings */ 1268 + ret = xgbe_phy_reset(pdata); 1269 + if (ret) 1270 + goto err_irqs; 1271 + 1272 + /* Start the phy */ 1267 1273 ret = phy_if->phy_start(pdata); 1268 1274 if (ret) 1269 1275 goto err_irqs; 1270 1276 1271 1277 hw_if->enable_tx(pdata); 1272 1278 hw_if->enable_rx(pdata); 1279 + /* Synchronize flag with hardware state after enabling TX/RX. 1280 + * This prevents stale state after device restart cycles. 1281 + */ 1282 + pdata->data_path_stopped = false; 1273 1283 1274 1284 udp_tunnel_nic_reset_ntf(netdev); 1275 - 1276 - /* Reset the phy settings */ 1277 - ret = xgbe_phy_reset(pdata); 1278 - if (ret) 1279 - goto err_txrx; 1280 1285 1281 1286 netif_tx_start_all_queues(netdev); 1282 1287 ··· 1291 1286 clear_bit(XGBE_STOPPED, &pdata->dev_state); 1292 1287 1293 1288 return 0; 1294 - 1295 - err_txrx: 1296 - hw_if->disable_rx(pdata); 1297 - hw_if->disable_tx(pdata); 1298 1289 1299 1290 err_irqs: 1300 1291 xgbe_free_irqs(pdata);
+75 -7
drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
··· 1942 1942 static void xgbe_rx_adaptation(struct xgbe_prv_data *pdata) 1943 1943 { 1944 1944 struct xgbe_phy_data *phy_data = pdata->phy_data; 1945 - unsigned int reg; 1945 + int reg; 1946 1946 1947 1947 /* step 2: force PCS to send RX_ADAPT Req to PHY */ 1948 1948 XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_RX_EQ_CTRL4, ··· 1964 1964 1965 1965 /* Step 4: Check for Block lock */ 1966 1966 1967 - /* Link status is latched low, so read once to clear 1968 - * and then read again to get current state 1967 + reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1); 1968 + if (reg < 0) 1969 + goto set_mode; 1970 + 1971 + /* Link status is latched low so that momentary link drops 1972 + * can be detected. If link was already down read again 1973 + * to get the latest state. 1969 1974 */ 1970 - reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1); 1971 - reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1); 1975 + if (!pdata->phy.link && !(reg & MDIO_STAT1_LSTATUS)) { 1976 + reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1); 1977 + if (reg < 0) 1978 + goto set_mode; 1979 + } 1980 + 1972 1981 if (reg & MDIO_STAT1_LSTATUS) { 1973 1982 /* If the block lock is found, update the helpers 1974 1983 * and declare the link up ··· 2015 2006 2016 2007 /* perform rx adaptation */ 2017 2008 xgbe_rx_adaptation(pdata); 2009 + } 2010 + 2011 + /* 2012 + * xgbe_phy_stop_data_path - Stop TX/RX to prevent packet corruption 2013 + * @pdata: driver private data 2014 + * 2015 + * This function stops the data path (TX and RX) to prevent packet 2016 + * corruption during critical PHY operations like RX adaptation. 2017 + * Must be called before initiating RX adaptation when link goes down. 
2018 + */ 2019 + static void xgbe_phy_stop_data_path(struct xgbe_prv_data *pdata) 2020 + { 2021 + if (pdata->data_path_stopped) 2022 + return; 2023 + 2024 + /* Stop TX/RX to prevent packet corruption during RX adaptation */ 2025 + pdata->hw_if.disable_tx(pdata); 2026 + pdata->hw_if.disable_rx(pdata); 2027 + pdata->data_path_stopped = true; 2028 + 2029 + netif_dbg(pdata, link, pdata->netdev, 2030 + "stopping data path for RX adaptation\n"); 2031 + } 2032 + 2033 + /* 2034 + * xgbe_phy_start_data_path - Re-enable TX/RX after RX adaptation 2035 + * @pdata: driver private data 2036 + * 2037 + * This function re-enables the data path (TX and RX) after RX adaptation 2038 + * has completed successfully. Only called when link is confirmed up. 2039 + */ 2040 + static void xgbe_phy_start_data_path(struct xgbe_prv_data *pdata) 2041 + { 2042 + if (!pdata->data_path_stopped) 2043 + return; 2044 + 2045 + pdata->hw_if.enable_rx(pdata); 2046 + pdata->hw_if.enable_tx(pdata); 2047 + pdata->data_path_stopped = false; 2048 + 2049 + netif_dbg(pdata, link, pdata->netdev, 2050 + "restarting data path after RX adaptation\n"); 2018 2051 + } 2019 2052 2020 2053 static void xgbe_phy_rx_reset(struct xgbe_prv_data *pdata) ··· 2868 2817 if (pdata->en_rx_adap) { 2869 2818 /* if the link is available and adaptation is done, 2870 2819 * declare link up 2820 + * 2821 + * Note: When link is up and adaptation is done, we can 2822 + * safely re-enable the data path if it was stopped 2823 + * for adaptation. 2871 2824 */ 2872 - if ((reg & MDIO_STAT1_LSTATUS) && pdata->rx_adapt_done) 2825 + if ((reg & MDIO_STAT1_LSTATUS) && pdata->rx_adapt_done) { 2826 + xgbe_phy_start_data_path(pdata); 2873 2827 return 1; 2828 + } 2874 2829 /* If either link is not available or adaptation is not done, 2875 2830 * retrigger the adaptation logic. (if the mode is not set, 2876 2831 * then issue mailbox command first) 2877 2832 */ 2833 + 2834 + /* CRITICAL: Stop data path BEFORE triggering RX adaptation 2835 + * to prevent CRC errors from packets corrupted during 2836 + * the adaptation process. This is especially important 2837 + * when AN is OFF in 10G KR mode. 2838 + */ 2839 + xgbe_phy_stop_data_path(pdata); 2840 + 2878 2841 if (pdata->mode_set) { 2879 2842 xgbe_phy_rx_adaptation(pdata); 2880 2843 } else { ··· 2896 2831 xgbe_phy_set_mode(pdata, phy_data->cur_mode); 2897 2832 } 2898 2833 2899 - if (pdata->rx_adapt_done) 2834 + if (pdata->rx_adapt_done) { 2835 + /* Adaptation complete, safe to re-enable data path */ 2836 + xgbe_phy_start_data_path(pdata); 2900 2837 return 1; 2838 + } 2901 2839 } else if (reg & MDIO_STAT1_LSTATUS) 2902 2840 return 1; 2903 2841
+4
drivers/net/ethernet/amd/xgbe/xgbe.h
··· 1262 1262 bool en_rx_adap; 1263 1263 int rx_adapt_retries; 1264 1264 bool rx_adapt_done; 1265 + /* Flag to track if data path (TX/RX) was stopped for RX adaptation. 1266 + * This prevents packet corruption during the adaptation window. 1267 + */ 1268 + bool data_path_stopped; 1265 1269 bool mode_set; 1266 1270 bool sph; 1267 1271 };
+11
drivers/net/ethernet/arc/emac_main.c
··· 934 934 /* Set poll rate so that it polls every 1 ms */ 935 935 arc_reg_set(priv, R_POLLRATE, clock_frequency / 1000000); 936 936 937 + /* 938 + * Put the device into a known quiescent state before requesting 939 + * the IRQ. Clear only EMAC interrupt status bits here; leave the 940 + * MDIO completion bit alone and avoid writing TXPL_MASK, which is 941 + * used to force TX polling rather than acknowledge interrupts. 942 + */ 943 + arc_reg_set(priv, R_ENABLE, 0); 944 + arc_reg_set(priv, R_STATUS, RXINT_MASK | TXINT_MASK | ERR_MASK | 945 + TXCH_MASK | MSER_MASK | RXCR_MASK | 946 + RXFR_MASK | RXFL_MASK); 947 + 937 948 ndev->irq = irq; 938 949 dev_info(dev, "IRQ is %d\n", ndev->irq); 939 950
+2 -2
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
··· 979 979 980 980 if (bnxt_get_nr_rss_ctxs(bp, req_rx_rings) != 981 981 bnxt_get_nr_rss_ctxs(bp, bp->rx_nr_rings) && 982 - netif_is_rxfh_configured(dev)) { 983 - netdev_warn(dev, "RSS table size change required, RSS table entries must be default to proceed\n"); 982 + (netif_is_rxfh_configured(dev) || bp->num_rss_ctx)) { 983 + netdev_warn(dev, "RSS table size change required, RSS table entries must be default (with no additional RSS contexts present) to proceed\n"); 984 984 return -EINVAL; 985 985 } 986 986
+12 -19
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 1342 1342 } 1343 1343 } 1344 1344 1345 - void bcmgenet_eee_enable_set(struct net_device *dev, bool enable, 1346 - bool tx_lpi_enabled) 1345 + void bcmgenet_eee_enable_set(struct net_device *dev, bool enable) 1347 1346 { 1348 1347 struct bcmgenet_priv *priv = netdev_priv(dev); 1349 1348 u32 off = priv->hw_params->tbuf_offset + TBUF_ENERGY_CTRL; ··· 1362 1363 1363 1364 /* Enable EEE and switch to a 27Mhz clock automatically */ 1364 1365 reg = bcmgenet_readl(priv->base + off); 1365 - if (tx_lpi_enabled) 1366 + if (enable) 1366 1367 reg |= TBUF_EEE_EN | TBUF_PM_EN; 1367 1368 else 1368 1369 reg &= ~(TBUF_EEE_EN | TBUF_PM_EN); ··· 1381 1382 priv->clk_eee_enabled = false; 1382 1383 } 1383 1384 1384 - priv->eee.eee_enabled = enable; 1385 - priv->eee.tx_lpi_enabled = tx_lpi_enabled; 1386 1385 } 1387 1386 1388 1387 static int bcmgenet_get_eee(struct net_device *dev, struct ethtool_keee *e) 1389 1388 { 1390 1389 struct bcmgenet_priv *priv = netdev_priv(dev); 1391 - struct ethtool_keee *p = &priv->eee; 1390 + int ret; 1392 1391 1393 1392 if (GENET_IS_V1(priv)) 1394 1393 return -EOPNOTSUPP; ··· 1394 1397 if (!dev->phydev) 1395 1398 return -ENODEV; 1396 1399 1397 - e->tx_lpi_enabled = p->tx_lpi_enabled; 1400 + ret = phy_ethtool_get_eee(dev->phydev, e); 1401 + if (ret) 1402 + return ret; 1403 + 1404 + /* tx_lpi_timer is maintained by the MAC hardware register; the 1405 + * PHY-level eee_cfg timer is not set for GENET. 
1406 + */ 1398 1407 e->tx_lpi_timer = bcmgenet_umac_readl(priv, UMAC_EEE_LPI_TIMER); 1399 1408 1400 - return phy_ethtool_get_eee(dev->phydev, e); 1409 + return 0; 1401 1410 } 1402 1411 1403 1412 static int bcmgenet_set_eee(struct net_device *dev, struct ethtool_keee *e) 1404 1413 { 1405 1414 struct bcmgenet_priv *priv = netdev_priv(dev); 1406 - struct ethtool_keee *p = &priv->eee; 1407 - bool active; 1408 1415 1409 1416 if (GENET_IS_V1(priv)) 1410 1417 return -EOPNOTSUPP; ··· 1416 1415 if (!dev->phydev) 1417 1416 return -ENODEV; 1418 1417 1419 - p->eee_enabled = e->eee_enabled; 1420 - 1421 - if (!p->eee_enabled) { 1422 - bcmgenet_eee_enable_set(dev, false, false); 1423 - } else { 1424 - active = phy_init_eee(dev->phydev, false) >= 0; 1425 - bcmgenet_umac_writel(priv, e->tx_lpi_timer, UMAC_EEE_LPI_TIMER); 1426 - bcmgenet_eee_enable_set(dev, active, e->tx_lpi_enabled); 1427 - } 1418 + bcmgenet_umac_writel(priv, e->tx_lpi_timer, UMAC_EEE_LPI_TIMER); 1428 1419 1429 1420 return phy_ethtool_set_eee(dev->phydev, e); 1430 1421 }
+1 -4
drivers/net/ethernet/broadcom/genet/bcmgenet.h
··· 665 665 u8 sopass[SOPASS_MAX]; 666 666 667 667 struct bcmgenet_mib_counters mib; 668 - 669 - struct ethtool_keee eee; 670 668 }; 671 669 672 670 static inline bool bcmgenet_has_40bits(struct bcmgenet_priv *priv) ··· 747 749 int bcmgenet_wol_power_up_cfg(struct bcmgenet_priv *priv, 748 750 enum bcmgenet_power_mode mode); 749 751 750 - void bcmgenet_eee_enable_set(struct net_device *dev, bool enable, 751 - bool tx_lpi_enabled); 752 + void bcmgenet_eee_enable_set(struct net_device *dev, bool enable); 752 753 753 754 #endif /* __BCMGENET_H__ */
+5 -5
drivers/net/ethernet/broadcom/genet/bcmmii.c
··· 29 29 struct bcmgenet_priv *priv = netdev_priv(dev); 30 30 struct phy_device *phydev = dev->phydev; 31 31 u32 reg, cmd_bits = 0; 32 - bool active; 33 32 34 33 /* speed */ 35 34 if (phydev->speed == SPEED_1000) ··· 89 90 bcmgenet_umac_writel(priv, reg, UMAC_CMD); 90 91 spin_unlock_bh(&priv->reg_lock); 91 92 92 - active = phy_init_eee(phydev, 0) >= 0; 93 - bcmgenet_eee_enable_set(dev, 94 - priv->eee.eee_enabled && active, 95 - priv->eee.tx_lpi_enabled); 96 93 } 97 94 98 95 /* setup netdev link state when PHY link status change and ··· 107 112 reg &= ~RGMII_LINK; 108 113 bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL); 109 114 } 115 + 116 + bcmgenet_eee_enable_set(dev, phydev->enable_tx_lpi); 110 117 111 118 phy_print_status(phydev); 112 119 } ··· 408 411 409 412 /* Indicate that the MAC is responsible for PHY PM */ 410 413 dev->phydev->mac_managed_pm = true; 414 + 415 + if (!GENET_IS_V1(priv)) 416 + phy_support_eee(dev->phydev); 411 417 412 418 return 0; 413 419 }
+95 -3
drivers/net/ethernet/cadence/macb_main.c
··· 37 37 #include <linux/tcp.h> 38 38 #include <linux/types.h> 39 39 #include <linux/udp.h> 40 + #include <linux/gcd.h> 40 41 #include <net/pkt_sched.h> 41 42 #include "macb.h" 42 43 ··· 787 786 netif_tx_stop_all_queues(ndev); 788 787 } 789 788 789 + /* Use juggling algorithm to left rotate tx ring and tx skb array */ 790 + static void gem_shuffle_tx_one_ring(struct macb_queue *queue) 791 + { 792 + unsigned int head, tail, count, ring_size, desc_size; 793 + struct macb_tx_skb tx_skb, *skb_curr, *skb_next; 794 + struct macb_dma_desc *desc_curr, *desc_next; 795 + unsigned int i, cycles, shift, curr, next; 796 + struct macb *bp = queue->bp; 797 + unsigned char desc[24]; 798 + unsigned long flags; 799 + 800 + desc_size = macb_dma_desc_get_size(bp); 801 + 802 + if (WARN_ON_ONCE(desc_size > ARRAY_SIZE(desc))) 803 + return; 804 + 805 + spin_lock_irqsave(&queue->tx_ptr_lock, flags); 806 + head = queue->tx_head; 807 + tail = queue->tx_tail; 808 + ring_size = bp->tx_ring_size; 809 + count = CIRC_CNT(head, tail, ring_size); 810 + 811 + if (!(tail % ring_size)) 812 + goto unlock; 813 + 814 + if (!count) { 815 + queue->tx_head = 0; 816 + queue->tx_tail = 0; 817 + goto unlock; 818 + } 819 + 820 + shift = tail % ring_size; 821 + cycles = gcd(ring_size, shift); 822 + 823 + for (i = 0; i < cycles; i++) { 824 + memcpy(&desc, macb_tx_desc(queue, i), desc_size); 825 + memcpy(&tx_skb, macb_tx_skb(queue, i), 826 + sizeof(struct macb_tx_skb)); 827 + 828 + curr = i; 829 + next = (curr + shift) % ring_size; 830 + 831 + while (next != i) { 832 + desc_curr = macb_tx_desc(queue, curr); 833 + desc_next = macb_tx_desc(queue, next); 834 + 835 + memcpy(desc_curr, desc_next, desc_size); 836 + 837 + if (next == ring_size - 1) 838 + desc_curr->ctrl &= ~MACB_BIT(TX_WRAP); 839 + if (curr == ring_size - 1) 840 + desc_curr->ctrl |= MACB_BIT(TX_WRAP); 841 + 842 + skb_curr = macb_tx_skb(queue, curr); 843 + skb_next = macb_tx_skb(queue, next); 844 + memcpy(skb_curr, skb_next, sizeof(struct macb_tx_skb)); 
845 + 846 + curr = next; 847 + next = (curr + shift) % ring_size; 848 + } 849 + 850 + desc_curr = macb_tx_desc(queue, curr); 851 + memcpy(desc_curr, &desc, desc_size); 852 + if (i == ring_size - 1) 853 + desc_curr->ctrl &= ~MACB_BIT(TX_WRAP); 854 + if (curr == ring_size - 1) 855 + desc_curr->ctrl |= MACB_BIT(TX_WRAP); 856 + memcpy(macb_tx_skb(queue, curr), &tx_skb, 857 + sizeof(struct macb_tx_skb)); 858 + } 859 + 860 + queue->tx_head = count; 861 + queue->tx_tail = 0; 862 + 863 + /* Make descriptor updates visible to hardware */ 864 + wmb(); 865 + 866 + unlock: 867 + spin_unlock_irqrestore(&queue->tx_ptr_lock, flags); 868 + } 869 + 870 + /* Rotate the queue so that the tail is at index 0 */ 871 + static void gem_shuffle_tx_rings(struct macb *bp) 872 + { 873 + struct macb_queue *queue; 874 + int q; 875 + 876 + for (q = 0, queue = bp->queues; q < bp->num_queues; q++, queue++) 877 + gem_shuffle_tx_one_ring(queue); 878 + } 879 + 790 880 static void macb_mac_link_up(struct phylink_config *config, 791 881 struct phy_device *phy, 792 882 unsigned int mode, phy_interface_t interface, ··· 916 824 ctrl |= MACB_BIT(PAE); 917 825 918 826 for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) { 919 - queue->tx_head = 0; 920 - queue->tx_tail = 0; 921 827 queue_writel(queue, IER, 922 828 bp->rx_intr_mask | MACB_TX_INT_FLAGS | MACB_BIT(HRESP)); 923 829 } ··· 929 839 930 840 spin_unlock_irqrestore(&bp->lock, flags); 931 841 932 - if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) 842 + if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) { 933 843 macb_set_tx_clk(bp, speed); 844 + gem_shuffle_tx_rings(bp); 845 + } 934 846 935 847 /* Enable Rx and Tx; Enable PTP unicast */ 936 848 ctrl = macb_readl(bp, NCR);
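The gem_shuffle_tx_one_ring() hunk above left-rotates the TX ring in place with the classic "juggling" algorithm: gcd(ring_size, shift) cycles, each walked exactly once, so every entry moves straight to its final slot with O(1) scratch space. A standalone sketch of the same rotation on a plain int array (a userspace illustration only; the driver additionally fixes up the TX_WRAP bit and moves the parallel skb array as it goes):

```c
#include <assert.h>

/* Greatest common divisor, iterative Euclid. */
static unsigned int gcd_uint(unsigned int a, unsigned int b)
{
	while (b) {
		unsigned int t = a % b;

		a = b;
		b = t;
	}
	return a;
}

/* Left-rotate buf[0..n-1] by shift slots using the juggling
 * algorithm: gcd(n, shift) independent cycles, each started at
 * index i, with one temporary holding the displaced element.
 */
static void rotate_left(int *buf, unsigned int n, unsigned int shift)
{
	unsigned int cycles, i, curr, next;

	shift %= n;
	if (!shift)
		return;

	cycles = gcd_uint(n, shift);
	for (i = 0; i < cycles; i++) {
		int tmp = buf[i];

		curr = i;
		next = (curr + shift) % n;
		while (next != i) {
			buf[curr] = buf[next];
			curr = next;
			next = (curr + shift) % n;
		}
		buf[curr] = tmp;	/* close the cycle */
	}
}
```

Rotating by `tail % ring_size` in this way is what lets the driver renumber the ring so the tail lands at index 0 without allocating a second ring.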
+10 -14
drivers/net/ethernet/freescale/enetc/netc_blk_ctrl.c
··· 333 333 334 334 mdio_node = of_get_child_by_name(np, "mdio"); 335 335 if (!mdio_node) 336 - return 0; 336 + return -ENODEV; 337 337 338 338 phy_node = of_get_next_child(mdio_node, NULL); 339 - if (!phy_node) 339 + if (!phy_node) { 340 + err = -ENODEV; 340 341 goto of_put_mdio_node; 342 + } 341 343 342 344 err = of_property_read_u32(phy_node, "reg", &addr); 343 345 if (err) ··· 425 423 426 424 addr = netc_get_phy_addr(gchild); 427 425 if (addr < 0) { 426 + if (addr == -ENODEV) 427 + continue; 428 + 428 429 dev_err(dev, "Failed to get PHY address\n"); 429 430 return addr; 430 431 } ··· 437 432 "Find same PHY address in EMDIO and ENETC node\n"); 438 433 return -EINVAL; 439 434 } 440 - 441 - /* The default value of LaBCR[MDIO_PHYAD_PRTAD ] is 442 - * 0, so no need to set the register. 443 - */ 444 - if (!addr) 445 - continue; 446 435 447 436 switch (bus_devfn) { 448 437 case IMX95_ENETC0_BUS_DEVFN: ··· 577 578 578 579 addr = netc_get_phy_addr(np); 579 580 if (addr < 0) { 581 + if (addr == -ENODEV) 582 + return 0; 583 + 580 584 dev_err(dev, "Failed to get PHY address\n"); 581 585 return addr; 582 586 } 583 - 584 - /* The default value of LaBCR[MDIO_PHYAD_PRTAD] is 0, 585 - * so no need to set the register. 586 - */ 587 - if (!addr) 588 - return 0; 589 587 590 588 if (phy_mask & BIT(addr)) { 591 589 dev_err(dev,
-2
drivers/net/ethernet/intel/e1000/e1000_main.c
··· 2952 2952 dma_error: 2953 2953 dev_err(&pdev->dev, "TX DMA map failed\n"); 2954 2954 buffer_info->dma = 0; 2955 - if (count) 2956 - count--; 2957 2955 2958 2956 while (count--) { 2959 2957 if (i == 0)
-2
drivers/net/ethernet/intel/e1000e/netdev.c
··· 5652 5652 dma_error: 5653 5653 dev_err(&pdev->dev, "Tx DMA map failed\n"); 5654 5654 buffer_info->dma = 0; 5655 - if (count) 5656 - count--; 5657 5655 5658 5656 while (count--) { 5659 5657 if (i == 0)
+7 -7
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
··· 3833 3833 cfilter.n_proto = ETH_P_IP; 3834 3834 if (mask.dst_ip[0] & tcf.dst_ip[0]) 3835 3835 memcpy(&cfilter.ip.v4.dst_ip, tcf.dst_ip, 3836 - ARRAY_SIZE(tcf.dst_ip)); 3837 - else if (mask.src_ip[0] & tcf.dst_ip[0]) 3836 + sizeof(cfilter.ip.v4.dst_ip)); 3837 + else if (mask.src_ip[0] & tcf.src_ip[0]) 3838 3838 memcpy(&cfilter.ip.v4.src_ip, tcf.src_ip, 3839 - ARRAY_SIZE(tcf.dst_ip)); 3839 + sizeof(cfilter.ip.v4.src_ip)); 3840 3840 break; 3841 3841 case VIRTCHNL_TCP_V6_FLOW: 3842 3842 cfilter.n_proto = ETH_P_IPV6; ··· 3891 3891 /* for ipv6, mask is set for all sixteen bytes (4 words) */ 3892 3892 if (cfilter.n_proto == ETH_P_IPV6 && mask.dst_ip[3]) 3893 3893 if (memcmp(&cfilter.ip.v6.dst_ip6, &cf->ip.v6.dst_ip6, 3894 - sizeof(cfilter.ip.v6.src_ip6))) 3894 + sizeof(cfilter.ip.v6.dst_ip6))) 3895 3895 continue; 3896 3896 if (mask.vlan_id) 3897 3897 if (cfilter.vlan_id != cf->vlan_id) ··· 3979 3979 cfilter->n_proto = ETH_P_IP; 3980 3980 if (mask.dst_ip[0] & tcf.dst_ip[0]) 3981 3981 memcpy(&cfilter->ip.v4.dst_ip, tcf.dst_ip, 3982 - ARRAY_SIZE(tcf.dst_ip)); 3983 - else if (mask.src_ip[0] & tcf.dst_ip[0]) 3982 + sizeof(cfilter->ip.v4.dst_ip)); 3983 + else if (mask.src_ip[0] & tcf.src_ip[0]) 3984 3984 memcpy(&cfilter->ip.v4.src_ip, tcf.src_ip, 3985 - ARRAY_SIZE(tcf.dst_ip)); 3985 + sizeof(cfilter->ip.v4.src_ip)); 3986 3986 break; 3987 3987 case VIRTCHNL_TCP_V6_FLOW: 3988 3988 cfilter->n_proto = ETH_P_IPV6;
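The i40e fix above replaces ARRAY_SIZE() with sizeof() in memcpy() lengths (and corrects a src/dst mask mix-up). ARRAY_SIZE() yields an element count, not a byte count, so for a four-entry u32 array it told memcpy() to copy 4 bytes instead of 16, silently truncating the address. A minimal demonstration of the bug class:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static void demo(void)
{
	uint32_t src[4] = { 0x11111111, 0x22222222, 0x33333333, 0x44444444 };
	uint32_t dst[4] = { 0, 0, 0, 0 };

	/* Buggy: ARRAY_SIZE(src) is 4 (elements), so only 4 of the
	 * 16 bytes are copied -- exactly the first u32.
	 */
	memcpy(dst, src, ARRAY_SIZE(src));
	assert(dst[0] == src[0]);
	assert(dst[1] == 0 && dst[2] == 0 && dst[3] == 0);

	/* Fixed: sizeof(src) is 16 bytes, the whole array. */
	memcpy(dst, src, sizeof(src));
	assert(memcmp(dst, src, sizeof(src)) == 0);
}
```

The rule of thumb the hunk applies: ARRAY_SIZE() for loop bounds and indexing, sizeof() for byte-length arguments.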
+1 -2
drivers/net/ethernet/intel/iavf/iavf.h
··· 260 260 struct work_struct adminq_task; 261 261 struct work_struct finish_config; 262 262 wait_queue_head_t down_waitqueue; 263 - wait_queue_head_t reset_waitqueue; 264 263 wait_queue_head_t vc_waitqueue; 265 264 struct iavf_q_vector *q_vectors; 266 265 struct list_head vlan_filter_list; ··· 625 626 void iavf_del_adv_rss_cfg(struct iavf_adapter *adapter); 626 627 struct iavf_mac_filter *iavf_add_filter(struct iavf_adapter *adapter, 627 628 const u8 *macaddr); 628 - int iavf_wait_for_reset(struct iavf_adapter *adapter); 629 + void iavf_reset_step(struct iavf_adapter *adapter); 629 630 #endif /* _IAVF_H_ */
+6 -13
drivers/net/ethernet/intel/iavf/iavf_ethtool.c
··· 492 492 { 493 493 struct iavf_adapter *adapter = netdev_priv(netdev); 494 494 u32 new_rx_count, new_tx_count; 495 - int ret = 0; 496 495 497 496 if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending)) 498 497 return -EINVAL; ··· 536 537 } 537 538 538 539 if (netif_running(netdev)) { 539 - iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED); 540 - ret = iavf_wait_for_reset(adapter); 541 - if (ret) 542 - netdev_warn(netdev, "Changing ring parameters timeout or interrupted waiting for reset"); 540 + adapter->flags |= IAVF_FLAG_RESET_NEEDED; 541 + iavf_reset_step(adapter); 543 542 } 544 543 545 - return ret; 544 + return 0; 546 545 } 547 546 548 547 /** ··· 1720 1723 { 1721 1724 struct iavf_adapter *adapter = netdev_priv(netdev); 1722 1725 u32 num_req = ch->combined_count; 1723 - int ret = 0; 1724 1726 1725 1727 if ((adapter->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_ADQ) && 1726 1728 adapter->num_tc) { ··· 1741 1745 1742 1746 adapter->num_req_queues = num_req; 1743 1747 adapter->flags |= IAVF_FLAG_REINIT_ITR_NEEDED; 1744 - iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED); 1748 + adapter->flags |= IAVF_FLAG_RESET_NEEDED; 1749 + iavf_reset_step(adapter); 1745 1750 1746 - ret = iavf_wait_for_reset(adapter); 1747 - if (ret) 1748 - netdev_warn(netdev, "Changing channel count timeout or interrupted waiting for reset"); 1749 - 1750 - return ret; 1751 + return 0; 1751 1752 } 1752 1753 1753 1754 /**
+28 -53
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 186 186 } 187 187 188 188 /** 189 - * iavf_wait_for_reset - Wait for reset to finish. 190 - * @adapter: board private structure 191 - * 192 - * Returns 0 if reset finished successfully, negative on timeout or interrupt. 193 - */ 194 - int iavf_wait_for_reset(struct iavf_adapter *adapter) 195 - { 196 - int ret = wait_event_interruptible_timeout(adapter->reset_waitqueue, 197 - !iavf_is_reset_in_progress(adapter), 198 - msecs_to_jiffies(5000)); 199 - 200 - /* If ret < 0 then it means wait was interrupted. 201 - * If ret == 0 then it means we got a timeout while waiting 202 - * for reset to finish. 203 - * If ret > 0 it means reset has finished. 204 - */ 205 - if (ret > 0) 206 - return 0; 207 - else if (ret < 0) 208 - return -EINTR; 209 - else 210 - return -EBUSY; 211 - } 212 - 213 - /** 214 189 * iavf_allocate_dma_mem_d - OS specific memory alloc for shared code 215 190 * @hw: pointer to the HW structure 216 191 * @mem: ptr to mem struct to fill out ··· 3011 3036 3012 3037 adapter->flags |= IAVF_FLAG_PF_COMMS_FAILED; 3013 3038 3039 + iavf_ptp_release(adapter); 3040 + 3014 3041 /* We don't use netif_running() because it may be true prior to 3015 3042 * ndo_open() returning, so we can't assume it means all our open 3016 3043 * tasks have finished, since we're not holding the rtnl_lock here. ··· 3088 3111 } 3089 3112 3090 3113 /** 3091 - * iavf_reset_task - Call-back task to handle hardware reset 3092 - * @work: pointer to work_struct 3114 + * iavf_reset_step - Perform the VF reset sequence 3115 + * @adapter: board private structure 3093 3116 * 3094 - * During reset we need to shut down and reinitialize the admin queue 3095 - * before we can use it to communicate with the PF again. We also clear 3096 - * and reinit the rings because that context is lost as well. 3097 - **/ 3098 - static void iavf_reset_task(struct work_struct *work) 3117 + * Requests a reset from PF, polls for completion, and reconfigures 3118 + * the driver. Caller must hold the netdev instance lock. 3119 + * 3120 + * This can sleep for several seconds while polling HW registers. 3121 + */ 3122 + void iavf_reset_step(struct iavf_adapter *adapter) 3099 3123 { 3100 - struct iavf_adapter *adapter = container_of(work, 3101 - struct iavf_adapter, 3102 - reset_task); 3103 3124 struct virtchnl_vf_resource *vfres = adapter->vf_res; 3104 3125 struct net_device *netdev = adapter->netdev; 3105 3126 struct iavf_hw *hw = &adapter->hw; ··· 3108 3133 int i = 0, err; 3109 3134 bool running; 3110 3135 3111 - netdev_lock(netdev); 3136 + netdev_assert_locked(netdev); 3112 3137 3113 3138 iavf_misc_irq_disable(adapter); 3114 3139 if (adapter->flags & IAVF_FLAG_RESET_NEEDED) { ··· 3153 3178 dev_err(&adapter->pdev->dev, "Reset never finished (%x)\n", 3154 3179 reg_val); 3155 3180 iavf_disable_vf(adapter); 3156 - netdev_unlock(netdev); 3157 3181 return; /* Do not attempt to reinit. It's dead, Jim. */ 3158 3182 } 3159 3183 ··· 3164 3190 iavf_startup(adapter); 3165 3191 queue_delayed_work(adapter->wq, &adapter->watchdog_task, 3166 3192 msecs_to_jiffies(30)); 3167 - netdev_unlock(netdev); 3168 3193 return; 3169 3194 } 3170 3195 ··· 3183 3210 3184 3211 iavf_change_state(adapter, __IAVF_RESETTING); 3185 3212 adapter->flags &= ~IAVF_FLAG_RESET_PENDING; 3213 + 3214 + iavf_ptp_release(adapter); 3186 3215 3187 3216 /* free the Tx/Rx rings and descriptors, might be better to just 3188 3217 * re-use them sometime in the future ··· 3306 3331 3307 3332 adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED; 3308 3333 3309 - wake_up(&adapter->reset_waitqueue); 3310 - netdev_unlock(netdev); 3311 - 3312 3334 return; 3313 3335 reset_err: 3314 3336 if (running) { ··· 3314 3342 } 3315 3343 iavf_disable_vf(adapter); 3316 3344 3317 - netdev_unlock(netdev); 3318 3345 dev_err(&adapter->pdev->dev, "failed to allocate resources during reinit\n"); 3346 + } 3347 + 3348 + static void iavf_reset_task(struct work_struct *work) 3349 + { 3350 + struct iavf_adapter *adapter = container_of(work, 3351 + struct iavf_adapter, 3352 + reset_task); 3353 + struct net_device *netdev = adapter->netdev; 3354 + 3355 + netdev_lock(netdev); 3356 + iavf_reset_step(adapter); 3357 + netdev_unlock(netdev); 3319 3358 } 3320 3359 3321 3360 /** ··· 4594 4611 static int iavf_change_mtu(struct net_device *netdev, int new_mtu) 4595 4612 { 4596 4613 struct iavf_adapter *adapter = netdev_priv(netdev); 4597 - int ret = 0; 4598 4614 4599 4615 netdev_dbg(netdev, "changing MTU from %d to %d\n", 4600 4616 netdev->mtu, new_mtu); 4601 4617 WRITE_ONCE(netdev->mtu, new_mtu); 4602 4618 4603 4619 if (netif_running(netdev)) { 4604 - iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED); 4605 - ret = iavf_wait_for_reset(adapter); 4606 - if (ret < 0) 4607 - netdev_warn(netdev, "MTU change interrupted waiting for reset"); 4608 - else if (ret) 4609 - netdev_warn(netdev, "MTU change timed out waiting for reset"); 4620 + adapter->flags |= IAVF_FLAG_RESET_NEEDED; 4621 + iavf_reset_step(adapter); 4610 4622 } 4611 4623 4612 - return ret; 4624 + return 0; 4613 4625 } 4614 4626 4615 4627 /** ··· 5408 5430 5409 5431 /* Setup the wait queue for indicating transition to down status */ 5410 5432 init_waitqueue_head(&adapter->down_waitqueue); 5411 - 5412 - /* Setup the wait queue for indicating transition to running state */ 5413 - init_waitqueue_head(&adapter->reset_waitqueue); 5414 5433 5415 5434 /* Setup the wait queue for indicating virtchannel events */ 5416 5435 init_waitqueue_head(&adapter->vc_waitqueue);
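The iavf refactor above splits the reset into iavf_reset_step(), which asserts the netdev lock is already held, and a thin work-queue wrapper that takes the lock around it; callers that already hold the lock (MTU change, ethtool) now invoke the step synchronously instead of scheduling work and sleeping on a wait queue. A hedged userspace sketch of that locking split (names and the boolean "held" flag are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>

static bool dev_locked;	/* stand-in for the netdev instance lock */
static int resets_done;

static void dev_lock(void)   { assert(!dev_locked); dev_locked = true; }
static void dev_unlock(void) { assert(dev_locked);  dev_locked = false; }

/* Core reset sequence: the caller must already hold the lock,
 * mirroring the netdev_assert_locked() at the top of the patched
 * iavf_reset_step().
 */
static void reset_step(void)
{
	assert(dev_locked);
	resets_done++;
}

/* The deferred-work callback becomes a pure locking wrapper. */
static void reset_work(void)
{
	dev_lock();
	reset_step();
	dev_unlock();
}

/* Paths that need a reset and can sleep call the step directly
 * under the lock -- no schedule, no wait queue, no timeout path.
 */
static void change_mtu(void)
{
	dev_lock();
	reset_step();
	dev_unlock();
}
```

Making the synchronous path call the step directly is what lets the patch delete the reset_waitqueue and its timeout handling entirely.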
-1
drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
··· 2736 2736 case VIRTCHNL_OP_ENABLE_QUEUES: 2737 2737 /* enable transmits */ 2738 2738 iavf_irq_enable(adapter, true); 2739 - wake_up(&adapter->reset_waitqueue); 2740 2739 adapter->flags &= ~IAVF_FLAG_QUEUES_DISABLED; 2741 2740 break; 2742 2741 case VIRTCHNL_OP_DISABLE_QUEUES:
+2 -2
drivers/net/ethernet/intel/ice/devlink/devlink.c
··· 1360 1360 1361 1361 cdev = pf->cdev_info; 1362 1362 if (!cdev) 1363 - return -ENODEV; 1363 + return -EOPNOTSUPP; 1364 1364 1365 1365 ctx->val.vbool = !!(cdev->rdma_protocol & IIDC_RDMA_PROTOCOL_ROCEV2); 1366 1366 ··· 1427 1427 1428 1428 cdev = pf->cdev_info; 1429 1429 if (!cdev) 1430 - return -ENODEV; 1430 + return -EOPNOTSUPP; 1431 1431 1432 1432 ctx->val.vbool = !!(cdev->rdma_protocol & IIDC_RDMA_PROTOCOL_IWARP); 1433 1433
+3 -3
drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c
··· 328 328 rvu_report_pair_end(fmsg); 329 329 break; 330 330 case NIX_AF_RVU_RAS: 331 - intr_val = nix_event_context->nix_af_rvu_err; 331 + intr_val = nix_event_context->nix_af_rvu_ras; 332 332 rvu_report_pair_start(fmsg, "NIX_AF_RAS"); 333 333 devlink_fmsg_u64_pair_put(fmsg, "\tNIX RAS Interrupt Reg ", 334 - nix_event_context->nix_af_rvu_err); 334 + nix_event_context->nix_af_rvu_ras); 335 335 devlink_fmsg_string_put(fmsg, "\n\tPoison Data on:"); 336 336 if (intr_val & BIT_ULL(34)) 337 337 devlink_fmsg_string_put(fmsg, "\n\tNIX_AQ_INST_S"); ··· 476 476 if (blkaddr < 0) 477 477 return blkaddr; 478 478 479 - if (nix_event_ctx->nix_af_rvu_int) 479 + if (nix_event_ctx->nix_af_rvu_ras) 480 480 rvu_write64(rvu, blkaddr, NIX_AF_RAS_ENA_W1S, ~0ULL); 481 481 482 482 return 0;
-1
drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
··· 47 47 "SQ 0x%x: cc (0x%x) != pc (0x%x)\n", 48 48 sq->sqn, sq->cc, sq->pc); 49 49 sq->cc = 0; 50 - sq->dma_fifo_cc = 0; 51 50 sq->pc = 0; 52 51 } 53 52
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
··· 2912 2912 goto out; 2913 2913 2914 2914 peer_priv = mlx5_devcom_get_next_peer_data(priv->devcom, &tmp); 2915 - if (peer_priv) 2915 + if (peer_priv && peer_priv->ipsec) 2916 2916 complete_all(&peer_priv->ipsec->comp); 2917 2917 2918 2918 mlx5_devcom_for_each_peer_end(priv->devcom);
+9 -14
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 1587 1587 struct skb_shared_info *sinfo; 1588 1588 u32 frag_consumed_bytes; 1589 1589 struct bpf_prog *prog; 1590 + u8 nr_frags_free = 0; 1590 1591 struct sk_buff *skb; 1591 1592 dma_addr_t addr; 1592 1593 u32 truesize; ··· 1630 1629 1631 1630 prog = rcu_dereference(rq->xdp_prog); 1632 1631 if (prog) { 1633 - u8 nr_frags_free, old_nr_frags = sinfo->nr_frags; 1632 + u8 old_nr_frags = sinfo->nr_frags; 1634 1633 1635 1634 if (mlx5e_xdp_handle(rq, prog, mxbuf)) { 1636 1635 if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, 1637 1636 rq->flags)) { 1638 1637 struct mlx5e_wqe_frag_info *pwi; 1639 - 1640 - wi -= old_nr_frags - sinfo->nr_frags; 1641 1638 1642 1639 for (pwi = head_wi; pwi < wi; pwi++) 1643 1640 pwi->frag_page->frags++; ··· 1644 1645 } 1645 1646 1646 1647 nr_frags_free = old_nr_frags - sinfo->nr_frags; 1647 - if (unlikely(nr_frags_free)) { 1648 - wi -= nr_frags_free; 1648 + if (unlikely(nr_frags_free)) 1649 1649 truesize -= nr_frags_free * frag_info->frag_stride; 1650 - } 1651 1650 } 1652 1651 1653 1652 skb = mlx5e_build_linear_skb( ··· 1661 1664 1662 1665 if (xdp_buff_has_frags(&mxbuf->xdp)) { 1663 1666 /* sinfo->nr_frags is reset by build_skb, calculate again. */ 1664 - xdp_update_skb_frags_info(skb, wi - head_wi - 1, 1667 + xdp_update_skb_frags_info(skb, wi - head_wi - nr_frags_free - 1, 1665 1668 sinfo->xdp_frags_size, truesize, 1666 1669 xdp_buff_get_skb_flags(&mxbuf->xdp)); ··· 1956 1959 1957 1960 if (prog) { 1958 1961 u8 nr_frags_free, old_nr_frags = sinfo->nr_frags; 1962 + u8 new_nr_frags; 1959 1963 u32 len; 1960 1964 1961 1965 if (mlx5e_xdp_handle(rq, prog, mxbuf)) { 1962 1966 if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) { 1963 1967 struct mlx5e_frag_page *pfp; 1964 1968 1965 - 1966 - frag_page -= old_nr_frags - sinfo->nr_frags; 1967 1969 for (pfp = head_page; pfp < frag_page; pfp++) 1968 1970 pfp->frags++; ··· 1973 1977 return NULL; /* page/packet was consumed by XDP */ 1974 1978 } 1975 1979 1976 1980 new_nr_frags = sinfo->nr_frags; 1977 1981 nr_frags_free = old_nr_frags - new_nr_frags; 1978 1982 if (unlikely(nr_frags_free)) 1979 - frag_page -= nr_frags_free; 1980 1983 truesize -= (nr_frags_free - 1) * page_size + 1981 1984 ALIGN(pg_consumed_bytes, 1982 1985 BIT(rq->mpwqe.log_stride_sz)); 1983 1986 1984 1987 len = mxbuf->xdp.data_end - mxbuf->xdp.data; ··· 2000 2005 struct mlx5e_frag_page *pagep; 2001 2006 2002 2007 /* sinfo->nr_frags is reset by build_skb, calculate again. */ 2003 - xdp_update_skb_frags_info(skb, frag_page - head_page, 2008 + xdp_update_skb_frags_info(skb, new_nr_frags, 2004 2009 sinfo->xdp_frags_size, 2005 2010 truesize, 2006 2011 xdp_buff_get_skb_flags(&mxbuf->xdp));
+4 -3
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
··· 1072 1072 1073 1073 static void mlx5_eswitch_event_handler_unregister(struct mlx5_eswitch *esw) 1074 1074 { 1075 - if (esw->mode == MLX5_ESWITCH_OFFLOADS && mlx5_eswitch_is_funcs_handler(esw->dev)) 1075 + if (esw->mode == MLX5_ESWITCH_OFFLOADS && 1076 + mlx5_eswitch_is_funcs_handler(esw->dev)) { 1076 1077 mlx5_eq_notifier_unregister(esw->dev, &esw->esw_funcs.nb); 1077 - 1078 - flush_workqueue(esw->work_queue); 1078 + atomic_inc(&esw->esw_funcs.generation); 1079 + } 1079 1080 } 1080 1081 1081 1082 static void mlx5_eswitch_clear_vf_vports_info(struct mlx5_eswitch *esw)
+2
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
··· 335 335 struct mlx5_host_work { 336 336 struct work_struct work; 337 337 struct mlx5_eswitch *esw; 338 + int work_gen; 338 339 }; 339 340 340 341 struct mlx5_esw_functions { 341 342 struct mlx5_nb nb; 343 + atomic_t generation; 342 344 bool host_funcs_disabled; 343 345 u16 num_vfs; 344 346 u16 num_ec_vfs;
+25 -20
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 1241 1241 flows[peer_vport->index] = flow;
1242 1242 }
1243 1243
1244 - if (mlx5_esw_host_functions_enabled(esw->dev)) {
1245 - mlx5_esw_for_each_vf_vport(peer_esw, i, peer_vport,
1246 - mlx5_core_max_vfs(peer_dev)) {
1247 - esw_set_peer_miss_rule_source_port(esw, peer_esw,
1248 - spec,
1249 - peer_vport->vport);
1250 -
1251 - flow = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(esw),
1252 - spec, &flow_act, &dest, 1);
1253 - if (IS_ERR(flow)) {
1254 - err = PTR_ERR(flow);
1255 - goto add_vf_flow_err;
1256 - }
1257 - flows[peer_vport->index] = flow;
1244 + mlx5_esw_for_each_vf_vport(peer_esw, i, peer_vport,
1245 + mlx5_core_max_vfs(peer_dev)) {
1246 + esw_set_peer_miss_rule_source_port(esw, peer_esw, spec,
1247 + peer_vport->vport);
1248 + flow = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(esw),
1249 + spec, &flow_act, &dest, 1);
1250 + if (IS_ERR(flow)) {
1251 + err = PTR_ERR(flow);
1252 + goto add_vf_flow_err;
1258 1253 }
1254 + flows[peer_vport->index] = flow;
1259 1255 }
1260 1256
1261 1257 if (mlx5_core_ec_sriov_enabled(peer_dev)) {
··· 1343 1347 mlx5_del_flow_rules(flows[peer_vport->index]);
1344 1348 }
1345 1349
1346 - if (mlx5_core_is_ecpf_esw_manager(peer_dev)) {
1350 + if (mlx5_core_is_ecpf_esw_manager(peer_dev) &&
1351 + mlx5_esw_host_functions_enabled(peer_dev)) {
1347 1352 peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_PF);
1348 1353 mlx5_del_flow_rules(flows[peer_vport->index]);
1349 1354 }
··· 3579 3582 }
3580 3583
3581 3584 static void
3582 - esw_vfs_changed_event_handler(struct mlx5_eswitch *esw, const u32 *out)
3585 + esw_vfs_changed_event_handler(struct mlx5_eswitch *esw, int work_gen,
3586 + const u32 *out)
3583 3587 {
3584 3588 struct devlink *devlink;
3585 3589 bool host_pf_disabled;
3586 3590 u16 new_num_vfs;
3591 +
3592 + devlink = priv_to_devlink(esw->dev);
3593 + devl_lock(devlink);
3594 +
3595 + /* Stale work from one or more mode changes ago. Bail out. */
3596 + if (work_gen != atomic_read(&esw->esw_funcs.generation))
3597 + goto unlock;
3587 3598
3588 3599 new_num_vfs = MLX5_GET(query_esw_functions_out, out,
3589 3600 host_params_context.host_num_of_vfs);
··· 3599 3594 host_params_context.host_pf_disabled);
3600 3595
3601 3596 if (new_num_vfs == esw->esw_funcs.num_vfs || host_pf_disabled)
3602 - return;
3597 + goto unlock;
3603 3598
3604 - devlink = priv_to_devlink(esw->dev);
3605 - devl_lock(devlink);
3606 3599 /* Number of VFs can only change from "0 to x" or "x to 0". */
3607 3600 if (esw->esw_funcs.num_vfs > 0) {
3608 3601 mlx5_eswitch_unload_vf_vports(esw, esw->esw_funcs.num_vfs);
··· 3615 3612 }
3616 3613 }
3617 3614 esw->esw_funcs.num_vfs = new_num_vfs;
3615 + unlock:
3618 3616 devl_unlock(devlink);
3619 3617 }
3620 3618
··· 3632 3628 if (IS_ERR(out))
3633 3629 goto out;
3634 3630
3635 - esw_vfs_changed_event_handler(esw, out);
3631 + esw_vfs_changed_event_handler(esw, host_work->work_gen, out);
3636 3632 kvfree(out);
3637 3633 out:
3638 3634 kfree(host_work);
··· 3652 3648 esw = container_of(esw_funcs, struct mlx5_eswitch, esw_funcs);
3653 3649
3654 3650 host_work->esw = esw;
3651 + host_work->work_gen = atomic_read(&esw_funcs->generation);
3655 3652
3656 3653 INIT_WORK(&host_work->work, esw_functions_changed_event_handler);
3657 3654 queue_work(esw->work_queue, &host_work->work);
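The eswitch hunks above replace a flush_workqueue() call with a generation counter: queued work records the generation at queue time, teardown bumps the counter, and work that runs with a stale generation bails out. As a rough illustration of that pattern (plain userspace C with hypothetical names, not the mlx5 API):

```c
#include <assert.h>
#include <stdatomic.h>

/* Sketch of the generation-counter scheme: a queued work item captures
 * the current generation; teardown increments it, so any still-pending
 * work item becomes a no-op when it finally runs. */
struct gen_work {
	int work_gen; /* generation captured at queue time */
	int ran;      /* set only if the work was still current when run */
};

static atomic_int generation;

static void queue_gen_work(struct gen_work *w)
{
	w->work_gen = atomic_load(&generation);
	w->ran = 0;
}

static void teardown(void)
{
	/* Invalidate every work item queued before this point. */
	atomic_fetch_add(&generation, 1);
}

static void run_gen_work(struct gen_work *w)
{
	if (w->work_gen != atomic_load(&generation))
		return; /* stale: one or more teardowns happened meanwhile */
	w->ran = 1;
}
```

The appeal over flushing the queue is that teardown never has to wait for in-flight work; stale items discard themselves.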
+1
drivers/net/ethernet/microsoft/mana/gdma_main.c
··· 1966 1966 mana_gd_remove_irqs(pdev); 1967 1967 free_workqueue: 1968 1968 destroy_workqueue(gc->service_wq); 1969 + gc->service_wq = NULL; 1969 1970 dev_err(&pdev->dev, "%s failed (error %d)\n", __func__, err); 1970 1971 return err; 1971 1972 }
+13 -6
drivers/net/ethernet/spacemit/k1_emac.c
··· 562 562 DMA_FROM_DEVICE); 563 563 if (dma_mapping_error(&priv->pdev->dev, rx_buf->dma_addr)) { 564 564 dev_err_ratelimited(&ndev->dev, "Mapping skb failed\n"); 565 - goto err_free_skb; 565 + dev_kfree_skb_any(skb); 566 + rx_buf->skb = NULL; 567 + break; 566 568 } 567 569 568 570 rx_desc_addr = &((struct emac_desc *)rx_ring->desc_addr)[i]; ··· 589 587 590 588 rx_ring->head = i; 591 589 return; 592 - 593 - err_free_skb: 594 - dev_kfree_skb_any(skb); 595 - rx_buf->skb = NULL; 596 590 } 597 591 598 592 /* Returns number of packets received */ ··· 730 732 struct emac_desc tx_desc, *tx_desc_addr; 731 733 struct device *dev = &priv->pdev->dev; 732 734 struct emac_tx_desc_buffer *tx_buf; 733 - u32 head, old_head, frag_num, f; 735 + u32 head, old_head, frag_num, f, i; 734 736 bool buf_idx; 735 737 736 738 frag_num = skb_shinfo(skb)->nr_frags; ··· 798 800 799 801 err_free_skb: 800 802 dev_dstats_tx_dropped(priv->ndev); 803 + 804 + i = old_head; 805 + while (i != head) { 806 + emac_free_tx_buf(priv, i); 807 + 808 + if (++i == tx_ring->total_cnt) 809 + i = 0; 810 + } 811 + 801 812 dev_kfree_skb_any(skb); 802 813 } 803 814
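The k1_emac error path above walks the TX descriptor ring from the first slot this packet used back up to the failure point, wrapping at the ring boundary. A minimal standalone sketch of that unwind (array stands in for the ring, names are illustrative):

```c
#include <assert.h>

#define RING_SIZE 8

/* freed[i] stands in for emac_free_tx_buf(priv, i). */
static int freed[RING_SIZE];

/* Free every slot in [old_head, head), wrapping at RING_SIZE, exactly
 * like the driver's err_free_skb loop. */
static void unwind_tx_slots(unsigned int old_head, unsigned int head)
{
	unsigned int i = old_head;

	while (i != head) {
		freed[i] = 1;

		if (++i == RING_SIZE)
			i = 0; /* wrap to the start of the ring */
	}
}
```

The half-open interval means a fully unmapped packet (old_head == head) frees nothing, which is the safe default.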
+9 -7
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 1351 1351 ndev_priv = netdev_priv(ndev);
1352 1352 am65_cpsw_nuss_set_offload_fwd_mark(skb, ndev_priv->offload_fwd_mark);
1353 1353 skb_put(skb, pkt_len);
1354 - if (port->rx_ts_enabled)
1354 + if (port->rx_ts_filter)
1355 1355 am65_cpts_rx_timestamp(common->cpts, port_id, skb);
1356 1356 skb_mark_for_recycle(skb);
1357 1357 skb->protocol = eth_type_trans(skb, ndev);
··· 1811 1811
1812 1812 switch (cfg->rx_filter) {
1813 1813 case HWTSTAMP_FILTER_NONE:
1814 - port->rx_ts_enabled = false;
1814 + port->rx_ts_filter = HWTSTAMP_FILTER_NONE;
1815 1815 break;
1816 1816 case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
1817 1817 case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
1818 1818 case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
1819 + port->rx_ts_filter = HWTSTAMP_FILTER_PTP_V1_L4_EVENT;
1820 + cfg->rx_filter = HWTSTAMP_FILTER_PTP_V1_L4_EVENT;
1821 + break;
1819 1822 case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
1820 1823 case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
1821 1824 case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
··· 1828 1825 case HWTSTAMP_FILTER_PTP_V2_EVENT:
1829 1826 case HWTSTAMP_FILTER_PTP_V2_SYNC:
1830 1827 case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
1831 - port->rx_ts_enabled = true;
1832 - cfg->rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT | HWTSTAMP_FILTER_PTP_V1_L4_EVENT;
1828 + port->rx_ts_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
1829 + cfg->rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
1833 1830 break;
1834 1831 case HWTSTAMP_FILTER_ALL:
1835 1832 case HWTSTAMP_FILTER_SOME:
··· 1866 1863 ts_ctrl |= AM65_CPSW_TS_TX_ANX_ALL_EN |
1867 1864 AM65_CPSW_PN_TS_CTL_TX_VLAN_LT1_EN;
1868 1865
1869 - if (port->rx_ts_enabled)
1866 + if (port->rx_ts_filter)
1870 1867 ts_ctrl |= AM65_CPSW_TS_RX_ANX_ALL_EN |
1871 1868 AM65_CPSW_PN_TS_CTL_RX_VLAN_LT1_EN;
1872 1869
··· 1891 1888 cfg->flags = 0;
1892 1889 cfg->tx_type = port->tx_ts_enabled ?
1893 1890 HWTSTAMP_TX_ON : HWTSTAMP_TX_OFF;
1894 - cfg->rx_filter = port->rx_ts_enabled ? HWTSTAMP_FILTER_PTP_V2_EVENT |
1895 - HWTSTAMP_FILTER_PTP_V1_L4_EVENT : HWTSTAMP_FILTER_NONE;
1891 + cfg->rx_filter = port->rx_ts_filter;
1896 1892
1897 1893 return 0;
1898 1894 }
+1 -1
drivers/net/ethernet/ti/am65-cpsw-nuss.h
··· 52 52 bool disabled; 53 53 struct am65_cpsw_slave_data slave; 54 54 bool tx_ts_enabled; 55 - bool rx_ts_enabled; 55 + enum hwtstamp_rx_filters rx_ts_filter; 56 56 struct am65_cpsw_qos qos; 57 57 struct devlink_port devlink_port; 58 58 struct bpf_prog *xdp_prog;
+1
drivers/net/mctp/mctp-i2c.c
··· 343 343 } else { 344 344 status = NET_RX_DROP; 345 345 spin_unlock_irqrestore(&midev->lock, flags); 346 + kfree_skb(skb); 346 347 } 347 348 348 349 if (status == NET_RX_SUCCESS) {
+1 -2
drivers/net/mctp/mctp-usb.c
··· 329 329 SET_NETDEV_DEV(netdev, &intf->dev); 330 330 dev = netdev_priv(netdev); 331 331 dev->netdev = netdev; 332 - dev->usbdev = usb_get_dev(interface_to_usbdev(intf)); 332 + dev->usbdev = interface_to_usbdev(intf); 333 333 dev->intf = intf; 334 334 usb_set_intfdata(intf, dev); 335 335 ··· 365 365 mctp_unregister_netdev(dev->netdev); 366 366 usb_free_urb(dev->tx_urb); 367 367 usb_free_urb(dev->rx_urb); 368 - usb_put_dev(dev->usbdev); 369 368 free_netdev(dev->netdev); 370 369 } 371 370
+7 -1
drivers/net/phy/sfp.c
··· 367 367 sfp->state_ignore_mask |= SFP_F_TX_FAULT; 368 368 } 369 369 370 + static void sfp_fixup_ignore_tx_fault_and_los(struct sfp *sfp) 371 + { 372 + sfp_fixup_ignore_tx_fault(sfp); 373 + sfp_fixup_ignore_los(sfp); 374 + } 375 + 370 376 static void sfp_fixup_ignore_hw(struct sfp *sfp, unsigned int mask) 371 377 { 372 378 sfp->state_hw_mask &= ~mask; ··· 536 530 // Huawei MA5671A can operate at 2500base-X, but report 1.2GBd NRZ in 537 531 // their EEPROM 538 532 SFP_QUIRK("HUAWEI", "MA5671A", sfp_quirk_2500basex, 539 - sfp_fixup_ignore_tx_fault), 533 + sfp_fixup_ignore_tx_fault_and_los), 540 534 541 535 // Lantech 8330-262D-E and 8330-265D can operate at 2500base-X, but 542 536 // incorrectly report 2500MBd NRZ in their EEPROM.
+8 -4
drivers/net/usb/lan78xx.c
··· 3119 3119 int ret; 3120 3120 u32 buf; 3121 3121 3122 + /* LAN7850 is USB 2.0 and does not support LTM */ 3123 + if (dev->chipid == ID_REV_CHIP_ID_7850_) 3124 + return 0; 3125 + 3122 3126 ret = lan78xx_read_reg(dev, USB_CFG1, &buf); 3123 3127 if (ret < 0) 3124 3128 goto init_ltm_failed; ··· 3833 3829 */ 3834 3830 if (!(dev->net->features & NETIF_F_RXCSUM) || 3835 3831 unlikely(rx_cmd_a & RX_CMD_A_ICSM_) || 3832 + unlikely(rx_cmd_a & RX_CMD_A_CSE_MASK_) || 3836 3833 ((rx_cmd_a & RX_CMD_A_FVTG_) && 3837 3834 !(dev->net->features & NETIF_F_HW_VLAN_CTAG_RX))) { 3838 3835 skb->ip_summed = CHECKSUM_NONE; ··· 3906 3901 return 0; 3907 3902 } 3908 3903 3909 - if (unlikely(rx_cmd_a & RX_CMD_A_RED_)) { 3904 + if (unlikely(rx_cmd_a & RX_CMD_A_RED_) && 3905 + (rx_cmd_a & RX_CMD_A_RX_HARD_ERRS_MASK_)) { 3910 3906 netif_dbg(dev, rx_err, dev->net, 3911 3907 "Error rx_cmd_a=0x%08x", rx_cmd_a); 3912 3908 } else { ··· 4182 4176 } 4183 4177 4184 4178 tx_data += len; 4185 - entry->length += len; 4179 + entry->length += max_t(unsigned int, len, ETH_ZLEN); 4186 4180 entry->num_of_packet += skb_shinfo(skb)->gso_segs ?: 1; 4187 4181 4188 4182 dev_kfree_skb_any(skb); ··· 4549 4543 phylink_stop(dev->phylink); 4550 4544 phylink_disconnect_phy(dev->phylink); 4551 4545 rtnl_unlock(); 4552 - 4553 - netif_napi_del(&dev->napi); 4554 4546 4555 4547 unregister_netdev(net); 4556 4548
+3
drivers/net/usb/lan78xx.h
··· 74 74 #define RX_CMD_A_ICSM_ (0x00004000) 75 75 #define RX_CMD_A_LEN_MASK_ (0x00003FFF) 76 76 77 + #define RX_CMD_A_RX_HARD_ERRS_MASK_ \ 78 + (RX_CMD_A_RX_ERRS_MASK_ & ~RX_CMD_A_CSE_MASK_) 79 + 77 80 /* Rx Command B */ 78 81 #define RX_CMD_B_CSUM_SHIFT_ (16) 79 82 #define RX_CMD_B_CSUM_MASK_ (0xFFFF0000)
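The new RX_CMD_A_RX_HARD_ERRS_MASK_ above is derived by masking the checksum-error bit out of the full error mask, so a frame flagged as errored is only dropped when a hard error remains; checksum failures instead reach the stack with CHECKSUM_NONE. A small sketch of that mask logic (the bit values here are made up for illustration; the real ones live in lan78xx.h):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical bit layout, illustration only. */
#define RX_CMD_A_RED_       0x00400000u /* "receive error detected"  */
#define RX_CMD_A_CSE_MASK_  0x00010000u /* checksum error            */
#define RX_CMD_A_ERRS_MASK_ 0x000F0000u /* all error bits, incl. CSE */
#define RX_CMD_A_HARD_ERRS_ (RX_CMD_A_ERRS_MASK_ & ~RX_CMD_A_CSE_MASK_)

/* Mirror of the driver's gating check: drop only on RED_ plus a
 * remaining hard error after the checksum bit is excluded. */
static int frame_is_dropped(uint32_t rx_cmd_a)
{
	return (rx_cmd_a & RX_CMD_A_RED_) &&
	       (rx_cmd_a & RX_CMD_A_HARD_ERRS_);
}
```

Composing the mask from the existing definitions (rather than hardcoding a new constant) keeps it correct if the error bit set ever changes.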
+2 -2
drivers/net/usb/qmi_wwan.c
··· 928 928 929 929 static const struct driver_info qmi_wwan_info = { 930 930 .description = "WWAN/QMI device", 931 - .flags = FLAG_WWAN | FLAG_SEND_ZLP, 931 + .flags = FLAG_WWAN | FLAG_NOMAXMTU | FLAG_SEND_ZLP, 932 932 .bind = qmi_wwan_bind, 933 933 .unbind = qmi_wwan_unbind, 934 934 .manage_power = qmi_wwan_manage_power, ··· 937 937 938 938 static const struct driver_info qmi_wwan_info_quirk_dtr = { 939 939 .description = "WWAN/QMI device", 940 - .flags = FLAG_WWAN | FLAG_SEND_ZLP, 940 + .flags = FLAG_WWAN | FLAG_NOMAXMTU | FLAG_SEND_ZLP, 941 941 .bind = qmi_wwan_bind, 942 942 .unbind = qmi_wwan_unbind, 943 943 .manage_power = qmi_wwan_manage_power,
+4 -3
drivers/net/usb/usbnet.c
··· 1829 1829 if ((dev->driver_info->flags & FLAG_NOARP) != 0) 1830 1830 net->flags |= IFF_NOARP; 1831 1831 1832 - if (net->max_mtu > (dev->hard_mtu - net->hard_header_len)) 1832 + if ((dev->driver_info->flags & FLAG_NOMAXMTU) == 0 && 1833 + net->max_mtu > (dev->hard_mtu - net->hard_header_len)) 1833 1834 net->max_mtu = dev->hard_mtu - net->hard_header_len; 1834 1835 1835 - if (net->mtu > net->max_mtu) 1836 - net->mtu = net->max_mtu; 1836 + if (net->mtu > (dev->hard_mtu - net->hard_header_len)) 1837 + net->mtu = dev->hard_mtu - net->hard_header_len; 1837 1838 1838 1839 } else if (!info->in || !info->out) 1839 1840 status = usbnet_get_endpoints(dev, udev);
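The usbnet hunk above lets drivers opt out of the max_mtu cap via FLAG_NOMAXMTU while still clamping the current mtu to what the endpoint can carry. A rough sketch of the clamping arithmetic (FLAG_NOMAXMTU's value is hypothetical here):

```c
#include <assert.h>

#define FLAG_NOMAXMTU 0x1u /* hypothetical bit value, illustration only */

struct link_cfg {
	unsigned int flags, hard_mtu, hard_header_len;
	unsigned int mtu, max_mtu;
};

/* Unless the driver sets FLAG_NOMAXMTU, cap max_mtu at hard_mtu minus
 * the link-layer header; the operational mtu is always capped there. */
static void clamp_mtu(struct link_cfg *c)
{
	unsigned int bound = c->hard_mtu - c->hard_header_len;

	if (!(c->flags & FLAG_NOMAXMTU) && c->max_mtu > bound)
		c->max_mtu = bound;
	if (c->mtu > bound)
		c->mtu = bound;
}
```

This is why the qmi_wwan change in the next hunk can add FLAG_NOMAXMTU: userspace may later raise the MTU above the USB framing bound without the core rejecting it at probe time.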
+17 -17
drivers/net/xen-netfront.c
··· 1646 1646
1647 1647 /* avoid the race with XDP headroom adjustment */
1648 1648 wait_event(module_wq,
1649 - xenbus_read_driver_state(np->xbdev->otherend) ==
1649 + xenbus_read_driver_state(np->xbdev, np->xbdev->otherend) ==
1650 1650 XenbusStateReconfigured);
1651 1651 np->netfront_xdp_enabled = true;
1652 1652
··· 1764 1764 do {
1765 1765 xenbus_switch_state(dev, XenbusStateInitialising);
1766 1766 err = wait_event_timeout(module_wq,
1767 - xenbus_read_driver_state(dev->otherend) !=
1767 + xenbus_read_driver_state(dev, dev->otherend) !=
1768 1768 XenbusStateClosed &&
1769 - xenbus_read_driver_state(dev->otherend) !=
1769 + xenbus_read_driver_state(dev, dev->otherend) !=
1770 1770 XenbusStateUnknown, XENNET_TIMEOUT);
1771 1771 } while (!err);
1772 1772
··· 2626 2626 {
2627 2627 int ret;
2628 2628
2629 - if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
2629 + if (xenbus_read_driver_state(dev, dev->otherend) == XenbusStateClosed)
2630 2630 return;
2631 2631 do {
2632 2632 xenbus_switch_state(dev, XenbusStateClosing);
2633 2633 ret = wait_event_timeout(module_wq,
2634 - xenbus_read_driver_state(dev->otherend) ==
2635 - XenbusStateClosing ||
2636 - xenbus_read_driver_state(dev->otherend) ==
2637 - XenbusStateClosed ||
2638 - xenbus_read_driver_state(dev->otherend) ==
2639 - XenbusStateUnknown,
2640 - XENNET_TIMEOUT);
2634 + xenbus_read_driver_state(dev, dev->otherend) ==
2635 + XenbusStateClosing ||
2636 + xenbus_read_driver_state(dev, dev->otherend) ==
2637 + XenbusStateClosed ||
2638 + xenbus_read_driver_state(dev, dev->otherend) ==
2639 + XenbusStateUnknown,
2640 + XENNET_TIMEOUT);
2641 2641 } while (!ret);
2642 2642
2643 - if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
2643 + if (xenbus_read_driver_state(dev, dev->otherend) == XenbusStateClosed)
2644 2644 return;
2645 2645
2646 2646 do {
2647 2647 xenbus_switch_state(dev, XenbusStateClosed);
2648 2648 ret = wait_event_timeout(module_wq,
2649 - xenbus_read_driver_state(dev->otherend) ==
2650 - XenbusStateClosed ||
2651 - xenbus_read_driver_state(dev->otherend) ==
2652 - XenbusStateUnknown,
2653 - XENNET_TIMEOUT);
2649 + xenbus_read_driver_state(dev, dev->otherend) ==
2650 + XenbusStateClosed ||
2651 + xenbus_read_driver_state(dev, dev->otherend) ==
2652 + XenbusStateUnknown,
2653 + XENNET_TIMEOUT);
2654 2654 } while (!ret);
2655 2655 }
+12 -16
drivers/nvme/host/core.c
··· 2046 2046 if (id->nabspf) 2047 2047 boundary = (le16_to_cpu(id->nabspf) + 1) * bs; 2048 2048 } else { 2049 - /* 2050 - * Use the controller wide atomic write unit. This sucks 2051 - * because the limit is defined in terms of logical blocks while 2052 - * namespaces can have different formats, and because there is 2053 - * no clear language in the specification prohibiting different 2054 - * values for different controllers in the subsystem. 2055 - */ 2056 - atomic_bs = (1 + ns->ctrl->subsys->awupf) * bs; 2049 + if (ns->ctrl->awupf) 2050 + dev_info_once(ns->ctrl->device, 2051 + "AWUPF ignored, only NAWUPF accepted\n"); 2052 + atomic_bs = bs; 2057 2053 } 2058 2054 2059 2055 lim->atomic_write_hw_max = atomic_bs; ··· 3218 3222 memcpy(subsys->model, id->mn, sizeof(subsys->model)); 3219 3223 subsys->vendor_id = le16_to_cpu(id->vid); 3220 3224 subsys->cmic = id->cmic; 3221 - subsys->awupf = le16_to_cpu(id->awupf); 3222 3225 3223 3226 /* Versions prior to 1.4 don't necessarily report a valid type */ 3224 3227 if (id->cntrltype == NVME_CTRL_DISC || ··· 3650 3655 dev_pm_qos_expose_latency_tolerance(ctrl->device); 3651 3656 else if (!ctrl->apst_enabled && prev_apst_enabled) 3652 3657 dev_pm_qos_hide_latency_tolerance(ctrl->device); 3658 + ctrl->awupf = le16_to_cpu(id->awupf); 3653 3659 out_free: 3654 3660 kfree(id); 3655 3661 return ret; ··· 4181 4185 4182 4186 nvme_mpath_add_disk(ns, info->anagrpid); 4183 4187 nvme_fault_inject_init(&ns->fault_inject, ns->disk->disk_name); 4184 - 4185 - /* 4186 - * Set ns->disk->device->driver_data to ns so we can access 4187 - * ns->head->passthru_err_log_enabled in 4188 - * nvme_io_passthru_err_log_enabled_[store | show](). 
4189 - */ 4190 - dev_set_drvdata(disk_to_dev(ns->disk), ns); 4191 4188 4192 4189 return; 4193 4190 ··· 4853 4864 ret = blk_mq_alloc_tag_set(set); 4854 4865 if (ret) 4855 4866 return ret; 4867 + 4868 + /* 4869 + * If a previous admin queue exists (e.g., from before a reset), 4870 + * put it now before allocating a new one to avoid orphaning it. 4871 + */ 4872 + if (ctrl->admin_q) 4873 + blk_put_queue(ctrl->admin_q); 4856 4874 4857 4875 ctrl->admin_q = blk_mq_alloc_queue(set, &lim, NULL); 4858 4876 if (IS_ERR(ctrl->admin_q)) {
+2 -2
drivers/nvme/host/fabrics.c
··· 1290 1290 kfree(opts->subsysnqn); 1291 1291 kfree(opts->host_traddr); 1292 1292 kfree(opts->host_iface); 1293 - kfree(opts->dhchap_secret); 1294 - kfree(opts->dhchap_ctrl_secret); 1293 + kfree_sensitive(opts->dhchap_secret); 1294 + kfree_sensitive(opts->dhchap_ctrl_secret); 1295 1295 kfree(opts); 1296 1296 } 1297 1297 EXPORT_SYMBOL_GPL(nvmf_free_options);
+6 -8
drivers/nvme/host/multipath.c
··· 1300 1300 mutex_lock(&head->subsys->lock); 1301 1301 /* 1302 1302 * We are called when all paths have been removed, and at that point 1303 - * head->list is expected to be empty. However, nvme_remove_ns() and 1303 + * head->list is expected to be empty. However, nvme_ns_remove() and 1304 1304 * nvme_init_ns_head() can run concurrently and so if head->delayed_ 1305 1305 * removal_secs is configured, it is possible that by the time we reach 1306 1306 * this point, head->list may no longer be empty. Therefore, we recheck ··· 1310 1310 if (!list_empty(&head->list)) 1311 1311 goto out; 1312 1312 1313 - if (head->delayed_removal_secs) { 1314 - /* 1315 - * Ensure that no one could remove this module while the head 1316 - * remove work is pending. 1317 - */ 1318 - if (!try_module_get(THIS_MODULE)) 1319 - goto out; 1313 + /* 1314 + * Ensure that no one could remove this module while the head 1315 + * remove work is pending. 1316 + */ 1317 + if (head->delayed_removal_secs && try_module_get(THIS_MODULE)) { 1320 1318 mod_delayed_work(nvme_wq, &head->remove_work, 1321 1319 head->delayed_removal_secs * HZ); 1322 1320 } else {
+56 -1
drivers/nvme/host/nvme.h
··· 180 180 NVME_QUIRK_DMAPOOL_ALIGN_512 = (1 << 22),
181 181 };
182 182
183 + static inline char *nvme_quirk_name(enum nvme_quirks q)
184 + {
185 + switch (q) {
186 + case NVME_QUIRK_STRIPE_SIZE:
187 + return "stripe_size";
188 + case NVME_QUIRK_IDENTIFY_CNS:
189 + return "identify_cns";
190 + case NVME_QUIRK_DEALLOCATE_ZEROES:
191 + return "deallocate_zeroes";
192 + case NVME_QUIRK_DELAY_BEFORE_CHK_RDY:
193 + return "delay_before_chk_rdy";
194 + case NVME_QUIRK_NO_APST:
195 + return "no_apst";
196 + case NVME_QUIRK_NO_DEEPEST_PS:
197 + return "no_deepest_ps";
198 + case NVME_QUIRK_QDEPTH_ONE:
199 + return "qdepth_one";
200 + case NVME_QUIRK_MEDIUM_PRIO_SQ:
201 + return "medium_prio_sq";
202 + case NVME_QUIRK_IGNORE_DEV_SUBNQN:
203 + return "ignore_dev_subnqn";
204 + case NVME_QUIRK_DISABLE_WRITE_ZEROES:
205 + return "disable_write_zeroes";
206 + case NVME_QUIRK_SIMPLE_SUSPEND:
207 + return "simple_suspend";
208 + case NVME_QUIRK_SINGLE_VECTOR:
209 + return "single_vector";
210 + case NVME_QUIRK_128_BYTES_SQES:
211 + return "128_bytes_sqes";
212 + case NVME_QUIRK_SHARED_TAGS:
213 + return "shared_tags";
214 + case NVME_QUIRK_NO_TEMP_THRESH_CHANGE:
215 + return "no_temp_thresh_change";
216 + case NVME_QUIRK_NO_NS_DESC_LIST:
217 + return "no_ns_desc_list";
218 + case NVME_QUIRK_DMA_ADDRESS_BITS_48:
219 + return "dma_address_bits_48";
220 + case NVME_QUIRK_SKIP_CID_GEN:
221 + return "skip_cid_gen";
222 + case NVME_QUIRK_BOGUS_NID:
223 + return "bogus_nid";
224 + case NVME_QUIRK_NO_SECONDARY_TEMP_THRESH:
225 + return "no_secondary_temp_thresh";
226 + case NVME_QUIRK_FORCE_NO_SIMPLE_SUSPEND:
227 + return "force_no_simple_suspend";
228 + case NVME_QUIRK_BROKEN_MSI:
229 + return "broken_msi";
230 + case NVME_QUIRK_DMAPOOL_ALIGN_512:
231 + return "dmapool_align_512";
232 + }
233 +
234 + return "unknown";
235 + }
236 +
183 237 /*
184 238 * Common request structure for NVMe passthrough. All drivers must have
185 239 * this structure as the first member of their request-private data.
··· 464 410
465 411 enum nvme_ctrl_type cntrltype;
466 412 enum nvme_dctype dctype;
413 +
414 + u16 awupf; /* 0's based value. */
467 415 };
468 416
469 417 static inline enum nvme_ctrl_state nvme_ctrl_state(struct nvme_ctrl *ctrl)
··· 498 442 u8 cmic;
499 443 enum nvme_subsys_type subtype;
500 444 u16 vendor_id;
501 - u16 awupf; /* 0's based value. */
502 445 struct ida ns_ida;
503 446 #ifdef CONFIG_NVME_MULTIPATH
504 447 enum nvme_iopolicy iopolicy;
+184 -2
drivers/nvme/host/pci.c
··· 72 72 static_assert(MAX_PRP_RANGE / NVME_CTRL_PAGE_SIZE <=
73 73 (1 /* prp1 */ + NVME_MAX_NR_DESCRIPTORS * PRPS_PER_PAGE));
74 74
75 + struct quirk_entry {
76 + u16 vendor_id;
77 + u16 dev_id;
78 + u32 enabled_quirks;
79 + u32 disabled_quirks;
80 + };
81 +
75 82 static int use_threaded_interrupts;
76 83 module_param(use_threaded_interrupts, int, 0444);
77 84
··· 108 101 static unsigned int io_queue_depth = 1024;
109 102 module_param_cb(io_queue_depth, &io_queue_depth_ops, &io_queue_depth, 0644);
110 103 MODULE_PARM_DESC(io_queue_depth, "set io queue depth, should >= 2 and < 4096");
104 +
105 + static struct quirk_entry *nvme_pci_quirk_list;
106 + static unsigned int nvme_pci_quirk_count;
107 +
108 + /* Helper to parse individual quirk names */
109 + static int nvme_parse_quirk_names(char *quirk_str, struct quirk_entry *entry)
110 + {
111 + int i;
112 + size_t field_len;
113 + bool disabled, found;
114 + char *p = quirk_str, *field;
115 +
116 + while ((field = strsep(&p, ",")) && *field) {
117 + disabled = false;
118 + found = false;
119 +
120 + if (*field == '^') {
121 + /* Skip the '^' character */
122 + disabled = true;
123 + field++;
124 + }
125 +
126 + field_len = strlen(field);
127 + for (i = 0; i < 32; i++) {
128 + unsigned int bit = 1U << i;
129 + char *q_name = nvme_quirk_name(bit);
130 + size_t q_len = strlen(q_name);
131 +
132 + if (!strcmp(q_name, "unknown"))
133 + break;
134 +
135 + if (!strcmp(q_name, field) &&
136 + q_len == field_len) {
137 + if (disabled)
138 + entry->disabled_quirks |= bit;
139 + else
140 + entry->enabled_quirks |= bit;
141 + found = true;
142 + break;
143 + }
144 + }
145 +
146 + if (!found) {
147 + pr_err("nvme: unrecognized quirk %s\n", field);
148 + return -EINVAL;
149 + }
150 + }
151 + return 0;
152 + }
153 +
154 + /* Helper to parse a single VID:DID:quirk_names entry */
155 + static int nvme_parse_quirk_entry(char *s, struct quirk_entry *entry)
156 + {
157 + char *field;
158 +
159 + field = strsep(&s, ":");
160 + if (!field || kstrtou16(field, 16, &entry->vendor_id))
161 + return -EINVAL;
162 +
163 + field = strsep(&s, ":");
164 + if (!field || kstrtou16(field, 16, &entry->dev_id))
165 + return -EINVAL;
166 +
167 + field = strsep(&s, ":");
168 + if (!field)
169 + return -EINVAL;
170 +
171 + return nvme_parse_quirk_names(field, entry);
172 + }
173 +
174 + static int quirks_param_set(const char *value, const struct kernel_param *kp)
175 + {
176 + int count, err, i;
177 + struct quirk_entry *qlist;
178 + char *field, *val, *sep_ptr;
179 +
180 + err = param_set_copystring(value, kp);
181 + if (err)
182 + return err;
183 +
184 + val = kstrdup(value, GFP_KERNEL);
185 + if (!val)
186 + return -ENOMEM;
187 +
188 + if (!*val)
189 + goto out_free_val;
190 +
191 + count = 1;
192 + for (i = 0; val[i]; i++) {
193 + if (val[i] == '-')
194 + count++;
195 + }
196 +
197 + qlist = kcalloc(count, sizeof(*qlist), GFP_KERNEL);
198 + if (!qlist) {
199 + err = -ENOMEM;
200 + goto out_free_val;
201 + }
202 +
203 + i = 0;
204 + sep_ptr = val;
205 + while ((field = strsep(&sep_ptr, "-"))) {
206 + if (nvme_parse_quirk_entry(field, &qlist[i])) {
207 + pr_err("nvme: failed to parse quirk string %s\n",
208 + value);
209 + goto out_free_qlist;
210 + }
211 +
212 + i++;
213 + }
214 +
215 + kfree(nvme_pci_quirk_list);
216 + nvme_pci_quirk_count = count;
217 + nvme_pci_quirk_list = qlist;
218 + goto out_free_val;
219 +
220 + out_free_qlist:
221 + kfree(qlist);
222 + out_free_val:
223 + kfree(val);
224 + return err;
225 + }
226 +
227 + static char quirks_param[128];
228 + static const struct kernel_param_ops quirks_param_ops = {
229 + .set = quirks_param_set,
230 + .get = param_get_string,
231 + };
232 +
233 + static struct kparam_string quirks_param_string = {
234 + .maxlen = sizeof(quirks_param),
235 + .string = quirks_param,
236 + };
237 +
238 + module_param_cb(quirks, &quirks_param_ops, &quirks_param_string, 0444);
239 + MODULE_PARM_DESC(quirks, "Enable/disable NVMe quirks by specifying "
240 + "quirks=VID:DID:quirk_names");
111 241
112 242 static int io_queue_count_set(const char *val, const struct kernel_param *kp)
113 243 {
··· 1640 1496 struct nvme_queue *nvmeq = hctx->driver_data;
1641 1497 bool found;
1642 1498
1643 - if (!nvme_cqe_pending(nvmeq))
1499 + if (!test_bit(NVMEQ_POLLED, &nvmeq->flags) ||
1500 + !nvme_cqe_pending(nvmeq))
1644 1501 return 0;
1645 1502
1646 1503 spin_lock(&nvmeq->cq_poll_lock);
··· 2919 2774 dev->nr_write_queues = write_queues;
2920 2775 dev->nr_poll_queues = poll_queues;
2921 2776
2922 - nr_io_queues = dev->nr_allocated_queues - 1;
2777 + if (dev->ctrl.tagset) {
2778 + /*
2779 + * The set's maps are allocated only once at initialization
2780 + * time. We can't add special queues later if their mq_map
2781 + * wasn't preallocated.
2782 + */
2783 + if (dev->ctrl.tagset->nr_maps < 3)
2784 + dev->nr_poll_queues = 0;
2785 + if (dev->ctrl.tagset->nr_maps < 2)
2786 + dev->nr_write_queues = 0;
2787 + }
2788 +
2789 + /*
2790 + * The initial number of allocated queue slots may be too large if the
2791 + * user reduced the special queue parameters. Cap the value to the
2792 + * number we need for this round.
2793 + */ 2794 + nr_io_queues = min(nvme_max_io_queues(dev), 2795 + dev->nr_allocated_queues - 1); 2923 2796 result = nvme_set_queue_count(&dev->ctrl, &nr_io_queues); 2924 2797 if (result < 0) 2925 2798 return result; ··· 3621 3458 return 0; 3622 3459 } 3623 3460 3461 + static struct quirk_entry *detect_dynamic_quirks(struct pci_dev *pdev) 3462 + { 3463 + int i; 3464 + 3465 + for (i = 0; i < nvme_pci_quirk_count; i++) 3466 + if (pdev->vendor == nvme_pci_quirk_list[i].vendor_id && 3467 + pdev->device == nvme_pci_quirk_list[i].dev_id) 3468 + return &nvme_pci_quirk_list[i]; 3469 + 3470 + return NULL; 3471 + } 3472 + 3624 3473 static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev, 3625 3474 const struct pci_device_id *id) 3626 3475 { 3627 3476 unsigned long quirks = id->driver_data; 3628 3477 int node = dev_to_node(&pdev->dev); 3629 3478 struct nvme_dev *dev; 3479 + struct quirk_entry *qentry; 3630 3480 int ret = -ENOMEM; 3631 3481 3632 3482 dev = kzalloc_node(struct_size(dev, descriptor_pools, nr_node_ids), ··· 3670 3494 dev_info(&pdev->dev, 3671 3495 "platform quirk: setting simple suspend\n"); 3672 3496 quirks |= NVME_QUIRK_SIMPLE_SUSPEND; 3497 + } 3498 + qentry = detect_dynamic_quirks(pdev); 3499 + if (qentry) { 3500 + quirks |= qentry->enabled_quirks; 3501 + quirks &= ~qentry->disabled_quirks; 3673 3502 } 3674 3503 ret = nvme_init_ctrl(&dev->ctrl, &pdev->dev, &nvme_pci_ctrl_ops, 3675 3504 quirks); ··· 4276 4095 4277 4096 static void __exit nvme_exit(void) 4278 4097 { 4098 + kfree(nvme_pci_quirk_list); 4279 4099 pci_unregister_driver(&nvme_driver); 4280 4100 flush_workqueue(nvme_wq); 4281 4101 }
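The new `quirks=` module parameter above is parsed with nested strsep() passes: entries separated by `-`, fields by `:`, and quirk names by `,`, with a leading `^` marking a quirk to disable. A compact userspace sketch of that entry parser (a two-name table and simplified error handling stand in for the kernel's nvme_quirk_name() and kstrtou16()):

```c
#define _DEFAULT_SOURCE /* expose strsep() in glibc */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

static const char *quirk_names[] = { "bogus_nid", "no_apst" };

struct quirk_entry {
	unsigned int vendor_id, dev_id;
	unsigned int enabled, disabled; /* bitmasks over quirk_names[] */
};

/* Parse one "VID:DID:name,^name" entry; returns 0 on success. */
static int parse_entry(char *s, struct quirk_entry *e)
{
	char *vid = strsep(&s, ":");
	char *did = strsep(&s, ":");
	char *name;

	if (!vid || !did || !s)
		return -1;
	e->vendor_id = (unsigned int)strtoul(vid, NULL, 16);
	e->dev_id = (unsigned int)strtoul(did, NULL, 16);
	e->enabled = e->disabled = 0;

	while ((name = strsep(&s, ",")) && *name) {
		int neg = (*name == '^'); /* '^' disables the quirk */
		size_t i;

		if (neg)
			name++;
		for (i = 0; i < 2; i++) {
			if (!strcmp(name, quirk_names[i])) {
				if (neg)
					e->disabled |= 1u << i;
				else
					e->enabled |= 1u << i;
				break;
			}
		}
		if (i == 2)
			return -1; /* unrecognized quirk name */
	}
	return 0;
}
```

Note strsep() consumes the buffer in place, which is why the kernel code works on a kstrdup() copy of the parameter string.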
+2 -2
drivers/nvme/host/pr.c
··· 242 242 if (rse_len > U32_MAX) 243 243 return -EINVAL; 244 244 245 - rse = kzalloc(rse_len, GFP_KERNEL); 245 + rse = kvzalloc(rse_len, GFP_KERNEL); 246 246 if (!rse) 247 247 return -ENOMEM; 248 248 ··· 267 267 } 268 268 269 269 free_rse: 270 - kfree(rse); 270 + kvfree(rse); 271 271 return ret; 272 272 } 273 273
+23
drivers/nvme/host/sysfs.c
··· 601 601 } 602 602 static DEVICE_ATTR_RO(dctype); 603 603 604 + static ssize_t quirks_show(struct device *dev, struct device_attribute *attr, 605 + char *buf) 606 + { 607 + int count = 0, i; 608 + struct nvme_ctrl *ctrl = dev_get_drvdata(dev); 609 + unsigned long quirks = ctrl->quirks; 610 + 611 + if (!quirks) 612 + return sysfs_emit(buf, "none\n"); 613 + 614 + for (i = 0; quirks; ++i) { 615 + if (quirks & 1) { 616 + count += sysfs_emit_at(buf, count, "%s\n", 617 + nvme_quirk_name(BIT(i))); 618 + } 619 + quirks >>= 1; 620 + } 621 + 622 + return count; 623 + } 624 + static DEVICE_ATTR_RO(quirks); 625 + 604 626 #ifdef CONFIG_NVME_HOST_AUTH 605 627 static ssize_t nvme_ctrl_dhchap_secret_show(struct device *dev, 606 628 struct device_attribute *attr, char *buf) ··· 764 742 &dev_attr_kato.attr, 765 743 &dev_attr_cntrltype.attr, 766 744 &dev_attr_dctype.attr, 745 + &dev_attr_quirks.attr, 767 746 #ifdef CONFIG_NVME_HOST_AUTH 768 747 &dev_attr_dhchap_secret.attr, 769 748 &dev_attr_dhchap_ctrl_secret.attr,
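The sysfs `quirks` attribute above walks the set bits of the active-quirk mask, emitting one name per line via nvme_quirk_name(). A standalone sketch of that bit-walk (two hypothetical names replace the full quirk table, snprintf() stands in for sysfs_emit_at()):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Tiny stand-in for nvme_quirk_name(): maps a bit index to a name. */
static const char *bit_name(int i)
{
	return i == 0 ? "stripe_size" : i == 1 ? "identify_cns" : "unknown";
}

/* Emit one name per set bit, shifting until no bits remain, just like
 * the quirks_show() loop; returns the number of bytes written. */
static int dump_quirks(unsigned long quirks, char *buf, size_t len)
{
	int count = 0, i;

	if (!quirks)
		return snprintf(buf, len, "none\n");

	for (i = 0; quirks; i++) {
		if (quirks & 1)
			count += snprintf(buf + count, len - count, "%s\n",
					  bit_name(i));
		quirks >>= 1;
	}
	return count;
}
```

Shifting the working copy lets the loop terminate as soon as the remaining mask is zero rather than scanning all 32 positions.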
+3 -2
drivers/nvme/host/tcp.c
··· 25 25 26 26 struct nvme_tcp_queue; 27 27 28 - /* Define the socket priority to use for connections were it is desirable 28 + /* 29 + * Define the socket priority to use for connections where it is desirable 29 30 * that the NIC consider performing optimized packet processing or filtering. 30 31 * A non-zero value being sufficient to indicate general consideration of any 31 32 * possible optimization. Making it a module param allows for alternative ··· 927 926 req->curr_bio = req->curr_bio->bi_next; 928 927 929 928 /* 930 - * If we don`t have any bios it means that controller 929 + * If we don't have any bios it means the controller 931 930 * sent more data than we requested, hence error 932 931 */ 933 932 if (!req->curr_bio) {
+11 -4
drivers/nvme/target/fcloop.c
··· 491 491 struct fcloop_rport *rport = remoteport->private; 492 492 struct nvmet_fc_target_port *targetport = rport->targetport; 493 493 struct fcloop_tport *tport; 494 + int ret = 0; 494 495 495 496 if (!targetport) { 496 497 /* ··· 501 500 * We end up here from delete association exchange: 502 501 * nvmet_fc_xmt_disconnect_assoc sends an async request. 503 502 * 504 - * Return success because this is what LLDDs do; silently 505 - * drop the response. 503 + * Return success when remoteport is still online because this 504 + * is what LLDDs do and silently drop the response. Otherwise, 505 + * return with error to signal upper layer to perform the lsrsp 506 + * resource cleanup. 506 507 */ 507 - lsrsp->done(lsrsp); 508 + if (remoteport->port_state == FC_OBJSTATE_ONLINE) 509 + lsrsp->done(lsrsp); 510 + else 511 + ret = -ENODEV; 512 + 508 513 kmem_cache_free(lsreq_cache, tls_req); 509 - return 0; 514 + return ret; 510 515 } 511 516 512 517 memcpy(lsreq->rspaddr, lsrsp->rspbuf,
+4 -4
drivers/pci/xen-pcifront.c
··· 856 856 int err; 857 857 858 858 /* Only connect once */ 859 - if (xenbus_read_driver_state(pdev->xdev->nodename) != 859 + if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename) != 860 860 XenbusStateInitialised) 861 861 return; 862 862 ··· 876 876 enum xenbus_state prev_state; 877 877 878 878 879 - prev_state = xenbus_read_driver_state(pdev->xdev->nodename); 879 + prev_state = xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename); 880 880 881 881 if (prev_state >= XenbusStateClosing) 882 882 goto out; ··· 895 895 896 896 static void pcifront_attach_devices(struct pcifront_device *pdev) 897 897 { 898 - if (xenbus_read_driver_state(pdev->xdev->nodename) == 898 + if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename) == 899 899 XenbusStateReconfiguring) 900 900 pcifront_connect(pdev); 901 901 } ··· 909 909 struct pci_dev *pci_dev; 910 910 char str[64]; 911 911 912 - state = xenbus_read_driver_state(pdev->xdev->nodename); 912 + state = xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename); 913 913 if (state == XenbusStateInitialised) { 914 914 dev_dbg(&pdev->xdev->dev, "Handle skipped connect.\n"); 915 915 /* We missed Connected and need to initialize. */
+75
drivers/platform/x86/asus-armoury.h
··· 348 348 static const struct dmi_system_id power_limits[] = {
349 349 {
350 350 .matches = {
351 + DMI_MATCH(DMI_BOARD_NAME, "FA401UM"),
352 + },
353 + .driver_data = &(struct power_data) {
354 + .ac_data = &(struct power_limits) {
355 + .ppt_pl1_spl_min = 15,
356 + .ppt_pl1_spl_max = 80,
357 + .ppt_pl2_sppt_min = 35,
358 + .ppt_pl2_sppt_max = 80,
359 + .ppt_pl3_fppt_min = 35,
360 + .ppt_pl3_fppt_max = 80,
361 + .nv_dynamic_boost_min = 5,
362 + .nv_dynamic_boost_max = 15,
363 + .nv_temp_target_min = 75,
364 + .nv_temp_target_max = 87,
365 + },
366 + .dc_data = &(struct power_limits) {
367 + .ppt_pl1_spl_min = 25,
368 + .ppt_pl1_spl_max = 35,
369 + .ppt_pl2_sppt_min = 31,
370 + .ppt_pl2_sppt_max = 44,
371 + .ppt_pl3_fppt_min = 45,
372 + .ppt_pl3_fppt_max = 65,
373 + .nv_temp_target_min = 75,
374 + .nv_temp_target_max = 87,
375 + },
376 + },
377 + },
378 + {
379 + .matches = {
351 380 DMI_MATCH(DMI_BOARD_NAME, "FA401UV"),
352 381 },
353 382 .driver_data = &(struct power_data) {
··· 1488 1459 },
1489 1460 {
1490 1461 .matches = {
1462 + DMI_MATCH(DMI_BOARD_NAME, "GX650RX"),
1463 + },
1464 + .driver_data = &(struct power_data) {
1465 + .ac_data = &(struct power_limits) {
1466 + .ppt_pl1_spl_min = 28,
1467 + .ppt_pl1_spl_def = 70,
1468 + .ppt_pl1_spl_max = 90,
1469 + .ppt_pl2_sppt_min = 28,
1470 + .ppt_pl2_sppt_def = 70,
1471 + .ppt_pl2_sppt_max = 100,
1472 + .ppt_pl3_fppt_min = 28,
1473 + .ppt_pl3_fppt_def = 110,
1474 + .ppt_pl3_fppt_max = 125,
1475 + .nv_dynamic_boost_min = 5,
1476 + .nv_dynamic_boost_max = 25,
1477 + .nv_temp_target_min = 76,
1478 + .nv_temp_target_max = 87,
1479 + },
1480 + .dc_data = &(struct power_limits) {
1481 + .ppt_pl1_spl_min = 28,
1482 + .ppt_pl1_spl_max = 50,
1483 + .ppt_pl2_sppt_min = 28,
1484 + .ppt_pl2_sppt_max = 50,
1485 + .ppt_pl3_fppt_min = 28,
1486 + .ppt_pl3_fppt_max = 65,
1487 + .nv_temp_target_min = 76,
1488 + .nv_temp_target_max = 87,
1489 + },
1490 + },
1491 + },
1492 + {
1493 + .matches = {
1491 1494 DMI_MATCH(DMI_BOARD_NAME, "G513I"),
1492 1495 },
1493 1496 .driver_data = &(struct power_data) {
··· 1767 1706 .nv_temp_target_max = 87,
1768 1707 },
1769 1708 .requires_fan_curve = true,
1709 + },
1710 + },
1711 + {
1712 + .matches = {
1713 + DMI_MATCH(DMI_BOARD_NAME, "G733QS"),
1714 + },
1715 + .driver_data = &(struct power_data) {
1716 + .ac_data = &(struct power_limits) {
1717 + .ppt_pl1_spl_min = 15,
1718 + .ppt_pl1_spl_max = 80,
1719 + .ppt_pl2_sppt_min = 15,
1720 + .ppt_pl2_sppt_max = 80,
1721 + },
1722 + .requires_fan_curve = false,
1770 1723 },
1771 1724 },
1772 1725 {
+1 -1
drivers/platform/x86/dell/alienware-wmi-wmax.c
··· 175 175 DMI_MATCH(DMI_SYS_VENDOR, "Alienware"), 176 176 DMI_MATCH(DMI_PRODUCT_NAME, "Alienware m18"), 177 177 }, 178 - .driver_data = &generic_quirks, 178 + .driver_data = &g_series_quirks, 179 179 }, 180 180 { 181 181 .ident = "Alienware x15",
+6
drivers/platform/x86/dell/dell-wmi-base.c
··· 80 80 static const struct key_entry dell_wmi_keymap_type_0000[] = { 81 81 { KE_IGNORE, 0x003a, { KEY_CAPSLOCK } }, 82 82 83 + /* Audio mute toggle */ 84 + { KE_KEY, 0x0109, { KEY_MUTE } }, 85 + 86 + /* Mic mute toggle */ 87 + { KE_KEY, 0x0150, { KEY_MICMUTE } }, 88 + 83 89 /* Meta key lock */ 84 90 { KE_IGNORE, 0xe000, { KEY_RIGHTMETA } }, 85 91
-1
drivers/platform/x86/dell/dell-wmi-sysman/passwordattr-interface.c
··· 93 93 if (ret < 0) 94 94 goto out; 95 95 96 - print_hex_dump_bytes("set new password data: ", DUMP_PREFIX_NONE, buffer, buffer_size); 97 96 ret = call_password_interface(wmi_priv.password_attr_wdev, buffer, buffer_size); 98 97 /* on success copy the new password to current password */ 99 98 if (!ret)
+6 -3
drivers/platform/x86/hp/hp-bioscfg/enum-attributes.c
··· 94 94 bioscfg_drv.enumeration_instances_count = 95 95 hp_get_instance_count(HP_WMI_BIOS_ENUMERATION_GUID); 96 96 97 - bioscfg_drv.enumeration_data = kzalloc_objs(*bioscfg_drv.enumeration_data, 98 - bioscfg_drv.enumeration_instances_count); 97 + if (!bioscfg_drv.enumeration_instances_count) 98 + return -EINVAL; 99 + bioscfg_drv.enumeration_data = kvcalloc(bioscfg_drv.enumeration_instances_count, 100 + sizeof(*bioscfg_drv.enumeration_data), GFP_KERNEL); 101 + 99 102 if (!bioscfg_drv.enumeration_data) { 100 103 bioscfg_drv.enumeration_instances_count = 0; 101 104 return -ENOMEM; ··· 447 444 } 448 445 bioscfg_drv.enumeration_instances_count = 0; 449 446 450 - kfree(bioscfg_drv.enumeration_data); 447 + kvfree(bioscfg_drv.enumeration_data); 451 448 bioscfg_drv.enumeration_data = NULL; 452 449 }
+11 -1
drivers/platform/x86/hp/hp-wmi.c
··· 146 146 "8900", "8901", "8902", "8912", "8917", "8918", "8949", "894A", "89EB", 147 147 "8A15", "8A42", 148 148 "8BAD", 149 + "8E41", 149 150 }; 150 151 151 152 /* DMI Board names of Omen laptops that are specifically set to be thermal ··· 167 166 "8BAD", 168 167 }; 169 168 170 - /* DMI Board names of Victus 16-d1xxx laptops */ 169 + /* DMI Board names of Victus 16-d laptops */ 171 170 static const char * const victus_thermal_profile_boards[] = { 171 + "88F8", 172 172 "8A25", 173 173 }; 174 174 175 175 /* DMI Board names of Victus 16-r and Victus 16-s laptops */ 176 176 static const struct dmi_system_id victus_s_thermal_profile_boards[] __initconst = { 177 177 { 178 + .matches = { DMI_MATCH(DMI_BOARD_NAME, "8BAB") }, 179 + .driver_data = (void *)&omen_v1_thermal_params, 180 + }, 181 + { 178 182 .matches = { DMI_MATCH(DMI_BOARD_NAME, "8BBE") }, 179 183 .driver_data = (void *)&victus_s_thermal_params, 184 + }, 185 + { 186 + .matches = { DMI_MATCH(DMI_BOARD_NAME, "8BCD") }, 187 + .driver_data = (void *)&omen_v1_thermal_params, 180 188 }, 181 189 { 182 190 .matches = { DMI_MATCH(DMI_BOARD_NAME, "8BD4") },
+19
drivers/platform/x86/intel/hid.c
··· 136 136 }, 137 137 }, 138 138 { 139 + .ident = "Lenovo ThinkPad X1 Fold 16 Gen 1", 140 + .matches = { 141 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 142 + DMI_MATCH(DMI_PRODUCT_FAMILY, "ThinkPad X1 Fold 16 Gen 1"), 143 + }, 144 + }, 145 + { 139 146 .ident = "Microsoft Surface Go 3", 140 147 .matches = { 141 148 DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"), ··· 194 187 .matches = { 195 188 DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 196 189 DMI_MATCH(DMI_PRODUCT_NAME, "Dell Pro Rugged 12 Tablet RA02260"), 190 + }, 191 + }, 192 + { 193 + .matches = { 194 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 195 + DMI_MATCH(DMI_PRODUCT_NAME, "Dell 14 Plus 2-in-1 DB04250"), 196 + }, 197 + }, 198 + { 199 + .matches = { 200 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 201 + DMI_MATCH(DMI_PRODUCT_NAME, "Dell 16 Plus 2-in-1 DB06250"), 197 202 }, 198 203 }, 199 204 { }
+7
drivers/platform/x86/intel/int3472/discrete.c
··· 223 223 *con_id = "avdd"; 224 224 *gpio_flags = GPIO_ACTIVE_HIGH; 225 225 break; 226 + case INT3472_GPIO_TYPE_DOVDD: 227 + *con_id = "dovdd"; 228 + *gpio_flags = GPIO_ACTIVE_HIGH; 229 + break; 226 230 case INT3472_GPIO_TYPE_HANDSHAKE: 227 231 *con_id = "dvdd"; 228 232 *gpio_flags = GPIO_ACTIVE_HIGH; ··· 255 251 * 0x0b Power enable 256 252 * 0x0c Clock enable 257 253 * 0x0d Privacy LED 254 + * 0x10 DOVDD (digital I/O voltage) 258 255 * 0x13 Hotplug detect 259 256 * 260 257 * There are some known platform specific quirks where that does not quite ··· 337 332 case INT3472_GPIO_TYPE_CLK_ENABLE: 338 333 case INT3472_GPIO_TYPE_PRIVACY_LED: 339 334 case INT3472_GPIO_TYPE_POWER_ENABLE: 335 + case INT3472_GPIO_TYPE_DOVDD: 340 336 case INT3472_GPIO_TYPE_HANDSHAKE: 341 337 gpio = skl_int3472_gpiod_get_from_temp_lookup(int3472, agpio, con_id, gpio_flags); 342 338 if (IS_ERR(gpio)) { ··· 362 356 case INT3472_GPIO_TYPE_POWER_ENABLE: 363 357 second_sensor = int3472->quirks.avdd_second_sensor; 364 358 fallthrough; 359 + case INT3472_GPIO_TYPE_DOVDD: 365 360 case INT3472_GPIO_TYPE_HANDSHAKE: 366 361 ret = skl_int3472_register_regulator(int3472, gpio, enable_time_us, 367 362 con_id, second_sensor);
+4 -2
drivers/platform/x86/lenovo/thinkpad_acpi.c
··· 9525 9525 { 9526 9526 switch (what) { 9527 9527 case THRESHOLD_START: 9528 - if ACPI_FAILURE(tpacpi_battery_acpi_eval(GET_START, ret, battery)) 9528 + if (!battery_info.batteries[battery].start_support || 9529 + ACPI_FAILURE(tpacpi_battery_acpi_eval(GET_START, ret, battery))) 9529 9530 return -ENODEV; 9530 9531 9531 9532 /* The value is in the low 8 bits of the response */ 9532 9533 *ret = *ret & 0xFF; 9533 9534 return 0; 9534 9535 case THRESHOLD_STOP: 9535 - if ACPI_FAILURE(tpacpi_battery_acpi_eval(GET_STOP, ret, battery)) 9536 + if (!battery_info.batteries[battery].stop_support || 9537 + ACPI_FAILURE(tpacpi_battery_acpi_eval(GET_STOP, ret, battery))) 9536 9538 return -ENODEV; 9537 9539 /* Value is in lower 8 bits */ 9538 9540 *ret = *ret & 0xFF;
+29 -1
drivers/platform/x86/oxpec.c
··· 11 11 * 12 12 * Copyright (C) 2022 Joaquín I. Aramendía <samsagax@gmail.com> 13 13 * Copyright (C) 2024 Derek J. Clark <derekjohn.clark@gmail.com> 14 - * Copyright (C) 2025 Antheas Kapenekakis <lkml@antheas.dev> 14 + * Copyright (C) 2025-2026 Antheas Kapenekakis <lkml@antheas.dev> 15 15 */ 16 16 17 17 #include <linux/acpi.h> ··· 117 117 { 118 118 .matches = { 119 119 DMI_MATCH(DMI_BOARD_VENDOR, "AOKZOE"), 120 + DMI_EXACT_MATCH(DMI_BOARD_NAME, "AOKZOE A2 Pro"), 121 + }, 122 + .driver_data = (void *)aok_zoe_a1, 123 + }, 124 + { 125 + .matches = { 126 + DMI_MATCH(DMI_BOARD_VENDOR, "AOKZOE"), 120 127 DMI_EXACT_MATCH(DMI_BOARD_NAME, "AOKZOE A1X"), 121 128 }, 122 129 .driver_data = (void *)oxp_fly, ··· 148 141 DMI_MATCH(DMI_BOARD_NAME, "ONEXPLAYER 2"), 149 142 }, 150 143 .driver_data = (void *)oxp_2, 144 + }, 145 + { 146 + .matches = { 147 + DMI_MATCH(DMI_BOARD_VENDOR, "ONE-NETBOOK"), 148 + DMI_EXACT_MATCH(DMI_BOARD_NAME, "ONEXPLAYER APEX"), 149 + }, 150 + .driver_data = (void *)oxp_fly, 151 151 }, 152 152 { 153 153 .matches = { ··· 229 215 { 230 216 .matches = { 231 217 DMI_MATCH(DMI_BOARD_VENDOR, "ONE-NETBOOK"), 218 + DMI_EXACT_MATCH(DMI_BOARD_NAME, "ONEXPLAYER X1z"), 219 + }, 220 + .driver_data = (void *)oxp_x1, 221 + }, 222 + { 223 + .matches = { 224 + DMI_MATCH(DMI_BOARD_VENDOR, "ONE-NETBOOK"), 232 225 DMI_EXACT_MATCH(DMI_BOARD_NAME, "ONEXPLAYER X1 A"), 233 226 }, 234 227 .driver_data = (void *)oxp_x1, ··· 244 223 .matches = { 245 224 DMI_MATCH(DMI_BOARD_VENDOR, "ONE-NETBOOK"), 246 225 DMI_EXACT_MATCH(DMI_BOARD_NAME, "ONEXPLAYER X1 i"), 226 + }, 227 + .driver_data = (void *)oxp_x1, 228 + }, 229 + { 230 + .matches = { 231 + DMI_MATCH(DMI_BOARD_VENDOR, "ONE-NETBOOK"), 232 + DMI_EXACT_MATCH(DMI_BOARD_NAME, "ONEXPLAYER X1Air"), 247 233 }, 248 234 .driver_data = (void *)oxp_x1, 249 235 },
+24 -1
drivers/platform/x86/redmi-wmi.c
··· 20 20 static const struct key_entry redmi_wmi_keymap[] = { 21 21 {KE_KEY, 0x00000201, {KEY_SELECTIVE_SCREENSHOT}}, 22 22 {KE_KEY, 0x00000301, {KEY_ALL_APPLICATIONS}}, 23 - {KE_KEY, 0x00001b01, {KEY_SETUP}}, 23 + {KE_KEY, 0x00001b01, {KEY_CONFIG}}, 24 + {KE_KEY, 0x00011b01, {KEY_CONFIG}}, 25 + {KE_KEY, 0x00010101, {KEY_SWITCHVIDEOMODE}}, 26 + {KE_KEY, 0x00001a01, {KEY_REFRESH_RATE_TOGGLE}}, 24 27 25 28 /* AI button has code for each position */ 26 29 {KE_KEY, 0x00011801, {KEY_ASSISTANT}}, ··· 34 31 {KE_IGNORE, 0x00800501, {}}, 35 32 {KE_IGNORE, 0x00050501, {}}, 36 33 {KE_IGNORE, 0x000a0501, {}}, 34 + 35 + /* Xiaomi G Command Center */ 36 + {KE_KEY, 0x00010a01, {KEY_VENDOR}}, 37 + 38 + /* OEM preset power mode */ 39 + {KE_IGNORE, 0x00011601, {}}, 40 + {KE_IGNORE, 0x00021601, {}}, 41 + {KE_IGNORE, 0x00031601, {}}, 42 + {KE_IGNORE, 0x00041601, {}}, 43 + 44 + /* Fn Lock state */ 45 + {KE_IGNORE, 0x00000701, {}}, 46 + {KE_IGNORE, 0x00010701, {}}, 47 + 48 + /* Fn+`/1/2/3/4 */ 49 + {KE_KEY, 0x00011101, {KEY_F13}}, 50 + {KE_KEY, 0x00011201, {KEY_F14}}, 51 + {KE_KEY, 0x00011301, {KEY_F15}}, 52 + {KE_KEY, 0x00011401, {KEY_F16}}, 53 + {KE_KEY, 0x00011501, {KEY_F17}}, 37 54 38 55 {KE_END} 39 56 };
+18
drivers/platform/x86/touchscreen_dmi.c
··· 410 410 .properties = gdix1001_upside_down_props, 411 411 }; 412 412 413 + static const struct property_entry gdix1001_y_inverted_props[] = { 414 + PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 415 + { } 416 + }; 417 + 418 + static const struct ts_dmi_data gdix1001_y_inverted_data = { 419 + .acpi_name = "GDIX1001", 420 + .properties = gdix1001_y_inverted_props, 421 + }; 422 + 413 423 static const struct property_entry gp_electronic_t701_props[] = { 414 424 PROPERTY_ENTRY_U32("touchscreen-size-x", 960), 415 425 PROPERTY_ENTRY_U32("touchscreen-size-y", 640), ··· 1666 1656 DMI_MATCH(DMI_SYS_VENDOR, "Globalspace Tech Pvt Ltd"), 1667 1657 DMI_MATCH(DMI_PRODUCT_NAME, "SolTIVW"), 1668 1658 DMI_MATCH(DMI_PRODUCT_SKU, "PN20170413488"), 1659 + }, 1660 + }, 1661 + { 1662 + /* SUPI S10 */ 1663 + .driver_data = (void *)&gdix1001_y_inverted_data, 1664 + .matches = { 1665 + DMI_MATCH(DMI_SYS_VENDOR, "SUPI"), 1666 + DMI_MATCH(DMI_PRODUCT_NAME, "S10"), 1669 1667 }, 1670 1668 }, 1671 1669 {
+75 -35
drivers/platform/x86/uniwill/uniwill-acpi.c
··· 314 314 #define LED_CHANNELS 3 315 315 #define LED_MAX_BRIGHTNESS 200 316 316 317 - #define UNIWILL_FEATURE_FN_LOCK_TOGGLE BIT(0) 318 - #define UNIWILL_FEATURE_SUPER_KEY_TOGGLE BIT(1) 317 + #define UNIWILL_FEATURE_FN_LOCK BIT(0) 318 + #define UNIWILL_FEATURE_SUPER_KEY BIT(1) 319 319 #define UNIWILL_FEATURE_TOUCHPAD_TOGGLE BIT(2) 320 320 #define UNIWILL_FEATURE_LIGHTBAR BIT(3) 321 321 #define UNIWILL_FEATURE_BATTERY BIT(4) ··· 330 330 struct acpi_battery_hook hook; 331 331 unsigned int last_charge_ctrl; 332 332 struct mutex battery_lock; /* Protects the list of currently registered batteries */ 333 + unsigned int last_status; 333 334 unsigned int last_switch_status; 334 335 struct mutex super_key_lock; /* Protects the toggling of the super key lock state */ 335 336 struct list_head batteries; ··· 378 377 { KE_IGNORE, UNIWILL_OSD_CAPSLOCK, { KEY_CAPSLOCK }}, 379 378 { KE_IGNORE, UNIWILL_OSD_NUMLOCK, { KEY_NUMLOCK }}, 380 379 381 - /* Reported when the user locks/unlocks the super key */ 382 - { KE_IGNORE, UNIWILL_OSD_SUPER_KEY_LOCK_ENABLE, { KEY_UNKNOWN }}, 383 - { KE_IGNORE, UNIWILL_OSD_SUPER_KEY_LOCK_DISABLE, { KEY_UNKNOWN }}, 380 + /* 381 + * Reported when the user enables/disables the super key. 382 + * Those events might even be reported when the change was done 383 + * using the sysfs attribute! 
384 + */ 385 + { KE_IGNORE, UNIWILL_OSD_SUPER_KEY_DISABLE, { KEY_UNKNOWN }}, 386 + { KE_IGNORE, UNIWILL_OSD_SUPER_KEY_ENABLE, { KEY_UNKNOWN }}, 384 387 /* Optional, might not be reported by all devices */ 385 - { KE_IGNORE, UNIWILL_OSD_SUPER_KEY_LOCK_CHANGED, { KEY_UNKNOWN }}, 388 + { KE_IGNORE, UNIWILL_OSD_SUPER_KEY_STATE_CHANGED, { KEY_UNKNOWN }}, 386 389 387 390 /* Reported in manual mode when toggling the airplane mode status */ 388 391 { KE_KEY, UNIWILL_OSD_RFKILL, { KEY_RFKILL }}, ··· 405 400 406 401 /* Reported when the user wants to toggle the mute status */ 407 402 { KE_IGNORE, UNIWILL_OSD_MUTE, { KEY_MUTE }}, 408 - 409 - /* Reported when the user locks/unlocks the Fn key */ 410 - { KE_IGNORE, UNIWILL_OSD_FN_LOCK, { KEY_FN_ESC }}, 411 403 412 404 /* Reported when the user wants to toggle the brightness of the keyboard */ 413 405 { KE_KEY, UNIWILL_OSD_KBDILLUMTOGGLE, { KEY_KBDILLUMTOGGLE }}, ··· 578 576 case EC_ADDR_SECOND_FAN_RPM_1: 579 577 case EC_ADDR_SECOND_FAN_RPM_2: 580 578 case EC_ADDR_BAT_ALERT: 579 + case EC_ADDR_BIOS_OEM: 581 580 case EC_ADDR_PWM_1: 582 581 case EC_ADDR_PWM_2: 583 582 case EC_ADDR_TRIGGER: ··· 603 600 .use_single_write = true, 604 601 }; 605 602 606 - static ssize_t fn_lock_toggle_enable_store(struct device *dev, struct device_attribute *attr, 607 - const char *buf, size_t count) 603 + static ssize_t fn_lock_store(struct device *dev, struct device_attribute *attr, const char *buf, 604 + size_t count) 608 605 { 609 606 struct uniwill_data *data = dev_get_drvdata(dev); 610 607 unsigned int value; ··· 627 624 return count; 628 625 } 629 626 630 - static ssize_t fn_lock_toggle_enable_show(struct device *dev, struct device_attribute *attr, 631 - char *buf) 627 + static ssize_t fn_lock_show(struct device *dev, struct device_attribute *attr, char *buf) 632 628 { 633 629 struct uniwill_data *data = dev_get_drvdata(dev); 634 630 unsigned int value; ··· 640 638 return sysfs_emit(buf, "%d\n", !!(value & FN_LOCK_STATUS)); 641 639 } 642 640 
643 - static DEVICE_ATTR_RW(fn_lock_toggle_enable); 641 + static DEVICE_ATTR_RW(fn_lock); 644 642 645 - static ssize_t super_key_toggle_enable_store(struct device *dev, struct device_attribute *attr, 646 - const char *buf, size_t count) 643 + static ssize_t super_key_enable_store(struct device *dev, struct device_attribute *attr, 644 + const char *buf, size_t count) 647 645 { 648 646 struct uniwill_data *data = dev_get_drvdata(dev); 649 647 unsigned int value; ··· 675 673 return count; 676 674 } 677 675 678 - static ssize_t super_key_toggle_enable_show(struct device *dev, struct device_attribute *attr, 679 - char *buf) 676 + static ssize_t super_key_enable_show(struct device *dev, struct device_attribute *attr, char *buf) 680 677 { 681 678 struct uniwill_data *data = dev_get_drvdata(dev); 682 679 unsigned int value; ··· 688 687 return sysfs_emit(buf, "%d\n", !(value & SUPER_KEY_LOCK_STATUS)); 689 688 } 690 689 691 - static DEVICE_ATTR_RW(super_key_toggle_enable); 690 + static DEVICE_ATTR_RW(super_key_enable); 692 691 693 692 static ssize_t touchpad_toggle_enable_store(struct device *dev, struct device_attribute *attr, 694 693 const char *buf, size_t count) ··· 882 881 883 882 static struct attribute *uniwill_attrs[] = { 884 883 /* Keyboard-related */ 885 - &dev_attr_fn_lock_toggle_enable.attr, 886 - &dev_attr_super_key_toggle_enable.attr, 884 + &dev_attr_fn_lock.attr, 885 + &dev_attr_super_key_enable.attr, 887 886 &dev_attr_touchpad_toggle_enable.attr, 888 887 /* Lightbar-related */ 889 888 &dev_attr_rainbow_animation.attr, ··· 898 897 struct device *dev = kobj_to_dev(kobj); 899 898 struct uniwill_data *data = dev_get_drvdata(dev); 900 899 901 - if (attr == &dev_attr_fn_lock_toggle_enable.attr) { 902 - if (uniwill_device_supports(data, UNIWILL_FEATURE_FN_LOCK_TOGGLE)) 900 + if (attr == &dev_attr_fn_lock.attr) { 901 + if (uniwill_device_supports(data, UNIWILL_FEATURE_FN_LOCK)) 903 902 return attr->mode; 904 903 } 905 904 906 - if (attr == 
&dev_attr_super_key_toggle_enable.attr) { 907 - if (uniwill_device_supports(data, UNIWILL_FEATURE_SUPER_KEY_TOGGLE)) 905 + if (attr == &dev_attr_super_key_enable.attr) { 906 + if (uniwill_device_supports(data, UNIWILL_FEATURE_SUPER_KEY)) 908 907 return attr->mode; 909 908 } 910 909 ··· 1358 1357 1359 1358 switch (action) { 1360 1359 case UNIWILL_OSD_BATTERY_ALERT: 1360 + if (!uniwill_device_supports(data, UNIWILL_FEATURE_BATTERY)) 1361 + return NOTIFY_DONE; 1362 + 1361 1363 mutex_lock(&data->battery_lock); 1362 1364 list_for_each_entry(entry, &data->batteries, head) { 1363 1365 power_supply_changed(entry->battery); ··· 1372 1368 /* noop for the time being, will change once charging priority 1373 1369 * gets implemented. 1374 1370 */ 1371 + 1372 + return NOTIFY_OK; 1373 + case UNIWILL_OSD_FN_LOCK: 1374 + if (!uniwill_device_supports(data, UNIWILL_FEATURE_FN_LOCK)) 1375 + return NOTIFY_DONE; 1376 + 1377 + sysfs_notify(&data->dev->kobj, NULL, "fn_lock"); 1375 1378 1376 1379 return NOTIFY_OK; 1377 1380 default: ··· 1514 1503 regmap_clear_bits(data->regmap, EC_ADDR_AP_OEM, ENABLE_MANUAL_CTRL); 1515 1504 } 1516 1505 1517 - static int uniwill_suspend_keyboard(struct uniwill_data *data) 1506 + static int uniwill_suspend_fn_lock(struct uniwill_data *data) 1518 1507 { 1519 - if (!uniwill_device_supports(data, UNIWILL_FEATURE_SUPER_KEY_TOGGLE)) 1508 + if (!uniwill_device_supports(data, UNIWILL_FEATURE_FN_LOCK)) 1509 + return 0; 1510 + 1511 + /* 1512 + * The EC_ADDR_BIOS_OEM is marked as volatile, so we have to restore it 1513 + * ourselves. 
1514 + */ 1515 + return regmap_read(data->regmap, EC_ADDR_BIOS_OEM, &data->last_status); 1516 + } 1517 + 1518 + static int uniwill_suspend_super_key(struct uniwill_data *data) 1519 + { 1520 + if (!uniwill_device_supports(data, UNIWILL_FEATURE_SUPER_KEY)) 1520 1521 return 0; 1521 1522 1522 1523 /* ··· 1565 1542 struct uniwill_data *data = dev_get_drvdata(dev); 1566 1543 int ret; 1567 1544 1568 - ret = uniwill_suspend_keyboard(data); 1545 + ret = uniwill_suspend_fn_lock(data); 1546 + if (ret < 0) 1547 + return ret; 1548 + 1549 + ret = uniwill_suspend_super_key(data); 1569 1550 if (ret < 0) 1570 1551 return ret; 1571 1552 ··· 1587 1560 return 0; 1588 1561 } 1589 1562 1590 - static int uniwill_resume_keyboard(struct uniwill_data *data) 1563 + static int uniwill_resume_fn_lock(struct uniwill_data *data) 1564 + { 1565 + if (!uniwill_device_supports(data, UNIWILL_FEATURE_FN_LOCK)) 1566 + return 0; 1567 + 1568 + return regmap_update_bits(data->regmap, EC_ADDR_BIOS_OEM, FN_LOCK_STATUS, 1569 + data->last_status); 1570 + } 1571 + 1572 + static int uniwill_resume_super_key(struct uniwill_data *data) 1591 1573 { 1592 1574 unsigned int value; 1593 1575 int ret; 1594 1576 1595 - if (!uniwill_device_supports(data, UNIWILL_FEATURE_SUPER_KEY_TOGGLE)) 1577 + if (!uniwill_device_supports(data, UNIWILL_FEATURE_SUPER_KEY)) 1596 1578 return 0; 1597 1579 1598 1580 ret = regmap_read(data->regmap, EC_ADDR_SWITCH_STATUS, &value); ··· 1644 1608 if (ret < 0) 1645 1609 return ret; 1646 1610 1647 - ret = uniwill_resume_keyboard(data); 1611 + ret = uniwill_resume_fn_lock(data); 1612 + if (ret < 0) 1613 + return ret; 1614 + 1615 + ret = uniwill_resume_super_key(data); 1648 1616 if (ret < 0) 1649 1617 return ret; 1650 1618 ··· 1683 1643 }; 1684 1644 1685 1645 static struct uniwill_device_descriptor lapac71h_descriptor __initdata = { 1686 - .features = UNIWILL_FEATURE_FN_LOCK_TOGGLE | 1687 - UNIWILL_FEATURE_SUPER_KEY_TOGGLE | 1646 + .features = UNIWILL_FEATURE_FN_LOCK | 1647 + 
UNIWILL_FEATURE_SUPER_KEY | 1688 1648 UNIWILL_FEATURE_TOUCHPAD_TOGGLE | 1689 1649 UNIWILL_FEATURE_BATTERY | 1690 1650 UNIWILL_FEATURE_HWMON, 1691 1651 }; 1692 1652 1693 1653 static struct uniwill_device_descriptor lapkc71f_descriptor __initdata = { 1694 - .features = UNIWILL_FEATURE_FN_LOCK_TOGGLE | 1695 - UNIWILL_FEATURE_SUPER_KEY_TOGGLE | 1654 + .features = UNIWILL_FEATURE_FN_LOCK | 1655 + UNIWILL_FEATURE_SUPER_KEY | 1696 1656 UNIWILL_FEATURE_TOUCHPAD_TOGGLE | 1697 1657 UNIWILL_FEATURE_LIGHTBAR | 1698 1658 UNIWILL_FEATURE_BATTERY |
+3 -3
drivers/platform/x86/uniwill/uniwill-wmi.h
··· 64 64 #define UNIWILL_OSD_KB_LED_LEVEL3 0x3E 65 65 #define UNIWILL_OSD_KB_LED_LEVEL4 0x3F 66 66 67 - #define UNIWILL_OSD_SUPER_KEY_LOCK_ENABLE 0x40 68 - #define UNIWILL_OSD_SUPER_KEY_LOCK_DISABLE 0x41 67 + #define UNIWILL_OSD_SUPER_KEY_DISABLE 0x40 68 + #define UNIWILL_OSD_SUPER_KEY_ENABLE 0x41 69 69 70 70 #define UNIWILL_OSD_MENU_JP 0x42 71 71 ··· 74 74 75 75 #define UNIWILL_OSD_RFKILL 0xA4 76 76 77 - #define UNIWILL_OSD_SUPER_KEY_LOCK_CHANGED 0xA5 77 + #define UNIWILL_OSD_SUPER_KEY_STATE_CHANGED 0xA5 78 78 79 79 #define UNIWILL_OSD_LIGHTBAR_STATE_CHANGED 0xA6 80 80
+3 -3
drivers/pmdomain/bcm/bcm2835-power.c
··· 580 580 581 581 switch (id) { 582 582 case BCM2835_RESET_V3D: 583 - return !PM_READ(PM_GRAFX & PM_V3DRSTN); 583 + return !(PM_READ(PM_GRAFX) & PM_V3DRSTN); 584 584 case BCM2835_RESET_H264: 585 - return !PM_READ(PM_IMAGE & PM_H264RSTN); 585 + return !(PM_READ(PM_IMAGE) & PM_H264RSTN); 586 586 case BCM2835_RESET_ISP: 587 - return !PM_READ(PM_IMAGE & PM_ISPRSTN); 587 + return !(PM_READ(PM_IMAGE) & PM_ISPRSTN); 588 588 default: 589 589 return -EINVAL; 590 590 }
+1 -1
drivers/pmdomain/rockchip/pm-domains.c
··· 1311 1311 static const struct rockchip_domain_info rk3588_pm_domains[] = { 1312 1312 [RK3588_PD_GPU] = DOMAIN_RK3588("gpu", 0x0, BIT(0), 0, 0x0, 0, BIT(1), 0x0, BIT(0), BIT(0), false, true), 1313 1313 [RK3588_PD_NPU] = DOMAIN_RK3588("npu", 0x0, BIT(1), BIT(1), 0x0, 0, 0, 0x0, 0, 0, false, true), 1314 - [RK3588_PD_VCODEC] = DOMAIN_RK3588("vcodec", 0x0, BIT(2), BIT(2), 0x0, 0, 0, 0x0, 0, 0, false, false), 1314 + [RK3588_PD_VCODEC] = DOMAIN_RK3588("vcodec", 0x0, BIT(2), BIT(2), 0x0, 0, 0, 0x0, 0, 0, false, true), 1315 1315 [RK3588_PD_NPUTOP] = DOMAIN_RK3588("nputop", 0x0, BIT(3), 0, 0x0, BIT(11), BIT(2), 0x0, BIT(1), BIT(1), false, false), 1316 1316 [RK3588_PD_NPU1] = DOMAIN_RK3588("npu1", 0x0, BIT(4), 0, 0x0, BIT(12), BIT(3), 0x0, BIT(2), BIT(2), false, false), 1317 1317 [RK3588_PD_NPU2] = DOMAIN_RK3588("npu2", 0x0, BIT(5), 0, 0x0, BIT(13), BIT(4), 0x0, BIT(3), BIT(3), false, false),
+1 -3
drivers/regulator/mt6363-regulator.c
··· 899 899 "Failed to map IRQ%d\n", info->hwirq); 900 900 901 901 ret = devm_add_action_or_reset(dev, mt6363_irq_remove, &info->virq); 902 - if (ret) { 903 - irq_dispose_mapping(info->hwirq); 902 + if (ret) 904 903 return ret; 905 - } 906 904 907 905 config.driver_data = info; 908 906 INIT_DELAYED_WORK(&info->oc_work, mt6363_oc_irq_enable_work);
+1 -1
drivers/regulator/pf9453-regulator.c
··· 809 809 } 810 810 811 811 ret = devm_request_threaded_irq(pf9453->dev, pf9453->irq, NULL, pf9453_irq_handler, 812 - (IRQF_TRIGGER_FALLING | IRQF_ONESHOT), 812 + IRQF_ONESHOT, 813 813 "pf9453-irq", pf9453); 814 814 if (ret) 815 815 return dev_err_probe(pf9453->dev, ret, "Failed to request IRQ: %d\n", pf9453->irq);
+1 -1
drivers/remoteproc/imx_rproc.c
··· 617 617 618 618 err = of_reserved_mem_region_to_resource(np, i++, &res); 619 619 if (err) 620 - return 0; 620 + break; 621 621 622 622 /* 623 623 * Ignore the first memory region which will be used vdev buffer.
+39
drivers/remoteproc/mtk_scp.c
··· 1592 1592 }; 1593 1593 MODULE_DEVICE_TABLE(of, mtk_scp_of_match); 1594 1594 1595 + static int __maybe_unused scp_suspend(struct device *dev) 1596 + { 1597 + struct mtk_scp *scp = dev_get_drvdata(dev); 1598 + struct rproc *rproc = scp->rproc; 1599 + 1600 + /* 1601 + * Only unprepare if the SCP is running and holding the clock. 1602 + * 1603 + * Note: `scp_ops` doesn't implement .attach() callback, hence 1604 + * `rproc->state` can never be RPROC_ATTACHED. Otherwise, it 1605 + * should also be checked here. 1606 + */ 1607 + if (rproc->state == RPROC_RUNNING) 1608 + clk_unprepare(scp->clk); 1609 + return 0; 1610 + } 1611 + 1612 + static int __maybe_unused scp_resume(struct device *dev) 1613 + { 1614 + struct mtk_scp *scp = dev_get_drvdata(dev); 1615 + struct rproc *rproc = scp->rproc; 1616 + 1617 + /* 1618 + * Only prepare if the SCP was running and holding the clock. 1619 + * 1620 + * Note: `scp_ops` doesn't implement .attach() callback, hence 1621 + * `rproc->state` can never be RPROC_ATTACHED. Otherwise, it 1622 + * should also be checked here. 1623 + */ 1624 + if (rproc->state == RPROC_RUNNING) 1625 + return clk_prepare(scp->clk); 1626 + return 0; 1627 + } 1628 + 1629 + static const struct dev_pm_ops scp_pm_ops = { 1630 + SET_SYSTEM_SLEEP_PM_OPS(scp_suspend, scp_resume) 1631 + }; 1632 + 1595 1633 static struct platform_driver mtk_scp_driver = { 1596 1634 .probe = scp_probe, 1597 1635 .remove = scp_remove, 1598 1636 .driver = { 1599 1637 .name = "mtk-scp", 1600 1638 .of_match_table = mtk_scp_of_match, 1639 + .pm = &scp_pm_ops, 1601 1640 }, 1602 1641 }; 1603 1642
+1 -1
drivers/remoteproc/qcom_sysmon.c
··· 203 203 }; 204 204 205 205 struct ssctl_subsys_event_req { 206 - u8 subsys_name_len; 206 + u32 subsys_name_len; 207 207 char subsys_name[SSCTL_SUBSYS_NAME_LENGTH]; 208 208 u32 event; 209 209 u8 evt_driven_valid;
+1 -1
drivers/remoteproc/qcom_wcnss.c
··· 537 537 538 538 wcnss->mem_phys = wcnss->mem_reloc = res.start; 539 539 wcnss->mem_size = resource_size(&res); 540 - wcnss->mem_region = devm_ioremap_resource_wc(wcnss->dev, &res); 540 + wcnss->mem_region = devm_ioremap_wc(wcnss->dev, wcnss->mem_phys, wcnss->mem_size); 541 541 if (IS_ERR(wcnss->mem_region)) { 542 542 dev_err(wcnss->dev, "unable to map memory region: %pR\n", &res); 543 543 return PTR_ERR(wcnss->mem_region);
+10
drivers/scsi/mpi3mr/mpi3mr_fw.c
··· 1618 1618 ioc_info(mrioc, 1619 1619 "successfully transitioned to %s state\n", 1620 1620 mpi3mr_iocstate_name(ioc_state)); 1621 + mpi3mr_clear_reset_history(mrioc); 1621 1622 return 0; 1622 1623 } 1623 1624 ioc_status = readl(&mrioc->sysif_regs->ioc_status); ··· 1637 1636 msleep(100); 1638 1637 elapsed_time_sec = jiffies_to_msecs(jiffies - start_time)/1000; 1639 1638 } while (elapsed_time_sec < mrioc->ready_timeout); 1639 + 1640 + ioc_state = mpi3mr_get_iocstate(mrioc); 1641 + if (ioc_state == MRIOC_STATE_READY) { 1642 + ioc_info(mrioc, 1643 + "successfully transitioned to %s state after %llu seconds\n", 1644 + mpi3mr_iocstate_name(ioc_state), elapsed_time_sec); 1645 + mpi3mr_clear_reset_history(mrioc); 1646 + return 0; 1647 + } 1640 1648 1641 1649 out_failed: 1642 1650 elapsed_time_sec = jiffies_to_msecs(jiffies - start_time)/1000;
+1 -1
drivers/scsi/scsi_devinfo.c
··· 190 190 {"IBM", "2076", NULL, BLIST_NO_VPD_SIZE}, 191 191 {"IBM", "2105", NULL, BLIST_RETRY_HWERROR}, 192 192 {"iomega", "jaz 1GB", "J.86", BLIST_NOTQ | BLIST_NOLUN}, 193 - {"IOMEGA", "ZIP", NULL, BLIST_NOTQ | BLIST_NOLUN}, 193 + {"IOMEGA", "ZIP", NULL, BLIST_NOTQ | BLIST_NOLUN | BLIST_SKIP_IO_HINTS}, 194 194 {"IOMEGA", "Io20S *F", NULL, BLIST_KEY}, 195 195 {"INSITE", "Floptical F*8I", NULL, BLIST_KEY}, 196 196 {"INSITE", "I325VM", NULL, BLIST_KEY},
+1
drivers/scsi/scsi_scan.c
··· 361 361 * since we use this queue depth most of times. 362 362 */ 363 363 if (scsi_realloc_sdev_budget_map(sdev, depth)) { 364 + kref_put(&sdev->host->tagset_refcnt, scsi_mq_free_tags); 364 365 put_device(&starget->dev); 365 366 kfree(sdev); 366 367 goto out;
+1 -1
drivers/scsi/xen-scsifront.c
··· 1175 1175 return; 1176 1176 } 1177 1177 1178 - if (xenbus_read_driver_state(dev->nodename) == 1178 + if (xenbus_read_driver_state(dev, dev->nodename) == 1179 1179 XenbusStateInitialised) 1180 1180 scsifront_do_lun_hotplug(info, VSCSIFRONT_OP_ADD_LUN); 1181 1181
+1 -1
drivers/spi/spi-dw-dma.c
··· 271 271 msecs_to_jiffies(ms)); 272 272 273 273 if (ms == 0) { 274 - dev_err(&dws->ctlr->cur_msg->spi->dev, 274 + dev_err(&dws->ctlr->dev, 275 275 "DMA transaction timed out\n"); 276 276 return -ETIMEDOUT; 277 277 }
+6 -9
drivers/target/target_core_configfs.c
··· 108 108 const char *page, size_t count) 109 109 { 110 110 ssize_t read_bytes; 111 - struct file *fp; 112 111 ssize_t r = -EINVAL; 112 + struct path path = {}; 113 113 114 114 mutex_lock(&target_devices_lock); 115 115 if (target_devices) { ··· 131 131 db_root_stage[read_bytes - 1] = '\0'; 132 132 133 133 /* validate new db root before accepting it */ 134 - fp = filp_open(db_root_stage, O_RDONLY, 0); 135 - if (IS_ERR(fp)) { 134 + r = kern_path(db_root_stage, LOOKUP_FOLLOW | LOOKUP_DIRECTORY, &path); 135 + if (r) { 136 136 pr_err("db_root: cannot open: %s\n", db_root_stage); 137 + if (r == -ENOTDIR) 138 + pr_err("db_root: not a directory: %s\n", db_root_stage); 137 139 goto unlock; 138 140 } 139 - if (!S_ISDIR(file_inode(fp)->i_mode)) { 140 - filp_close(fp, NULL); 141 - pr_err("db_root: not a directory: %s\n", db_root_stage); 142 - goto unlock; 143 - } 144 - filp_close(fp, NULL); 141 + path_put(&path); 145 142 146 143 strscpy(db_root, db_root_stage); 147 144 pr_debug("Target_Core_ConfigFS: db_root set to %s\n", db_root);
+6 -2
drivers/video/fbdev/au1100fb.c
··· 380 380 #define panel_is_color(panel) (panel->control_base & LCD_CONTROL_PC) 381 381 #define panel_swap_rgb(panel) (panel->control_base & LCD_CONTROL_CCO) 382 382 383 - #if defined(CONFIG_COMPILE_TEST) && !defined(CONFIG_MIPS) 384 - /* This is only defined to be able to compile this driver on non-mips platforms */ 383 + #if defined(CONFIG_COMPILE_TEST) && (!defined(CONFIG_MIPS) || defined(CONFIG_64BIT)) 384 + /* 385 + * KSEG1ADDR() is defined in arch/mips/include/asm/addrspace.h 386 + * for 32 bit configurations. Provide a stub for compile testing 387 + * on other platforms. 388 + */ 385 389 #define KSEG1ADDR(x) (x) 386 390 #endif 387 391
+2 -5
drivers/xen/xen-acpi-processor.c
··· 378 378 acpi_psd[acpi_id].domain); 379 379 } 380 380 381 - status = acpi_evaluate_object(handle, "_CST", NULL, &buffer); 382 - if (ACPI_FAILURE(status)) { 383 - if (!pblk) 384 - return AE_OK; 385 - } 381 + if (!pblk && !acpi_has_method(handle, "_CST")) 382 + return AE_OK; 386 383 /* .. and it has a C-state */ 387 384 __set_bit(acpi_id, acpi_id_cst_present); 388 385
+5 -5
drivers/xen/xen-pciback/xenbus.c
··· 149 149 150 150 mutex_lock(&pdev->dev_lock); 151 151 /* Make sure we only do this setup once */ 152 - if (xenbus_read_driver_state(pdev->xdev->nodename) != 152 + if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename) != 153 153 XenbusStateInitialised) 154 154 goto out; 155 155 156 156 /* Wait for frontend to state that it has published the configuration */ 157 - if (xenbus_read_driver_state(pdev->xdev->otherend) != 157 + if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->otherend) != 158 158 XenbusStateInitialised) 159 159 goto out; 160 160 ··· 374 374 dev_dbg(&pdev->xdev->dev, "Reconfiguring device ...\n"); 375 375 376 376 mutex_lock(&pdev->dev_lock); 377 - if (xenbus_read_driver_state(pdev->xdev->nodename) != state) 377 + if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename) != state) 378 378 goto out; 379 379 380 380 err = xenbus_scanf(XBT_NIL, pdev->xdev->nodename, "num_devs", "%d", ··· 572 572 /* It's possible we could get the call to setup twice, so make sure 573 573 * we're not already connected. 574 574 */ 575 - if (xenbus_read_driver_state(pdev->xdev->nodename) != 575 + if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename) != 576 576 XenbusStateInitWait) 577 577 goto out; 578 578 ··· 662 662 struct xen_pcibk_device *pdev = 663 663 container_of(watch, struct xen_pcibk_device, be_watch); 664 664 665 - switch (xenbus_read_driver_state(pdev->xdev->nodename)) { 665 + switch (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename)) { 666 666 case XenbusStateInitWait: 667 667 xen_pcibk_setup_backend(pdev); 668 668 break;
+14 -3
drivers/xen/xenbus/xenbus_client.c
··· 226 226 struct xenbus_transaction xbt; 227 227 int current_state; 228 228 int err, abort; 229 + bool vanished = false; 229 230 230 - if (state == dev->state) 231 + if (state == dev->state || dev->vanished) 231 232 return 0; 232 233 233 234 again: ··· 243 242 err = xenbus_scanf(xbt, dev->nodename, "state", "%d", &current_state); 244 243 if (err != 1) 245 244 goto abort; 245 + if (current_state != dev->state && current_state == XenbusStateInitialising) { 246 + vanished = true; 247 + goto abort; 248 + } 246 249 247 250 err = xenbus_printf(xbt, dev->nodename, "state", "%d", state); 248 251 if (err) { ··· 261 256 if (err == -EAGAIN && !abort) 262 257 goto again; 263 258 xenbus_switch_fatal(dev, depth, err, "ending transaction"); 264 - } else 259 + } else if (!vanished) 265 260 dev->state = state; 266 261 267 262 return 0; ··· 936 931 937 932 /** 938 933 * xenbus_read_driver_state - read state from a store path 934 + * @dev: xenbus device pointer 939 935 * @path: path for driver 940 936 * 941 937 * Returns: the state of the driver rooted at the given store path, or 942 938 * XenbusStateUnknown if no state can be read. 943 939 */ 944 - enum xenbus_state xenbus_read_driver_state(const char *path) 940 + enum xenbus_state xenbus_read_driver_state(const struct xenbus_device *dev, 941 + const char *path) 945 942 { 946 943 enum xenbus_state result; 944 + 945 + if (dev && dev->vanished) 946 + return XenbusStateUnknown; 947 + 947 948 int err = xenbus_gather(XBT_NIL, path, "state", "%d", &result, NULL); 948 949 if (err) 949 950 result = XenbusStateUnknown;
+39 -3
drivers/xen/xenbus/xenbus_probe.c
··· 191 191 return; 192 192 } 193 193 194 - state = xenbus_read_driver_state(dev->otherend); 194 + state = xenbus_read_driver_state(dev, dev->otherend); 195 195 196 196 dev_dbg(&dev->dev, "state is %d, (%s), %s, %s\n", 197 197 state, xenbus_strstate(state), dev->otherend_watch.node, path); ··· 364 364 * closed. 365 365 */ 366 366 if (!drv->allow_rebind || 367 - xenbus_read_driver_state(dev->nodename) == XenbusStateClosing) 367 + xenbus_read_driver_state(dev, dev->nodename) == XenbusStateClosing) 368 368 xenbus_switch_state(dev, XenbusStateClosed); 369 369 } 370 370 EXPORT_SYMBOL_GPL(xenbus_dev_remove); ··· 444 444 info.dev = NULL; 445 445 bus_for_each_dev(bus, NULL, &info, cleanup_dev); 446 446 if (info.dev) { 447 + dev_warn(&info.dev->dev, 448 + "device forcefully removed from xenstore\n"); 449 + info.dev->vanished = true; 447 450 device_unregister(&info.dev->dev); 448 451 put_device(&info.dev->dev); 449 452 } ··· 517 514 size_t stringlen; 518 515 char *tmpstring; 519 516 520 - enum xenbus_state state = xenbus_read_driver_state(nodename); 517 + enum xenbus_state state = xenbus_read_driver_state(NULL, nodename); 521 518 522 519 if (state != XenbusStateInitialising) { 523 520 /* Device is not new, so ignore it. This can happen if a ··· 662 659 return; 663 660 664 661 dev = xenbus_device_find(root, &bus->bus); 662 + /* 663 + * A backend domain crash results in uncoordinated frontend removal, 664 + * without going through XenbusStateClosing. If this is a new instance 665 + * of the same device, Xen tools will have reset the state to 666 + * XenbusStateInitialising. 667 + * It might be that the backend crashed early during the init phase of 668 + * device setup, in which case the known state would have been 669 + * XenbusStateInitialising. So test that the backend domid matches the 670 + * saved one. In case the new backend happens to have the same domid as 671 + * the old one, we can just carry on, as there is no inconsistency 672 + * in this case. 
673 + */ 674 + if (dev && !strcmp(bus->root, "device")) { 675 + enum xenbus_state state = xenbus_read_driver_state(dev, dev->nodename); 676 + unsigned int backend = xenbus_read_unsigned(root, "backend-id", 677 + dev->otherend_id); 678 + 679 + if (state == XenbusStateInitialising && 680 + (state != dev->state || backend != dev->otherend_id)) { 681 + /* 682 + * State has been reset, assume the old one vanished 683 + * and new one needs to be probed. 684 + */ 685 + dev_warn(&dev->dev, 686 + "state reset occurred, reconnecting\n"); 687 + dev->vanished = true; 688 + } 689 + if (dev->vanished) { 690 + device_unregister(&dev->dev); 691 + put_device(&dev->dev); 692 + dev = NULL; 693 + } 694 + } 665 695 if (!dev) 666 696 xenbus_probe_node(bus, type, root); 667 697 else
+1 -1
drivers/xen/xenbus/xenbus_probe_frontend.c
··· 253 253 } else if (xendev->state < XenbusStateConnected) { 254 254 enum xenbus_state rstate = XenbusStateUnknown; 255 255 if (xendev->otherend) 256 - rstate = xenbus_read_driver_state(xendev->otherend); 256 + rstate = xenbus_read_driver_state(xendev, xendev->otherend); 257 257 pr_warn("Timeout connecting to device: %s (local state %d, remote state %d)\n", 258 258 xendev->nodename, xendev->state, rstate); 259 259 }
+4 -4
fs/afs/addr_list.c
··· 298 298 srx.transport.sin.sin_addr.s_addr = xdr; 299 299 300 300 peer = rxrpc_kernel_lookup_peer(net->socket, &srx, GFP_KERNEL); 301 - if (!peer) 302 - return -ENOMEM; 301 + if (IS_ERR(peer)) 302 + return PTR_ERR(peer); 303 303 304 304 for (i = 0; i < alist->nr_ipv4; i++) { 305 305 if (peer == alist->addrs[i].peer) { ··· 342 342 memcpy(&srx.transport.sin6.sin6_addr, xdr, 16); 343 343 344 344 peer = rxrpc_kernel_lookup_peer(net->socket, &srx, GFP_KERNEL); 345 - if (!peer) 346 - return -ENOMEM; 345 + if (IS_ERR(peer)) 346 + return PTR_ERR(peer); 347 347 348 348 for (i = alist->nr_ipv4; i < alist->nr_addrs; i++) { 349 349 if (peer == alist->addrs[i].peer) {
+2
fs/smb/client/Makefile
··· 56 56 quiet_cmd_gen_smb2_mapping = GEN $@ 57 57 cmd_gen_smb2_mapping = perl $(src)/gen_smb2_mapping $< $@ 58 58 59 + obj-$(CONFIG_SMB_KUNIT_TESTS) += smb2maperror_test.o 60 + 59 61 clean-files += smb2_mapping_table.c
+5 -2
fs/smb/client/cifsfs.c
··· 332 332 333 333 /* 334 334 * We need to release all dentries for the cached directories 335 - * before we kill the sb. 335 + * and close all deferred file handles before we kill the sb. 336 336 */ 337 337 if (cifs_sb->root) { 338 338 close_all_cached_dirs(cifs_sb); 339 + cifs_close_all_deferred_files_sb(cifs_sb); 340 + 341 + /* Wait for all pending oplock breaks to complete */ 342 + flush_workqueue(cifsoplockd_wq); 339 343 340 344 /* finally release root dentry */ 341 345 dput(cifs_sb->root); ··· 872 868 spin_unlock(&tcon->tc_lock); 873 869 spin_unlock(&cifs_tcp_ses_lock); 874 870 875 - cifs_close_all_deferred_files(tcon); 876 871 /* cancel_brl_requests(tcon); */ /* BB mark all brl mids as exiting */ 877 872 /* cancel_notify_requests(tcon); */ 878 873 if (tcon->ses && tcon->ses->server) {
+1
fs/smb/client/cifsproto.h
··· 261 261 262 262 void cifs_close_all_deferred_files(struct cifs_tcon *tcon); 263 263 264 + void cifs_close_all_deferred_files_sb(struct cifs_sb_info *cifs_sb); 264 265 void cifs_close_deferred_file_under_dentry(struct cifs_tcon *tcon, 265 266 struct dentry *dentry); 266 267
-11
fs/smb/client/file.c
··· 711 711 mutex_init(&cfile->fh_mutex); 712 712 spin_lock_init(&cfile->file_info_lock); 713 713 714 - cifs_sb_active(inode->i_sb); 715 - 716 714 /* 717 715 * If the server returned a read oplock and we have mandatory brlocks, 718 716 * set oplock level to None. ··· 765 767 struct inode *inode = d_inode(cifs_file->dentry); 766 768 struct cifsInodeInfo *cifsi = CIFS_I(inode); 767 769 struct cifsLockInfo *li, *tmp; 768 - struct super_block *sb = inode->i_sb; 769 770 770 771 /* 771 772 * Delete any outstanding lock records. We'll lose them when the file ··· 782 785 783 786 cifs_put_tlink(cifs_file->tlink); 784 787 dput(cifs_file->dentry); 785 - cifs_sb_deactive(sb); 786 788 kfree(cifs_file->symlink_target); 787 789 kfree(cifs_file); 788 790 } ··· 3159 3163 __u64 persistent_fid, volatile_fid; 3160 3164 __u16 net_fid; 3161 3165 3162 - /* 3163 - * Hold a reference to the superblock to prevent it and its inodes from 3164 - * being freed while we are accessing cinode. Otherwise, _cifsFileInfo_put() 3165 - * may release the last reference to the sb and trigger inode eviction. 3166 - */ 3167 - cifs_sb_active(sb); 3168 3166 wait_on_bit(&cinode->flags, CIFS_INODE_PENDING_WRITERS, 3169 3167 TASK_UNINTERRUPTIBLE); 3170 3168 ··· 3243 3253 cifs_put_tlink(tlink); 3244 3254 out: 3245 3255 cifs_done_oplock_break(cinode); 3246 - cifs_sb_deactive(sb); 3247 3256 } 3248 3257 3249 3258 static int cifs_swap_activate(struct swap_info_struct *sis,
+42
fs/smb/client/misc.c
··· 28 28 #include "fs_context.h" 29 29 #include "cached_dir.h" 30 30 31 + struct tcon_list { 32 + struct list_head entry; 33 + struct cifs_tcon *tcon; 34 + }; 35 + 31 36 /* The xid serves as a useful identifier for each incoming vfs request, 32 37 in a similar way to the mid which is useful to track each sent smb, 33 38 and CurrentXid can also provide a running counter (although it ··· 555 550 list_for_each_entry_safe(tmp_list, tmp_next_list, &file_head, list) { 556 551 _cifsFileInfo_put(tmp_list->cfile, true, false); 557 552 list_del(&tmp_list->list); 553 + kfree(tmp_list); 554 + } 555 + } 556 + 557 + void cifs_close_all_deferred_files_sb(struct cifs_sb_info *cifs_sb) 558 + { 559 + struct rb_root *root = &cifs_sb->tlink_tree; 560 + struct rb_node *node; 561 + struct cifs_tcon *tcon; 562 + struct tcon_link *tlink; 563 + struct tcon_list *tmp_list, *q; 564 + LIST_HEAD(tcon_head); 565 + 566 + spin_lock(&cifs_sb->tlink_tree_lock); 567 + for (node = rb_first(root); node; node = rb_next(node)) { 568 + tlink = rb_entry(node, struct tcon_link, tl_rbnode); 569 + tcon = tlink_tcon(tlink); 570 + if (IS_ERR(tcon)) 571 + continue; 572 + tmp_list = kmalloc_obj(struct tcon_list, GFP_ATOMIC); 573 + if (tmp_list == NULL) 574 + break; 575 + tmp_list->tcon = tcon; 576 + /* Take a reference on tcon to prevent it from being freed */ 577 + spin_lock(&tcon->tc_lock); 578 + ++tcon->tc_count; 579 + trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count, 580 + netfs_trace_tcon_ref_get_close_defer_files); 581 + spin_unlock(&tcon->tc_lock); 582 + list_add_tail(&tmp_list->entry, &tcon_head); 583 + } 584 + spin_unlock(&cifs_sb->tlink_tree_lock); 585 + 586 + list_for_each_entry_safe(tmp_list, q, &tcon_head, entry) { 587 + cifs_close_all_deferred_files(tmp_list->tcon); 588 + list_del(&tmp_list->entry); 589 + cifs_put_tcon(tmp_list->tcon, netfs_trace_tcon_ref_put_close_defer_files); 558 590 kfree(tmp_list); 559 591 } 560 592 }
+2 -1
fs/smb/client/smb1encrypt.c
··· 11 11 12 12 #include <linux/fips.h> 13 13 #include <crypto/md5.h> 14 + #include <crypto/utils.h> 14 15 #include "cifsproto.h" 15 16 #include "smb1proto.h" 16 17 #include "cifs_debug.h" ··· 132 131 /* cifs_dump_mem("what we think it should be: ", 133 132 what_we_think_sig_should_be, 16); */ 134 133 135 - if (memcmp(server_response_sig, what_we_think_sig_should_be, 8)) 134 + if (crypto_memneq(server_response_sig, what_we_think_sig_should_be, 8)) 136 135 return -EACCES; 137 136 else 138 137 return 0;
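The smb1encrypt.c hunk above swaps memcmp() for crypto_memneq() when verifying the server's signature, so the comparison runs in constant time instead of exiting at the first mismatching byte. A minimal user-space sketch of the idea (the name ct_memneq and the standalone framing are hypothetical; the kernel's crypto_memneq() is additionally hardened against compiler optimizations):

```c
#include <stddef.h>

/* XOR-accumulate every byte instead of returning at the first mismatch,
 * so the running time does not leak where two MACs diverge the way an
 * early-exit memcmp() can. Returns nonzero iff the buffers differ. */
static unsigned char ct_memneq(const void *a, const void *b, size_t len)
{
	const unsigned char *pa = a, *pb = b;
	unsigned char diff = 0;
	size_t i;

	for (i = 0; i < len; i++)
		diff |= pa[i] ^ pb[i];	/* no data-dependent branch */
	return diff;
}
```

As in the hunk, a caller would treat any nonzero result as an authentication failure (-EACCES).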
+12
fs/smb/client/smb2glob.h
··· 46 46 #define END_OF_CHAIN 4 47 47 #define RELATED_REQUEST 8 48 48 49 + /* 50 + ***************************************************************** 51 + * Struct definitions go here 52 + ***************************************************************** 53 + */ 54 + 55 + struct status_to_posix_error { 56 + __u32 smb2_status; 57 + int posix_error; 58 + char *status_string; 59 + }; 60 + 49 61 #endif /* _SMB2_GLOB_H */
+5 -3
fs/smb/client/smb2inode.c
··· 325 325 cfile->fid.volatile_fid, 326 326 SMB_FIND_FILE_POSIX_INFO, 327 327 SMB2_O_INFO_FILE, 0, 328 - sizeof(struct smb311_posix_qinfo *) + 328 + sizeof(struct smb311_posix_qinfo) + 329 329 (PATH_MAX * 2) + 330 330 (sizeof(struct smb_sid) * 2), 0, NULL); 331 331 } else { ··· 335 335 COMPOUND_FID, 336 336 SMB_FIND_FILE_POSIX_INFO, 337 337 SMB2_O_INFO_FILE, 0, 338 - sizeof(struct smb311_posix_qinfo *) + 338 + sizeof(struct smb311_posix_qinfo) + 339 339 (PATH_MAX * 2) + 340 340 (sizeof(struct smb_sid) * 2), 0, NULL); 341 341 } ··· 1216 1216 memset(resp_buftype, 0, sizeof(resp_buftype)); 1217 1217 memset(rsp_iov, 0, sizeof(rsp_iov)); 1218 1218 1219 + memset(open_iov, 0, sizeof(open_iov)); 1219 1220 rqst[0].rq_iov = open_iov; 1220 1221 rqst[0].rq_nvec = ARRAY_SIZE(open_iov); 1221 1222 ··· 1241 1240 creq = rqst[0].rq_iov[0].iov_base; 1242 1241 creq->ShareAccess = FILE_SHARE_DELETE_LE; 1243 1242 1243 + memset(&close_iov, 0, sizeof(close_iov)); 1244 1244 rqst[1].rq_iov = &close_iov; 1245 1245 rqst[1].rq_nvec = 1; 1246 1246 1247 1247 rc = SMB2_close_init(tcon, server, &rqst[1], 1248 1248 COMPOUND_FID, COMPOUND_FID, false); 1249 - smb2_set_related(&rqst[1]); 1250 1249 if (rc) 1251 1250 goto err_free; 1251 + smb2_set_related(&rqst[1]); 1252 1252 1253 1253 if (retries) { 1254 1254 /* Back-off before retry */
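Two of the smb2inode.c changes above replace sizeof(struct smb311_posix_qinfo *) with sizeof(struct smb311_posix_qinfo): the former measures a pointer, not the structure, and so under-sizes the requested response buffer. A stand-alone illustration of the pitfall (struct qinfo is a made-up stand-in, not the real smb311_posix_qinfo layout):

```c
#include <stddef.h>

/* Hypothetical stand-in for smb311_posix_qinfo; the real layout differs. */
struct qinfo {
	unsigned long long creation_time;
	unsigned long long end_of_file;
	unsigned int mode;
	unsigned int nlink;
};

/* sizeof applied to a pointer type yields the pointer width (typically
 * 4 or 8 bytes), not the size of the pointed-to struct - the bug fixed
 * in the hunk above. */
size_t buggy_output_len(void)
{
	return sizeof(struct qinfo *);	/* pointer size only */
}

size_t fixed_output_len(void)
{
	return sizeof(struct qinfo);	/* size of the whole struct */
}
```

The same pitfall is why the dead SMB311_posix_query_info() removed later in this merge also computed its output_len from a pointer type.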
+15 -13
fs/smb/client/smb2maperror.c
··· 8 8 * 9 9 */ 10 10 #include <linux/errno.h> 11 - #include "cifsglob.h" 12 11 #include "cifsproto.h" 13 12 #include "cifs_debug.h" 14 13 #include "smb2proto.h" 15 14 #include "smb2glob.h" 16 15 #include "../common/smb2status.h" 17 16 #include "trace.h" 18 - 19 - struct status_to_posix_error { 20 - __u32 smb2_status; 21 - int posix_error; 22 - char *status_string; 23 - }; 24 17 25 18 static const struct status_to_posix_error smb2_error_map_table[] = { 26 19 /* ··· 108 115 return 0; 109 116 } 110 117 111 - #define SMB_CLIENT_KUNIT_AVAILABLE \ 112 - ((IS_MODULE(CONFIG_CIFS) && IS_ENABLED(CONFIG_KUNIT)) || \ 113 - (IS_BUILTIN(CONFIG_CIFS) && IS_BUILTIN(CONFIG_KUNIT))) 118 + #if IS_ENABLED(CONFIG_SMB_KUNIT_TESTS) 119 + /* Previous prototype for eliminating the build warning. */ 120 + const struct status_to_posix_error *smb2_get_err_map_test(__u32 smb2_status); 114 121 115 - #if SMB_CLIENT_KUNIT_AVAILABLE && IS_ENABLED(CONFIG_SMB_KUNIT_TESTS) 116 - #include "smb2maperror_test.c" 117 - #endif /* CONFIG_SMB_KUNIT_TESTS */ 122 + const struct status_to_posix_error *smb2_get_err_map_test(__u32 smb2_status) 123 + { 124 + return smb2_get_err_map(smb2_status); 125 + } 126 + EXPORT_SYMBOL_GPL(smb2_get_err_map_test); 127 + 128 + const struct status_to_posix_error *smb2_error_map_table_test = smb2_error_map_table; 129 + EXPORT_SYMBOL_GPL(smb2_error_map_table_test); 130 + 131 + unsigned int smb2_error_map_num = ARRAY_SIZE(smb2_error_map_table); 132 + EXPORT_SYMBOL_GPL(smb2_error_map_num); 133 + #endif
+9 -3
fs/smb/client/smb2maperror_test.c
··· 9 9 */ 10 10 11 11 #include <kunit/test.h> 12 + #include "smb2glob.h" 13 + 14 + const struct status_to_posix_error *smb2_get_err_map_test(__u32 smb2_status); 15 + extern const struct status_to_posix_error *smb2_error_map_table_test; 16 + extern unsigned int smb2_error_map_num; 12 17 13 18 static void 14 19 test_cmp_map(struct kunit *test, const struct status_to_posix_error *expect) 15 20 { 16 21 const struct status_to_posix_error *result; 17 22 18 - result = smb2_get_err_map(expect->smb2_status); 23 + result = smb2_get_err_map_test(expect->smb2_status); 19 24 KUNIT_EXPECT_PTR_NE(test, NULL, result); 20 25 KUNIT_EXPECT_EQ(test, expect->smb2_status, result->smb2_status); 21 26 KUNIT_EXPECT_EQ(test, expect->posix_error, result->posix_error); ··· 31 26 { 32 27 unsigned int i; 33 28 34 - for (i = 0; i < ARRAY_SIZE(smb2_error_map_table); i++) 35 - test_cmp_map(test, &smb2_error_map_table[i]); 29 + for (i = 0; i < smb2_error_map_num; i++) 30 + test_cmp_map(test, &smb2_error_map_table_test[i]); 36 31 } 37 32 38 33 static struct kunit_case maperror_test_cases[] = { ··· 48 43 kunit_test_suite(maperror_suite); 49 44 50 45 MODULE_LICENSE("GPL"); 46 + MODULE_DESCRIPTION("KUnit tests of SMB2 maperror");
-18
fs/smb/client/smb2pdu.c
··· 3989 3989 NULL); 3990 3990 } 3991 3991 3992 - #if 0 3993 - /* currently unused, as now we are doing compounding instead (see smb311_posix_query_path_info) */ 3994 - int 3995 - SMB311_posix_query_info(const unsigned int xid, struct cifs_tcon *tcon, 3996 - u64 persistent_fid, u64 volatile_fid, 3997 - struct smb311_posix_qinfo *data, u32 *plen) 3998 - { 3999 - size_t output_len = sizeof(struct smb311_posix_qinfo *) + 4000 - (sizeof(struct smb_sid) * 2) + (PATH_MAX * 2); 4001 - *plen = 0; 4002 - 4003 - return query_info(xid, tcon, persistent_fid, volatile_fid, 4004 - SMB_FIND_FILE_POSIX_INFO, SMB2_O_INFO_FILE, 0, 4005 - output_len, sizeof(struct smb311_posix_qinfo), (void **)&data, plen); 4006 - /* Note caller must free "data" (passed in above). It may be allocated in query_info call */ 4007 - } 4008 - #endif 4009 - 4010 3992 int 4011 3993 SMB2_query_acl(const unsigned int xid, struct cifs_tcon *tcon, 4012 3994 u64 persistent_fid, u64 volatile_fid,
+5 -2
fs/smb/client/smb2pdu.h
··· 224 224 __le32 Tag; 225 225 } __packed; 226 226 227 - /* See MS-FSCC 2.4.21 */ 227 + /* See MS-FSCC 2.4.26 */ 228 228 struct smb2_file_id_information { 229 229 __le64 VolumeSerialNumber; 230 230 __u64 PersistentFileId; /* opaque endianness */ ··· 251 251 252 252 extern char smb2_padding[7]; 253 253 254 - /* equivalent of the contents of SMB3.1.1 POSIX open context response */ 254 + /* 255 + * See POSIX-SMB2 2.2.14.2.16 256 + * Link: https://gitlab.com/samba-team/smb3-posix-spec/-/blob/master/smb3_posix_extensions.md 257 + */ 255 258 struct create_posix_rsp { 256 259 u32 nlink; 257 260 u32 reparse_tag;
-3
fs/smb/client/smb2proto.h
··· 167 167 struct cifs_tcon *tcon, struct TCP_Server_Info *server, 168 168 u64 persistent_fid, u64 volatile_fid); 169 169 void SMB2_flush_free(struct smb_rqst *rqst); 170 - int SMB311_posix_query_info(const unsigned int xid, struct cifs_tcon *tcon, 171 - u64 persistent_fid, u64 volatile_fid, 172 - struct smb311_posix_qinfo *data, u32 *plen); 173 170 int SMB2_query_info(const unsigned int xid, struct cifs_tcon *tcon, 174 171 u64 persistent_fid, u64 volatile_fid, 175 172 struct smb2_file_all_info *data);
+3 -1
fs/smb/client/smb2transport.c
··· 20 20 #include <linux/highmem.h> 21 21 #include <crypto/aead.h> 22 22 #include <crypto/sha2.h> 23 + #include <crypto/utils.h> 23 24 #include "cifsglob.h" 24 25 #include "cifsproto.h" 25 26 #include "smb2proto.h" ··· 618 617 if (rc) 619 618 return rc; 620 619 621 - if (memcmp(server_response_sig, shdr->Signature, SMB2_SIGNATURE_SIZE)) { 620 + if (crypto_memneq(server_response_sig, shdr->Signature, 621 + SMB2_SIGNATURE_SIZE)) { 622 622 cifs_dbg(VFS, "sign fail cmd 0x%x message id 0x%llx\n", 623 623 shdr->Command, shdr->MessageId); 624 624 return -EACCES;
+2
fs/smb/client/trace.h
··· 176 176 EM(netfs_trace_tcon_ref_get_cached_laundromat, "GET Ch-Lau") \ 177 177 EM(netfs_trace_tcon_ref_get_cached_lease_break, "GET Ch-Lea") \ 178 178 EM(netfs_trace_tcon_ref_get_cancelled_close, "GET Cn-Cls") \ 179 + EM(netfs_trace_tcon_ref_get_close_defer_files, "GET Cl-Def") \ 179 180 EM(netfs_trace_tcon_ref_get_dfs_refer, "GET DfsRef") \ 180 181 EM(netfs_trace_tcon_ref_get_find, "GET Find ") \ 181 182 EM(netfs_trace_tcon_ref_get_find_sess_tcon, "GET FndSes") \ ··· 188 187 EM(netfs_trace_tcon_ref_put_cancelled_close, "PUT Cn-Cls") \ 189 188 EM(netfs_trace_tcon_ref_put_cancelled_close_fid, "PUT Cn-Fid") \ 190 189 EM(netfs_trace_tcon_ref_put_cancelled_mid, "PUT Cn-Mid") \ 190 + EM(netfs_trace_tcon_ref_put_close_defer_files, "PUT Cl-Def") \ 191 191 EM(netfs_trace_tcon_ref_put_mnt_ctx, "PUT MntCtx") \ 192 192 EM(netfs_trace_tcon_ref_put_dfs_refer, "PUT DfsRfr") \ 193 193 EM(netfs_trace_tcon_ref_put_reconnect_server, "PUT Reconn") \
+2 -20
fs/smb/server/auth.c
··· 589 589 if (!(conn->dialect >= SMB30_PROT_ID && signing->binding)) 590 590 memcpy(chann->smb3signingkey, key, SMB3_SIGN_KEY_SIZE); 591 591 592 - ksmbd_debug(AUTH, "dumping generated AES signing keys\n"); 592 + ksmbd_debug(AUTH, "generated SMB3 signing key\n"); 593 593 ksmbd_debug(AUTH, "Session Id %llu\n", sess->id); 594 - ksmbd_debug(AUTH, "Session Key %*ph\n", 595 - SMB2_NTLMV2_SESSKEY_SIZE, sess->sess_key); 596 - ksmbd_debug(AUTH, "Signing Key %*ph\n", 597 - SMB3_SIGN_KEY_SIZE, key); 598 594 return 0; 599 595 } 600 596 ··· 648 652 ptwin->decryption.context, 649 653 sess->smb3decryptionkey, SMB3_ENC_DEC_KEY_SIZE); 650 654 651 - ksmbd_debug(AUTH, "dumping generated AES encryption keys\n"); 655 + ksmbd_debug(AUTH, "generated SMB3 encryption/decryption keys\n"); 652 656 ksmbd_debug(AUTH, "Cipher type %d\n", conn->cipher_type); 653 657 ksmbd_debug(AUTH, "Session Id %llu\n", sess->id); 654 - ksmbd_debug(AUTH, "Session Key %*ph\n", 655 - SMB2_NTLMV2_SESSKEY_SIZE, sess->sess_key); 656 - if (conn->cipher_type == SMB2_ENCRYPTION_AES256_CCM || 657 - conn->cipher_type == SMB2_ENCRYPTION_AES256_GCM) { 658 - ksmbd_debug(AUTH, "ServerIn Key %*ph\n", 659 - SMB3_GCM256_CRYPTKEY_SIZE, sess->smb3encryptionkey); 660 - ksmbd_debug(AUTH, "ServerOut Key %*ph\n", 661 - SMB3_GCM256_CRYPTKEY_SIZE, sess->smb3decryptionkey); 662 - } else { 663 - ksmbd_debug(AUTH, "ServerIn Key %*ph\n", 664 - SMB3_GCM128_CRYPTKEY_SIZE, sess->smb3encryptionkey); 665 - ksmbd_debug(AUTH, "ServerOut Key %*ph\n", 666 - SMB3_GCM128_CRYPTKEY_SIZE, sess->smb3decryptionkey); 667 - } 668 658 } 669 659 670 660 void ksmbd_gen_smb30_encryptionkey(struct ksmbd_conn *conn,
+25 -10
fs/smb/server/oplock.c
··· 120 120 kfree(lease); 121 121 } 122 122 123 - static void free_opinfo(struct oplock_info *opinfo) 123 + static void __free_opinfo(struct oplock_info *opinfo) 124 124 { 125 125 if (opinfo->is_lease) 126 126 free_lease(opinfo); 127 127 if (opinfo->conn && atomic_dec_and_test(&opinfo->conn->refcnt)) 128 128 kfree(opinfo->conn); 129 129 kfree(opinfo); 130 + } 131 + 132 + static void free_opinfo_rcu(struct rcu_head *rcu) 133 + { 134 + struct oplock_info *opinfo = container_of(rcu, struct oplock_info, rcu); 135 + 136 + __free_opinfo(opinfo); 137 + } 138 + 139 + static void free_opinfo(struct oplock_info *opinfo) 140 + { 141 + call_rcu(&opinfo->rcu, free_opinfo_rcu); 130 142 } 131 143 132 144 struct oplock_info *opinfo_get(struct ksmbd_file *fp) ··· 188 176 free_opinfo(opinfo); 189 177 } 190 178 191 - static void opinfo_add(struct oplock_info *opinfo) 179 + static void opinfo_add(struct oplock_info *opinfo, struct ksmbd_file *fp) 192 180 { 193 - struct ksmbd_inode *ci = opinfo->o_fp->f_ci; 181 + struct ksmbd_inode *ci = fp->f_ci; 194 182 195 183 down_write(&ci->m_lock); 196 184 list_add(&opinfo->op_entry, &ci->m_op_list); ··· 1135 1123 1136 1124 rcu_read_lock(); 1137 1125 opinfo = rcu_dereference(fp->f_opinfo); 1138 - rcu_read_unlock(); 1139 1126 1140 - if (!opinfo || !opinfo->is_lease || opinfo->o_lease->version != 2) 1127 + if (!opinfo || !opinfo->is_lease || opinfo->o_lease->version != 2) { 1128 + rcu_read_unlock(); 1141 1129 return; 1130 + } 1131 + rcu_read_unlock(); 1142 1132 1143 1133 p_ci = ksmbd_inode_lookup_lock(fp->filp->f_path.dentry->d_parent); 1144 1134 if (!p_ci) ··· 1291 1277 set_oplock_level(opinfo, req_op_level, lctx); 1292 1278 1293 1279 out: 1294 - rcu_assign_pointer(fp->f_opinfo, opinfo); 1295 - opinfo->o_fp = fp; 1296 - 1297 1280 opinfo_count_inc(fp); 1298 - opinfo_add(opinfo); 1281 + opinfo_add(opinfo, fp); 1282 + 1299 1283 if (opinfo->is_lease) { 1300 1284 err = add_lease_global_list(opinfo); 1301 1285 if (err) 1302 1286 goto err_out; 1303 1287 
} 1304 1288 1289 + rcu_assign_pointer(fp->f_opinfo, opinfo); 1290 + opinfo->o_fp = fp; 1291 + 1305 1292 return 0; 1306 1293 err_out: 1307 - free_opinfo(opinfo); 1294 + __free_opinfo(opinfo); 1308 1295 return err; 1309 1296 } 1310 1297
+3 -2
fs/smb/server/oplock.h
··· 69 69 struct lease *o_lease; 70 70 struct list_head op_entry; 71 71 struct list_head lease_entry; 72 - wait_queue_head_t oplock_q; /* Other server threads */ 73 - wait_queue_head_t oplock_brk; /* oplock breaking wait */ 72 + wait_queue_head_t oplock_q; /* Other server threads */ 73 + wait_queue_head_t oplock_brk; /* oplock breaking wait */ 74 + struct rcu_head rcu; 74 75 }; 75 76 76 77 struct lease_break_info {
+4 -4
fs/smb/server/smb2pdu.c
··· 3012 3012 goto err_out2; 3013 3013 } 3014 3014 3015 + fp = dh_info.fp; 3016 + 3015 3017 if (ksmbd_override_fsids(work)) { 3016 3018 rc = -ENOMEM; 3017 3019 ksmbd_put_durable_fd(dh_info.fp); 3018 3020 goto err_out2; 3019 3021 } 3020 3022 3021 - fp = dh_info.fp; 3022 3023 file_info = FILE_OPENED; 3023 3024 3024 3025 rc = ksmbd_vfs_getattr(&fp->filp->f_path, &stat); ··· 3617 3616 3618 3617 reconnected_fp: 3619 3618 rsp->StructureSize = cpu_to_le16(89); 3620 - rcu_read_lock(); 3621 - opinfo = rcu_dereference(fp->f_opinfo); 3619 + opinfo = opinfo_get(fp); 3622 3620 rsp->OplockLevel = opinfo != NULL ? opinfo->level : 0; 3623 - rcu_read_unlock(); 3624 3621 rsp->Flags = 0; 3625 3622 rsp->CreateAction = cpu_to_le32(file_info); 3626 3623 rsp->CreationTime = cpu_to_le64(fp->create_time); ··· 3659 3660 next_ptr = &lease_ccontext->Next; 3660 3661 next_off = conn->vals->create_lease_size; 3661 3662 } 3663 + opinfo_put(opinfo); 3662 3664 3663 3665 if (maximal_access_ctxt) { 3664 3666 struct create_context *mxac_ccontext;
+4 -1
fs/smb/server/smb2pdu.h
··· 83 83 } Data; 84 84 } __packed; 85 85 86 - /* equivalent of the contents of SMB3.1.1 POSIX open context response */ 86 + /* 87 + * See POSIX-SMB2 2.2.14.2.16 88 + * Link: https://gitlab.com/samba-team/smb3-posix-spec/-/blob/master/smb3_posix_extensions.md 89 + */ 87 90 struct create_posix_rsp { 88 91 struct create_context_hdr ccontext; 89 92 __u8 Name[16];
+5 -5
fs/smb/server/vfs_cache.c
··· 87 87 88 88 rcu_read_lock(); 89 89 opinfo = rcu_dereference(fp->f_opinfo); 90 - rcu_read_unlock(); 91 - 92 - if (!opinfo) { 93 - seq_printf(m, " %-15s", " "); 94 - } else { 90 + if (opinfo) { 95 91 const struct ksmbd_const_name *const_names; 96 92 int count; 97 93 unsigned int level; ··· 101 105 count = ARRAY_SIZE(ksmbd_oplock_const_names); 102 106 level = opinfo->level; 103 107 } 108 + rcu_read_unlock(); 104 109 ksmbd_proc_show_const_name(m, " %-15s", 105 110 const_names, count, level); 111 + } else { 112 + rcu_read_unlock(); 113 + seq_printf(m, " %-15s", " "); 106 114 } 107 115 108 116 seq_printf(m, " %#010x %#010x %s\n",
+3
fs/verity/Kconfig
··· 2 2 3 3 config FS_VERITY 4 4 bool "FS Verity (read-only file-based authenticity protection)" 5 + # Filesystems cache the Merkle tree at a 64K aligned offset in the 6 + # pagecache. That approach assumes the page size is at most 64K. 7 + depends on PAGE_SHIFT <= 16 5 8 select CRYPTO_HASH_INFO 6 9 select CRYPTO_LIB_SHA256 7 10 select CRYPTO_LIB_SHA512
+3 -1
include/asm-generic/vmlinux.lds.h
··· 848 848 849 849 /* Required sections not related to debugging. */ 850 850 #define ELF_DETAILS \ 851 - .modinfo : { *(.modinfo) . = ALIGN(8); } \ 852 851 .comment 0 : { *(.comment) } \ 853 852 .symtab 0 : { *(.symtab) } \ 854 853 .strtab 0 : { *(.strtab) } \ 855 854 .shstrtab 0 : { *(.shstrtab) } 855 + 856 + #define MODINFO \ 857 + .modinfo : { *(.modinfo) . = ALIGN(8); } 856 858 857 859 #ifdef CONFIG_GENERIC_BUG 858 860 #define BUG_TABLE \
+2
include/drm/display/drm_dp.h
··· 571 571 # define DP_PANEL_REPLAY_LINK_OFF_SUPPORTED_IN_PR_AFTER_ADAPTIVE_SYNC_SDP (1 << 7) 572 572 573 573 #define DP_PANEL_REPLAY_CAP_X_GRANULARITY 0xb2 574 + # define DP_PANEL_REPLAY_FULL_LINE_GRANULARITY 0xffff 575 + 574 576 #define DP_PANEL_REPLAY_CAP_Y_GRANULARITY 0xb4 575 577 576 578 /* Link Configuration */
+28 -16
include/kunit/run-in-irq-context.h
··· 12 12 #include <linux/hrtimer.h> 13 13 #include <linux/workqueue.h> 14 14 15 - #define KUNIT_IRQ_TEST_HRTIMER_INTERVAL us_to_ktime(5) 16 - 17 15 struct kunit_irq_test_state { 18 16 bool (*func)(void *test_specific_state); 19 17 void *test_specific_state; 20 18 bool task_func_reported_failure; 21 19 bool hardirq_func_reported_failure; 22 20 bool softirq_func_reported_failure; 21 + atomic_t task_func_calls; 23 22 atomic_t hardirq_func_calls; 24 23 atomic_t softirq_func_calls; 24 + ktime_t interval; 25 25 struct hrtimer timer; 26 26 struct work_struct bh_work; 27 27 }; ··· 30 30 { 31 31 struct kunit_irq_test_state *state = 32 32 container_of(timer, typeof(*state), timer); 33 + int task_calls, hardirq_calls, softirq_calls; 33 34 34 35 WARN_ON_ONCE(!in_hardirq()); 35 - atomic_inc(&state->hardirq_func_calls); 36 + task_calls = atomic_read(&state->task_func_calls); 37 + hardirq_calls = atomic_inc_return(&state->hardirq_func_calls); 38 + softirq_calls = atomic_read(&state->softirq_func_calls); 39 + 40 + /* 41 + * If the timer is firing too often for the softirq or task to ever have 42 + * a chance to run, increase the timer interval. This is needed on very 43 + * slow systems. 44 + */ 45 + if (hardirq_calls >= 20 && (softirq_calls == 0 || task_calls == 0)) 46 + state->interval = ktime_add_ns(state->interval, 250); 36 47 37 48 if (!state->func(state->test_specific_state)) 38 49 state->hardirq_func_reported_failure = true; 39 50 40 - hrtimer_forward_now(&state->timer, KUNIT_IRQ_TEST_HRTIMER_INTERVAL); 51 + hrtimer_forward_now(&state->timer, state->interval); 41 52 queue_work(system_bh_wq, &state->bh_work); 42 53 return HRTIMER_RESTART; 43 54 } ··· 97 86 struct kunit_irq_test_state state = { 98 87 .func = func, 99 88 .test_specific_state = test_specific_state, 89 + /* 90 + * Start with a 5us timer interval. If the system can't keep 91 + * up, kunit_irq_test_timer_func() will increase it. 
92 + */ 93 + .interval = us_to_ktime(5), 100 94 }; 101 95 unsigned long end_jiffies; 102 - int hardirq_calls, softirq_calls; 103 - bool allctx = false; 96 + int task_calls, hardirq_calls, softirq_calls; 104 97 105 98 /* 106 99 * Set up a hrtimer (the way we access hardirq context) and a work ··· 119 104 * and hardirq), or 1 second, whichever comes first. 120 105 */ 121 106 end_jiffies = jiffies + HZ; 122 - hrtimer_start(&state.timer, KUNIT_IRQ_TEST_HRTIMER_INTERVAL, 123 - HRTIMER_MODE_REL_HARD); 124 - for (int task_calls = 0, calls = 0; 125 - ((calls < max_iterations) || !allctx) && 126 - !time_after(jiffies, end_jiffies); 127 - task_calls++) { 107 + hrtimer_start(&state.timer, state.interval, HRTIMER_MODE_REL_HARD); 108 + do { 128 109 if (!func(test_specific_state)) 129 110 state.task_func_reported_failure = true; 130 111 112 + task_calls = atomic_inc_return(&state.task_func_calls); 131 113 hardirq_calls = atomic_read(&state.hardirq_func_calls); 132 114 softirq_calls = atomic_read(&state.softirq_func_calls); 133 - calls = task_calls + hardirq_calls + softirq_calls; 134 - allctx = (task_calls > 0) && (hardirq_calls > 0) && 135 - (softirq_calls > 0); 136 - } 115 + } while ((task_calls + hardirq_calls + softirq_calls < max_iterations || 116 + (task_calls == 0 || hardirq_calls == 0 || 117 + softirq_calls == 0)) && 118 + !time_after(jiffies, end_jiffies)); 137 119 138 120 /* Cancel the timer and work. */ 139 121 hrtimer_cancel(&state.timer);
+2
include/linux/device/bus.h
··· 35 35 * otherwise. It may also return error code if determining that 36 36 * the driver supports the device is not possible. In case of 37 37 * -EPROBE_DEFER it will queue the device for deferred probing. 38 + * Note: This callback may be invoked with or without the device 39 + * lock held. 38 40 * @uevent: Called when a device is added, removed, or a few other things 39 41 * that generate uevents to add the environment variables. 40 42 * @probe: Called when a new device or driver add to this bus, and callback
+7 -4
include/linux/eventpoll.h
··· 82 82 epoll_put_uevent(__poll_t revents, __u64 data, 83 83 struct epoll_event __user *uevent) 84 84 { 85 - if (__put_user(revents, &uevent->events) || 86 - __put_user(data, &uevent->data)) 87 - return NULL; 88 - 85 + scoped_user_write_access_size(uevent, sizeof(*uevent), efault) { 86 + unsafe_put_user(revents, &uevent->events, efault); 87 + unsafe_put_user(data, &uevent->data, efault); 88 + } 89 89 return uevent+1; 90 + 91 + efault: 92 + return NULL; 90 93 } 91 94 #endif 92 95
+6
include/linux/hid.h
··· 836 836 * raw_event and event should return negative on error, any other value will 837 837 * pass the event on to .event() typically return 0 for success. 838 838 * 839 + * report_fixup must return a report descriptor pointer whose lifetime is at 840 + * least that of the input rdesc. This is usually done by mutating the input 841 + * rdesc and returning it or a sub-portion of it. In case a new buffer is 842 + * allocated and returned, the implementation of report_fixup is responsible for 843 + * freeing it later. 844 + * 839 845 * input_mapping shall return a negative value to completely ignore this usage 840 846 * (e.g. doubled or invalid usage), zero to continue with parsing of this 841 847 * usage by generic code (no special handling needed) or positive to skip
+6 -1
include/linux/ipv6.h
··· 333 333 }; 334 334 335 335 #if IS_ENABLED(CONFIG_IPV6) 336 - bool ipv6_mod_enabled(void); 336 + extern int disable_ipv6_mod; 337 + 338 + static inline bool ipv6_mod_enabled(void) 339 + { 340 + return disable_ipv6_mod == 0; 341 + } 337 342 338 343 static inline struct ipv6_pinfo *inet6_sk(const struct sock *__sk) 339 344 {
+9 -1
include/linux/migrate.h
··· 65 65 66 66 int migrate_huge_page_move_mapping(struct address_space *mapping, 67 67 struct folio *dst, struct folio *src); 68 - void migration_entry_wait_on_locked(softleaf_t entry, spinlock_t *ptl) 68 + void softleaf_entry_wait_on_locked(softleaf_t entry, spinlock_t *ptl) 69 69 __releases(ptl); 70 70 void folio_migrate_flags(struct folio *newfolio, struct folio *folio); 71 71 int folio_migrate_mapping(struct address_space *mapping, ··· 95 95 static inline int set_movable_ops(const struct movable_operations *ops, enum pagetype type) 96 96 { 97 97 return -ENOSYS; 98 + } 99 + 100 + static inline void softleaf_entry_wait_on_locked(softleaf_t entry, spinlock_t *ptl) 101 + __releases(ptl) 102 + { 103 + WARN_ON_ONCE(1); 104 + 105 + spin_unlock(ptl); 98 106 } 99 107 100 108 #endif /* CONFIG_MIGRATION */
+6 -11
include/linux/mm.h
··· 3514 3514 static inline void ptlock_free(struct ptdesc *ptdesc) {} 3515 3515 #endif /* defined(CONFIG_SPLIT_PTE_PTLOCKS) */ 3516 3516 3517 - static inline unsigned long ptdesc_nr_pages(const struct ptdesc *ptdesc) 3518 - { 3519 - return compound_nr(ptdesc_page(ptdesc)); 3520 - } 3521 - 3522 3517 static inline void __pagetable_ctor(struct ptdesc *ptdesc) 3523 3518 { 3524 - pg_data_t *pgdat = NODE_DATA(memdesc_nid(ptdesc->pt_flags)); 3519 + struct folio *folio = ptdesc_folio(ptdesc); 3525 3520 3526 - __SetPageTable(ptdesc_page(ptdesc)); 3527 - mod_node_page_state(pgdat, NR_PAGETABLE, ptdesc_nr_pages(ptdesc)); 3521 + __folio_set_pgtable(folio); 3522 + lruvec_stat_add_folio(folio, NR_PAGETABLE); 3528 3523 } 3529 3524 3530 3525 static inline void pagetable_dtor(struct ptdesc *ptdesc) 3531 3526 { 3532 - pg_data_t *pgdat = NODE_DATA(memdesc_nid(ptdesc->pt_flags)); 3527 + struct folio *folio = ptdesc_folio(ptdesc); 3533 3528 3534 3529 ptlock_free(ptdesc); 3535 - __ClearPageTable(ptdesc_page(ptdesc)); 3536 - mod_node_page_state(pgdat, NR_PAGETABLE, -ptdesc_nr_pages(ptdesc)); 3530 + __folio_clear_pgtable(folio); 3531 + lruvec_stat_sub_folio(folio, NR_PAGETABLE); 3537 3532 } 3538 3533 3539 3534 static inline void pagetable_dtor_free(struct ptdesc *ptdesc)
+16 -15
include/linux/mmu_notifier.h
··· 234 234 }; 235 235 236 236 /** 237 - * struct mmu_interval_notifier_ops 237 + * struct mmu_interval_notifier_ops - callback for range notification 238 238 * @invalidate: Upon return the caller must stop using any SPTEs within this 239 239 * range. This function can sleep. Return false only if sleeping 240 240 * was required but mmu_notifier_range_blockable(range) is false. ··· 309 309 310 310 /** 311 311 * mmu_interval_set_seq - Save the invalidation sequence 312 - * @interval_sub - The subscription passed to invalidate 313 - * @cur_seq - The cur_seq passed to the invalidate() callback 312 + * @interval_sub: The subscription passed to invalidate 313 + * @cur_seq: The cur_seq passed to the invalidate() callback 314 314 * 315 315 * This must be called unconditionally from the invalidate callback of a 316 316 * struct mmu_interval_notifier_ops under the same lock that is used to call ··· 329 329 330 330 /** 331 331 * mmu_interval_read_retry - End a read side critical section against a VA range 332 - * interval_sub: The subscription 333 - * seq: The return of the paired mmu_interval_read_begin() 332 + * @interval_sub: The subscription 333 + * @seq: The return of the paired mmu_interval_read_begin() 334 334 * 335 335 * This MUST be called under a user provided lock that is also held 336 336 * unconditionally by op->invalidate() when it calls mmu_interval_set_seq(). ··· 338 338 * Each call should be paired with a single mmu_interval_read_begin() and 339 339 * should be used to conclude the read side. 340 340 * 341 - * Returns true if an invalidation collided with this critical section, and 341 + * Returns: true if an invalidation collided with this critical section, and 342 342 * the caller should retry. 
343 343 */ 344 344 static inline bool ··· 350 350 351 351 /** 352 352 * mmu_interval_check_retry - Test if a collision has occurred 353 - * interval_sub: The subscription 354 - * seq: The return of the matching mmu_interval_read_begin() 353 + * @interval_sub: The subscription 354 + * @seq: The return of the matching mmu_interval_read_begin() 355 355 * 356 356 * This can be used in the critical section between mmu_interval_read_begin() 357 - * and mmu_interval_read_retry(). A return of true indicates an invalidation 358 - * has collided with this critical region and a future 359 - * mmu_interval_read_retry() will return true. 360 - * 361 - * False is not reliable and only suggests a collision may not have 362 - * occurred. It can be called many times and does not have to hold the user 363 - * provided lock. 357 + * and mmu_interval_read_retry(). 364 358 * 365 359 * This call can be used as part of loops and other expensive operations to 366 360 * expedite a retry. 361 + * It can be called many times and does not have to hold the user 362 + * provided lock. 363 + * 364 + * Returns: true indicates an invalidation has collided with this critical 365 + * region and a future mmu_interval_read_retry() will return true. 366 + * False is not reliable and only suggests a collision may not have 367 + * occurred. 367 368 */ 368 369 static inline bool 369 370 mmu_interval_check_retry(struct mmu_interval_notifier *interval_sub,
+32
include/linux/netdevice.h
··· 3576 3576 }; 3577 3577 DECLARE_PER_CPU(struct page_pool_bh, system_page_pool); 3578 3578 3579 + #define XMIT_RECURSION_LIMIT 8 3580 + 3579 3581 #ifndef CONFIG_PREEMPT_RT 3580 3582 static inline int dev_recursion_level(void) 3581 3583 { 3582 3584 return this_cpu_read(softnet_data.xmit.recursion); 3585 + } 3586 + 3587 + static inline bool dev_xmit_recursion(void) 3588 + { 3589 + return unlikely(__this_cpu_read(softnet_data.xmit.recursion) > 3590 + XMIT_RECURSION_LIMIT); 3591 + } 3592 + 3593 + static inline void dev_xmit_recursion_inc(void) 3594 + { 3595 + __this_cpu_inc(softnet_data.xmit.recursion); 3596 + } 3597 + 3598 + static inline void dev_xmit_recursion_dec(void) 3599 + { 3600 + __this_cpu_dec(softnet_data.xmit.recursion); 3583 3601 } 3584 3602 #else 3585 3603 static inline int dev_recursion_level(void) ··· 3605 3587 return current->net_xmit.recursion; 3606 3588 } 3607 3589 3590 + static inline bool dev_xmit_recursion(void) 3591 + { 3592 + return unlikely(current->net_xmit.recursion > XMIT_RECURSION_LIMIT); 3593 + } 3594 + 3595 + static inline void dev_xmit_recursion_inc(void) 3596 + { 3597 + current->net_xmit.recursion++; 3598 + } 3599 + 3600 + static inline void dev_xmit_recursion_dec(void) 3601 + { 3602 + current->net_xmit.recursion--; 3603 + } 3608 3604 #endif 3609 3605 3610 3606 void __netif_schedule(struct Qdisc *q);
+7 -7
include/linux/platform_data/mlxreg.h
··· 13 13 /** 14 14 * enum mlxreg_wdt_type - type of HW watchdog 15 15 * 16 - * TYPE1 HW watchdog implementation exist in old systems. 17 - * All new systems have TYPE2 HW watchdog. 18 - * TYPE3 HW watchdog can exist on all systems with new CPLD. 19 - * TYPE3 is selected by WD capability bit. 16 + * @MLX_WDT_TYPE1: HW watchdog implementation in old systems. 17 + * @MLX_WDT_TYPE2: All new systems have TYPE2 HW watchdog. 18 + * @MLX_WDT_TYPE3: HW watchdog that can exist on all systems with new CPLD. 19 + * TYPE3 is selected by WD capability bit. 20 20 */ 21 21 enum mlxreg_wdt_type { 22 22 MLX_WDT_TYPE1, ··· 35 35 * @MLXREG_HOTPLUG_LC_SYNCED: entry for line card synchronization events, coming 36 36 * after hardware-firmware synchronization handshake; 37 37 * @MLXREG_HOTPLUG_LC_READY: entry for line card ready events, indicating line card 38 - PHYs ready / unready state; 38 + * PHYs ready / unready state; 39 39 * @MLXREG_HOTPLUG_LC_ACTIVE: entry for line card active events, indicating firmware 40 40 * availability / unavailability for the ports on line card; 41 41 * @MLXREG_HOTPLUG_LC_THERMAL: entry for line card thermal shutdown events, positive ··· 123 123 * @reg_pwr: attribute power register; 124 124 * @reg_ena: attribute enable register; 125 125 * @mode: access mode; 126 - * @np - pointer to node platform associated with attribute; 127 - * @hpdev - hotplug device data; 126 + * @np: pointer to node platform associated with attribute; 127 + * @hpdev: hotplug device data; 128 128 * @notifier: pointer to event notifier block; 129 129 * @health_cntr: dynamic device health indication counter; 130 130 * @attached: true if device has been attached after good health indication;
+3 -2
include/linux/platform_data/x86/int3472.h
··· 26 26 #define INT3472_GPIO_TYPE_POWER_ENABLE 0x0b 27 27 #define INT3472_GPIO_TYPE_CLK_ENABLE 0x0c 28 28 #define INT3472_GPIO_TYPE_PRIVACY_LED 0x0d 29 + #define INT3472_GPIO_TYPE_DOVDD 0x10 29 30 #define INT3472_GPIO_TYPE_HANDSHAKE 0x12 30 31 #define INT3472_GPIO_TYPE_HOTPLUG_DETECT 0x13 31 32 ··· 34 33 #define INT3472_MAX_SENSOR_GPIOS 3 35 34 #define INT3472_MAX_REGULATORS 3 36 35 37 - /* E.g. "avdd\0" */ 38 - #define GPIO_SUPPLY_NAME_LENGTH 5 36 + /* E.g. "dovdd\0" */ 37 + #define GPIO_SUPPLY_NAME_LENGTH 6 39 38 /* 12 chars for acpi_dev_name() + "-", e.g. "ABCD1234:00-" */ 40 39 #define GPIO_REGULATOR_NAME_LENGTH (12 + GPIO_SUPPLY_NAME_LENGTH) 41 40 /* lower- and upper-case mapping */
+2 -2
include/linux/uaccess.h
··· 792 792 793 793 /** 794 794 * scoped_user_rw_access_size - Start a scoped user read/write access with given size 795 - * @uptr Pointer to the user space address to read from and write to 795 + * @uptr: Pointer to the user space address to read from and write to 796 796 * @size: Size of the access starting from @uptr 797 797 * @elbl: Error label to goto when the access region is rejected 798 798 * ··· 803 803 804 804 /** 805 805 * scoped_user_rw_access - Start a scoped user read/write access 806 - * @uptr Pointer to the user space address to read from and write to 806 + * @uptr: Pointer to the user space address to read from and write to 807 807 * @elbl: Error label to goto when the access region is rejected 808 808 * 809 809 * The size of the access starting from @uptr is determined via sizeof(*@uptr)).
+1
include/linux/usb/usbnet.h
··· 132 132 #define FLAG_MULTI_PACKET 0x2000 133 133 #define FLAG_RX_ASSEMBLE 0x4000 /* rx packets may span >1 frames */ 134 134 #define FLAG_NOARP 0x8000 /* device can't do ARP */ 135 + #define FLAG_NOMAXMTU 0x10000 /* allow max_mtu above hard_mtu */ 135 136 136 137 /* init device ... can sleep, or cause probe() failure */ 137 138 int (*bind)(struct usbnet *, struct usb_interface *);
+14
include/net/ip6_tunnel.h
··· 156 156 { 157 157 int pkt_len, err; 158 158 159 + if (unlikely(dev_recursion_level() > IP_TUNNEL_RECURSION_LIMIT)) { 160 + if (dev) { 161 + net_crit_ratelimited("Dead loop on virtual device %s, fix it urgently!\n", 162 + dev->name); 163 + DEV_STATS_INC(dev, tx_errors); 164 + } 165 + kfree_skb(skb); 166 + return; 167 + } 168 + 169 + dev_xmit_recursion_inc(); 170 + 159 171 memset(skb->cb, 0, sizeof(struct inet6_skb_parm)); 160 172 IP6CB(skb)->flags = ip6cb_flags; 161 173 pkt_len = skb->len - skb_inner_network_offset(skb); ··· 178 166 pkt_len = -1; 179 167 iptunnel_xmit_stats(dev, pkt_len); 180 168 } 169 + 170 + dev_xmit_recursion_dec(); 181 171 } 182 172 #endif 183 173 #endif
+7
include/net/ip_tunnels.h
··· 27 27 #include <net/ip6_route.h> 28 28 #endif 29 29 30 + /* Recursion limit for tunnel xmit to detect routing loops. 31 + * Unlike XMIT_RECURSION_LIMIT (8) used in the no-qdisc path, tunnel 32 + * recursion involves route lookups and full IP output, consuming much 33 + * more stack per level, so a lower limit is needed. 34 + */ 35 + #define IP_TUNNEL_RECURSION_LIMIT 4 36 + 30 37 /* Keep error state on tunnel for 30 sec */ 31 38 #define IPTUNNEL_ERR_TIMEO (30*HZ) 32 39
+1 -1
include/net/page_pool/types.h
··· 247 247 /* User-facing fields, protected by page_pools_lock */ 248 248 struct { 249 249 struct hlist_node list; 250 - u64 detach_time; 250 + ktime_t detach_time; 251 251 u32 id; 252 252 } user; 253 253 };
+1
include/sound/cs35l56.h
··· 406 406 extern const char * const cs35l56_tx_input_texts[CS35L56_NUM_INPUT_SRC]; 407 407 extern const unsigned int cs35l56_tx_input_values[CS35L56_NUM_INPUT_SRC]; 408 408 409 + int cs35l56_set_asp_patch(struct cs35l56_base *cs35l56_base); 409 410 int cs35l56_set_patch(struct cs35l56_base *cs35l56_base); 410 411 int cs35l56_mbox_send(struct cs35l56_base *cs35l56_base, unsigned int command); 411 412 int cs35l56_firmware_shutdown(struct cs35l56_base *cs35l56_base);
+1
include/sound/tas2781.h
··· 151 151 struct bulk_reg_val *cali_data_backup; 152 152 struct bulk_reg_val alp_cali_bckp; 153 153 struct tasdevice_fw *cali_data_fmw; 154 + void *cali_specific; 154 155 unsigned int dev_addr; 155 156 unsigned int err_code; 156 157 unsigned char cur_book;
+1
include/uapi/linux/dma-buf.h
··· 20 20 #ifndef _DMA_BUF_UAPI_H_ 21 21 #define _DMA_BUF_UAPI_H_ 22 22 23 + #include <linux/ioctl.h> 23 24 #include <linux/types.h> 24 25 25 26 /**
+2 -1
include/uapi/linux/io_uring.h
··· 188 188 /* 189 189 * If COOP_TASKRUN is set, get notified if task work is available for 190 190 * running and a kernel transition would be needed to run it. This sets 191 - * IORING_SQ_TASKRUN in the sq ring flags. Not valid with COOP_TASKRUN. 191 + * IORING_SQ_TASKRUN in the sq ring flags. Not valid without COOP_TASKRUN 192 + * or DEFER_TASKRUN. 192 193 */ 193 194 #define IORING_SETUP_TASKRUN_FLAG (1U << 9) 194 195 #define IORING_SETUP_SQE128 (1U << 10) /* SQEs are 128 byte */
+3 -1
include/xen/xenbus.h
··· 80 80 const char *devicetype; 81 81 const char *nodename; 82 82 const char *otherend; 83 + bool vanished; 83 84 int otherend_id; 84 85 struct xenbus_watch otherend_watch; 85 86 struct device dev; ··· 229 228 int xenbus_alloc_evtchn(struct xenbus_device *dev, evtchn_port_t *port); 230 229 int xenbus_free_evtchn(struct xenbus_device *dev, evtchn_port_t port); 231 230 232 - enum xenbus_state xenbus_read_driver_state(const char *path); 231 + enum xenbus_state xenbus_read_driver_state(const struct xenbus_device *dev, 232 + const char *path); 233 233 234 234 __printf(3, 4) 235 235 void xenbus_dev_error(struct xenbus_device *dev, int err, const char *fmt, ...);
+1 -1
init/Kconfig
··· 1902 1902 default n 1903 1903 depends on IO_URING 1904 1904 help 1905 - Enable mock files for io_uring subststem testing. The ABI might 1905 + Enable mock files for io_uring subsystem testing. The ABI might 1906 1906 still change, so it's still experimental and should only be enabled 1907 1907 for specific test purposes. 1908 1908
+2
io_uring/net.c
··· 375 375 kmsg->msg.msg_namelen = addr_len; 376 376 } 377 377 if (sr->flags & IORING_RECVSEND_FIXED_BUF) { 378 + if (sr->flags & IORING_SEND_VECTORIZED) 379 + return -EINVAL; 378 380 req->flags |= REQ_F_IMPORT_BUFFER; 379 381 return 0; 380 382 }
+5 -3
io_uring/zcrx.c
··· 837 837 if (ret) 838 838 goto netdev_put_unlock; 839 839 840 - mp_param.rx_page_size = 1U << ifq->niov_shift; 840 + if (reg.rx_buf_len) 841 + mp_param.rx_page_size = 1U << ifq->niov_shift; 841 842 mp_param.mp_ops = &io_uring_pp_zc_ops; 842 843 mp_param.mp_priv = ifq; 843 844 ret = __net_mp_open_rxq(ifq->netdev, reg.if_rxq, &mp_param, NULL); ··· 927 926 struct io_zcrx_ifq *ifq, 928 927 struct net_iov **ret_niov) 929 928 { 929 + __u64 off = READ_ONCE(rqe->off); 930 930 unsigned niov_idx, area_idx; 931 931 struct io_zcrx_area *area; 932 932 933 - area_idx = rqe->off >> IORING_ZCRX_AREA_SHIFT; 934 - niov_idx = (rqe->off & ~IORING_ZCRX_AREA_MASK) >> ifq->niov_shift; 933 + area_idx = off >> IORING_ZCRX_AREA_SHIFT; 934 + niov_idx = (off & ~IORING_ZCRX_AREA_MASK) >> ifq->niov_shift; 935 935 936 936 if (unlikely(rqe->__pad || area_idx)) 937 937 return false;
+1 -3
kernel/bpf/trampoline.c
··· 1002 1002 mutex_lock(&tr->mutex); 1003 1003 1004 1004 shim_link = cgroup_shim_find(tr, bpf_func); 1005 - if (shim_link) { 1005 + if (shim_link && !IS_ERR(bpf_link_inc_not_zero(&shim_link->link.link))) { 1006 1006 /* Reusing existing shim attached by the other program. */ 1007 - bpf_link_inc(&shim_link->link.link); 1008 - 1009 1007 mutex_unlock(&tr->mutex); 1010 1008 bpf_trampoline_put(tr); /* bpf_trampoline_get above */ 1011 1009 return 0;
+34 -4
kernel/bpf/verifier.c
··· 2511 2511 if ((u32)reg->s32_min_value <= (u32)reg->s32_max_value) { 2512 2512 reg->u32_min_value = max_t(u32, reg->s32_min_value, reg->u32_min_value); 2513 2513 reg->u32_max_value = min_t(u32, reg->s32_max_value, reg->u32_max_value); 2514 + } else { 2515 + if (reg->u32_max_value < (u32)reg->s32_min_value) { 2516 + /* See __reg64_deduce_bounds() for detailed explanation. 2517 + * Refine ranges in the following situation: 2518 + * 2519 + * 0 U32_MAX 2520 + * | [xxxxxxxxxxxxxx u32 range xxxxxxxxxxxxxx] | 2521 + * |----------------------------|----------------------------| 2522 + * |xxxxx s32 range xxxxxxxxx] [xxxxxxx| 2523 + * 0 S32_MAX S32_MIN -1 2524 + */ 2525 + reg->s32_min_value = (s32)reg->u32_min_value; 2526 + reg->u32_max_value = min_t(u32, reg->u32_max_value, reg->s32_max_value); 2527 + } else if ((u32)reg->s32_max_value < reg->u32_min_value) { 2528 + /* 2529 + * 0 U32_MAX 2530 + * | [xxxxxxxxxxxxxx u32 range xxxxxxxxxxxxxx] | 2531 + * |----------------------------|----------------------------| 2532 + * |xxxxxxxxx] [xxxxxxxxxxxx s32 range | 2533 + * 0 S32_MAX S32_MIN -1 2534 + */ 2535 + reg->s32_max_value = (s32)reg->u32_max_value; 2536 + reg->u32_min_value = max_t(u32, reg->u32_min_value, reg->s32_min_value); 2537 + } 2514 2538 } 2515 2539 } 2516 2540 ··· 17359 17335 * in verifier state, save R in linked_regs if R->id == id. 17360 17336 * If there are too many Rs sharing same id, reset id for leftover Rs. 
17361 17337 */ 17362 - static void collect_linked_regs(struct bpf_verifier_state *vstate, u32 id, 17338 + static void collect_linked_regs(struct bpf_verifier_env *env, 17339 + struct bpf_verifier_state *vstate, 17340 + u32 id, 17363 17341 struct linked_regs *linked_regs) 17364 17342 { 17343 + struct bpf_insn_aux_data *aux = env->insn_aux_data; 17365 17344 struct bpf_func_state *func; 17366 17345 struct bpf_reg_state *reg; 17346 + u16 live_regs; 17367 17347 int i, j; 17368 17348 17369 17349 id = id & ~BPF_ADD_CONST; 17370 17350 for (i = vstate->curframe; i >= 0; i--) { 17351 + live_regs = aux[frame_insn_idx(vstate, i)].live_regs_before; 17371 17352 func = vstate->frame[i]; 17372 17353 for (j = 0; j < BPF_REG_FP; j++) { 17354 + if (!(live_regs & BIT(j))) 17355 + continue; 17373 17356 reg = &func->regs[j]; 17374 17357 __collect_linked_regs(linked_regs, reg, id, i, j, true); 17375 17358 } ··· 17591 17560 * if parent state is created. 17592 17561 */ 17593 17562 if (BPF_SRC(insn->code) == BPF_X && src_reg->type == SCALAR_VALUE && src_reg->id) 17594 - collect_linked_regs(this_branch, src_reg->id, &linked_regs); 17563 + collect_linked_regs(env, this_branch, src_reg->id, &linked_regs); 17595 17564 if (dst_reg->type == SCALAR_VALUE && dst_reg->id) 17596 - collect_linked_regs(this_branch, dst_reg->id, &linked_regs); 17565 + collect_linked_regs(env, this_branch, dst_reg->id, &linked_regs); 17597 17566 if (linked_regs.cnt > 1) { 17598 17567 err = push_jmp_history(env, this_branch, 0, linked_regs_pack(&linked_regs)); 17599 17568 if (err) ··· 25292 25261 BTF_ID(func, do_exit) 25293 25262 BTF_ID(func, do_group_exit) 25294 25263 BTF_ID(func, kthread_complete_and_exit) 25295 - BTF_ID(func, kthread_exit) 25296 25264 BTF_ID(func, make_task_dead) 25297 25265 BTF_SET_END(noreturn_deny) 25298 25266
+30
kernel/sched/syscalls.c
··· 284 284 uid_eq(cred->euid, pcred->uid)); 285 285 } 286 286 287 + #ifdef CONFIG_RT_MUTEXES 288 + static inline void __setscheduler_dl_pi(int newprio, int policy, 289 + struct task_struct *p, 290 + struct sched_change_ctx *scope) 291 + { 292 + /* 293 + * In case a DEADLINE task (either proper or boosted) gets 294 + * setscheduled to a lower priority class, check if it neeeds to 295 + * inherit parameters from a potential pi_task. In that case make 296 + * sure replenishment happens with the next enqueue. 297 + */ 298 + 299 + if (dl_prio(newprio) && !dl_policy(policy)) { 300 + struct task_struct *pi_task = rt_mutex_get_top_task(p); 301 + 302 + if (pi_task) { 303 + p->dl.pi_se = pi_task->dl.pi_se; 304 + scope->flags |= ENQUEUE_REPLENISH; 305 + } 306 + } 307 + } 308 + #else /* !CONFIG_RT_MUTEXES */ 309 + static inline void __setscheduler_dl_pi(int newprio, int policy, 310 + struct task_struct *p, 311 + struct sched_change_ctx *scope) 312 + { 313 + } 314 + #endif /* !CONFIG_RT_MUTEXES */ 315 + 287 316 #ifdef CONFIG_UCLAMP_TASK 288 317 289 318 static int uclamp_validate(struct task_struct *p, ··· 684 655 __setscheduler_params(p, attr); 685 656 p->sched_class = next_class; 686 657 p->prio = newprio; 658 + __setscheduler_dl_pi(newprio, policy, p, scope); 687 659 } 688 660 __setscheduler_uclamp(p, attr); 689 661
+4 -2
kernel/time/timekeeping.c
··· 2653 2653 2654 2654 if (aux_clock) { 2655 2655 /* Auxiliary clocks are similar to TAI and do not have leap seconds */ 2656 - if (txc->status & (STA_INS | STA_DEL)) 2656 + if (txc->modes & ADJ_STATUS && 2657 + txc->status & (STA_INS | STA_DEL)) 2657 2658 return -EINVAL; 2658 2659 2659 2660 /* No TAI offset setting */ ··· 2662 2661 return -EINVAL; 2663 2662 2664 2663 /* No PPS support either */ 2665 - if (txc->status & (STA_PPSFREQ | STA_PPSTIME)) 2664 + if (txc->modes & ADJ_STATUS && 2665 + txc->status & (STA_PPSFREQ | STA_PPSTIME)) 2666 2666 return -EINVAL; 2667 2667 } 2668 2668
+1 -2
kernel/trace/blktrace.c
··· 383 383 cpu = raw_smp_processor_id(); 384 384 385 385 if (blk_tracer) { 386 - tracing_record_cmdline(current); 387 - 388 386 buffer = blk_tr->array_buffer.buffer; 389 387 trace_ctx = tracing_gen_ctx_flags(0); 390 388 switch (bt->version) { ··· 417 419 if (!event) 418 420 return; 419 421 422 + tracing_record_cmdline(current); 420 423 switch (bt->version) { 421 424 case 1: 422 425 record_blktrace_event(ring_buffer_event_data(event),
+2
kernel/trace/ftrace.c
··· 6404 6404 new_filter_hash = old_filter_hash; 6405 6405 } 6406 6406 } else { 6407 + guard(mutex)(&ftrace_lock); 6407 6408 err = ftrace_update_ops(ops, new_filter_hash, EMPTY_HASH); 6408 6409 /* 6409 6410 * new_filter_hash is dup-ed, so we need to release it anyway, ··· 6531 6530 ops->func_hash->filter_hash = NULL; 6532 6531 } 6533 6532 } else { 6533 + guard(mutex)(&ftrace_lock); 6534 6534 err = ftrace_update_ops(ops, new_filter_hash, EMPTY_HASH); 6535 6535 /* 6536 6536 * new_filter_hash is dup-ed, so we need to release it anyway,
+3 -3
kernel/trace/trace.c
··· 9350 9350 } 9351 9351 9352 9352 static int 9353 - allocate_trace_buffer(struct trace_array *tr, struct array_buffer *buf, int size) 9353 + allocate_trace_buffer(struct trace_array *tr, struct array_buffer *buf, unsigned long size) 9354 9354 { 9355 9355 enum ring_buffer_flags rb_flags; 9356 9356 struct trace_scratch *tscratch; ··· 9405 9405 } 9406 9406 } 9407 9407 9408 - static int allocate_trace_buffers(struct trace_array *tr, int size) 9408 + static int allocate_trace_buffers(struct trace_array *tr, unsigned long size) 9409 9409 { 9410 9410 int ret; 9411 9411 ··· 10769 10769 10770 10770 __init static int tracer_alloc_buffers(void) 10771 10771 { 10772 - int ring_buf_size; 10772 + unsigned long ring_buf_size; 10773 10773 int ret = -ENOMEM; 10774 10774 10775 10775
+5 -1
kernel/trace/trace_events.c
··· 4493 4493 4494 4494 static __init int setup_trace_event(char *str) 4495 4495 { 4496 - strscpy(bootup_event_buf, str, COMMAND_LINE_SIZE); 4496 + if (bootup_event_buf[0] != '\0') 4497 + strlcat(bootup_event_buf, ",", COMMAND_LINE_SIZE); 4498 + 4499 + strlcat(bootup_event_buf, str, COMMAND_LINE_SIZE); 4500 + 4497 4501 trace_set_ring_buffer_expanded(NULL); 4498 4502 disable_tracing_selftest("running event tracing"); 4499 4503
+3
kernel/trace/trace_events_trigger.c
··· 50 50 51 51 void trigger_data_free(struct event_trigger_data *data) 52 52 { 53 + if (!data) 54 + return; 55 + 53 56 if (data->cmd_ops->set_filter) 54 57 data->cmd_ops->set_filter(NULL, data, NULL); 55 58
+34
lib/crypto/.kunitconfig
··· 1 + CONFIG_KUNIT=y 2 + 3 + # These kconfig options select all the CONFIG_CRYPTO_LIB_* symbols that have a 4 + # corresponding KUnit test. Those symbols cannot be directly enabled here, 5 + # since they are hidden symbols. 6 + CONFIG_CRYPTO=y 7 + CONFIG_CRYPTO_ADIANTUM=y 8 + CONFIG_CRYPTO_BLAKE2B=y 9 + CONFIG_CRYPTO_CHACHA20POLY1305=y 10 + CONFIG_CRYPTO_HCTR2=y 11 + CONFIG_CRYPTO_MD5=y 12 + CONFIG_CRYPTO_MLDSA=y 13 + CONFIG_CRYPTO_SHA1=y 14 + CONFIG_CRYPTO_SHA256=y 15 + CONFIG_CRYPTO_SHA512=y 16 + CONFIG_CRYPTO_SHA3=y 17 + CONFIG_INET=y 18 + CONFIG_IPV6=y 19 + CONFIG_NET=y 20 + CONFIG_NETDEVICES=y 21 + CONFIG_WIREGUARD=y 22 + 23 + CONFIG_CRYPTO_LIB_BLAKE2B_KUNIT_TEST=y 24 + CONFIG_CRYPTO_LIB_BLAKE2S_KUNIT_TEST=y 25 + CONFIG_CRYPTO_LIB_CURVE25519_KUNIT_TEST=y 26 + CONFIG_CRYPTO_LIB_MD5_KUNIT_TEST=y 27 + CONFIG_CRYPTO_LIB_MLDSA_KUNIT_TEST=y 28 + CONFIG_CRYPTO_LIB_NH_KUNIT_TEST=y 29 + CONFIG_CRYPTO_LIB_POLY1305_KUNIT_TEST=y 30 + CONFIG_CRYPTO_LIB_POLYVAL_KUNIT_TEST=y 31 + CONFIG_CRYPTO_LIB_SHA1_KUNIT_TEST=y 32 + CONFIG_CRYPTO_LIB_SHA256_KUNIT_TEST=y 33 + CONFIG_CRYPTO_LIB_SHA512_KUNIT_TEST=y 34 + CONFIG_CRYPTO_LIB_SHA3_KUNIT_TEST=y
+12 -23
lib/crypto/tests/Kconfig
··· 2 2 3 3 config CRYPTO_LIB_BLAKE2B_KUNIT_TEST 4 4 tristate "KUnit tests for BLAKE2b" if !KUNIT_ALL_TESTS 5 - depends on KUNIT 5 + depends on KUNIT && CRYPTO_LIB_BLAKE2B 6 6 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 7 7 select CRYPTO_LIB_BENCHMARK_VISIBLE 8 - select CRYPTO_LIB_BLAKE2B 9 8 help 10 9 KUnit tests for the BLAKE2b cryptographic hash function. 11 10 ··· 13 14 depends on KUNIT 14 15 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 15 16 select CRYPTO_LIB_BENCHMARK_VISIBLE 16 - # No need to select CRYPTO_LIB_BLAKE2S here, as that option doesn't 17 + # No need to depend on CRYPTO_LIB_BLAKE2S here, as that option doesn't 17 18 # exist; the BLAKE2s code is always built-in for the /dev/random driver. 18 19 help 19 20 KUnit tests for the BLAKE2s cryptographic hash function. 20 21 21 22 config CRYPTO_LIB_CURVE25519_KUNIT_TEST 22 23 tristate "KUnit tests for Curve25519" if !KUNIT_ALL_TESTS 23 - depends on KUNIT 24 + depends on KUNIT && CRYPTO_LIB_CURVE25519 24 25 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 25 26 select CRYPTO_LIB_BENCHMARK_VISIBLE 26 - select CRYPTO_LIB_CURVE25519 27 27 help 28 28 KUnit tests for the Curve25519 Diffie-Hellman function. 29 29 30 30 config CRYPTO_LIB_MD5_KUNIT_TEST 31 31 tristate "KUnit tests for MD5" if !KUNIT_ALL_TESTS 32 - depends on KUNIT 32 + depends on KUNIT && CRYPTO_LIB_MD5 33 33 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 34 34 select CRYPTO_LIB_BENCHMARK_VISIBLE 35 - select CRYPTO_LIB_MD5 36 35 help 37 36 KUnit tests for the MD5 cryptographic hash function and its 38 37 corresponding HMAC. 39 38 40 39 config CRYPTO_LIB_MLDSA_KUNIT_TEST 41 40 tristate "KUnit tests for ML-DSA" if !KUNIT_ALL_TESTS 42 - depends on KUNIT 41 + depends on KUNIT && CRYPTO_LIB_MLDSA 43 42 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 44 43 select CRYPTO_LIB_BENCHMARK_VISIBLE 45 - select CRYPTO_LIB_MLDSA 46 44 help 47 45 KUnit tests for the ML-DSA digital signature algorithm. 
48 46 49 47 config CRYPTO_LIB_NH_KUNIT_TEST 50 48 tristate "KUnit tests for NH" if !KUNIT_ALL_TESTS 51 - depends on KUNIT 49 + depends on KUNIT && CRYPTO_LIB_NH 52 50 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 53 - select CRYPTO_LIB_NH 54 51 help 55 52 KUnit tests for the NH almost-universal hash function. 56 53 57 54 config CRYPTO_LIB_POLY1305_KUNIT_TEST 58 55 tristate "KUnit tests for Poly1305" if !KUNIT_ALL_TESTS 59 - depends on KUNIT 56 + depends on KUNIT && CRYPTO_LIB_POLY1305 60 57 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 61 58 select CRYPTO_LIB_BENCHMARK_VISIBLE 62 - select CRYPTO_LIB_POLY1305 63 59 help 64 60 KUnit tests for the Poly1305 library functions. 65 61 66 62 config CRYPTO_LIB_POLYVAL_KUNIT_TEST 67 63 tristate "KUnit tests for POLYVAL" if !KUNIT_ALL_TESTS 68 - depends on KUNIT 64 + depends on KUNIT && CRYPTO_LIB_POLYVAL 69 65 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 70 66 select CRYPTO_LIB_BENCHMARK_VISIBLE 71 - select CRYPTO_LIB_POLYVAL 72 67 help 73 68 KUnit tests for the POLYVAL library functions. 74 69 75 70 config CRYPTO_LIB_SHA1_KUNIT_TEST 76 71 tristate "KUnit tests for SHA-1" if !KUNIT_ALL_TESTS 77 - depends on KUNIT 72 + depends on KUNIT && CRYPTO_LIB_SHA1 78 73 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 79 74 select CRYPTO_LIB_BENCHMARK_VISIBLE 80 - select CRYPTO_LIB_SHA1 81 75 help 82 76 KUnit tests for the SHA-1 cryptographic hash function and its 83 77 corresponding HMAC. ··· 79 87 # included, for consistency with the naming used elsewhere (e.g. CRYPTO_SHA256). 80 88 config CRYPTO_LIB_SHA256_KUNIT_TEST 81 89 tristate "KUnit tests for SHA-224 and SHA-256" if !KUNIT_ALL_TESTS 82 - depends on KUNIT 90 + depends on KUNIT && CRYPTO_LIB_SHA256 83 91 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 84 92 select CRYPTO_LIB_BENCHMARK_VISIBLE 85 - select CRYPTO_LIB_SHA256 86 93 help 87 94 KUnit tests for the SHA-224 and SHA-256 cryptographic hash functions 88 95 and their corresponding HMACs. 
··· 90 99 # included, for consistency with the naming used elsewhere (e.g. CRYPTO_SHA512). 91 100 config CRYPTO_LIB_SHA512_KUNIT_TEST 92 101 tristate "KUnit tests for SHA-384 and SHA-512" if !KUNIT_ALL_TESTS 93 - depends on KUNIT 102 + depends on KUNIT && CRYPTO_LIB_SHA512 94 103 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 95 104 select CRYPTO_LIB_BENCHMARK_VISIBLE 96 - select CRYPTO_LIB_SHA512 97 105 help 98 106 KUnit tests for the SHA-384 and SHA-512 cryptographic hash functions 99 107 and their corresponding HMACs. 100 108 101 109 config CRYPTO_LIB_SHA3_KUNIT_TEST 102 110 tristate "KUnit tests for SHA-3" if !KUNIT_ALL_TESTS 103 - depends on KUNIT 111 + depends on KUNIT && CRYPTO_LIB_SHA3 104 112 default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS 105 113 select CRYPTO_LIB_BENCHMARK_VISIBLE 106 - select CRYPTO_LIB_SHA3 107 114 help 108 115 KUnit tests for the SHA3 cryptographic hash and XOF functions, 109 116 including SHA3-224, SHA3-256, SHA3-384, SHA3-512, SHAKE128 and
+125 -106
lib/kunit/test.c
··· 94 94 unsigned long total; 95 95 }; 96 96 97 - static bool kunit_should_print_stats(struct kunit_result_stats stats) 97 + static bool kunit_should_print_stats(struct kunit_result_stats *stats) 98 98 { 99 99 if (kunit_stats_enabled == 0) 100 100 return false; ··· 102 102 if (kunit_stats_enabled == 2) 103 103 return true; 104 104 105 - return (stats.total > 1); 105 + return (stats->total > 1); 106 106 } 107 107 108 108 static void kunit_print_test_stats(struct kunit *test, 109 - struct kunit_result_stats stats) 109 + struct kunit_result_stats *stats) 110 110 { 111 111 if (!kunit_should_print_stats(stats)) 112 112 return; ··· 115 115 KUNIT_SUBTEST_INDENT 116 116 "# %s: pass:%lu fail:%lu skip:%lu total:%lu", 117 117 test->name, 118 - stats.passed, 119 - stats.failed, 120 - stats.skipped, 121 - stats.total); 118 + stats->passed, 119 + stats->failed, 120 + stats->skipped, 121 + stats->total); 122 122 } 123 123 124 124 /* Append formatted message to log. */ ··· 600 600 } 601 601 602 602 static void kunit_print_suite_stats(struct kunit_suite *suite, 603 - struct kunit_result_stats suite_stats, 604 - struct kunit_result_stats param_stats) 603 + struct kunit_result_stats *suite_stats, 604 + struct kunit_result_stats *param_stats) 605 605 { 606 606 if (kunit_should_print_stats(suite_stats)) { 607 607 kunit_log(KERN_INFO, suite, 608 608 "# %s: pass:%lu fail:%lu skip:%lu total:%lu", 609 609 suite->name, 610 - suite_stats.passed, 611 - suite_stats.failed, 612 - suite_stats.skipped, 613 - suite_stats.total); 610 + suite_stats->passed, 611 + suite_stats->failed, 612 + suite_stats->skipped, 613 + suite_stats->total); 614 614 } 615 615 616 616 if (kunit_should_print_stats(param_stats)) { 617 617 kunit_log(KERN_INFO, suite, 618 618 "# Totals: pass:%lu fail:%lu skip:%lu total:%lu", 619 - param_stats.passed, 620 - param_stats.failed, 621 - param_stats.skipped, 622 - param_stats.total); 619 + param_stats->passed, 620 + param_stats->failed, 621 + param_stats->skipped, 622 + 
param_stats->total); 623 623 } 624 624 } 625 625 ··· 681 681 } 682 682 } 683 683 684 - int kunit_run_tests(struct kunit_suite *suite) 684 + static noinline_for_stack void 685 + kunit_run_param_test(struct kunit_suite *suite, struct kunit_case *test_case, 686 + struct kunit *test, 687 + struct kunit_result_stats *suite_stats, 688 + struct kunit_result_stats *total_stats, 689 + struct kunit_result_stats *param_stats) 685 690 { 686 691 char param_desc[KUNIT_PARAM_DESC_SIZE]; 692 + const void *curr_param; 693 + 694 + kunit_init_parent_param_test(test_case, test); 695 + if (test_case->status == KUNIT_FAILURE) { 696 + kunit_update_stats(param_stats, test->status); 697 + return; 698 + } 699 + /* Get initial param. */ 700 + param_desc[0] = '\0'; 701 + /* TODO: Make generate_params try-catch */ 702 + curr_param = test_case->generate_params(test, NULL, param_desc); 703 + test_case->status = KUNIT_SKIPPED; 704 + kunit_log(KERN_INFO, test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT 705 + "KTAP version 1\n"); 706 + kunit_log(KERN_INFO, test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT 707 + "# Subtest: %s", test_case->name); 708 + if (test->params_array.params && 709 + test_case->generate_params == kunit_array_gen_params) { 710 + kunit_log(KERN_INFO, test, KUNIT_SUBTEST_INDENT 711 + KUNIT_SUBTEST_INDENT "1..%zd\n", 712 + test->params_array.num_params); 713 + } 714 + 715 + while (curr_param) { 716 + struct kunit param_test = { 717 + .param_value = curr_param, 718 + .param_index = ++test->param_index, 719 + .parent = test, 720 + }; 721 + kunit_init_test(&param_test, test_case->name, NULL); 722 + param_test.log = test_case->log; 723 + kunit_run_case_catch_errors(suite, test_case, &param_test); 724 + 725 + if (param_desc[0] == '\0') { 726 + snprintf(param_desc, sizeof(param_desc), 727 + "param-%d", param_test.param_index); 728 + } 729 + 730 + kunit_print_ok_not_ok(&param_test, KUNIT_LEVEL_CASE_PARAM, 731 + param_test.status, 732 + param_test.param_index, 733 + param_desc, 734 + 
param_test.status_comment); 735 + 736 + kunit_update_stats(param_stats, param_test.status); 737 + 738 + /* Get next param. */ 739 + param_desc[0] = '\0'; 740 + curr_param = test_case->generate_params(test, curr_param, 741 + param_desc); 742 + } 743 + /* 744 + * TODO: Put into a try catch. Since we don't need suite->exit 745 + * for it we can't reuse kunit_try_run_cleanup for this yet. 746 + */ 747 + if (test_case->param_exit) 748 + test_case->param_exit(test); 749 + /* TODO: Put this kunit_cleanup into a try-catch. */ 750 + kunit_cleanup(test); 751 + } 752 + 753 + static noinline_for_stack void 754 + kunit_run_one_test(struct kunit_suite *suite, struct kunit_case *test_case, 755 + struct kunit_result_stats *suite_stats, 756 + struct kunit_result_stats *total_stats) 757 + { 758 + struct kunit test = { .param_value = NULL, .param_index = 0 }; 759 + struct kunit_result_stats param_stats = { 0 }; 760 + 761 + kunit_init_test(&test, test_case->name, test_case->log); 762 + if (test_case->status == KUNIT_SKIPPED) { 763 + /* Test marked as skip */ 764 + test.status = KUNIT_SKIPPED; 765 + kunit_update_stats(&param_stats, test.status); 766 + } else if (!test_case->generate_params) { 767 + /* Non-parameterised test. 
*/ 768 + test_case->status = KUNIT_SKIPPED; 769 + kunit_run_case_catch_errors(suite, test_case, &test); 770 + kunit_update_stats(&param_stats, test.status); 771 + } else { 772 + kunit_run_param_test(suite, test_case, &test, suite_stats, 773 + total_stats, &param_stats); 774 + } 775 + kunit_print_attr((void *)test_case, true, KUNIT_LEVEL_CASE); 776 + 777 + kunit_print_test_stats(&test, &param_stats); 778 + 779 + kunit_print_ok_not_ok(&test, KUNIT_LEVEL_CASE, test_case->status, 780 + kunit_test_case_num(suite, test_case), 781 + test_case->name, 782 + test.status_comment); 783 + 784 + kunit_update_stats(suite_stats, test_case->status); 785 + kunit_accumulate_stats(total_stats, param_stats); 786 + } 787 + 788 + 789 + int kunit_run_tests(struct kunit_suite *suite) 790 + { 687 791 struct kunit_case *test_case; 688 792 struct kunit_result_stats suite_stats = { 0 }; 689 793 struct kunit_result_stats total_stats = { 0 }; 690 - const void *curr_param; 691 794 692 795 /* Taint the kernel so we know we've run tests. */ 693 796 add_taint(TAINT_TEST, LOCKDEP_STILL_OK); ··· 806 703 807 704 kunit_print_suite_start(suite); 808 705 809 - kunit_suite_for_each_test_case(suite, test_case) { 810 - struct kunit test = { .param_value = NULL, .param_index = 0 }; 811 - struct kunit_result_stats param_stats = { 0 }; 812 - 813 - kunit_init_test(&test, test_case->name, test_case->log); 814 - if (test_case->status == KUNIT_SKIPPED) { 815 - /* Test marked as skip */ 816 - test.status = KUNIT_SKIPPED; 817 - kunit_update_stats(&param_stats, test.status); 818 - } else if (!test_case->generate_params) { 819 - /* Non-parameterised test. 
*/ 820 - test_case->status = KUNIT_SKIPPED; 821 - kunit_run_case_catch_errors(suite, test_case, &test); 822 - kunit_update_stats(&param_stats, test.status); 823 - } else { 824 - kunit_init_parent_param_test(test_case, &test); 825 - if (test_case->status == KUNIT_FAILURE) { 826 - kunit_update_stats(&param_stats, test.status); 827 - goto test_case_end; 828 - } 829 - /* Get initial param. */ 830 - param_desc[0] = '\0'; 831 - /* TODO: Make generate_params try-catch */ 832 - curr_param = test_case->generate_params(&test, NULL, param_desc); 833 - test_case->status = KUNIT_SKIPPED; 834 - kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT 835 - "KTAP version 1\n"); 836 - kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT 837 - "# Subtest: %s", test_case->name); 838 - if (test.params_array.params && 839 - test_case->generate_params == kunit_array_gen_params) { 840 - kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT 841 - KUNIT_SUBTEST_INDENT "1..%zd\n", 842 - test.params_array.num_params); 843 - } 844 - 845 - while (curr_param) { 846 - struct kunit param_test = { 847 - .param_value = curr_param, 848 - .param_index = ++test.param_index, 849 - .parent = &test, 850 - }; 851 - kunit_init_test(&param_test, test_case->name, NULL); 852 - param_test.log = test_case->log; 853 - kunit_run_case_catch_errors(suite, test_case, &param_test); 854 - 855 - if (param_desc[0] == '\0') { 856 - snprintf(param_desc, sizeof(param_desc), 857 - "param-%d", param_test.param_index); 858 - } 859 - 860 - kunit_print_ok_not_ok(&param_test, KUNIT_LEVEL_CASE_PARAM, 861 - param_test.status, 862 - param_test.param_index, 863 - param_desc, 864 - param_test.status_comment); 865 - 866 - kunit_update_stats(&param_stats, param_test.status); 867 - 868 - /* Get next param. */ 869 - param_desc[0] = '\0'; 870 - curr_param = test_case->generate_params(&test, curr_param, 871 - param_desc); 872 - } 873 - /* 874 - * TODO: Put into a try catch. 
Since we don't need suite->exit 875 - * for it we can't reuse kunit_try_run_cleanup for this yet. 876 - */ 877 - if (test_case->param_exit) 878 - test_case->param_exit(&test); 879 - /* TODO: Put this kunit_cleanup into a try-catch. */ 880 - kunit_cleanup(&test); 881 - } 882 - test_case_end: 883 - kunit_print_attr((void *)test_case, true, KUNIT_LEVEL_CASE); 884 - 885 - kunit_print_test_stats(&test, param_stats); 886 - 887 - kunit_print_ok_not_ok(&test, KUNIT_LEVEL_CASE, test_case->status, 888 - kunit_test_case_num(suite, test_case), 889 - test_case->name, 890 - test.status_comment); 891 - 892 - kunit_update_stats(&suite_stats, test_case->status); 893 - kunit_accumulate_stats(&total_stats, param_stats); 894 - } 706 + kunit_suite_for_each_test_case(suite, test_case) 707 + kunit_run_one_test(suite, test_case, &suite_stats, &total_stats); 895 708 896 709 if (suite->suite_exit) 897 710 suite->suite_exit(suite); 898 711 899 - kunit_print_suite_stats(suite, suite_stats, total_stats); 712 + kunit_print_suite_stats(suite, &suite_stats, &total_stats); 900 713 suite_end: 901 714 kunit_print_suite_end(suite); 902 715
+4 -1
mm/cma.c
··· 1013 1013 unsigned long count) 1014 1014 { 1015 1015 struct cma_memrange *cmr; 1016 + unsigned long ret = 0; 1016 1017 unsigned long i, pfn; 1017 1018 1018 1019 cmr = find_cma_memrange(cma, pages, count); ··· 1022 1021 1023 1022 pfn = page_to_pfn(pages); 1024 1023 for (i = 0; i < count; i++, pfn++) 1025 - VM_WARN_ON(!put_page_testzero(pfn_to_page(pfn))); 1024 + ret += !put_page_testzero(pfn_to_page(pfn)); 1025 + 1026 + WARN(ret, "%lu pages are still in use!\n", ret); 1026 1027 1027 1028 __cma_release_frozen(cma, cmr, pages, count); 1028 1029
+6 -1
mm/damon/core.c
··· 1562 1562 } 1563 1563 ctx->walk_control = control; 1564 1564 mutex_unlock(&ctx->walk_control_lock); 1565 - if (!damon_is_running(ctx)) 1565 + if (!damon_is_running(ctx)) { 1566 + mutex_lock(&ctx->walk_control_lock); 1567 + if (ctx->walk_control == control) 1568 + ctx->walk_control = NULL; 1569 + mutex_unlock(&ctx->walk_control_lock); 1566 1570 return -EINVAL; 1571 + } 1567 1572 wait_for_completion(&control->completion); 1568 1573 if (control->canceled) 1569 1574 return -ECANCELED;
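The damon/core.c hunk above closes a window where `ctx->walk_control` could be left pointing at the caller's (possibly stack-allocated) control after an early `-EINVAL` return. The pattern is: publish under the lock, drop the lock, and if the consumer turns out not to be running, re-take the lock and withdraw the pointer only if it is still ours. A minimal userspace sketch, with a pthread mutex standing in for `walk_control_lock` (names here are illustrative, not the kernel API):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Hypothetical stand-ins for ctx->walk_control and ctx->walk_control_lock. */
static pthread_mutex_t walk_lock = PTHREAD_MUTEX_INITIALIZER;
static void *walk_control;

/* Publish ctrl for the consumer; if the consumer is not running, take the
 * pointer back (only if nobody replaced it meanwhile) so no dangling
 * reference to caller memory is left behind. */
static int publish_or_withdraw(void *ctrl, int consumer_running)
{
	pthread_mutex_lock(&walk_lock);
	walk_control = ctrl;
	pthread_mutex_unlock(&walk_lock);

	if (!consumer_running) {
		pthread_mutex_lock(&walk_lock);
		if (walk_control == ctrl)
			walk_control = NULL;
		pthread_mutex_unlock(&walk_lock);
		return -1; /* -EINVAL in the kernel code */
	}
	return 0;
}
```

The `walk_control == ctrl` check matters: another caller may have published its own control between the unlock and the withdrawal, and clearing that one would be a new bug.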
+10 -5
mm/filemap.c
··· 1379 1379 1380 1380 #ifdef CONFIG_MIGRATION 1381 1381 /** 1382 - * migration_entry_wait_on_locked - Wait for a migration entry to be removed 1383 - * @entry: migration swap entry. 1382 + * softleaf_entry_wait_on_locked - Wait for a migration entry or 1383 + * device_private entry to be removed. 1384 + * @entry: migration or device_private swap entry. 1384 1385 * @ptl: already locked ptl. This function will drop the lock. 1385 1386 1386 - * Wait for a migration entry referencing the given page to be removed. This is 1387 + * Wait for a migration entry referencing the given page, or device_private 1388 + * entry referencing a device_private page to be unlocked. This is 1387 1389 * equivalent to folio_put_wait_locked(folio, TASK_UNINTERRUPTIBLE) except 1388 1390 * this can be called without taking a reference on the page. Instead this 1389 - * should be called while holding the ptl for the migration entry referencing 1391 + * should be called while holding the ptl for @entry referencing 1390 1392 * the page. 1391 1393 * 1392 1394 * Returns after unlocking the ptl. ··· 1396 1394 * This follows the same logic as folio_wait_bit_common() so see the comments 1397 1395 * there. 1398 1396 */ 1399 - void migration_entry_wait_on_locked(softleaf_t entry, spinlock_t *ptl) 1397 + void softleaf_entry_wait_on_locked(softleaf_t entry, spinlock_t *ptl) 1400 1398 __releases(ptl) 1401 1399 { 1402 1400 struct wait_page_queue wait_page; ··· 1430 1428 * If a migration entry exists for the page the migration path must hold 1431 1429 * a valid reference to the page, and it must take the ptl to remove the 1432 1430 * migration entry. So the page is valid until the ptl is dropped. 1431 + * Similarly any path attempting to drop the last reference to a 1432 + * device-private page needs to grab the ptl to remove the device-private 1433 + * entry. 1433 1434 */ 1434 1435 spin_unlock(ptl); 1435 1436
+9 -4
mm/huge_memory.c
··· 3631 3631 const bool is_anon = folio_test_anon(folio); 3632 3632 int old_order = folio_order(folio); 3633 3633 int start_order = split_type == SPLIT_TYPE_UNIFORM ? new_order : old_order - 1; 3634 + struct folio *old_folio = folio; 3634 3635 int split_order; 3635 3636 3636 3637 /* ··· 3652 3651 * uniform split has xas_split_alloc() called before 3653 3652 * irq is disabled to allocate enough memory, whereas 3654 3653 * non-uniform split can handle ENOMEM. 3654 + * Use the to-be-split folio, so that a parallel 3655 + * folio_try_get() waits on it until xarray is updated 3656 + * with after-split folios and the original one is 3657 + * unfrozen. 3655 3658 */ 3656 - if (split_type == SPLIT_TYPE_UNIFORM) 3657 - xas_split(xas, folio, old_order); 3658 - else { 3659 + if (split_type == SPLIT_TYPE_UNIFORM) { 3660 + xas_split(xas, old_folio, old_order); 3661 + } else { 3659 3662 xas_set_order(xas, folio->index, split_order); 3660 - xas_try_split(xas, folio, old_order); 3663 + xas_try_split(xas, old_folio, old_order); 3661 3664 if (xas_error(xas)) 3662 3665 return xas_error(xas); 3663 3666 }
+2 -2
mm/hugetlb.c
··· 3101 3101 * extract the actual node first. 3102 3102 */ 3103 3103 if (m) 3104 - listnode = early_pfn_to_nid(PHYS_PFN(virt_to_phys(m))); 3104 + listnode = early_pfn_to_nid(PHYS_PFN(__pa(m))); 3105 3105 } 3106 3106 3107 3107 if (m) { ··· 3160 3160 * The head struct page is used to get folio information by the HugeTLB 3161 3161 * subsystem like zone id and node id. 3162 3162 */ 3163 - memblock_reserved_mark_noinit(virt_to_phys((void *)m + PAGE_SIZE), 3163 + memblock_reserved_mark_noinit(__pa((void *)m + PAGE_SIZE), 3164 3164 huge_page_size(h) - PAGE_SIZE); 3165 3165 3166 3166 return 1;
+1 -1
mm/madvise.c
··· 1389 1389 new_flags |= VM_DONTCOPY; 1390 1390 break; 1391 1391 case MADV_DOFORK: 1392 - if (new_flags & VM_IO) 1392 + if (new_flags & VM_SPECIAL) 1393 1393 return -EINVAL; 1394 1394 new_flags &= ~VM_DONTCOPY; 1395 1395 break;
+1 -1
mm/memcontrol.c
··· 3086 3086 3087 3087 if (!local_trylock(&obj_stock.lock)) { 3088 3088 if (pgdat) 3089 - mod_objcg_mlstate(objcg, pgdat, idx, nr_bytes); 3089 + mod_objcg_mlstate(objcg, pgdat, idx, nr_acct); 3090 3090 nr_pages = nr_bytes >> PAGE_SHIFT; 3091 3091 nr_bytes = nr_bytes & (PAGE_SIZE - 1); 3092 3092 atomic_add(nr_bytes, &objcg->nr_charged_bytes);
+43 -6
mm/memfd_luo.c
··· 146 146 for (i = 0; i < nr_folios; i++) { 147 147 struct memfd_luo_folio_ser *pfolio = &folios_ser[i]; 148 148 struct folio *folio = folios[i]; 149 - unsigned int flags = 0; 150 149 151 150 err = kho_preserve_folio(folio); 152 151 if (err) 153 152 goto err_unpreserve; 154 153 155 - if (folio_test_dirty(folio)) 156 - flags |= MEMFD_LUO_FOLIO_DIRTY; 157 - if (folio_test_uptodate(folio)) 158 - flags |= MEMFD_LUO_FOLIO_UPTODATE; 154 + folio_lock(folio); 155 + 156 + /* 157 + * A dirty folio is one which has been written to. A clean folio 158 + * is its opposite. Since a clean folio does not carry user 159 + * data, it can be freed by page reclaim under memory pressure. 160 + * 161 + * Saving the dirty flag at prepare() time doesn't work since it 162 + * can change later. Saving it at freeze() also won't work 163 + * because the dirty bit is normally synced at unmap and there 164 + * might still be a mapping of the file at freeze(). 165 + * 166 + * To see why this is a problem, say a folio is clean at 167 + * preserve, but gets dirtied later. The pfolio flags will mark 168 + * it as clean. After retrieve, the next kernel might try to 169 + * reclaim this folio under memory pressure, losing user data. 170 + * 171 + * Unconditionally mark it dirty to avoid this problem. This 172 + * comes at the cost of making clean folios un-reclaimable after 173 + * live update. 174 + */ 175 + folio_mark_dirty(folio); 176 + 177 + /* 178 + * If the folio is not uptodate, it was fallocated but never 179 + * used. Saving this flag at prepare() doesn't work since it 180 + * might change later when someone uses the folio. 181 + * 182 + * Since we have taken the performance penalty of allocating, 183 + * zeroing, and pinning all the folios in the holes, take a bit 184 + * more and zero all non-uptodate folios too. 185 + * 186 + * NOTE: For someone looking to improve preserve performance, 187 + * this is a good place to look. 
188 + */ 189 + if (!folio_test_uptodate(folio)) { 190 + folio_zero_range(folio, 0, folio_size(folio)); 191 + flush_dcache_folio(folio); 192 + folio_mark_uptodate(folio); 193 + } 194 + 195 + folio_unlock(folio); 159 196 160 197 pfolio->pfn = folio_pfn(folio); 161 - pfolio->flags = flags; 198 + pfolio->flags = MEMFD_LUO_FOLIO_DIRTY | MEMFD_LUO_FOLIO_UPTODATE; 162 199 pfolio->index = folio->index; 163 200 } 164 201
+2 -1
mm/memory.c
··· 4763 4763 unlock_page(vmf->page); 4764 4764 put_page(vmf->page); 4765 4765 } else { 4766 - pte_unmap_unlock(vmf->pte, vmf->ptl); 4766 + pte_unmap(vmf->pte); 4767 + softleaf_entry_wait_on_locked(entry, vmf->ptl); 4767 4768 } 4768 4769 } else if (softleaf_is_hwpoison(entry)) { 4769 4770 ret = VM_FAULT_HWPOISON;
+4 -4
mm/migrate.c
··· 500 500 if (!softleaf_is_migration(entry)) 501 501 goto out; 502 502 503 - migration_entry_wait_on_locked(entry, ptl); 503 + softleaf_entry_wait_on_locked(entry, ptl); 504 504 return; 505 505 out: 506 506 spin_unlock(ptl); ··· 532 532 * If migration entry existed, safe to release vma lock 533 533 * here because the pgtable page won't be freed without the 534 534 * pgtable lock released. See comment right above pgtable 535 - * lock release in migration_entry_wait_on_locked(). 535 + * lock release in softleaf_entry_wait_on_locked(). 536 536 */ 537 537 hugetlb_vma_unlock_read(vma); 538 - migration_entry_wait_on_locked(entry, ptl); 538 + softleaf_entry_wait_on_locked(entry, ptl); 539 539 return; 540 540 } 541 541 ··· 553 553 ptl = pmd_lock(mm, pmd); 554 554 if (!pmd_is_migration_entry(*pmd)) 555 555 goto unlock; 556 - migration_entry_wait_on_locked(softleaf_from_pmd(*pmd), ptl); 556 + softleaf_entry_wait_on_locked(softleaf_from_pmd(*pmd), ptl); 557 557 return; 558 558 unlock: 559 559 spin_unlock(ptl);
+1 -1
mm/migrate_device.c
··· 176 176 } 177 177 178 178 if (softleaf_is_migration(entry)) { 179 - migration_entry_wait_on_locked(entry, ptl); 179 + softleaf_entry_wait_on_locked(entry, ptl); 180 180 spin_unlock(ptl); 181 181 return -EAGAIN; 182 182 }
+5 -5
mm/slab.h
··· 59 59 * to save memory. In case ->stride field is not available, 60 60 * such optimizations are disabled. 61 61 */ 62 - unsigned short stride; 62 + unsigned int stride; 63 63 #endif 64 64 }; 65 65 }; ··· 559 559 } 560 560 561 561 #ifdef CONFIG_64BIT 562 - static inline void slab_set_stride(struct slab *slab, unsigned short stride) 562 + static inline void slab_set_stride(struct slab *slab, unsigned int stride) 563 563 { 564 564 slab->stride = stride; 565 565 } 566 - static inline unsigned short slab_get_stride(struct slab *slab) 566 + static inline unsigned int slab_get_stride(struct slab *slab) 567 567 { 568 568 return slab->stride; 569 569 } 570 570 #else 571 - static inline void slab_set_stride(struct slab *slab, unsigned short stride) 571 + static inline void slab_set_stride(struct slab *slab, unsigned int stride) 572 572 { 573 573 VM_WARN_ON_ONCE(stride != sizeof(struct slabobj_ext)); 574 574 } 575 - static inline unsigned short slab_get_stride(struct slab *slab) 575 + static inline unsigned int slab_get_stride(struct slab *slab) 576 576 { 577 577 return sizeof(struct slabobj_ext); 578 578 }
+47 -22
mm/slub.c
··· 2858 2858 * object pointers are moved to a on-stack array under the lock. To bound the 2859 2859 * stack usage, limit each batch to PCS_BATCH_MAX. 2860 2860 * 2861 - * returns true if at least partially flushed 2861 + * Must be called with s->cpu_sheaves->lock locked, returns with the lock 2862 + * unlocked. 2863 + * 2864 + * Returns how many objects are remaining to be flushed 2862 2865 */ 2863 - static bool sheaf_flush_main(struct kmem_cache *s) 2866 + static unsigned int __sheaf_flush_main_batch(struct kmem_cache *s) 2864 2867 { 2865 2868 struct slub_percpu_sheaves *pcs; 2866 2869 unsigned int batch, remaining; 2867 2870 void *objects[PCS_BATCH_MAX]; 2868 2871 struct slab_sheaf *sheaf; 2869 - bool ret = false; 2870 2872 2871 - next_batch: 2872 - if (!local_trylock(&s->cpu_sheaves->lock)) 2873 - return ret; 2873 + lockdep_assert_held(this_cpu_ptr(&s->cpu_sheaves->lock)); 2874 2874 2875 2875 pcs = this_cpu_ptr(s->cpu_sheaves); 2876 2876 sheaf = pcs->main; ··· 2888 2888 2889 2889 stat_add(s, SHEAF_FLUSH, batch); 2890 2890 2891 - ret = true; 2891 + return remaining; 2892 + } 2892 2893 2893 - if (remaining) 2894 - goto next_batch; 2894 + static void sheaf_flush_main(struct kmem_cache *s) 2895 + { 2896 + unsigned int remaining; 2897 + 2898 + do { 2899 + local_lock(&s->cpu_sheaves->lock); 2900 + 2901 + remaining = __sheaf_flush_main_batch(s); 2902 + 2903 + } while (remaining); 2904 + } 2905 + 2906 + /* 2907 + * Returns true if the main sheaf was at least partially flushed. 
2908 + */ 2909 + static bool sheaf_try_flush_main(struct kmem_cache *s) 2910 + { 2911 + unsigned int remaining; 2912 + bool ret = false; 2913 + 2914 + do { 2915 + if (!local_trylock(&s->cpu_sheaves->lock)) 2916 + return ret; 2917 + 2918 + ret = true; 2919 + remaining = __sheaf_flush_main_batch(s); 2920 + 2921 + } while (remaining); 2895 2922 2896 2923 return ret; 2897 2924 } ··· 4567 4540 struct slab_sheaf *empty = NULL; 4568 4541 struct slab_sheaf *full; 4569 4542 struct node_barn *barn; 4570 - bool can_alloc; 4543 + bool allow_spin; 4571 4544 4572 4545 lockdep_assert_held(this_cpu_ptr(&s->cpu_sheaves->lock)); 4573 4546 ··· 4588 4561 return NULL; 4589 4562 } 4590 4563 4591 - full = barn_replace_empty_sheaf(barn, pcs->main, 4592 - gfpflags_allow_spinning(gfp)); 4564 + allow_spin = gfpflags_allow_spinning(gfp); 4565 + 4566 + full = barn_replace_empty_sheaf(barn, pcs->main, allow_spin); 4593 4567 4594 4568 if (full) { 4595 4569 stat(s, BARN_GET); ··· 4600 4572 4601 4573 stat(s, BARN_GET_FAIL); 4602 4574 4603 - can_alloc = gfpflags_allow_blocking(gfp); 4604 - 4605 - if (can_alloc) { 4575 + if (allow_spin) { 4606 4576 if (pcs->spare) { 4607 4577 empty = pcs->spare; 4608 4578 pcs->spare = NULL; ··· 4610 4584 } 4611 4585 4612 4586 local_unlock(&s->cpu_sheaves->lock); 4587 + pcs = NULL; 4613 4588 4614 - if (!can_alloc) 4589 + if (!allow_spin) 4615 4590 return NULL; 4616 4591 4617 4592 if (empty) { ··· 4632 4605 if (!full) 4633 4606 return NULL; 4634 4607 4635 - /* 4636 - * we can reach here only when gfpflags_allow_blocking 4637 - * so this must not be an irq 4638 - */ 4639 - local_lock(&s->cpu_sheaves->lock); 4608 + if (!local_trylock(&s->cpu_sheaves->lock)) 4609 + goto barn_put; 4640 4610 pcs = this_cpu_ptr(s->cpu_sheaves); 4641 4611 4642 4612 /* ··· 4664 4640 return pcs; 4665 4641 } 4666 4642 4643 + barn_put: 4667 4644 barn_put_full_sheaf(barn, full); 4668 4645 stat(s, BARN_PUT); 4669 4646 ··· 5729 5704 if (put_fail) 5730 5705 stat(s, BARN_PUT_FAIL); 5731 5706 5732 - 
if (!sheaf_flush_main(s)) 5707 + if (!sheaf_try_flush_main(s)) 5733 5708 return NULL; 5734 5709 5735 5710 if (!local_trylock(&s->cpu_sheaves->lock))
-35
net/core/dev.h
··· 366 366 367 367 void kick_defer_list_purge(unsigned int cpu); 368 368 369 - #define XMIT_RECURSION_LIMIT 8 370 - 371 - #ifndef CONFIG_PREEMPT_RT 372 - static inline bool dev_xmit_recursion(void) 373 - { 374 - return unlikely(__this_cpu_read(softnet_data.xmit.recursion) > 375 - XMIT_RECURSION_LIMIT); 376 - } 377 - 378 - static inline void dev_xmit_recursion_inc(void) 379 - { 380 - __this_cpu_inc(softnet_data.xmit.recursion); 381 - } 382 - 383 - static inline void dev_xmit_recursion_dec(void) 384 - { 385 - __this_cpu_dec(softnet_data.xmit.recursion); 386 - } 387 - #else 388 - static inline bool dev_xmit_recursion(void) 389 - { 390 - return unlikely(current->net_xmit.recursion > XMIT_RECURSION_LIMIT); 391 - } 392 - 393 - static inline void dev_xmit_recursion_inc(void) 394 - { 395 - current->net_xmit.recursion++; 396 - } 397 - 398 - static inline void dev_xmit_recursion_dec(void) 399 - { 400 - current->net_xmit.recursion--; 401 - } 402 - #endif 403 - 404 369 int dev_set_hwtstamp_phylib(struct net_device *dev, 405 370 struct kernel_hwtstamp_config *cfg, 406 371 struct netlink_ext_ack *extack);
+7
net/core/filter.c
··· 2228 2228 return -ENOMEM; 2229 2229 } 2230 2230 2231 + if (unlikely(!ipv6_mod_enabled())) 2232 + goto out_drop; 2233 + 2231 2234 rcu_read_lock(); 2232 2235 if (!nh) { 2233 2236 dst = skb_dst(skb); ··· 2338 2335 2339 2336 neigh = ip_neigh_for_gw(rt, skb, &is_v6gw); 2340 2337 } else if (nh->nh_family == AF_INET6) { 2338 + if (unlikely(!ipv6_mod_enabled())) { 2339 + rcu_read_unlock(); 2340 + goto out_drop; 2341 + } 2341 2342 neigh = ip_neigh_gw6(dev, &nh->ipv6_nh); 2342 2343 is_v6gw = true; 2343 2344 } else if (nh->nh_family == AF_INET) {
+2 -1
net/core/neighbour.c
··· 820 820 update: 821 821 WRITE_ONCE(n->flags, flags); 822 822 n->permanent = permanent; 823 - WRITE_ONCE(n->protocol, protocol); 823 + if (protocol) 824 + WRITE_ONCE(n->protocol, protocol); 824 825 out: 825 826 mutex_unlock(&tbl->phash_lock); 826 827 return err;
+2 -2
net/core/page_pool_user.c
··· 245 245 goto err_cancel; 246 246 if (pool->user.detach_time && 247 247 nla_put_uint(rsp, NETDEV_A_PAGE_POOL_DETACH_TIME, 248 - pool->user.detach_time)) 248 + ktime_divns(pool->user.detach_time, NSEC_PER_SEC))) 249 249 goto err_cancel; 250 250 251 251 if (pool->mp_ops && pool->mp_ops->nl_fill(pool->mp_priv, rsp, NULL)) ··· 337 337 void page_pool_detached(struct page_pool *pool) 338 338 { 339 339 mutex_lock(&page_pools_lock); 340 - pool->user.detach_time = ktime_get_boottime_seconds(); 340 + pool->user.detach_time = ktime_get_boottime(); 341 341 netdev_nl_page_pool_event(pool, NETDEV_CMD_PAGE_POOL_CHANGE_NTF); 342 342 mutex_unlock(&page_pools_lock); 343 343 }
+6
net/ipv4/af_inet.c
··· 124 124 125 125 #include <trace/events/sock.h> 126 126 127 + /* Keep the definition of IPv6 disable here for now, to avoid annoying linker 128 + * issues in case IPv6=m 129 + */ 130 + int disable_ipv6_mod; 131 + EXPORT_SYMBOL(disable_ipv6_mod); 132 + 127 133 /* The inetsw table contains everything that inet_create needs to 128 134 * build a new socket. 129 135 */
+15
net/ipv4/ip_tunnel_core.c
··· 58 58 struct iphdr *iph; 59 59 int err; 60 60 61 + if (unlikely(dev_recursion_level() > IP_TUNNEL_RECURSION_LIMIT)) { 62 + if (dev) { 63 + net_crit_ratelimited("Dead loop on virtual device %s, fix it urgently!\n", 64 + dev->name); 65 + DEV_STATS_INC(dev, tx_errors); 66 + } 67 + ip_rt_put(rt); 68 + kfree_skb(skb); 69 + return; 70 + } 71 + 72 + dev_xmit_recursion_inc(); 73 + 61 74 skb_scrub_packet(skb, xnet); 62 75 63 76 skb_clear_hash_if_not_l4(skb); ··· 101 88 pkt_len = 0; 102 89 iptunnel_xmit_stats(dev, pkt_len); 103 90 } 91 + 92 + dev_xmit_recursion_dec(); 104 93 } 105 94 EXPORT_SYMBOL_GPL(iptunnel_xmit); 106 95
+11 -3
net/ipv4/nexthop.c
··· 2002 2002 } 2003 2003 2004 2004 static void remove_nh_grp_entry(struct net *net, struct nh_grp_entry *nhge, 2005 - struct nl_info *nlinfo) 2005 + struct nl_info *nlinfo, 2006 + struct list_head *deferred_free) 2006 2007 { 2007 2008 struct nh_grp_entry *nhges, *new_nhges; 2008 2009 struct nexthop *nhp = nhge->nh_parent; ··· 2063 2062 rcu_assign_pointer(nhp->nh_grp, newg); 2064 2063 2065 2064 list_del(&nhge->nh_list); 2066 - free_percpu(nhge->stats); 2067 2065 nexthop_put(nhge->nh); 2066 + list_add(&nhge->nh_list, deferred_free); 2068 2067 2069 2068 /* Removal of a NH from a resilient group is notified through 2070 2069 * bucket notifications. ··· 2084 2083 struct nl_info *nlinfo) 2085 2084 { 2086 2085 struct nh_grp_entry *nhge, *tmp; 2086 + LIST_HEAD(deferred_free); 2087 2087 2088 2088 /* If there is nothing to do, let's avoid the costly call to 2089 2089 * synchronize_net() ··· 2093 2091 return; 2094 2092 2095 2093 list_for_each_entry_safe(nhge, tmp, &nh->grp_list, nh_list) 2096 - remove_nh_grp_entry(net, nhge, nlinfo); 2094 + remove_nh_grp_entry(net, nhge, nlinfo, &deferred_free); 2097 2095 2098 2096 /* make sure all see the newly published array before releasing rtnl */ 2099 2097 synchronize_net(); 2098 + 2099 + /* Now safe to free percpu stats - all RCU readers have finished */ 2100 + list_for_each_entry_safe(nhge, tmp, &deferred_free, nh_list) { 2101 + list_del(&nhge->nh_list); 2102 + free_percpu(nhge->stats); 2103 + } 2100 2104 } 2101 2105 2102 2106 static void remove_nexthop_group(struct nexthop *nh, struct nl_info *nlinfo)
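The nexthop hunk above stops freeing the per-CPU stats inline: entries are unlinked onto a deferred list, `synchronize_net()` waits out concurrent readers, and only then are the stats released. The just-unlinked node's own list linkage is reused to carry it on the deferred list. A simplified userspace sketch of that unlink-then-defer shape (illustrative names; a singly linked list stands in for `list_head`, and a comment marks where the grace period would sit):

```c
#include <assert.h>
#include <stdlib.h>

struct grp_entry {
	struct grp_entry *next;
	int *stats; /* stand-in for the percpu stats block */
};

static int frees;

static void free_stats(int *s)
{
	free(s);
	frees++;
}

/* Unlink every entry from the live list onto a deferred list, then free
 * the stats only after the (simulated) grace period, so readers that
 * looked up an entry before the unlink never see a freed stats pointer. */
static void remove_all(struct grp_entry **live, struct grp_entry **deferred)
{
	struct grp_entry *e;

	while ((e = *live)) {
		*live = e->next;
		e->next = *deferred; /* reuse the linkage for deferral */
		*deferred = e;
	}

	/* ... synchronize_net() would run here in the kernel ... */

	while ((e = *deferred)) {
		*deferred = e->next;
		free_stats(e->stats);
		free(e);
	}
}
```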
-8
net/ipv6/af_inet6.c
··· 86 86 .autoconf = 1, 87 87 }; 88 88 89 - static int disable_ipv6_mod; 90 - 91 89 module_param_named(disable, disable_ipv6_mod, int, 0444); 92 90 MODULE_PARM_DESC(disable, "Disable IPv6 module such that it is non-functional"); 93 91 ··· 94 96 95 97 module_param_named(autoconf, ipv6_defaults.autoconf, int, 0444); 96 98 MODULE_PARM_DESC(autoconf, "Enable IPv6 address autoconfiguration on all interfaces"); 97 - 98 - bool ipv6_mod_enabled(void) 99 - { 100 - return disable_ipv6_mod == 0; 101 - } 102 - EXPORT_SYMBOL_GPL(ipv6_mod_enabled); 103 99 104 100 static struct ipv6_pinfo *inet6_sk_generic(struct sock *sk) 105 101 {
+8 -5
net/mctp/route.c
··· 359 359 { 360 360 struct mctp_sk_key *key; 361 361 struct mctp_flow *flow; 362 + unsigned long flags; 362 363 363 364 flow = skb_ext_find(skb, SKB_EXT_MCTP); 364 365 if (!flow) ··· 367 366 368 367 key = flow->key; 369 368 370 - if (key->dev) { 371 - WARN_ON(key->dev != dev); 372 - return; 373 - } 369 + spin_lock_irqsave(&key->lock, flags); 374 370 375 - mctp_dev_set_key(dev, key); 371 + if (!key->dev) 372 + mctp_dev_set_key(dev, key); 373 + else 374 + WARN_ON(key->dev != dev); 375 + 376 + spin_unlock_irqrestore(&key->lock, flags); 376 377 } 377 378 #else 378 379 static void mctp_skb_set_flow(struct sk_buff *skb, struct mctp_sk_key *key) {}
+2 -1
net/ncsi/ncsi-aen.c
··· 224 224 if (!nah) { 225 225 netdev_warn(ndp->ndev.dev, "Invalid AEN (0x%x) received\n", 226 226 h->type); 227 - return -ENOENT; 227 + ret = -ENOENT; 228 + goto out; 228 229 } 229 230 230 231 ret = ncsi_validate_aen_pkt(h, nah->payload);
+12 -4
net/ncsi/ncsi-rsp.c
··· 1176 1176 /* Find the NCSI device */ 1177 1177 nd = ncsi_find_dev(orig_dev); 1178 1178 ndp = nd ? TO_NCSI_DEV_PRIV(nd) : NULL; 1179 - if (!ndp) 1180 - return -ENODEV; 1179 + if (!ndp) { 1180 + ret = -ENODEV; 1181 + goto err_free_skb; 1182 + } 1181 1183 1182 1184 /* Check if it is AEN packet */ 1183 1185 hdr = (struct ncsi_pkt_hdr *)skb_network_header(skb); ··· 1201 1199 if (!nrh) { 1202 1200 netdev_err(nd->dev, "Received unrecognized packet (0x%x)\n", 1203 1201 hdr->type); 1204 - return -ENOENT; 1202 + ret = -ENOENT; 1203 + goto err_free_skb; 1205 1204 } 1206 1205 1207 1206 /* Associate with the request */ ··· 1210 1207 nr = &ndp->requests[hdr->id]; 1211 1208 if (!nr->used) { 1212 1209 spin_unlock_irqrestore(&ndp->lock, flags); 1213 - return -ENODEV; 1210 + ret = -ENODEV; 1211 + goto err_free_skb; 1214 1212 } 1215 1213 1216 1214 nr->rsp = skb; ··· 1264 1260 1265 1261 out: 1266 1262 ncsi_free_request(nr); 1263 + return ret; 1264 + 1265 + err_free_skb: 1266 + kfree_skb(skb); 1267 1267 return ret; 1268 1268 }
+1 -3
net/netfilter/nf_tables_api.c
··· 829 829 830 830 nft_set_elem_change_active(ctx->net, set, ext); 831 831 nft_setelem_data_deactivate(ctx->net, set, catchall->elem); 832 - break; 833 832 } 834 833 } 835 834 ··· 5817 5818 5818 5819 nft_clear(ctx->net, ext); 5819 5820 nft_setelem_data_activate(ctx->net, set, catchall->elem); 5820 - break; 5821 5821 } 5822 5822 } 5823 5823 ··· 9625 9627 break; 9626 9628 case NETDEV_REGISTER: 9627 9629 /* NOP if not matching or already registered */ 9628 - if (!match || (changename && ops)) 9630 + if (!match || ops) 9629 9631 continue; 9630 9632 9631 9633 ops = kzalloc_obj(struct nf_hook_ops,
+1 -1
net/netfilter/nft_chain_filter.c
··· 345 345 break; 346 346 case NETDEV_REGISTER: 347 347 /* NOP if not matching or already registered */ 348 - if (!match || (changename && ops)) 348 + if (!match || ops) 349 349 continue; 350 350 351 351 ops = kmemdup(&basechain->ops,
+2 -1
net/netfilter/nft_set_pipapo.c
··· 1640 1640 int i; 1641 1641 1642 1642 nft_pipapo_for_each_field(f, i, m) { 1643 + bool last = i == m->field_count - 1; 1643 1644 int g; 1644 1645 1645 1646 for (g = 0; g < f->groups; g++) { ··· 1660 1659 } 1661 1660 1662 1661 pipapo_unmap(f->mt, f->rules, rulemap[i].to, rulemap[i].n, 1663 - rulemap[i + 1].n, i == m->field_count - 1); 1662 + last ? 0 : rulemap[i + 1].n, last); 1664 1663 if (pipapo_resize(f, f->rules, f->rules - rulemap[i].n)) { 1665 1664 /* We can ignore this, a failure to shrink tables down 1666 1665 * doesn't make tables invalid.
+6
net/netfilter/xt_IDLETIMER.c
··· 318 318 319 319 info->timer = __idletimer_tg_find_by_label(info->label); 320 320 if (info->timer) { 321 + if (info->timer->timer_type & XT_IDLETIMER_ALARM) { 322 + pr_debug("Adding/Replacing rule with same label and different timer type is not allowed\n"); 323 + mutex_unlock(&list_mutex); 324 + return -EINVAL; 325 + } 326 + 321 327 info->timer->refcnt++; 322 328 mod_timer(&info->timer->timer, 323 329 secs_to_jiffies(info->timeout) + jiffies);
+2 -2
net/netfilter/xt_dccp.c
··· 62 62 return true; 63 63 } 64 64 65 - if (op[i] < 2) 65 + if (op[i] < 2 || i == optlen - 1) 66 66 i++; 67 67 else 68 - i += op[i+1]?:1; 68 + i += op[i + 1] ? : 1; 69 69 } 70 70 71 71 spin_unlock_bh(&dccp_buflock);
+4 -2
net/netfilter/xt_tcpudp.c
··· 59 59 60 60 for (i = 0; i < optlen; ) { 61 61 if (op[i] == option) return !invert; 62 - if (op[i] < 2) i++; 63 - else i += op[i+1]?:1; 62 + if (op[i] < 2 || i == optlen - 1) 63 + i++; 64 + else 65 + i += op[i + 1] ? : 1; 64 66 } 65 67 66 68 return invert;
+5 -3
net/rxrpc/af_rxrpc.c
··· 267 267 * Lookup or create a remote transport endpoint record for the specified 268 268 * address. 269 269 * 270 - * Return: The peer record found with a reference, %NULL if no record is found 271 - * or a negative error code if the address is invalid or unsupported. 270 + * Return: The peer record found with a reference or a negative error code if 271 + * the address is invalid or unsupported. 272 272 */ 273 273 struct rxrpc_peer *rxrpc_kernel_lookup_peer(struct socket *sock, 274 274 struct sockaddr_rxrpc *srx, gfp_t gfp) 275 275 { 276 + struct rxrpc_peer *peer; 276 277 struct rxrpc_sock *rx = rxrpc_sk(sock->sk); 277 278 int ret; 278 279 ··· 281 280 if (ret < 0) 282 281 return ERR_PTR(ret); 283 282 284 - return rxrpc_lookup_peer(rx->local, srx, gfp); 283 + peer = rxrpc_lookup_peer(rx->local, srx, gfp); 284 + return peer ?: ERR_PTR(-ENOMEM); 285 285 } 286 286 EXPORT_SYMBOL(rxrpc_kernel_lookup_peer); 287 287
+1
net/sched/sch_teql.c
··· 315 315 if (__netif_tx_trylock(slave_txq)) { 316 316 unsigned int length = qdisc_pkt_len(skb); 317 317 318 + skb->dev = slave; 318 319 if (!netif_xmit_frozen_or_stopped(slave_txq) && 319 320 netdev_start_xmit(skb, slave, slave_txq, false) == 320 321 NETDEV_TX_OK) {
+2 -9
net/shaper/shaper.c
··· 759 759 if (ret) 760 760 goto free_msg; 761 761 762 - ret = genlmsg_reply(msg, info); 763 - if (ret) 764 - goto free_msg; 765 - 766 - return 0; 762 + return genlmsg_reply(msg, info); 767 763 768 764 free_msg: 769 765 nlmsg_free(msg); ··· 1309 1313 if (ret) 1310 1314 goto free_msg; 1311 1315 1312 - ret = genlmsg_reply(msg, info); 1313 - if (ret) 1314 - goto free_msg; 1315 - return 0; 1316 + return genlmsg_reply(msg, info); 1316 1317 1317 1318 free_msg: 1318 1319 nlmsg_free(msg);
+2
net/tipc/socket.c
··· 2233 2233 if (skb_queue_empty(&sk->sk_write_queue)) 2234 2234 break; 2235 2235 get_random_bytes(&delay, 2); 2236 + if (tsk->conn_timeout < 4) 2237 + tsk->conn_timeout = 4; 2236 2238 delay %= (tsk->conn_timeout / 4); 2237 2239 delay = msecs_to_jiffies(delay + 100); 2238 2240 sk_reset_timer(sk, &sk->sk_timer, jiffies + delay);
+8
rust/kernel/kunit.rs
··· 14 14 /// Public but hidden since it should only be used from KUnit generated code. 15 15 #[doc(hidden)] 16 16 pub fn err(args: fmt::Arguments<'_>) { 17 + // `args` is unused if `CONFIG_PRINTK` is not set - this avoids a build-time warning. 18 + #[cfg(not(CONFIG_PRINTK))] 19 + let _ = args; 20 + 17 21 // SAFETY: The format string is null-terminated and the `%pA` specifier matches the argument we 18 22 // are passing. 19 23 #[cfg(CONFIG_PRINTK)] ··· 34 30 /// Public but hidden since it should only be used from KUnit generated code. 35 31 #[doc(hidden)] 36 32 pub fn info(args: fmt::Arguments<'_>) { 33 + // `args` is unused if `CONFIG_PRINTK` is not set - this avoids a build-time warning. 34 + #[cfg(not(CONFIG_PRINTK))] 35 + let _ = args; 36 + 37 37 // SAFETY: The format string is null-terminated and the `%pA` specifier matches the argument we 38 38 // are passing. 39 39 #[cfg(CONFIG_PRINTK)]
+2 -2
scripts/genksyms/parse.y
··· 325 325 { $$ = $4; } 326 326 | direct_declarator BRACKET_PHRASE 327 327 { $$ = $2; } 328 - | '(' declarator ')' 329 - { $$ = $3; } 328 + | '(' attribute_opt declarator ')' 329 + { $$ = $4; } 330 330 ; 331 331 332 332 /* Nested declarators differ from regular declarators in that they do
+4
scripts/package/install-extmod-build
··· 32 32 echo tools/objtool/objtool 33 33 fi 34 34 35 + if is_enabled CONFIG_DEBUG_INFO_BTF_MODULES; then 36 + echo tools/bpf/resolve_btfids/resolve_btfids 37 + fi 38 + 35 39 echo Module.symvers 36 40 echo "arch/${SRCARCH}/include/generated" 37 41 echo include/config/auto.conf
+134 -91
security/apparmor/apparmorfs.c
··· 32 32 #include "include/crypto.h" 33 33 #include "include/ipc.h" 34 34 #include "include/label.h" 35 + #include "include/lib.h" 35 36 #include "include/policy.h" 36 37 #include "include/policy_ns.h" 37 38 #include "include/resource.h" ··· 63 62 * securityfs and apparmorfs filesystems. 64 63 */ 65 64 65 + #define IREF_POISON 101 66 66 67 67 /* 68 68 * support fns ··· 81 79 if (!private) 82 80 return; 83 81 84 - aa_put_loaddata(private->loaddata); 82 + aa_put_i_loaddata(private->loaddata); 85 83 kvfree(private); 86 84 } 87 85 ··· 155 153 return 0; 156 154 } 157 155 156 + static struct aa_ns *get_ns_common_ref(struct aa_common_ref *ref) 157 + { 158 + if (ref) { 159 + struct aa_label *reflabel = container_of(ref, struct aa_label, 160 + count); 161 + return aa_get_ns(labels_ns(reflabel)); 162 + } 163 + 164 + return NULL; 165 + } 166 + 167 + static struct aa_proxy *get_proxy_common_ref(struct aa_common_ref *ref) 168 + { 169 + if (ref) 170 + return aa_get_proxy(container_of(ref, struct aa_proxy, count)); 171 + 172 + return NULL; 173 + } 174 + 175 + static struct aa_loaddata *get_loaddata_common_ref(struct aa_common_ref *ref) 176 + { 177 + if (ref) 178 + return aa_get_i_loaddata(container_of(ref, struct aa_loaddata, 179 + count)); 180 + return NULL; 181 + } 182 + 183 + static void aa_put_common_ref(struct aa_common_ref *ref) 184 + { 185 + if (!ref) 186 + return; 187 + 188 + switch (ref->reftype) { 189 + case REF_RAWDATA: 190 + aa_put_i_loaddata(container_of(ref, struct aa_loaddata, 191 + count)); 192 + break; 193 + case REF_PROXY: 194 + aa_put_proxy(container_of(ref, struct aa_proxy, 195 + count)); 196 + break; 197 + case REF_NS: 198 + /* ns count is held on its unconfined label */ 199 + aa_put_ns(labels_ns(container_of(ref, struct aa_label, count))); 200 + break; 201 + default: 202 + AA_BUG(true, "unknown refcount type"); 203 + break; 204 + } 205 + } 206 + 207 + static void aa_get_common_ref(struct aa_common_ref *ref) 208 + { 209 + kref_get(&ref->count); 210 + } 211 + 
212 + static void aafs_evict(struct inode *inode) 213 + { 214 + struct aa_common_ref *ref = inode->i_private; 215 + 216 + clear_inode(inode); 217 + aa_put_common_ref(ref); 218 + inode->i_private = (void *) IREF_POISON; 219 + } 220 + 158 221 static void aafs_free_inode(struct inode *inode) 159 222 { 160 223 if (S_ISLNK(inode->i_mode)) ··· 229 162 230 163 static const struct super_operations aafs_super_ops = { 231 164 .statfs = simple_statfs, 165 + .evict_inode = aafs_evict, 232 166 .free_inode = aafs_free_inode, 233 167 .show_path = aafs_show_path, 234 168 }; ··· 330 262 * aafs_remove(). Will return ERR_PTR on failure. 331 263 */ 332 264 static struct dentry *aafs_create(const char *name, umode_t mode, 333 - struct dentry *parent, void *data, void *link, 265 + struct dentry *parent, 266 + struct aa_common_ref *data, void *link, 334 267 const struct file_operations *fops, 335 268 const struct inode_operations *iops) 336 269 { ··· 368 299 goto fail_dentry; 369 300 inode_unlock(dir); 370 301 302 + if (data) 303 + aa_get_common_ref(data); 304 + 371 305 return dentry; 372 306 373 307 fail_dentry: ··· 395 323 * see aafs_create 396 324 */ 397 325 static struct dentry *aafs_create_file(const char *name, umode_t mode, 398 - struct dentry *parent, void *data, 326 + struct dentry *parent, 327 + struct aa_common_ref *data, 399 328 const struct file_operations *fops) 400 329 { 401 330 return aafs_create(name, mode, parent, data, NULL, fops, NULL); ··· 482 409 483 410 data->size = copy_size; 484 411 if (copy_from_user(data->data, userbuf, copy_size)) { 485 - aa_put_loaddata(data); 412 + /* trigger free - don't need to put pcount */ 413 + aa_put_i_loaddata(data); 486 414 return ERR_PTR(-EFAULT); 487 415 } 488 416 ··· 491 417 } 492 418 493 419 static ssize_t policy_update(u32 mask, const char __user *buf, size_t size, 494 - loff_t *pos, struct aa_ns *ns) 420 + loff_t *pos, struct aa_ns *ns, 421 + const struct cred *ocred) 495 422 { 496 423 struct aa_loaddata *data; 497 424 struct 
aa_label *label; ··· 503 428 /* high level check about policy management - fine grained in 504 429 * below after unpack 505 430 */ 506 - error = aa_may_manage_policy(current_cred(), label, ns, mask); 431 + error = aa_may_manage_policy(current_cred(), label, ns, ocred, mask); 507 432 if (error) 508 433 goto end_section; 509 434 ··· 511 436 error = PTR_ERR(data); 512 437 if (!IS_ERR(data)) { 513 438 error = aa_replace_profiles(ns, label, mask, data); 514 - aa_put_loaddata(data); 439 + /* put pcount, which will put count and free if no 440 + * profiles referencing it. 441 + */ 442 + aa_put_profile_loaddata(data); 515 443 } 516 444 end_section: 517 445 end_current_label_crit_section(label); ··· 526 448 static ssize_t profile_load(struct file *f, const char __user *buf, size_t size, 527 449 loff_t *pos) 528 450 { 529 - struct aa_ns *ns = aa_get_ns(f->f_inode->i_private); 530 - int error = policy_update(AA_MAY_LOAD_POLICY, buf, size, pos, ns); 451 + struct aa_ns *ns = get_ns_common_ref(f->f_inode->i_private); 452 + int error = policy_update(AA_MAY_LOAD_POLICY, buf, size, pos, ns, 453 + f->f_cred); 531 454 532 455 aa_put_ns(ns); 533 456 ··· 544 465 static ssize_t profile_replace(struct file *f, const char __user *buf, 545 466 size_t size, loff_t *pos) 546 467 { 547 - struct aa_ns *ns = aa_get_ns(f->f_inode->i_private); 468 + struct aa_ns *ns = get_ns_common_ref(f->f_inode->i_private); 548 469 int error = policy_update(AA_MAY_LOAD_POLICY | AA_MAY_REPLACE_POLICY, 549 - buf, size, pos, ns); 470 + buf, size, pos, ns, f->f_cred); 550 471 aa_put_ns(ns); 551 472 552 473 return error; ··· 564 485 struct aa_loaddata *data; 565 486 struct aa_label *label; 566 487 ssize_t error; 567 - struct aa_ns *ns = aa_get_ns(f->f_inode->i_private); 488 + struct aa_ns *ns = get_ns_common_ref(f->f_inode->i_private); 568 489 569 490 label = begin_current_label_crit_section(); 570 491 /* high level check about policy management - fine grained in 571 492 * below after unpack 572 493 */ 573 494 error 
= aa_may_manage_policy(current_cred(), label, ns, 574 - AA_MAY_REMOVE_POLICY); 495 + f->f_cred, AA_MAY_REMOVE_POLICY); 575 496 if (error) 576 497 goto out; 577 498 ··· 585 506 if (!IS_ERR(data)) { 586 507 data->data[size] = 0; 587 508 error = aa_remove_profiles(ns, label, data->data, size); 588 - aa_put_loaddata(data); 509 + aa_put_profile_loaddata(data); 589 510 } 590 511 out: 591 512 end_current_label_crit_section(label); ··· 654 575 if (!rev) 655 576 return -ENOMEM; 656 577 657 - rev->ns = aa_get_ns(inode->i_private); 578 + rev->ns = get_ns_common_ref(inode->i_private); 658 579 if (!rev->ns) 659 580 rev->ns = aa_get_current_ns(); 660 581 file->private_data = rev; ··· 1140 1061 static int seq_profile_open(struct inode *inode, struct file *file, 1141 1062 int (*show)(struct seq_file *, void *)) 1142 1063 { 1143 - struct aa_proxy *proxy = aa_get_proxy(inode->i_private); 1064 + struct aa_proxy *proxy = get_proxy_common_ref(inode->i_private); 1144 1065 int error = single_open(file, show, proxy); 1145 1066 1146 1067 if (error) { ··· 1332 1253 static int seq_rawdata_open(struct inode *inode, struct file *file, 1333 1254 int (*show)(struct seq_file *, void *)) 1334 1255 { 1335 - struct aa_loaddata *data = __aa_get_loaddata(inode->i_private); 1256 + struct aa_loaddata *data = get_loaddata_common_ref(inode->i_private); 1336 1257 int error; 1337 1258 1338 1259 if (!data) 1339 - /* lost race this ent is being reaped */ 1340 1260 return -ENOENT; 1341 1261 1342 1262 error = single_open(file, show, data); 1343 1263 if (error) { 1344 1264 AA_BUG(file->private_data && 1345 1265 ((struct seq_file *)file->private_data)->private); 1346 - aa_put_loaddata(data); 1266 + aa_put_i_loaddata(data); 1347 1267 } 1348 1268 1349 1269 return error; ··· 1353 1275 struct seq_file *seq = (struct seq_file *) file->private_data; 1354 1276 1355 1277 if (seq) 1356 - aa_put_loaddata(seq->private); 1278 + aa_put_i_loaddata(seq->private); 1357 1279 1358 1280 return single_release(inode, file); 1359 1281 
} ··· 1465 1387 if (!aa_current_policy_view_capable(NULL)) 1466 1388 return -EACCES; 1467 1389 1468 - loaddata = __aa_get_loaddata(inode->i_private); 1390 + loaddata = get_loaddata_common_ref(inode->i_private); 1469 1391 if (!loaddata) 1470 - /* lost race: this entry is being reaped */ 1471 1392 return -ENOENT; 1472 1393 1473 1394 private = rawdata_f_data_alloc(loaddata->size); ··· 1491 1414 return error; 1492 1415 1493 1416 fail_private_alloc: 1494 - aa_put_loaddata(loaddata); 1417 + aa_put_i_loaddata(loaddata); 1495 1418 return error; 1496 1419 } 1497 1420 ··· 1508 1431 1509 1432 for (i = 0; i < AAFS_LOADDATA_NDENTS; i++) { 1510 1433 if (!IS_ERR_OR_NULL(rawdata->dents[i])) { 1511 - /* no refcounts on i_private */ 1512 1434 aafs_remove(rawdata->dents[i]); 1513 1435 rawdata->dents[i] = NULL; 1514 1436 } ··· 1550 1474 return PTR_ERR(dir); 1551 1475 rawdata->dents[AAFS_LOADDATA_DIR] = dir; 1552 1476 1553 - dent = aafs_create_file("abi", S_IFREG | 0444, dir, rawdata, 1477 + dent = aafs_create_file("abi", S_IFREG | 0444, dir, &rawdata->count, 1554 1478 &seq_rawdata_abi_fops); 1555 1479 if (IS_ERR(dent)) 1556 1480 goto fail; 1557 1481 rawdata->dents[AAFS_LOADDATA_ABI] = dent; 1558 1482 1559 - dent = aafs_create_file("revision", S_IFREG | 0444, dir, rawdata, 1560 - &seq_rawdata_revision_fops); 1483 + dent = aafs_create_file("revision", S_IFREG | 0444, dir, 1484 + &rawdata->count, 1485 + &seq_rawdata_revision_fops); 1561 1486 if (IS_ERR(dent)) 1562 1487 goto fail; 1563 1488 rawdata->dents[AAFS_LOADDATA_REVISION] = dent; 1564 1489 1565 1490 if (aa_g_hash_policy) { 1566 1491 dent = aafs_create_file("sha256", S_IFREG | 0444, dir, 1567 - rawdata, &seq_rawdata_hash_fops); 1492 + &rawdata->count, 1493 + &seq_rawdata_hash_fops); 1568 1494 if (IS_ERR(dent)) 1569 1495 goto fail; 1570 1496 rawdata->dents[AAFS_LOADDATA_HASH] = dent; 1571 1497 } 1572 1498 1573 1499 dent = aafs_create_file("compressed_size", S_IFREG | 0444, dir, 1574 - rawdata, 1500 + &rawdata->count, 1575 1501 
&seq_rawdata_compressed_size_fops); 1576 1502 if (IS_ERR(dent)) 1577 1503 goto fail; 1578 1504 rawdata->dents[AAFS_LOADDATA_COMPRESSED_SIZE] = dent; 1579 1505 1580 - dent = aafs_create_file("raw_data", S_IFREG | 0444, 1581 - dir, rawdata, &rawdata_fops); 1506 + dent = aafs_create_file("raw_data", S_IFREG | 0444, dir, 1507 + &rawdata->count, &rawdata_fops); 1582 1508 if (IS_ERR(dent)) 1583 1509 goto fail; 1584 1510 rawdata->dents[AAFS_LOADDATA_DATA] = dent; ··· 1588 1510 1589 1511 rawdata->ns = aa_get_ns(ns); 1590 1512 list_add(&rawdata->list, &ns->rawdata_list); 1591 - /* no refcount on inode rawdata */ 1592 1513 1593 1514 return 0; 1594 1515 1595 1516 fail: 1596 1517 remove_rawdata_dents(rawdata); 1597 - 1598 1518 return PTR_ERR(dent); 1599 1519 } 1600 1520 #endif /* CONFIG_SECURITY_APPARMOR_EXPORT_BINARY */ ··· 1616 1540 __aafs_profile_rmdir(child); 1617 1541 1618 1542 for (i = AAFS_PROF_SIZEOF - 1; i >= 0; --i) { 1619 - struct aa_proxy *proxy; 1620 1543 if (!profile->dents[i]) 1621 1544 continue; 1622 1545 1623 - proxy = d_inode(profile->dents[i])->i_private; 1624 1546 aafs_remove(profile->dents[i]); 1625 - aa_put_proxy(proxy); 1626 1547 profile->dents[i] = NULL; 1627 1548 } 1628 1549 } ··· 1653 1580 struct aa_profile *profile, 1654 1581 const struct file_operations *fops) 1655 1582 { 1656 - struct aa_proxy *proxy = aa_get_proxy(profile->label.proxy); 1657 - struct dentry *dent; 1658 - 1659 - dent = aafs_create_file(name, S_IFREG | 0444, dir, proxy, fops); 1660 - if (IS_ERR(dent)) 1661 - aa_put_proxy(proxy); 1662 - 1663 - return dent; 1583 + return aafs_create_file(name, S_IFREG | 0444, dir, &profile->label.proxy->count, fops); 1664 1584 } 1665 1585 1666 1586 #ifdef CONFIG_SECURITY_APPARMOR_EXPORT_BINARY ··· 1703 1637 struct delayed_call *done, 1704 1638 const char *name) 1705 1639 { 1706 - struct aa_proxy *proxy = inode->i_private; 1640 + struct aa_common_ref *ref = inode->i_private; 1641 + struct aa_proxy *proxy = container_of(ref, struct aa_proxy, count); 
1707 1642 struct aa_label *label; 1708 1643 struct aa_profile *profile; 1709 1644 char *target; ··· 1846 1779 if (profile->rawdata) { 1847 1780 if (aa_g_hash_policy) { 1848 1781 dent = aafs_create("raw_sha256", S_IFLNK | 0444, dir, 1849 - profile->label.proxy, NULL, NULL, 1850 - &rawdata_link_sha256_iops); 1782 + &profile->label.proxy->count, NULL, 1783 + NULL, &rawdata_link_sha256_iops); 1851 1784 if (IS_ERR(dent)) 1852 1785 goto fail; 1853 - aa_get_proxy(profile->label.proxy); 1854 1786 profile->dents[AAFS_PROF_RAW_HASH] = dent; 1855 1787 } 1856 1788 dent = aafs_create("raw_abi", S_IFLNK | 0444, dir, 1857 - profile->label.proxy, NULL, NULL, 1789 + &profile->label.proxy->count, NULL, NULL, 1858 1790 &rawdata_link_abi_iops); 1859 1791 if (IS_ERR(dent)) 1860 1792 goto fail; 1861 - aa_get_proxy(profile->label.proxy); 1862 1793 profile->dents[AAFS_PROF_RAW_ABI] = dent; 1863 1794 1864 1795 dent = aafs_create("raw_data", S_IFLNK | 0444, dir, 1865 - profile->label.proxy, NULL, NULL, 1796 + &profile->label.proxy->count, NULL, NULL, 1866 1797 &rawdata_link_data_iops); 1867 1798 if (IS_ERR(dent)) 1868 1799 goto fail; 1869 - aa_get_proxy(profile->label.proxy); 1870 1800 profile->dents[AAFS_PROF_RAW_DATA] = dent; 1871 1801 } 1872 1802 #endif /*CONFIG_SECURITY_APPARMOR_EXPORT_BINARY */ ··· 1894 1830 int error; 1895 1831 1896 1832 label = begin_current_label_crit_section(); 1897 - error = aa_may_manage_policy(current_cred(), label, NULL, 1833 + error = aa_may_manage_policy(current_cred(), label, NULL, NULL, 1898 1834 AA_MAY_LOAD_POLICY); 1899 1835 end_current_label_crit_section(label); 1900 1836 if (error) 1901 1837 return ERR_PTR(error); 1902 1838 1903 - parent = aa_get_ns(dir->i_private); 1839 + parent = get_ns_common_ref(dir->i_private); 1904 1840 AA_BUG(d_inode(ns_subns_dir(parent)) != dir); 1905 1841 1906 1842 /* we have to unlock and then relock to get locking order right ··· 1944 1880 int error; 1945 1881 1946 1882 label = begin_current_label_crit_section(); 1947 - error 
= aa_may_manage_policy(current_cred(), label, NULL, 1883 + error = aa_may_manage_policy(current_cred(), label, NULL, NULL, 1948 1884 AA_MAY_LOAD_POLICY); 1949 1885 end_current_label_crit_section(label); 1950 1886 if (error) 1951 1887 return error; 1952 1888 1953 - parent = aa_get_ns(dir->i_private); 1889 + parent = get_ns_common_ref(dir->i_private); 1954 1890 /* rmdir calls the generic securityfs functions to remove files 1955 1891 * from the apparmor dir. It is up to the apparmor ns locking 1956 1892 * to avoid races. ··· 2020 1956 2021 1957 __aa_fs_list_remove_rawdata(ns); 2022 1958 2023 - if (ns_subns_dir(ns)) { 2024 - sub = d_inode(ns_subns_dir(ns))->i_private; 2025 - aa_put_ns(sub); 2026 - } 2027 - if (ns_subload(ns)) { 2028 - sub = d_inode(ns_subload(ns))->i_private; 2029 - aa_put_ns(sub); 2030 - } 2031 - if (ns_subreplace(ns)) { 2032 - sub = d_inode(ns_subreplace(ns))->i_private; 2033 - aa_put_ns(sub); 2034 - } 2035 - if (ns_subremove(ns)) { 2036 - sub = d_inode(ns_subremove(ns))->i_private; 2037 - aa_put_ns(sub); 2038 - } 2039 - if (ns_subrevision(ns)) { 2040 - sub = d_inode(ns_subrevision(ns))->i_private; 2041 - aa_put_ns(sub); 2042 - } 2043 - 2044 1959 for (i = AAFS_NS_SIZEOF - 1; i >= 0; --i) { 2045 1960 aafs_remove(ns->dents[i]); 2046 1961 ns->dents[i] = NULL; ··· 2044 2001 return PTR_ERR(dent); 2045 2002 ns_subdata_dir(ns) = dent; 2046 2003 2047 - dent = aafs_create_file("revision", 0444, dir, ns, 2004 + dent = aafs_create_file("revision", 0444, dir, 2005 + &ns->unconfined->label.count, 2048 2006 &aa_fs_ns_revision_fops); 2049 2007 if (IS_ERR(dent)) 2050 2008 return PTR_ERR(dent); 2051 - aa_get_ns(ns); 2052 2009 ns_subrevision(ns) = dent; 2053 2010 2054 - dent = aafs_create_file(".load", 0640, dir, ns, 2055 - &aa_fs_profile_load); 2011 + dent = aafs_create_file(".load", 0640, dir, 2012 + &ns->unconfined->label.count, 2013 + &aa_fs_profile_load); 2056 2014 if (IS_ERR(dent)) 2057 2015 return PTR_ERR(dent); 2058 - aa_get_ns(ns); 2059 2016 ns_subload(ns) = 
dent; 2060 2017 2061 - dent = aafs_create_file(".replace", 0640, dir, ns, 2062 - &aa_fs_profile_replace); 2018 + dent = aafs_create_file(".replace", 0640, dir, 2019 + &ns->unconfined->label.count, 2020 + &aa_fs_profile_replace); 2063 2021 if (IS_ERR(dent)) 2064 2022 return PTR_ERR(dent); 2065 - aa_get_ns(ns); 2066 2023 ns_subreplace(ns) = dent; 2067 2024 2068 - dent = aafs_create_file(".remove", 0640, dir, ns, 2069 - &aa_fs_profile_remove); 2025 + dent = aafs_create_file(".remove", 0640, dir, 2026 + &ns->unconfined->label.count, 2027 + &aa_fs_profile_remove); 2070 2028 if (IS_ERR(dent)) 2071 2029 return PTR_ERR(dent); 2072 - aa_get_ns(ns); 2073 2030 ns_subremove(ns) = dent; 2074 2031 2075 2032 /* use create_dentry so we can supply private data */ 2076 - dent = aafs_create("namespaces", S_IFDIR | 0755, dir, ns, NULL, NULL, 2077 - &ns_dir_inode_operations); 2033 + dent = aafs_create("namespaces", S_IFDIR | 0755, dir, 2034 + &ns->unconfined->label.count, 2035 + NULL, NULL, &ns_dir_inode_operations); 2078 2036 if (IS_ERR(dent)) 2079 2037 return PTR_ERR(dent); 2080 - aa_get_ns(ns); 2081 2038 ns_subns_dir(ns) = dent; 2082 2039 2083 2040 return 0;
+8 -8
security/apparmor/include/label.h
··· 102 102 103 103 struct aa_label; 104 104 struct aa_proxy { 105 - struct kref count; 105 + struct aa_common_ref count; 106 106 struct aa_label __rcu *label; 107 107 }; 108 108 ··· 125 125 * vec: vector of profiles comprising the compound label 126 126 */ 127 127 struct aa_label { 128 - struct kref count; 128 + struct aa_common_ref count; 129 129 struct rb_node node; 130 130 struct rcu_head rcu; 131 131 struct aa_proxy *proxy; ··· 357 357 */ 358 358 static inline struct aa_label *__aa_get_label(struct aa_label *l) 359 359 { 360 - if (l && kref_get_unless_zero(&l->count)) 360 + if (l && kref_get_unless_zero(&l->count.count)) 361 361 return l; 362 362 363 363 return NULL; ··· 366 366 static inline struct aa_label *aa_get_label(struct aa_label *l) 367 367 { 368 368 if (l) 369 - kref_get(&(l->count)); 369 + kref_get(&(l->count.count)); 370 370 371 371 return l; 372 372 } ··· 386 386 rcu_read_lock(); 387 387 do { 388 388 c = rcu_dereference(*l); 389 - } while (c && !kref_get_unless_zero(&c->count)); 389 + } while (c && !kref_get_unless_zero(&c->count.count)); 390 390 rcu_read_unlock(); 391 391 392 392 return c; ··· 426 426 static inline void aa_put_label(struct aa_label *l) 427 427 { 428 428 if (l) 429 - kref_put(&l->count, aa_label_kref); 429 + kref_put(&l->count.count, aa_label_kref); 430 430 } 431 431 432 432 /* wrapper fn to indicate semantics of the check */ ··· 443 443 static inline struct aa_proxy *aa_get_proxy(struct aa_proxy *proxy) 444 444 { 445 445 if (proxy) 446 - kref_get(&(proxy->count)); 446 + kref_get(&(proxy->count.count)); 447 447 448 448 return proxy; 449 449 } ··· 451 451 static inline void aa_put_proxy(struct aa_proxy *proxy) 452 452 { 453 453 if (proxy) 454 - kref_put(&proxy->count, aa_proxy_kref); 454 + kref_put(&proxy->count.count, aa_proxy_kref); 455 455 } 456 456 457 457 void __aa_proxy_redirect(struct aa_label *orig, struct aa_label *new);
+12
security/apparmor/include/lib.h
··· 102 102 /* Security blob offsets */ 103 103 extern struct lsm_blob_sizes apparmor_blob_sizes; 104 104 105 + enum reftype { 106 + REF_NS, 107 + REF_PROXY, 108 + REF_RAWDATA, 109 + }; 110 + 111 + /* common reference count used by data that shows up in aafs */ 112 + struct aa_common_ref { 113 + struct kref count; 114 + enum reftype reftype; 115 + }; 116 + 105 117 /** 106 118 * aa_strneq - compare null terminated @str to a non null terminated substring 107 119 * @str: a null terminated string
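The new `struct aa_common_ref` lets namespaces, proxies, and rawdata all hand the same tagged refcount to aafs via `inode->i_private`, with the outer object recovered by `container_of()` on the embedded member. A userspace sketch of that pattern, assuming a plain counter in place of `struct kref` and illustrative type names:

```c
#include <assert.h>
#include <stddef.h>

/* Several object types embed one tagged refcount; generic code that
 * only holds the embedded pointer recovers the owner via container_of.
 * Names and layout here are illustrative, not the kernel definitions. */
enum reftype { REF_NS, REF_PROXY, REF_RAWDATA };

struct common_ref {
    int count;                  /* stand-in for struct kref */
    enum reftype reftype;       /* tells us which outer type owns us */
};

struct proxy {
    struct common_ref ref;
    int dummy;
};

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct proxy *proxy_from_ref(struct common_ref *ref)
{
    assert(ref->reftype == REF_PROXY);  /* tag gates the downcast */
    return container_of(ref, struct proxy, ref);
}

int demo_common_ref(void)
{
    struct proxy p = { { 1, REF_PROXY }, 42 };
    struct common_ref *generic = &p.ref;    /* e.g. stashed in i_private */
    return proxy_from_ref(generic) == &p &&
           proxy_from_ref(generic)->dummy == 42;
}
```

The `reftype` tag is what lets a single `aa_put_common_ref()`-style helper dispatch to the right put routine.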
+1
security/apparmor/include/match.h
··· 185 185 #define MATCH_FLAG_DIFF_ENCODE 0x80000000 186 186 #define MARK_DIFF_ENCODE 0x40000000 187 187 #define MATCH_FLAG_OOB_TRANSITION 0x20000000 188 + #define MARK_DIFF_ENCODE_VERIFIED 0x10000000 188 189 #define MATCH_FLAGS_MASK 0xff000000 189 190 #define MATCH_FLAGS_VALID (MATCH_FLAG_DIFF_ENCODE | MATCH_FLAG_OOB_TRANSITION) 190 191 #define MATCH_FLAGS_INVALID (MATCH_FLAGS_MASK & ~MATCH_FLAGS_VALID)
+5 -5
security/apparmor/include/policy.h
··· 379 379 static inline struct aa_profile *aa_get_profile(struct aa_profile *p) 380 380 { 381 381 if (p) 382 - kref_get(&(p->label.count)); 382 + kref_get(&(p->label.count.count)); 383 383 384 384 return p; 385 385 } ··· 393 393 */ 394 394 static inline struct aa_profile *aa_get_profile_not0(struct aa_profile *p) 395 395 { 396 - if (p && kref_get_unless_zero(&p->label.count)) 396 + if (p && kref_get_unless_zero(&p->label.count.count)) 397 397 return p; 398 398 399 399 return NULL; ··· 413 413 rcu_read_lock(); 414 414 do { 415 415 c = rcu_dereference(*p); 416 - } while (c && !kref_get_unless_zero(&c->label.count)); 416 + } while (c && !kref_get_unless_zero(&c->label.count.count)); 417 417 rcu_read_unlock(); 418 418 419 419 return c; ··· 426 426 static inline void aa_put_profile(struct aa_profile *p) 427 427 { 428 428 if (p) 429 - kref_put(&p->label.count, aa_label_kref); 429 + kref_put(&p->label.count.count, aa_label_kref); 430 430 } 431 431 432 432 static inline int AUDIT_MODE(struct aa_profile *profile) ··· 443 443 struct aa_label *label, struct aa_ns *ns); 444 444 int aa_may_manage_policy(const struct cred *subj_cred, 445 445 struct aa_label *label, struct aa_ns *ns, 446 - u32 mask); 446 + const struct cred *ocred, u32 mask); 447 447 bool aa_current_policy_view_capable(struct aa_ns *ns); 448 448 bool aa_current_policy_admin_capable(struct aa_ns *ns); 449 449
+2
security/apparmor/include/policy_ns.h
··· 18 18 #include "label.h" 19 19 #include "policy.h" 20 20 21 + /* Match max depth of user namespaces */ 22 + #define MAX_NS_DEPTH 32 21 23 22 24 /* struct aa_ns_acct - accounting of profiles in namespace 23 25 * @max_size: maximum space allowed for all profiles in namespace
+49 -34
security/apparmor/include/policy_unpack.h
··· 87 87 u32 version; 88 88 }; 89 89 90 - /* 91 - * struct aa_loaddata - buffer of policy raw_data set 90 + /* struct aa_loaddata - buffer of policy raw_data set 91 + * @count: inode/filesystem refcount - use aa_get_i_loaddata() 92 + * @pcount: profile refcount - use aa_get_profile_loaddata() 93 + * @list: list the loaddata is on 94 + * @work: used to do a delayed cleanup 95 + * @dents: refs to dents created in aafs 96 + * @ns: the namespace this loaddata was loaded into 97 + * @name: 98 + * @size: the size of the data that was loaded 99 + * @compressed_size: the size of the data when it is compressed 100 + * @revision: unique revision count that this data was loaded as 101 + * @abi: the abi number the loaddata uses 102 + * @hash: a hash of the loaddata, used to help dedup data 92 103 * 93 - * there is no loaddata ref for being on ns list, nor a ref from 94 - * d_inode(@dentry) when grab a ref from these, @ns->lock must be held 95 - * && __aa_get_loaddata() needs to be used, and the return value 96 - * checked, if NULL the loaddata is already being reaped and should be 97 - * considered dead. 104 + * There is no loaddata ref for being on ns->rawdata_list, so 105 + * @ns->lock must be held when walking the list. Dentries and 106 + * inode opens hold refs on @count; profiles hold refs on @pcount. 107 + * When the last @pcount drops, do_ploaddata_rmfs() removes the 108 + * fs entries and drops the associated @count ref. 
98 109 */ 99 110 struct aa_loaddata { 100 - struct kref count; 111 + struct aa_common_ref count; 112 + struct kref pcount; 101 113 struct list_head list; 102 114 struct work_struct work; 103 115 struct dentry *dents[AAFS_LOADDATA_NDENTS]; ··· 131 119 int aa_unpack(struct aa_loaddata *udata, struct list_head *lh, const char **ns); 132 120 133 121 /** 134 - * __aa_get_loaddata - get a reference count to uncounted data reference 135 - * @data: reference to get a count on 136 - * 137 - * Returns: pointer to reference OR NULL if race is lost and reference is 138 - * being repeated. 139 - * Requires: @data->ns->lock held, and the return code MUST be checked 140 - * 141 - * Use only from inode->i_private and @data->list found references 142 - */ 143 - static inline struct aa_loaddata * 144 - __aa_get_loaddata(struct aa_loaddata *data) 145 - { 146 - if (data && kref_get_unless_zero(&(data->count))) 147 - return data; 148 - 149 - return NULL; 150 - } 151 - 152 - /** 153 122 * aa_get_loaddata - get a reference count from a counted data reference 154 123 * @data: reference to get a count on 155 124 * 156 - * Returns: point to reference 125 + * Returns: pointer to reference 157 126 * Requires: @data to have a valid reference count on it. It is a bug 158 127 * if the race to reap can be encountered when it is used. 159 128 */ 160 129 static inline struct aa_loaddata * 161 - aa_get_loaddata(struct aa_loaddata *data) 130 + aa_get_i_loaddata(struct aa_loaddata *data) 162 131 { 163 - struct aa_loaddata *tmp = __aa_get_loaddata(data); 164 132 165 - AA_BUG(data && !tmp); 133 + if (data) 134 + kref_get(&(data->count.count)); 135 + return data; 136 + } 166 137 167 - return tmp; 138 + 139 + /** 140 + * aa_get_profile_loaddata - get a profile reference count on loaddata 141 + * @data: reference to get a count on 142 + * 143 + * Returns: pointer to reference 144 + * Requires: @data to have a valid reference count on it. 
145 + */ 146 + static inline struct aa_loaddata * 147 + aa_get_profile_loaddata(struct aa_loaddata *data) 148 + { 149 + if (data) 150 + kref_get(&(data->pcount)); 151 + return data; 168 152 } 169 153 170 154 void __aa_loaddata_update(struct aa_loaddata *data, long revision); 171 155 bool aa_rawdata_eq(struct aa_loaddata *l, struct aa_loaddata *r); 172 156 void aa_loaddata_kref(struct kref *kref); 157 + void aa_ploaddata_kref(struct kref *kref); 173 158 struct aa_loaddata *aa_loaddata_alloc(size_t size); 174 - static inline void aa_put_loaddata(struct aa_loaddata *data) 159 + static inline void aa_put_i_loaddata(struct aa_loaddata *data) 175 160 { 176 161 if (data) 177 - kref_put(&data->count, aa_loaddata_kref); 162 + kref_put(&data->count.count, aa_loaddata_kref); 163 + } 164 + 165 + static inline void aa_put_profile_loaddata(struct aa_loaddata *data) 166 + { 167 + if (data) 168 + kref_put(&data->pcount, aa_ploaddata_kref); 178 169 } 179 170 180 171 #if IS_ENABLED(CONFIG_KUNIT)
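The split into `@count` and `@pcount` documented above is a two-level refcount: profile references collectively pin one filesystem reference, and the final profile put drops that pinned reference, which may free the object. A userspace sketch with plain counters (field and function names are illustrative, not the kernel API):

```c
#include <assert.h>

/* Two-level refcount: pcount (profile refs) holds one count (fs/inode)
 * ref while nonzero; the last pcount put releases it. */
struct loaddata {
    int count;      /* fs/inode references */
    int pcount;     /* profile references; pins one count ref while > 0 */
    int freed;      /* set when the last count ref drops */
};

void get_i_loaddata(struct loaddata *d)       { d->count++; }
void put_i_loaddata(struct loaddata *d)
{
    if (--d->count == 0)
        d->freed = 1;                          /* would free here */
}
void get_profile_loaddata(struct loaddata *d) { d->pcount++; }
void put_profile_loaddata(struct loaddata *d)
{
    if (--d->pcount == 0)
        put_i_loaddata(d);                     /* pcount's ref on count */
}

int demo_loaddata(void)
{
    struct loaddata d = { 1, 1, 0 };           /* count pinned by pcount */
    get_profile_loaddata(&d);                  /* second profile ref */
    put_profile_loaddata(&d);
    if (d.freed)
        return -1;                             /* still pinned: wrong */
    put_profile_loaddata(&d);                  /* last pcount drops count */
    return d.freed;
}
```

This is why the hunks above distinguish `aa_put_i_loaddata()` (dentry/open side) from `aa_put_profile_loaddata()` (profile side).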
+8 -4
security/apparmor/label.c
··· 52 52 53 53 void aa_proxy_kref(struct kref *kref) 54 54 { 55 - struct aa_proxy *proxy = container_of(kref, struct aa_proxy, count); 55 + struct aa_proxy *proxy = container_of(kref, struct aa_proxy, 56 + count.count); 56 57 57 58 free_proxy(proxy); 58 59 } ··· 64 63 65 64 new = kzalloc_obj(struct aa_proxy, gfp); 66 65 if (new) { 67 - kref_init(&new->count); 66 + kref_init(&new->count.count); 67 + new->count.reftype = REF_PROXY; 68 68 rcu_assign_pointer(new->label, aa_get_label(label)); 69 69 } 70 70 return new; ··· 377 375 378 376 void aa_label_kref(struct kref *kref) 379 377 { 380 - struct aa_label *label = container_of(kref, struct aa_label, count); 378 + struct aa_label *label = container_of(kref, struct aa_label, 379 + count.count); 381 380 struct aa_ns *ns = labels_ns(label); 382 381 383 382 if (!ns) { ··· 415 412 416 413 label->size = size; /* doesn't include null */ 417 414 label->vec[size] = NULL; /* null terminate */ 418 - kref_init(&label->count); 415 + kref_init(&label->count.count); 416 + label->count.reftype = REF_NS; /* for aafs purposes */ 419 417 RB_CLEAR_NODE(&label->node); 420 418 421 419 return true;
+42 -16
security/apparmor/match.c
··· 160 160 if (state_count == 0) 161 161 goto out; 162 162 for (i = 0; i < state_count; i++) { 163 - if (!(BASE_TABLE(dfa)[i] & MATCH_FLAG_DIFF_ENCODE) && 164 - (DEFAULT_TABLE(dfa)[i] >= state_count)) 163 + if (DEFAULT_TABLE(dfa)[i] >= state_count) { 164 + pr_err("AppArmor DFA default state out of bounds"); 165 165 goto out; 166 + } 166 167 if (BASE_TABLE(dfa)[i] & MATCH_FLAGS_INVALID) { 167 168 pr_err("AppArmor DFA state with invalid match flags"); 168 169 goto out; ··· 202 201 size_t j, k; 203 202 204 203 for (j = i; 205 - (BASE_TABLE(dfa)[j] & MATCH_FLAG_DIFF_ENCODE) && 206 - !(BASE_TABLE(dfa)[j] & MARK_DIFF_ENCODE); 204 + ((BASE_TABLE(dfa)[j] & MATCH_FLAG_DIFF_ENCODE) && 205 + !(BASE_TABLE(dfa)[j] & MARK_DIFF_ENCODE_VERIFIED)); 207 206 j = k) { 207 + if (BASE_TABLE(dfa)[j] & MARK_DIFF_ENCODE) 208 + /* loop in current chain */ 209 + goto out; 208 210 k = DEFAULT_TABLE(dfa)[j]; 209 211 if (j == k) 212 + /* self loop */ 210 213 goto out; 211 - if (k < j) 212 - break; /* already verified */ 213 214 BASE_TABLE(dfa)[j] |= MARK_DIFF_ENCODE; 215 + } 216 + /* move mark to verified */ 217 + for (j = i; 218 + (BASE_TABLE(dfa)[j] & MATCH_FLAG_DIFF_ENCODE); 219 + j = k) { 220 + k = DEFAULT_TABLE(dfa)[j]; 221 + if (j < i) 222 + /* jumps to state/chain that has been 223 + * verified 224 + */ 225 + break; 226 + BASE_TABLE(dfa)[j] &= ~MARK_DIFF_ENCODE; 227 + BASE_TABLE(dfa)[j] |= MARK_DIFF_ENCODE_VERIFIED; 214 228 } 215 229 } 216 230 error = 0; ··· 479 463 if (dfa->tables[YYTD_ID_EC]) { 480 464 /* Equivalence class table defined */ 481 465 u8 *equiv = EQUIV_TABLE(dfa); 482 - for (; len; len--) 483 - match_char(state, def, base, next, check, 484 - equiv[(u8) *str++]); 466 + for (; len; len--) { 467 + u8 c = equiv[(u8) *str]; 468 + 469 + match_char(state, def, base, next, check, c); 470 + str++; 471 + } 485 472 } else { 486 473 /* default is direct to next state */ 487 - for (; len; len--) 488 - match_char(state, def, base, next, check, (u8) *str++); 474 + for (; len; len--) { 
475 + match_char(state, def, base, next, check, (u8) *str); 476 + str++; 477 + } 489 478 } 490 479 491 480 return state; ··· 524 503 /* Equivalence class table defined */ 525 504 u8 *equiv = EQUIV_TABLE(dfa); 526 505 /* default is direct to next state */ 527 - while (*str) 528 - match_char(state, def, base, next, check, 529 - equiv[(u8) *str++]); 506 + while (*str) { 507 + u8 c = equiv[(u8) *str]; 508 + 509 + match_char(state, def, base, next, check, c); 510 + str++; 511 + } 530 512 } else { 531 513 /* default is direct to next state */ 532 - while (*str) 533 - match_char(state, def, base, next, check, (u8) *str++); 514 + while (*str) { 515 + match_char(state, def, base, next, check, (u8) *str); 516 + str++; 517 + } 534 518 } 535 519 536 520 return state;
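The loops above now read `*str` into a local and increment the pointer as a separate statement instead of passing `*str++` straight into `match_char()`. If `match_char()` is a macro that expands its character argument more than once, an argument with a side effect gets evaluated, and the pointer bumped, once per expansion. A userspace sketch of that hazard with a hypothetical double-expanding macro and helper:

```c
#include <assert.h>

/* Macro that expands its argument twice, like many table-walk macros. */
#define USES_ARG_TWICE(c) (((c) & 0x0f) | ((c) & 0xf0))

static int reads;                       /* counts character fetches */

static unsigned char fetch(const char **p)
{
    reads++;
    return (unsigned char)*(*p)++;
}

/* Unsafe: fetch() runs twice per macro use, consuming two chars. */
int scan_unsafe(const char *s, int len)
{
    reads = 0;
    for (; len; len--)
        (void)USES_ARG_TWICE(fetch(&s));
    return reads;
}

/* Safe, as in the diff: read once into a local, then use it. */
int scan_safe(const char *s, int len)
{
    reads = 0;
    for (; len; len--) {
        unsigned char c = fetch(&s);
        (void)USES_ARG_TWICE(c);
    }
    return reads;
}

int demo_scan(void)
{
    int bad  = scan_unsafe("abc", 2);   /* 2 macro uses -> 4 fetches */
    int good = scan_safe("abc", 3);     /* 3 chars -> 3 fetches */
    return bad * 100 + good;
}
```

Hoisting the read into a local makes the side effect happen exactly once regardless of how the macro expands.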
+67 -10
security/apparmor/policy.c
··· 191 191 } 192 192 193 193 /** 194 - * __remove_profile - remove old profile, and children 195 - * @profile: profile to be replaced (NOT NULL) 194 + * __remove_profile - remove profile, and children 195 + * @profile: profile to be removed (NOT NULL) 196 196 * 197 197 * Requires: namespace list lock be held, or list not be shared 198 198 */ 199 199 static void __remove_profile(struct aa_profile *profile) 200 200 { 201 + struct aa_profile *curr, *to_remove; 202 + 201 203 AA_BUG(!profile); 202 204 AA_BUG(!profile->ns); 203 205 AA_BUG(!mutex_is_locked(&profile->ns->lock)); 204 206 205 207 /* release any children lists first */ 206 - __aa_profile_list_release(&profile->base.profiles); 208 + if (!list_empty(&profile->base.profiles)) { 209 + curr = list_first_entry(&profile->base.profiles, struct aa_profile, base.list); 210 + 211 + while (curr != profile) { 212 + 213 + while (!list_empty(&curr->base.profiles)) 214 + curr = list_first_entry(&curr->base.profiles, 215 + struct aa_profile, base.list); 216 + 217 + to_remove = curr; 218 + if (!list_is_last(&to_remove->base.list, 219 + &aa_deref_parent(curr)->base.profiles)) 220 + curr = list_next_entry(to_remove, base.list); 221 + else 222 + curr = aa_deref_parent(curr); 223 + 224 + /* released by free_profile */ 225 + aa_label_remove(&to_remove->label); 226 + __aafs_profile_rmdir(to_remove); 227 + __list_remove_profile(to_remove); 228 + } 229 + } 230 + 207 231 /* released by free_profile */ 208 232 aa_label_remove(&profile->label); 209 233 __aafs_profile_rmdir(profile); ··· 350 326 } 351 327 352 328 kfree_sensitive(profile->hash); 353 - aa_put_loaddata(profile->rawdata); 329 + aa_put_profile_loaddata(profile->rawdata); 354 330 aa_label_destroy(&profile->label); 355 331 356 332 kfree_sensitive(profile); ··· 942 918 return res; 943 919 } 944 920 921 + static bool is_subset_of_obj_privilege(const struct cred *cred, 922 + struct aa_label *label, 923 + const struct cred *ocred) 924 + { 925 + if (cred == ocred) 926 + return true; 
927 + 928 + if (!aa_label_is_subset(label, cred_label(ocred))) 929 + return false; 930 + /* don't allow crossing userns for now */ 931 + if (cred->user_ns != ocred->user_ns) 932 + return false; 933 + if (!cap_issubset(cred->cap_inheritable, ocred->cap_inheritable)) 934 + return false; 935 + if (!cap_issubset(cred->cap_permitted, ocred->cap_permitted)) 936 + return false; 937 + if (!cap_issubset(cred->cap_effective, ocred->cap_effective)) 938 + return false; 939 + if (!cap_issubset(cred->cap_bset, ocred->cap_bset)) 940 + return false; 941 + if (!cap_issubset(cred->cap_ambient, ocred->cap_ambient)) 942 + return false; 943 + return true; 944 + } 945 + 946 + 945 947 /** 946 948 * aa_may_manage_policy - can the current task manage policy 947 949 * @subj_cred: subjects cred 948 950 * @label: label to check if it can manage policy 949 951 * @ns: namespace being managed by @label (may be NULL if @label's ns) 952 + * @ocred: object cred if request is coming from an open object 950 953 * @mask: contains the policy manipulation operation being done 951 954 * 952 955 * Returns: 0 if the task is allowed to manipulate policy else error 953 956 */ 954 957 int aa_may_manage_policy(const struct cred *subj_cred, struct aa_label *label, 955 - struct aa_ns *ns, u32 mask) 958 + struct aa_ns *ns, const struct cred *ocred, u32 mask) 956 959 { 957 960 const char *op; 958 961 ··· 993 942 /* check if loading policy is locked out */ 994 943 if (aa_g_lock_policy) 995 944 return audit_policy(label, op, NULL, NULL, "policy_locked", 945 + -EACCES); 946 + 947 + if (ocred && !is_subset_of_obj_privilege(subj_cred, label, ocred)) 948 + return audit_policy(label, op, NULL, NULL, 949 + "not privileged for target profile", 996 950 -EACCES); 997 951 998 952 if (!aa_policy_admin_capable(subj_cred, label, ns)) ··· 1171 1115 LIST_HEAD(lh); 1172 1116 1173 1117 op = mask & AA_MAY_REPLACE_POLICY ? 
OP_PROF_REPL : OP_PROF_LOAD; 1174 - aa_get_loaddata(udata); 1118 + aa_get_profile_loaddata(udata); 1175 1119 /* released below */ 1176 1120 error = aa_unpack(udata, &lh, &ns_name); 1177 1121 if (error) ··· 1198 1142 goto fail; 1199 1143 } 1200 1144 ns_name = ent->ns_name; 1145 + ent->ns_name = NULL; 1201 1146 } else 1202 1147 count++; 1203 1148 } ··· 1223 1166 if (aa_rawdata_eq(rawdata_ent, udata)) { 1224 1167 struct aa_loaddata *tmp; 1225 1168 1226 - tmp = __aa_get_loaddata(rawdata_ent); 1169 + tmp = aa_get_profile_loaddata(rawdata_ent); 1227 1170 /* check we didn't fail the race */ 1228 1171 if (tmp) { 1229 - aa_put_loaddata(udata); 1172 + aa_put_profile_loaddata(udata); 1230 1173 udata = tmp; 1231 1174 break; 1232 1175 } ··· 1239 1182 struct aa_profile *p; 1240 1183 1241 1184 if (aa_g_export_binary) 1242 - ent->new->rawdata = aa_get_loaddata(udata); 1185 + ent->new->rawdata = aa_get_profile_loaddata(udata); 1243 1186 error = __lookup_replace(ns, ent->new->base.hname, 1244 1187 !(mask & AA_MAY_REPLACE_POLICY), 1245 1188 &ent->old, &info); ··· 1372 1315 1373 1316 out: 1374 1317 aa_put_ns(ns); 1375 - aa_put_loaddata(udata); 1318 + aa_put_profile_loaddata(udata); 1376 1319 kfree(ns_name); 1377 1320 1378 1321 if (error)
+2
security/apparmor/policy_ns.c
··· 223 223 AA_BUG(!name); 224 224 AA_BUG(!mutex_is_locked(&parent->lock)); 225 225 226 + if (parent->level > MAX_NS_DEPTH) 227 + return ERR_PTR(-ENOSPC); 226 228 ns = alloc_ns(parent->base.hname, name); 227 229 if (!ns) 228 230 return ERR_PTR(-ENOMEM);
+45 -20
security/apparmor/policy_unpack.c
··· 109 109 return memcmp(l->data, r->data, r->compressed_size ?: r->size) == 0; 110 110 } 111 111 112 - /* 113 - * need to take the ns mutex lock which is NOT safe most places that 114 - * put_loaddata is called, so we have to delay freeing it 115 - */ 116 - static void do_loaddata_free(struct work_struct *work) 112 + static void do_loaddata_free(struct aa_loaddata *d) 117 113 { 118 - struct aa_loaddata *d = container_of(work, struct aa_loaddata, work); 119 - struct aa_ns *ns = aa_get_ns(d->ns); 120 - 121 - if (ns) { 122 - mutex_lock_nested(&ns->lock, ns->level); 123 - __aa_fs_remove_rawdata(d); 124 - mutex_unlock(&ns->lock); 125 - aa_put_ns(ns); 126 - } 127 - 128 114 kfree_sensitive(d->hash); 129 115 kfree_sensitive(d->name); 130 116 kvfree(d->data); ··· 119 133 120 134 void aa_loaddata_kref(struct kref *kref) 121 135 { 122 - struct aa_loaddata *d = container_of(kref, struct aa_loaddata, count); 136 + struct aa_loaddata *d = container_of(kref, struct aa_loaddata, 137 + count.count); 138 + 139 + do_loaddata_free(d); 140 + } 141 + 142 + /* 143 + * need to take the ns mutex lock which is NOT safe most places that 144 + * put_loaddata is called, so we have to delay freeing it 145 + */ 146 + static void do_ploaddata_rmfs(struct work_struct *work) 147 + { 148 + struct aa_loaddata *d = container_of(work, struct aa_loaddata, work); 149 + struct aa_ns *ns = aa_get_ns(d->ns); 150 + 151 + if (ns) { 152 + mutex_lock_nested(&ns->lock, ns->level); 153 + /* remove fs ref to loaddata */ 154 + __aa_fs_remove_rawdata(d); 155 + mutex_unlock(&ns->lock); 156 + aa_put_ns(ns); 157 + } 158 + /* called by dropping last pcount, so drop its associated icount */ 159 + aa_put_i_loaddata(d); 160 + } 161 + 162 + void aa_ploaddata_kref(struct kref *kref) 163 + { 164 + struct aa_loaddata *d = container_of(kref, struct aa_loaddata, pcount); 123 165 124 166 if (d) { 125 - INIT_WORK(&d->work, do_loaddata_free); 167 + INIT_WORK(&d->work, do_ploaddata_rmfs); 126 168 schedule_work(&d->work); 127 169 } 
128 170 } ··· 167 153 kfree(d); 168 154 return ERR_PTR(-ENOMEM); 169 155 } 170 - kref_init(&d->count); 156 + kref_init(&d->count.count); 157 + d->count.reftype = REF_RAWDATA; 158 + kref_init(&d->pcount); 171 159 INIT_LIST_HEAD(&d->list); 172 160 173 161 return d; ··· 1026 1010 if (!aa_unpack_u32(e, &pdb->start[AA_CLASS_FILE], "dfa_start")) { 1027 1011 /* default start state for xmatch and file dfa */ 1028 1012 pdb->start[AA_CLASS_FILE] = DFA_START; 1029 - } /* setup class index */ 1013 + } 1014 + 1015 + size_t state_count = pdb->dfa->tables[YYTD_ID_BASE]->td_lolen; 1016 + 1017 + if (pdb->start[0] >= state_count || 1018 + pdb->start[AA_CLASS_FILE] >= state_count) { 1019 + *info = "invalid dfa start state"; 1020 + goto fail; 1021 + } 1022 + 1023 + /* setup class index */ 1030 1024 for (i = AA_CLASS_FILE + 1; i <= AA_CLASS_LAST; i++) { 1031 1025 pdb->start[i] = aa_dfa_next(pdb->dfa, pdb->start[0], 1032 1026 i); ··· 1435 1409 { 1436 1410 int error = -EPROTONOSUPPORT; 1437 1411 const char *name = NULL; 1438 - *ns = NULL; 1439 1412 1440 1413 /* get the interface version */ 1441 1414 if (!aa_unpack_u32(e, &e->version, "version")) {
+1 -1
sound/firewire/dice/dice.c
··· 122 122 fw_csr_string(dev->config_rom + 5, CSR_VENDOR, vendor, sizeof(vendor)); 123 123 strscpy(model, "?"); 124 124 fw_csr_string(dice->unit->directory, CSR_MODEL, model, sizeof(model)); 125 - snprintf(card->longname, sizeof(card->longname), 125 + scnprintf(card->longname, sizeof(card->longname), 126 126 "%s %s (serial %u) at %s, S%d", 127 127 vendor, model, dev->config_rom[4] & 0x3fffff, 128 128 dev_name(&dice->unit->device), 100 << dev->max_speed);
+9
sound/hda/codecs/ca0132.c
··· 9816 9816 spec->dig_in = 0x09; 9817 9817 break; 9818 9818 } 9819 + 9820 + /* Default HP/Speaker auto-detect from headphone pin verb: enable if the 9821 + * pin config indicates presence detect (not AC_DEFCFG_MISC_NO_PRESENCE). 9822 + */ 9823 + if (spec->unsol_tag_hp && 9824 + (snd_hda_query_pin_caps(codec, spec->unsol_tag_hp) & AC_PINCAP_PRES_DETECT) && 9825 + !(get_defcfg_misc(snd_hda_codec_get_pincfg(codec, spec->unsol_tag_hp)) & 9826 + AC_DEFCFG_MISC_NO_PRESENCE)) 9827 + spec->vnode_lswitch[VNID_HP_ASEL - VNODE_START_NID] = 1; 9819 9828 } 9820 9829 9821 9830 static int ca0132_prepare_verbs(struct hda_codec *codec)
+1
sound/hda/codecs/hdmi/tegrahdmi.c
··· 299 299 HDA_CODEC_ID_MODEL(0x10de002f, "Tegra194 HDMI/DP2", MODEL_TEGRA), 300 300 HDA_CODEC_ID_MODEL(0x10de0030, "Tegra194 HDMI/DP3", MODEL_TEGRA), 301 301 HDA_CODEC_ID_MODEL(0x10de0031, "Tegra234 HDMI/DP", MODEL_TEGRA234), 302 + HDA_CODEC_ID_MODEL(0x10de0032, "Tegra238 HDMI/DP", MODEL_TEGRA234), 302 303 HDA_CODEC_ID_MODEL(0x10de0033, "SoC 33 HDMI/DP", MODEL_TEGRA234), 303 304 HDA_CODEC_ID_MODEL(0x10de0034, "Tegra264 HDMI/DP", MODEL_TEGRA234), 304 305 HDA_CODEC_ID_MODEL(0x10de0035, "SoC 35 HDMI/DP", MODEL_TEGRA234),
+1
sound/hda/codecs/realtek/alc269.c
··· 6904 6904 SND_PCI_QUIRK(0x103c, 0x8898, "HP EliteBook 845 G8 Notebook PC", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST), 6905 6905 SND_PCI_QUIRK(0x103c, 0x88b3, "HP ENVY x360 Convertible 15-es0xxx", ALC245_FIXUP_HP_ENVY_X360_MUTE_LED), 6906 6906 SND_PCI_QUIRK(0x103c, 0x88d0, "HP Pavilion 15-eh1xxx (mainboard 88D0)", ALC287_FIXUP_HP_GPIO_LED), 6907 + SND_PCI_QUIRK(0x103c, 0x88d1, "HP Pavilion 15-eh1xxx (mainboard 88D1)", ALC245_FIXUP_HP_MUTE_LED_V1_COEFBIT), 6907 6908 SND_PCI_QUIRK(0x103c, 0x88dd, "HP Pavilion 15z-ec200", ALC285_FIXUP_HP_MUTE_LED), 6908 6909 SND_PCI_QUIRK(0x103c, 0x88eb, "HP Victus 16-e0xxx", ALC245_FIXUP_HP_MUTE_LED_V2_COEFBIT), 6909 6910 SND_PCI_QUIRK(0x103c, 0x8902, "HP OMEN 16", ALC285_FIXUP_HP_MUTE_LED),
+8 -6
sound/hda/codecs/senarytech.c
··· 19 19 #include "hda_jack.h" 20 20 #include "generic.h" 21 21 22 - /* GPIO node ID */ 23 - #define SENARY_GPIO_NODE 0x01 24 - 25 22 struct senary_spec { 26 23 struct hda_gen_spec gen; 27 24 28 25 /* extra EAPD pins */ 29 26 unsigned int num_eapds; 30 27 hda_nid_t eapds[4]; 28 + bool dynamic_eapd; 31 29 hda_nid_t mute_led_eapd; 32 30 33 31 unsigned int parse_flags; /* flag for snd_hda_parse_pin_defcfg() */ ··· 121 123 unsigned int mask = spec->gpio_mute_led_mask | spec->gpio_mic_led_mask; 122 124 123 125 if (mask) { 124 - snd_hda_codec_write(codec, SENARY_GPIO_NODE, 0, AC_VERB_SET_GPIO_MASK, 126 + snd_hda_codec_write(codec, codec->core.afg, 0, AC_VERB_SET_GPIO_MASK, 125 127 mask); 126 - snd_hda_codec_write(codec, SENARY_GPIO_NODE, 0, AC_VERB_SET_GPIO_DIRECTION, 128 + snd_hda_codec_write(codec, codec->core.afg, 0, AC_VERB_SET_GPIO_DIRECTION, 127 129 mask); 128 - snd_hda_codec_write(codec, SENARY_GPIO_NODE, 0, AC_VERB_SET_GPIO_DATA, 130 + snd_hda_codec_write(codec, codec->core.afg, 0, AC_VERB_SET_GPIO_DATA, 129 131 spec->gpio_led); 130 132 } 131 133 } 132 134 133 135 static int senary_init(struct hda_codec *codec) 134 136 { 137 + struct senary_spec *spec = codec->spec; 138 + 135 139 snd_hda_gen_init(codec); 136 140 senary_init_gpio_led(codec); 141 + if (!spec->dynamic_eapd) 142 + senary_auto_turn_eapd(codec, spec->num_eapds, spec->eapds, true); 137 143 snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_INIT); 138 144 139 145 return 0;
+3 -12
sound/hda/codecs/side-codecs/tas2781_hda_i2c.c
··· 60 60 int (*save_calibration)(struct tas2781_hda *h); 61 61 62 62 int hda_chip_id; 63 - bool skip_calibration; 64 63 }; 65 64 66 65 static int tas2781_get_i2c_res(struct acpi_resource *ares, void *data) ··· 478 479 /* If calibrated data occurs error, dsp will still works with default 479 480 * calibrated data inside algo. 480 481 */ 481 - if (!hda_priv->skip_calibration) 482 - hda_priv->save_calibration(tas_hda); 482 + hda_priv->save_calibration(tas_hda); 483 483 } 484 484 485 485 static void tasdev_fw_ready(const struct firmware *fmw, void *context) ··· 533 535 void *master_data) 534 536 { 535 537 struct tas2781_hda *tas_hda = dev_get_drvdata(dev); 536 - struct tas2781_hda_i2c_priv *hda_priv = tas_hda->hda_priv; 537 538 struct hda_component_parent *parent = master_data; 538 539 struct hda_component *comp; 539 540 struct hda_codec *codec; ··· 560 563 tas_hda->catlog_id = LENOVO; 561 564 break; 562 565 } 563 - 564 - /* 565 - * Using ASUS ROG Xbox Ally X (RC73XA) UEFI calibration data 566 - * causes audio dropouts during playback, use fallback data 567 - * from DSP firmware as a workaround. 568 - */ 569 - if (codec->core.subsystem_id == 0x10431384) 570 - hda_priv->skip_calibration = true; 571 566 572 567 guard(pm_runtime_active_auto)(dev); 573 568 ··· 632 643 */ 633 644 device_name = "TIAS2781"; 634 645 hda_priv->hda_chip_id = HDA_TAS2781; 646 + tas_hda->priv->chip_id = TAS2781; 635 647 hda_priv->save_calibration = tas2781_save_calibration; 636 648 tas_hda->priv->global_addr = TAS2781_GLOBAL_ADDR; 637 649 } else if (strstarts(dev_name(&clt->dev), "i2c-TXNW2770")) { ··· 646 656 "i2c-TXNW2781:00-tas2781-hda.0")) { 647 657 device_name = "TXNW2781"; 648 658 hda_priv->hda_chip_id = HDA_TAS2781; 659 + tas_hda->priv->chip_id = TAS2781; 649 660 hda_priv->save_calibration = tas2781_save_calibration; 650 661 tas_hda->priv->global_addr = TAS2781_GLOBAL_ADDR; 651 662 } else if (strstr(dev_name(&clt->dev), "INT8866")) {
+413
sound/soc/amd/acp/amd-acp63-acpi-match.c
··· 30 30 .group_id = 1 31 31 }; 32 32 33 + static const struct snd_soc_acpi_endpoint spk_2_endpoint = { 34 + .num = 0, 35 + .aggregated = 1, 36 + .group_position = 2, 37 + .group_id = 1 38 + }; 39 + 40 + static const struct snd_soc_acpi_endpoint spk_3_endpoint = { 41 + .num = 0, 42 + .aggregated = 1, 43 + .group_position = 3, 44 + .group_id = 1 45 + }; 46 + 33 47 static const struct snd_soc_acpi_adr_device rt711_rt1316_group_adr[] = { 34 48 { 35 49 .adr = 0x000030025D071101ull, ··· 117 103 } 118 104 }; 119 105 106 + static const struct snd_soc_acpi_endpoint cs42l43_endpoints[] = { 107 + { /* Jack Playback Endpoint */ 108 + .num = 0, 109 + .aggregated = 0, 110 + .group_position = 0, 111 + .group_id = 0, 112 + }, 113 + { /* DMIC Capture Endpoint */ 114 + .num = 1, 115 + .aggregated = 0, 116 + .group_position = 0, 117 + .group_id = 0, 118 + }, 119 + { /* Jack Capture Endpoint */ 120 + .num = 2, 121 + .aggregated = 0, 122 + .group_position = 0, 123 + .group_id = 0, 124 + }, 125 + { /* Speaker Playback Endpoint */ 126 + .num = 3, 127 + .aggregated = 0, 128 + .group_position = 0, 129 + .group_id = 0, 130 + }, 131 + }; 132 + 133 + static const struct snd_soc_acpi_adr_device cs35l56x4_l1u3210_adr[] = { 134 + { 135 + .adr = 0x00013301FA355601ull, 136 + .num_endpoints = 1, 137 + .endpoints = &spk_l_endpoint, 138 + .name_prefix = "AMP1" 139 + }, 140 + { 141 + .adr = 0x00013201FA355601ull, 142 + .num_endpoints = 1, 143 + .endpoints = &spk_r_endpoint, 144 + .name_prefix = "AMP2" 145 + }, 146 + { 147 + .adr = 0x00013101FA355601ull, 148 + .num_endpoints = 1, 149 + .endpoints = &spk_2_endpoint, 150 + .name_prefix = "AMP3" 151 + }, 152 + { 153 + .adr = 0x00013001FA355601ull, 154 + .num_endpoints = 1, 155 + .endpoints = &spk_3_endpoint, 156 + .name_prefix = "AMP4" 157 + }, 158 + }; 159 + 160 + static const struct snd_soc_acpi_adr_device cs35l63x2_l0u01_adr[] = { 161 + { 162 + .adr = 0x00003001FA356301ull, 163 + .num_endpoints = 1, 164 + .endpoints = &spk_l_endpoint, 165 + 
.name_prefix = "AMP1" 166 + }, 167 + { 168 + .adr = 0x00003101FA356301ull, 169 + .num_endpoints = 1, 170 + .endpoints = &spk_r_endpoint, 171 + .name_prefix = "AMP2" 172 + }, 173 + }; 174 + 175 + static const struct snd_soc_acpi_adr_device cs35l63x2_l1u01_adr[] = { 176 + { 177 + .adr = 0x00013001FA356301ull, 178 + .num_endpoints = 1, 179 + .endpoints = &spk_l_endpoint, 180 + .name_prefix = "AMP1" 181 + }, 182 + { 183 + .adr = 0x00013101FA356301ull, 184 + .num_endpoints = 1, 185 + .endpoints = &spk_r_endpoint, 186 + .name_prefix = "AMP2" 187 + }, 188 + }; 189 + 190 + static const struct snd_soc_acpi_adr_device cs35l63x2_l1u13_adr[] = { 191 + { 192 + .adr = 0x00013101FA356301ull, 193 + .num_endpoints = 1, 194 + .endpoints = &spk_l_endpoint, 195 + .name_prefix = "AMP1" 196 + }, 197 + { 198 + .adr = 0x00013301FA356301ull, 199 + .num_endpoints = 1, 200 + .endpoints = &spk_r_endpoint, 201 + .name_prefix = "AMP2" 202 + }, 203 + }; 204 + 205 + static const struct snd_soc_acpi_adr_device cs35l63x4_l0u0246_adr[] = { 206 + { 207 + .adr = 0x00003001FA356301ull, 208 + .num_endpoints = 1, 209 + .endpoints = &spk_l_endpoint, 210 + .name_prefix = "AMP1" 211 + }, 212 + { 213 + .adr = 0x00003201FA356301ull, 214 + .num_endpoints = 1, 215 + .endpoints = &spk_r_endpoint, 216 + .name_prefix = "AMP2" 217 + }, 218 + { 219 + .adr = 0x00003401FA356301ull, 220 + .num_endpoints = 1, 221 + .endpoints = &spk_2_endpoint, 222 + .name_prefix = "AMP3" 223 + }, 224 + { 225 + .adr = 0x00003601FA356301ull, 226 + .num_endpoints = 1, 227 + .endpoints = &spk_3_endpoint, 228 + .name_prefix = "AMP4" 229 + }, 230 + }; 231 + 232 + static const struct snd_soc_acpi_adr_device cs42l43_l0u0_adr[] = { 233 + { 234 + .adr = 0x00003001FA424301ull, 235 + .num_endpoints = ARRAY_SIZE(cs42l43_endpoints), 236 + .endpoints = cs42l43_endpoints, 237 + .name_prefix = "cs42l43" 238 + } 239 + }; 240 + 241 + static const struct snd_soc_acpi_adr_device cs42l43_l0u1_adr[] = { 242 + { 243 + .adr = 0x00003101FA424301ull, 244 + 
.num_endpoints = ARRAY_SIZE(cs42l43_endpoints), 245 + .endpoints = cs42l43_endpoints, 246 + .name_prefix = "cs42l43" 247 + } 248 + }; 249 + 250 + static const struct snd_soc_acpi_adr_device cs42l43b_l0u1_adr[] = { 251 + { 252 + .adr = 0x00003101FA2A3B01ull, 253 + .num_endpoints = ARRAY_SIZE(cs42l43_endpoints), 254 + .endpoints = cs42l43_endpoints, 255 + .name_prefix = "cs42l43" 256 + } 257 + }; 258 + 259 + static const struct snd_soc_acpi_adr_device cs42l43_l1u0_cs35l56x4_l1u0123_adr[] = { 260 + { 261 + .adr = 0x00013001FA424301ull, 262 + .num_endpoints = ARRAY_SIZE(cs42l43_endpoints), 263 + .endpoints = cs42l43_endpoints, 264 + .name_prefix = "cs42l43" 265 + }, 266 + { 267 + .adr = 0x00013001FA355601ull, 268 + .num_endpoints = 1, 269 + .endpoints = &spk_l_endpoint, 270 + .name_prefix = "AMP1" 271 + }, 272 + { 273 + .adr = 0x00013101FA355601ull, 274 + .num_endpoints = 1, 275 + .endpoints = &spk_r_endpoint, 276 + .name_prefix = "AMP2" 277 + }, 278 + { 279 + .adr = 0x00013201FA355601ull, 280 + .num_endpoints = 1, 281 + .endpoints = &spk_2_endpoint, 282 + .name_prefix = "AMP3" 283 + }, 284 + { 285 + .adr = 0x00013301FA355601ull, 286 + .num_endpoints = 1, 287 + .endpoints = &spk_3_endpoint, 288 + .name_prefix = "AMP4" 289 + }, 290 + }; 291 + 292 + static const struct snd_soc_acpi_adr_device cs42l45_l0u0_adr[] = { 293 + { 294 + .adr = 0x00003001FA424501ull, 295 + /* Re-use endpoints, but cs42l45 has no speaker */ 296 + .num_endpoints = ARRAY_SIZE(cs42l43_endpoints) - 1, 297 + .endpoints = cs42l43_endpoints, 298 + .name_prefix = "cs42l45" 299 + } 300 + }; 301 + 302 + static const struct snd_soc_acpi_adr_device cs42l45_l1u0_adr[] = { 303 + { 304 + .adr = 0x00013001FA424501ull, 305 + /* Re-use endpoints, but cs42l45 has no speaker */ 306 + .num_endpoints = ARRAY_SIZE(cs42l43_endpoints) - 1, 307 + .endpoints = cs42l43_endpoints, 308 + .name_prefix = "cs42l45" 309 + } 310 + }; 311 + 312 + static const struct snd_soc_acpi_link_adr acp63_cs35l56x4_l1u3210[] = { 313 + { 314 + 
.mask = BIT(1), 315 + .num_adr = ARRAY_SIZE(cs35l56x4_l1u3210_adr), 316 + .adr_d = cs35l56x4_l1u3210_adr, 317 + }, 318 + {} 319 + }; 320 + 321 + static const struct snd_soc_acpi_link_adr acp63_cs35l63x4_l0u0246[] = { 322 + { 323 + .mask = BIT(0), 324 + .num_adr = ARRAY_SIZE(cs35l63x4_l0u0246_adr), 325 + .adr_d = cs35l63x4_l0u0246_adr, 326 + }, 327 + {} 328 + }; 329 + 330 + static const struct snd_soc_acpi_link_adr acp63_cs42l43_l0u1[] = { 331 + { 332 + .mask = BIT(0), 333 + .num_adr = ARRAY_SIZE(cs42l43_l0u1_adr), 334 + .adr_d = cs42l43_l0u1_adr, 335 + }, 336 + {} 337 + }; 338 + 339 + static const struct snd_soc_acpi_link_adr acp63_cs42l43b_l0u1[] = { 340 + { 341 + .mask = BIT(0), 342 + .num_adr = ARRAY_SIZE(cs42l43b_l0u1_adr), 343 + .adr_d = cs42l43b_l0u1_adr, 344 + }, 345 + {} 346 + }; 347 + 348 + static const struct snd_soc_acpi_link_adr acp63_cs42l43_l0u0_cs35l56x4_l1u3210[] = { 349 + { 350 + .mask = BIT(0), 351 + .num_adr = ARRAY_SIZE(cs42l43_l0u0_adr), 352 + .adr_d = cs42l43_l0u0_adr, 353 + }, 354 + { 355 + .mask = BIT(1), 356 + .num_adr = ARRAY_SIZE(cs35l56x4_l1u3210_adr), 357 + .adr_d = cs35l56x4_l1u3210_adr, 358 + }, 359 + {} 360 + }; 361 + 362 + static const struct snd_soc_acpi_link_adr acp63_cs42l43_l1u0_cs35l56x4_l1u0123[] = { 363 + { 364 + .mask = BIT(1), 365 + .num_adr = ARRAY_SIZE(cs42l43_l1u0_cs35l56x4_l1u0123_adr), 366 + .adr_d = cs42l43_l1u0_cs35l56x4_l1u0123_adr, 367 + }, 368 + {} 369 + }; 370 + 371 + static const struct snd_soc_acpi_link_adr acp63_cs42l45_l0u0[] = { 372 + { 373 + .mask = BIT(0), 374 + .num_adr = ARRAY_SIZE(cs42l45_l0u0_adr), 375 + .adr_d = cs42l45_l0u0_adr, 376 + }, 377 + {} 378 + }; 379 + 380 + static const struct snd_soc_acpi_link_adr acp63_cs42l45_l0u0_cs35l63x2_l1u01[] = { 381 + { 382 + .mask = BIT(0), 383 + .num_adr = ARRAY_SIZE(cs42l45_l0u0_adr), 384 + .adr_d = cs42l45_l0u0_adr, 385 + }, 386 + { 387 + .mask = BIT(1), 388 + .num_adr = ARRAY_SIZE(cs35l63x2_l1u01_adr), 389 + .adr_d = cs35l63x2_l1u01_adr, 390 + }, 391 + {} 392 
+ }; 393 + 394 + static const struct snd_soc_acpi_link_adr acp63_cs42l45_l0u0_cs35l63x2_l1u13[] = { 395 + { 396 + .mask = BIT(0), 397 + .num_adr = ARRAY_SIZE(cs42l45_l0u0_adr), 398 + .adr_d = cs42l45_l0u0_adr, 399 + }, 400 + { 401 + .mask = BIT(1), 402 + .num_adr = ARRAY_SIZE(cs35l63x2_l1u13_adr), 403 + .adr_d = cs35l63x2_l1u13_adr, 404 + }, 405 + {} 406 + }; 407 + 408 + static const struct snd_soc_acpi_link_adr acp63_cs42l45_l1u0[] = { 409 + { 410 + .mask = BIT(1), 411 + .num_adr = ARRAY_SIZE(cs42l45_l1u0_adr), 412 + .adr_d = cs42l45_l1u0_adr, 413 + }, 414 + {} 415 + }; 416 + 417 + static const struct snd_soc_acpi_link_adr acp63_cs42l45_l1u0_cs35l63x2_l0u01[] = { 418 + { 419 + .mask = BIT(1), 420 + .num_adr = ARRAY_SIZE(cs42l45_l1u0_adr), 421 + .adr_d = cs42l45_l1u0_adr, 422 + }, 423 + { 424 + .mask = BIT(0), 425 + .num_adr = ARRAY_SIZE(cs35l63x2_l0u01_adr), 426 + .adr_d = cs35l63x2_l0u01_adr, 427 + }, 428 + {} 429 + }; 430 + 431 + static const struct snd_soc_acpi_link_adr acp63_cs42l45_l1u0_cs35l63x4_l0u0246[] = { 432 + { 433 + .mask = BIT(1), 434 + .num_adr = ARRAY_SIZE(cs42l45_l1u0_adr), 435 + .adr_d = cs42l45_l1u0_adr, 436 + }, 437 + { 438 + .mask = BIT(0), 439 + .num_adr = ARRAY_SIZE(cs35l63x4_l0u0246_adr), 440 + .adr_d = cs35l63x4_l0u0246_adr, 441 + }, 442 + {} 443 + }; 444 + 120 445 static const struct snd_soc_acpi_link_adr acp63_rt722_only[] = { 121 446 { 122 447 .mask = BIT(0), ··· 486 133 { 487 134 .link_mask = BIT(0) | BIT(1), 488 135 .links = acp63_4_in_1_sdca, 136 + .drv_name = "amd_sdw", 137 + }, 138 + { 139 + .link_mask = BIT(0) | BIT(1), 140 + .links = acp63_cs42l43_l0u0_cs35l56x4_l1u3210, 141 + .drv_name = "amd_sdw", 142 + }, 143 + { 144 + .link_mask = BIT(0) | BIT(1), 145 + .links = acp63_cs42l45_l1u0_cs35l63x4_l0u0246, 146 + .drv_name = "amd_sdw", 147 + }, 148 + { 149 + .link_mask = BIT(0) | BIT(1), 150 + .links = acp63_cs42l45_l0u0_cs35l63x2_l1u01, 151 + .drv_name = "amd_sdw", 152 + }, 153 + { 154 + .link_mask = BIT(0) | BIT(1), 155 + .links = 
acp63_cs42l45_l0u0_cs35l63x2_l1u13, 156 + .drv_name = "amd_sdw", 157 + }, 158 + { 159 + .link_mask = BIT(0) | BIT(1), 160 + .links = acp63_cs42l45_l1u0_cs35l63x2_l0u01, 161 + .drv_name = "amd_sdw", 162 + }, 163 + { 164 + .link_mask = BIT(1), 165 + .links = acp63_cs42l43_l1u0_cs35l56x4_l1u0123, 166 + .drv_name = "amd_sdw", 167 + }, 168 + { 169 + .link_mask = BIT(1), 170 + .links = acp63_cs35l56x4_l1u3210, 171 + .drv_name = "amd_sdw", 172 + }, 173 + { 174 + .link_mask = BIT(0), 175 + .links = acp63_cs35l63x4_l0u0246, 176 + .drv_name = "amd_sdw", 177 + }, 178 + { 179 + .link_mask = BIT(0), 180 + .links = acp63_cs42l43_l0u1, 181 + .drv_name = "amd_sdw", 182 + }, 183 + { 184 + .link_mask = BIT(0), 185 + .links = acp63_cs42l43b_l0u1, 186 + .drv_name = "amd_sdw", 187 + }, 188 + { 189 + .link_mask = BIT(0), 190 + .links = acp63_cs42l45_l0u0, 191 + .drv_name = "amd_sdw", 192 + }, 193 + { 194 + .link_mask = BIT(1), 195 + .links = acp63_cs42l45_l1u0, 489 196 .drv_name = "amd_sdw", 490 197 }, 491 198 {},
+7
sound/soc/amd/yc/acp6x-mach.c
··· 710 710 DMI_MATCH(DMI_PRODUCT_NAME, "ASUS EXPERTBOOK BM1503CDA"), 711 711 } 712 712 }, 713 + { 714 + .driver_data = &acp6x_card, 715 + .matches = { 716 + DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC."), 717 + DMI_MATCH(DMI_BOARD_NAME, "PM1503CDA"), 718 + } 719 + }, 713 720 {} 714 721 }; 715 722
+15 -1
sound/soc/codecs/cs35l56-shared.c
··· 26 26 27 27 #include "cs35l56.h" 28 28 29 - static const struct reg_sequence cs35l56_patch[] = { 29 + static const struct reg_sequence cs35l56_asp_patch[] = { 30 30 /* 31 31 * Firmware can change these to non-defaults to satisfy SDCA. 32 32 * Ensure that they are at known defaults. ··· 43 43 { CS35L56_ASP1TX2_INPUT, 0x00000000 }, 44 44 { CS35L56_ASP1TX3_INPUT, 0x00000000 }, 45 45 { CS35L56_ASP1TX4_INPUT, 0x00000000 }, 46 + }; 47 + 48 + int cs35l56_set_asp_patch(struct cs35l56_base *cs35l56_base) 49 + { 50 + return regmap_register_patch(cs35l56_base->regmap, cs35l56_asp_patch, 51 + ARRAY_SIZE(cs35l56_asp_patch)); 52 + } 53 + EXPORT_SYMBOL_NS_GPL(cs35l56_set_asp_patch, "SND_SOC_CS35L56_SHARED"); 54 + 55 + static const struct reg_sequence cs35l56_patch[] = { 56 + /* 57 + * Firmware can change these to non-defaults to satisfy SDCA. 58 + * Ensure that they are at known defaults. 59 + */ 46 60 { CS35L56_SWIRE_DP3_CH1_INPUT, 0x00000018 }, 47 61 { CS35L56_SWIRE_DP3_CH2_INPUT, 0x00000019 }, 48 62 { CS35L56_SWIRE_DP3_CH3_INPUT, 0x00000029 },
+10 -2
sound/soc/codecs/cs35l56.c
··· 348 348 return wm_adsp_event(w, kcontrol, event); 349 349 } 350 350 351 + static int cs35l56_asp_dai_probe(struct snd_soc_dai *codec_dai) 352 + { 353 + struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(codec_dai->component); 354 + 355 + return cs35l56_set_asp_patch(&cs35l56->base); 356 + } 357 + 351 358 static int cs35l56_asp_dai_set_fmt(struct snd_soc_dai *codec_dai, unsigned int fmt) 352 359 { 353 360 struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(codec_dai->component); ··· 559 552 } 560 553 561 554 static const struct snd_soc_dai_ops cs35l56_ops = { 555 + .probe = cs35l56_asp_dai_probe, 562 556 .set_fmt = cs35l56_asp_dai_set_fmt, 563 557 .set_tdm_slot = cs35l56_asp_dai_set_tdm_slot, 564 558 .hw_params = cs35l56_asp_dai_hw_params, ··· 1625 1617 if (num_pulls < 0) 1626 1618 return num_pulls; 1627 1619 1628 - if (num_pulls != num_gpios) { 1620 + if (num_pulls && (num_pulls != num_gpios)) { 1629 1621 dev_warn(cs35l56->base.dev, "%s count(%d) != %s count(%d)\n", 1630 - pull_name, num_pulls, gpio_name, num_gpios); 1622 + pull_name, num_pulls, gpio_name, num_gpios); 1631 1623 } 1632 1624 1633 1625 ret = cs35l56_check_and_save_onchip_spkid_gpios(&cs35l56->base,
+3 -2
sound/soc/codecs/rt1320-sdw.c
··· 2629 2629 struct sdw_port_config port_config; 2630 2630 struct sdw_port_config dmic_port_config[2]; 2631 2631 struct sdw_stream_runtime *sdw_stream; 2632 - int retval; 2632 + int retval, num_channels; 2633 2633 unsigned int sampling_rate; 2634 2634 2635 2635 dev_dbg(dai->dev, "%s %s", __func__, dai->name); ··· 2661 2661 dmic_port_config[1].num = 10; 2662 2662 break; 2663 2663 case RT1321_DEV_ID: 2664 - dmic_port_config[0].ch_mask = BIT(0) | BIT(1); 2664 + num_channels = params_channels(params); 2665 + dmic_port_config[0].ch_mask = GENMASK(num_channels - 1, 0); 2665 2666 dmic_port_config[0].num = 8; 2666 2667 break; 2667 2668 default:
+94
sound/soc/codecs/tas2781-fmwlib.c
··· 32 32 #define TAS2781_YRAM1_PAGE 42 33 33 #define TAS2781_YRAM1_START_REG 88 34 34 35 + #define TAS2781_PG_REG TASDEVICE_REG(0x00, 0x00, 0x7c) 36 + #define TAS2781_PG_1_0 0xA0 37 + #define TAS2781_PG_2_0 0xA8 38 + 35 39 #define TAS2781_YRAM2_START_PAGE 43 36 40 #define TAS2781_YRAM2_END_PAGE 49 37 41 #define TAS2781_YRAM2_START_REG 8 ··· 100 96 struct blktyp_devidx_map { 101 97 unsigned char blktyp; 102 98 unsigned char dev_idx; 99 + }; 100 + 101 + struct tas2781_cali_specific { 102 + unsigned char sin_gni[4]; 103 + int sin_gni_reg; 104 + bool is_sin_gn_flush; 103 105 }; 104 106 105 107 static const char deviceNumber[TASDEVICE_DSP_TAS_MAX_DEVICE] = { ··· 2464 2454 return ret; 2465 2455 } 2466 2456 2457 + static int tas2781_cali_preproc(struct tasdevice_priv *priv, int i) 2458 + { 2459 + struct tas2781_cali_specific *spec = priv->tasdevice[i].cali_specific; 2460 + struct calidata *cali_data = &priv->cali_data; 2461 + struct cali_reg *p = &cali_data->cali_reg_array; 2462 + unsigned char *data = cali_data->data; 2463 + int rc; 2464 + 2465 + /* 2466 + * On TAS2781, if the Speaker calibrated impedance is lower than 2467 + * default value hard-coded inside the TAS2781, it will cuase vol 2468 + * lower than normal. In order to fix this issue, the parameter of 2469 + * SineGainI need updating. 2470 + */ 2471 + if (spec == NULL) { 2472 + int k = i * (cali_data->cali_dat_sz_per_dev + 1); 2473 + int re_org, re_cal, corrected_sin_gn, pg_id; 2474 + unsigned char r0_deflt[4]; 2475 + 2476 + spec = devm_kzalloc(priv->dev, sizeof(*spec), GFP_KERNEL); 2477 + if (spec == NULL) 2478 + return -ENOMEM; 2479 + priv->tasdevice[i].cali_specific = spec; 2480 + rc = tasdevice_dev_bulk_read(priv, i, p->r0_reg, r0_deflt, 4); 2481 + if (rc < 0) { 2482 + dev_err(priv->dev, "invalid RE from %d = %d\n", i, rc); 2483 + return rc; 2484 + } 2485 + /* 2486 + * SineGainI need to be re-calculated, calculate the high 16 2487 + * bits. 
2488 + */ 2489 + re_org = r0_deflt[0] << 8 | r0_deflt[1]; 2490 + re_cal = data[k + 1] << 8 | data[k + 2]; 2491 + if (re_org > re_cal) { 2492 + rc = tasdevice_dev_read(priv, i, TAS2781_PG_REG, 2493 + &pg_id); 2494 + if (rc < 0) { 2495 + dev_err(priv->dev, "invalid PG id %d = %d\n", 2496 + i, rc); 2497 + return rc; 2498 + } 2499 + 2500 + spec->sin_gni_reg = (pg_id == TAS2781_PG_1_0) ? 2501 + TASDEVICE_REG(0, 0x1b, 0x34) : 2502 + TASDEVICE_REG(0, 0x18, 0x1c); 2503 + 2504 + rc = tasdevice_dev_bulk_read(priv, i, 2505 + spec->sin_gni_reg, 2506 + spec->sin_gni, 4); 2507 + if (rc < 0) { 2508 + dev_err(priv->dev, "wrong sinegaini %d = %d\n", 2509 + i, rc); 2510 + return rc; 2511 + } 2512 + corrected_sin_gn = re_org * ((spec->sin_gni[0] << 8) + 2513 + spec->sin_gni[1]); 2514 + corrected_sin_gn /= re_cal; 2515 + spec->sin_gni[0] = corrected_sin_gn >> 8; 2516 + spec->sin_gni[1] = corrected_sin_gn & 0xff; 2517 + 2518 + spec->is_sin_gn_flush = true; 2519 + } 2520 + } 2521 + 2522 + if (spec->is_sin_gn_flush) { 2523 + rc = tasdevice_dev_bulk_write(priv, i, spec->sin_gni_reg, 2524 + spec->sin_gni, 4); 2525 + if (rc < 0) { 2526 + dev_err(priv->dev, "update failed %d = %d\n", 2527 + i, rc); 2528 + return rc; 2529 + } 2530 + } 2531 + 2532 + return 0; 2533 + } 2534 + 2467 2535 static void tasdev_load_calibrated_data(struct tasdevice_priv *priv, int i) 2468 2536 { 2469 2537 struct calidata *cali_data = &priv->cali_data; ··· 2556 2468 return; 2557 2469 } 2558 2470 k++; 2471 + 2472 + if (priv->chip_id == TAS2781) { 2473 + rc = tas2781_cali_preproc(priv, i); 2474 + if (rc < 0) 2475 + return; 2476 + } 2559 2477 2560 2478 rc = tasdevice_dev_bulk_write(priv, i, p->r0_reg, &(data[k]), 4); 2561 2479 if (rc < 0) {
+10 -4
sound/soc/fsl/fsl_easrc.c
··· 52 52 struct soc_mreg_control *mc = 53 53 (struct soc_mreg_control *)kcontrol->private_value; 54 54 unsigned int regval = ucontrol->value.integer.value[0]; 55 + int ret; 56 + 57 + ret = (easrc_priv->bps_iec958[mc->regbase] != regval); 55 58 56 59 easrc_priv->bps_iec958[mc->regbase] = regval; 57 60 58 - return 0; 61 + return ret; 59 62 } 60 63 61 64 static int fsl_easrc_iec958_get_bits(struct snd_kcontrol *kcontrol, ··· 96 93 struct snd_soc_component *component = snd_kcontrol_chip(kcontrol); 97 94 struct soc_mreg_control *mc = 98 95 (struct soc_mreg_control *)kcontrol->private_value; 96 + struct fsl_asrc *easrc = snd_soc_component_get_drvdata(component); 99 97 unsigned int regval = ucontrol->value.integer.value[0]; 98 + bool changed; 100 99 int ret; 101 100 102 - ret = snd_soc_component_write(component, mc->regbase, regval); 103 - if (ret < 0) 101 + ret = regmap_update_bits_check(easrc->regmap, mc->regbase, 102 + GENMASK(31, 0), regval, &changed); 103 + if (ret != 0) 104 104 return ret; 105 105 106 - return 0; 106 + return changed; 107 107 } 108 108 109 109 #define SOC_SINGLE_REG_RW(xname, xreg) \
+8
sound/soc/intel/boards/sof_sdw.c
··· 763 763 }, 764 764 .driver_data = (void *)(SOC_SDW_CODEC_SPKR), 765 765 }, 766 + { 767 + .callback = sof_sdw_quirk_cb, 768 + .matches = { 769 + DMI_MATCH(DMI_SYS_VENDOR, "Alienware"), 770 + DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0CCD") 771 + }, 772 + .driver_data = (void *)(SOC_SDW_CODEC_SPKR), 773 + }, 766 774 /* Pantherlake devices*/ 767 775 { 768 776 .callback = sof_sdw_quirk_cb,
+4 -1
sound/soc/sdca/sdca_functions.c
··· 1156 1156 if (!terminal->is_dataport) { 1157 1157 const char *type_name = sdca_find_terminal_name(terminal->type); 1158 1158 1159 - if (type_name) 1159 + if (type_name) { 1160 1160 entity->label = devm_kasprintf(dev, GFP_KERNEL, "%s %s", 1161 1161 entity->label, type_name); 1162 + if (!entity->label) 1163 + return -ENOMEM; 1164 + } 1162 1165 } 1163 1166 1164 1167 ret = fwnode_property_read_u32(entity_node,
+2
sound/usb/quirks.c
··· 2219 2219 QUIRK_FLAG_ALIGN_TRANSFER), 2220 2220 DEVICE_FLG(0x05e1, 0x0480, /* Hauppauge Woodbury */ 2221 2221 QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER), 2222 + DEVICE_FLG(0x0624, 0x3d3f, /* AB13X USB Audio */ 2223 + QUIRK_FLAG_FORCE_IFACE_RESET | QUIRK_FLAG_IFACE_DELAY), 2222 2224 DEVICE_FLG(0x0644, 0x8043, /* TEAC UD-501/UD-501V2/UD-503/NT-503 */ 2223 2225 QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY | 2224 2226 QUIRK_FLAG_IFACE_DELAY),
+2 -10
sound/usb/usx2y/us122l.c
··· 520 520 return err; 521 521 } 522 522 523 - usb_get_intf(usb_ifnum_to_if(device, 0)); 524 - usb_get_dev(device); 525 523 *cardp = card; 526 524 return 0; 527 525 } ··· 540 542 if (intf->cur_altsetting->desc.bInterfaceNumber != 1) 541 543 return 0; 542 544 543 - err = us122l_usb_probe(usb_get_intf(intf), id, &card); 544 - if (err < 0) { 545 - usb_put_intf(intf); 545 + err = us122l_usb_probe(intf, id, &card); 546 + if (err < 0) 546 547 return err; 547 - } 548 548 549 549 usb_set_intfdata(intf, card); 550 550 return 0; ··· 569 573 list_for_each(p, &us122l->midi_list) { 570 574 snd_usbmidi_disconnect(p); 571 575 } 572 - 573 - usb_put_intf(usb_ifnum_to_if(us122l->dev, 0)); 574 - usb_put_intf(usb_ifnum_to_if(us122l->dev, 1)); 575 - usb_put_dev(us122l->dev); 576 576 577 577 snd_card_free_when_closed(card); 578 578 }
+7 -2
tools/bpf/resolve_btfids/Makefile
··· 23 23 HOSTCC ?= gcc 24 24 HOSTLD ?= ld 25 25 HOSTAR ?= ar 26 + HOSTPKG_CONFIG ?= pkg-config 26 27 CROSS_COMPILE = 27 28 28 29 OUTPUT ?= $(srctree)/tools/bpf/resolve_btfids/ ··· 64 63 $(abspath $@) install_headers 65 64 66 65 LIBELF_FLAGS := $(shell $(HOSTPKG_CONFIG) libelf --cflags 2>/dev/null) 66 + 67 + ifneq ($(filter -static,$(EXTRA_LDFLAGS)),) 68 + LIBELF_LIBS := $(shell $(HOSTPKG_CONFIG) libelf --libs --static 2>/dev/null || echo -lelf -lzstd) 69 + else 67 70 LIBELF_LIBS := $(shell $(HOSTPKG_CONFIG) libelf --libs 2>/dev/null || echo -lelf) 71 + endif 68 72 69 73 ZLIB_LIBS := $(shell $(HOSTPKG_CONFIG) zlib --libs 2>/dev/null || echo -lz) 70 - ZSTD_LIBS := $(shell $(HOSTPKG_CONFIG) libzstd --libs 2>/dev/null || echo -lzstd) 71 74 72 75 HOSTCFLAGS_resolve_btfids += -g \ 73 76 -I$(srctree)/tools/include \ ··· 81 76 $(LIBELF_FLAGS) \ 82 77 -Wall -Werror 83 78 84 - LIBS = $(LIBELF_LIBS) $(ZLIB_LIBS) $(ZSTD_LIBS) 79 + LIBS = $(LIBELF_LIBS) $(ZLIB_LIBS) 85 80 86 81 export srctree OUTPUT HOSTCFLAGS_resolve_btfids Q HOSTCC HOSTLD HOSTAR 87 82 include $(srctree)/tools/build/Makefile.include
+4
tools/include/linux/gfp.h
··· 5 5 #include <linux/types.h> 6 6 #include <linux/gfp_types.h> 7 7 8 + /* Helper macro to avoid gfp flags if they are the default one */ 9 + #define __default_gfp(a,...) a 10 + #define default_gfp(...) __default_gfp(__VA_ARGS__ __VA_OPT__(,) GFP_KERNEL) 11 + 8 12 static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags) 9 13 { 10 14 return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
+19
tools/include/linux/overflow.h
··· 69 69 }) 70 70 71 71 /** 72 + * size_mul() - Calculate size_t multiplication with saturation at SIZE_MAX 73 + * @factor1: first factor 74 + * @factor2: second factor 75 + * 76 + * Returns: calculate @factor1 * @factor2, both promoted to size_t, 77 + * with any overflow causing the return value to be SIZE_MAX. The 78 + * lvalue must be size_t to avoid implicit type conversion. 79 + */ 80 + static inline size_t __must_check size_mul(size_t factor1, size_t factor2) 81 + { 82 + size_t bytes; 83 + 84 + if (check_mul_overflow(factor1, factor2, &bytes)) 85 + return SIZE_MAX; 86 + 87 + return bytes; 88 + } 89 + 90 + /** 72 91 * array_size() - Calculate size of 2-dimensional array. 73 92 * 74 93 * @a: dimension one
+9
tools/include/linux/slab.h
··· 202 202 return sheaf->size; 203 203 } 204 204 205 + #define __alloc_objs(KMALLOC, GFP, TYPE, COUNT) \ 206 + ({ \ 207 + const size_t __obj_size = size_mul(sizeof(TYPE), COUNT); \ 208 + (TYPE *)KMALLOC(__obj_size, GFP); \ 209 + }) 210 + 211 + #define kzalloc_obj(P, ...) \ 212 + __alloc_objs(kzalloc, default_gfp(__VA_ARGS__), typeof(P), 1) 213 + 205 214 #endif /* _TOOLS_SLAB_H */
+5 -3
tools/objtool/Makefile
··· 142 142 $(Q)$(RM) -r -- $(LIBSUBCMD_OUTPUT) 143 143 144 144 clean: $(LIBSUBCMD)-clean 145 - $(call QUIET_CLEAN, objtool) $(RM) $(OBJTOOL) 146 - $(Q)find $(OUTPUT) -name '*.o' -delete -o -name '\.*.cmd' -delete -o -name '\.*.d' -delete 145 + $(Q)find $(OUTPUT) \( -name '*.o' -o -name '\.*.cmd' -o -name '\.*.d' \) -type f -print | xargs $(RM) 147 146 $(Q)$(RM) $(OUTPUT)arch/x86/lib/cpu-feature-names.c $(OUTPUT)fixdep 148 147 $(Q)$(RM) $(OUTPUT)arch/x86/lib/inat-tables.c $(OUTPUT)fixdep 149 148 $(Q)$(RM) -- $(OUTPUT)FEATURE-DUMP.objtool 150 149 $(Q)$(RM) -r -- $(OUTPUT)feature 151 150 151 + mrproper: clean 152 + $(call QUIET_CLEAN, objtool) $(RM) $(OBJTOOL) 153 + 152 154 FORCE: 153 155 154 - .PHONY: clean FORCE 156 + .PHONY: clean mrproper FORCE
+4 -2
tools/testing/kunit/kunit_kernel.py
··· 346 346 return self.validate_config(build_dir) 347 347 348 348 def run_kernel(self, args: Optional[List[str]]=None, build_dir: str='', filter_glob: str='', filter: str='', filter_action: Optional[str]=None, timeout: Optional[int]=None) -> Iterator[str]: 349 - if not args: 350 - args = [] 349 + # Copy to avoid mutating the caller-supplied list. exec_tests() reuses 350 + # the same args across repeated run_kernel() calls (e.g. --run_isolated), 351 + # so appending to the original would accumulate stale flags on each call. 352 + args = list(args) if args else [] 351 353 if filter_glob: 352 354 args.append('kunit.filter_glob=' + filter_glob) 353 355 if filter:
+26
tools/testing/kunit/kunit_tool_test.py
··· 503 503 with open(kunit_kernel.get_outfile_path(build_dir), 'rt') as outfile: 504 504 self.assertEqual(outfile.read(), 'hi\nbye\n', msg='Missing some output') 505 505 506 + def test_run_kernel_args_not_mutated(self): 507 + """Verify run_kernel() copies args so callers can reuse them.""" 508 + start_calls = [] 509 + 510 + def fake_start(start_args, unused_build_dir): 511 + start_calls.append(list(start_args)) 512 + return subprocess.Popen(['printf', 'KTAP version 1\n'], 513 + text=True, stdout=subprocess.PIPE) 514 + 515 + with tempfile.TemporaryDirectory('') as build_dir: 516 + tree = kunit_kernel.LinuxSourceTree(build_dir, 517 + kunitconfig_paths=[os.devnull]) 518 + with mock.patch.object(tree._ops, 'start', side_effect=fake_start), \ 519 + mock.patch.object(kunit_kernel.subprocess, 'call'): 520 + kernel_args = ['mem=1G'] 521 + for _ in tree.run_kernel(args=kernel_args, build_dir=build_dir, 522 + filter_glob='suite.test1'): 523 + pass 524 + for _ in tree.run_kernel(args=kernel_args, build_dir=build_dir, 525 + filter_glob='suite.test2'): 526 + pass 527 + self.assertEqual(kernel_args, ['mem=1G'], 528 + 'run_kernel() should not modify caller args') 529 + self.assertIn('kunit.filter_glob=suite.test1', start_calls[0]) 530 + self.assertIn('kunit.filter_glob=suite.test2', start_calls[1]) 531 + 506 532 def test_build_reconfig_no_config(self): 507 533 with tempfile.TemporaryDirectory('') as build_dir: 508 534 with open(kunit_kernel.get_kunitconfig_path(build_dir), 'w') as f:
+2 -2
tools/testing/selftests/arm64/abi/hwcap.c
··· 475 475 476 476 static void sve2p1_sigill(void) 477 477 { 478 - /* BFADD Z0.H, Z0.H, Z0.H */ 479 - asm volatile(".inst 0x65000000" : : : "z0"); 478 + /* LD1Q {Z0.Q}, P0/Z, [Z0.D, X0] */ 479 + asm volatile(".inst 0xC400A000" : : : "z0"); 480 480 } 481 481 482 482 static void sve2p2_sigill(void)
+1
tools/testing/selftests/bpf/Makefile
··· 409 409 CC="$(HOSTCC)" LD="$(HOSTLD)" AR="$(HOSTAR)" \ 410 410 LIBBPF_INCLUDE=$(HOST_INCLUDE_DIR) \ 411 411 EXTRA_LDFLAGS='$(SAN_LDFLAGS) $(EXTRA_LDFLAGS)' \ 412 + HOSTPKG_CONFIG=$(PKG_CONFIG) \ 412 413 OUTPUT=$(HOST_BUILD_DIR)/resolve_btfids/ BPFOBJ=$(HOST_BPFOBJ) 413 414 414 415 # Get Clang's default includes on this system, as opposed to those seen by
+58 -18
tools/testing/selftests/bpf/prog_tests/reg_bounds.c
··· 422 422 } 423 423 } 424 424 425 - static struct range range_improve(enum num_t t, struct range old, struct range new) 425 + static struct range range_intersection(enum num_t t, struct range old, struct range new) 426 426 { 427 427 return range(t, max_t(t, old.a, new.a), min_t(t, old.b, new.b)); 428 + } 429 + 430 + /* 431 + * Result is precise when 'x' and 'y' overlap or form a continuous range; 432 + * result is an over-approximation if 'x' and 'y' do not overlap. 433 + */ 434 + static struct range range_union(enum num_t t, struct range x, struct range y) 435 + { 436 + if (!is_valid_range(t, x)) 437 + return y; 438 + if (!is_valid_range(t, y)) 439 + return x; 440 + return range(t, min_t(t, x.a, y.a), max_t(t, x.b, y.b)); 441 + } 442 + 443 + /* 444 + * This function attempts to improve the x range by intersecting it with y. 445 + * range_cast(... to_t ...) loses precision for ranges that pass to_t 446 + * min/max boundaries. To avoid such precision losses, this function 447 + * splits both x and y into halves corresponding to non-overflowing 448 + * sub-ranges: [0, smax] and [smin, -1]. 449 + * Final result is computed as follows: 450 + * 451 + * ((x ∩ [0, smax]) ∩ (y ∩ [0, smax])) ∪ 452 + * ((x ∩ [smin,-1]) ∩ (y ∩ [smin,-1])) 453 + * 454 + * Precision might still be lost if final union is not a continuous range. 
455 + */ 456 + static struct range range_refine_in_halves(enum num_t x_t, struct range x, 457 + enum num_t y_t, struct range y) 458 + { 459 + struct range x_pos, x_neg, y_pos, y_neg, r_pos, r_neg; 460 + u64 smax, smin, neg_one; 461 + 462 + if (t_is_32(x_t)) { 463 + smax = (u64)(u32)S32_MAX; 464 + smin = (u64)(u32)S32_MIN; 465 + neg_one = (u64)(u32)(s32)(-1); 466 + } else { 467 + smax = (u64)S64_MAX; 468 + smin = (u64)S64_MIN; 469 + neg_one = U64_MAX; 470 + } 471 + x_pos = range_intersection(x_t, x, range(x_t, 0, smax)); 472 + x_neg = range_intersection(x_t, x, range(x_t, smin, neg_one)); 473 + y_pos = range_intersection(y_t, y, range(x_t, 0, smax)); 474 + y_neg = range_intersection(y_t, y, range(y_t, smin, neg_one)); 475 + r_pos = range_intersection(x_t, x_pos, range_cast(y_t, x_t, y_pos)); 476 + r_neg = range_intersection(x_t, x_neg, range_cast(y_t, x_t, y_neg)); 477 + return range_union(x_t, r_pos, r_neg); 478 + 428 479 } 429 480 430 481 static struct range range_refine(enum num_t x_t, struct range x, enum num_t y_t, struct range y) 431 482 { 432 483 struct range y_cast; 484 + 485 + if (t_is_32(x_t) == t_is_32(y_t)) 486 + x = range_refine_in_halves(x_t, x, y_t, y); 433 487 434 488 y_cast = range_cast(y_t, x_t, y); 435 489 ··· 498 444 */ 499 445 if (x_t == S64 && y_t == S32 && y_cast.a <= S32_MAX && y_cast.b <= S32_MAX && 500 446 (s64)x.a >= S32_MIN && (s64)x.b <= S32_MAX) 501 - return range_improve(x_t, x, y_cast); 447 + return range_intersection(x_t, x, y_cast); 502 448 503 449 /* the case when new range knowledge, *y*, is a 32-bit subregister 504 450 * range, while previous range knowledge, *x*, is a full register ··· 516 462 x_swap = range(x_t, swap_low32(x.a, y_cast.a), swap_low32(x.b, y_cast.b)); 517 463 if (!is_valid_range(x_t, x_swap)) 518 464 return x; 519 - return range_improve(x_t, x, x_swap); 520 - } 521 - 522 - if (!t_is_32(x_t) && !t_is_32(y_t) && x_t != y_t) { 523 - if (x_t == S64 && x.a > x.b) { 524 - if (x.b < y.a && x.a <= y.b) 525 - return 
range(x_t, x.a, y.b); 526 - if (x.a > y.b && x.b >= y.a) 527 - return range(x_t, y.a, x.b); 528 - } else if (x_t == U64 && y.a > y.b) { 529 - if (y.b < x.a && y.a <= x.b) 530 - return range(x_t, y.a, x.b); 531 - if (y.a > x.b && y.b >= x.a) 532 - return range(x_t, x.a, y.b); 533 - } 465 + return range_intersection(x_t, x, x_swap); 534 466 } 535 467 536 468 /* otherwise, plain range cast and intersection works */ 537 - return range_improve(x_t, x, y_cast); 469 + return range_intersection(x_t, x, y_cast); 538 470 } 539 471 540 472 /* =======================
+17 -17
tools/testing/selftests/bpf/progs/exceptions_assert.c
··· 18 18 return *(u64 *)num; \ 19 19 } 20 20 21 - __msg(": R0=0xffffffff80000000") 21 + __msg("R{{.}}=0xffffffff80000000") 22 22 check_assert(s64, ==, eq_int_min, INT_MIN); 23 - __msg(": R0=0x7fffffff") 23 + __msg("R{{.}}=0x7fffffff") 24 24 check_assert(s64, ==, eq_int_max, INT_MAX); 25 - __msg(": R0=0") 25 + __msg("R{{.}}=0") 26 26 check_assert(s64, ==, eq_zero, 0); 27 - __msg(": R0=0x8000000000000000 R1=0x8000000000000000") 27 + __msg("R{{.}}=0x8000000000000000") 28 28 check_assert(s64, ==, eq_llong_min, LLONG_MIN); 29 - __msg(": R0=0x7fffffffffffffff R1=0x7fffffffffffffff") 29 + __msg("R{{.}}=0x7fffffffffffffff") 30 30 check_assert(s64, ==, eq_llong_max, LLONG_MAX); 31 31 32 - __msg(": R0=scalar(id=1,smax=0x7ffffffe)") 32 + __msg("R{{.}}=scalar(id=1,smax=0x7ffffffe)") 33 33 check_assert(s64, <, lt_pos, INT_MAX); 34 - __msg(": R0=scalar(id=1,smax=-1,umin=0x8000000000000000,var_off=(0x8000000000000000; 0x7fffffffffffffff))") 34 + __msg("R{{.}}=scalar(id=1,smax=-1,umin=0x8000000000000000,var_off=(0x8000000000000000; 0x7fffffffffffffff))") 35 35 check_assert(s64, <, lt_zero, 0); 36 - __msg(": R0=scalar(id=1,smax=0xffffffff7fffffff") 36 + __msg("R{{.}}=scalar(id=1,smax=0xffffffff7fffffff") 37 37 check_assert(s64, <, lt_neg, INT_MIN); 38 38 39 - __msg(": R0=scalar(id=1,smax=0x7fffffff)") 39 + __msg("R{{.}}=scalar(id=1,smax=0x7fffffff)") 40 40 check_assert(s64, <=, le_pos, INT_MAX); 41 - __msg(": R0=scalar(id=1,smax=0)") 41 + __msg("R{{.}}=scalar(id=1,smax=0)") 42 42 check_assert(s64, <=, le_zero, 0); 43 - __msg(": R0=scalar(id=1,smax=0xffffffff80000000") 43 + __msg("R{{.}}=scalar(id=1,smax=0xffffffff80000000") 44 44 check_assert(s64, <=, le_neg, INT_MIN); 45 45 46 - __msg(": R0=scalar(id=1,smin=umin=0x80000000,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 46 + __msg("R{{.}}=scalar(id=1,smin=umin=0x80000000,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 47 47 check_assert(s64, >, gt_pos, INT_MAX); 48 - __msg(": 
R0=scalar(id=1,smin=umin=1,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 48 + __msg("R{{.}}=scalar(id=1,smin=umin=1,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 49 49 check_assert(s64, >, gt_zero, 0); 50 - __msg(": R0=scalar(id=1,smin=0xffffffff80000001") 50 + __msg("R{{.}}=scalar(id=1,smin=0xffffffff80000001") 51 51 check_assert(s64, >, gt_neg, INT_MIN); 52 52 53 - __msg(": R0=scalar(id=1,smin=umin=0x7fffffff,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 53 + __msg("R{{.}}=scalar(id=1,smin=umin=0x7fffffff,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 54 54 check_assert(s64, >=, ge_pos, INT_MAX); 55 - __msg(": R0=scalar(id=1,smin=0,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 55 + __msg("R{{.}}=scalar(id=1,smin=0,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 56 56 check_assert(s64, >=, ge_zero, 0); 57 - __msg(": R0=scalar(id=1,smin=0xffffffff80000000") 57 + __msg("R{{.}}=scalar(id=1,smin=0xffffffff80000000") 58 58 check_assert(s64, >=, ge_neg, INT_MIN); 59 59 60 60 SEC("?tc")
+38 -1
tools/testing/selftests/bpf/progs/verifier_bounds.c
··· 1148 1148 SEC("xdp") 1149 1149 __description("bound check with JMP32_JSLT for crossing 32-bit signed boundary") 1150 1150 __success __retval(0) 1151 - __flag(!BPF_F_TEST_REG_INVARIANTS) /* known invariants violation */ 1151 + __flag(BPF_F_TEST_REG_INVARIANTS) 1152 1152 __naked void crossing_32_bit_signed_boundary_2(void) 1153 1153 { 1154 1154 asm volatile (" \ ··· 1995 1995 if r0 == 0x10 goto +1; \ 1996 1996 r10 = 0; \ 1997 1997 exit; \ 1998 + " : 1999 + : __imm(bpf_get_prandom_u32) 2000 + : __clobber_all); 2001 + } 2002 + 2003 + SEC("socket") 2004 + __success 2005 + __flag(BPF_F_TEST_REG_INVARIANTS) 2006 + __naked void signed_unsigned_intersection32_case1(void *ctx) 2007 + { 2008 + asm volatile(" \ 2009 + call %[bpf_get_prandom_u32]; \ 2010 + w0 &= 0xffffffff; \ 2011 + if w0 < 0x3 goto 1f; /* on fall-through u32 range [3..U32_MAX] */ \ 2012 + if w0 s> 0x1 goto 1f; /* on fall-through s32 range [S32_MIN..1] */ \ 2013 + if w0 s< 0x0 goto 1f; /* range can be narrowed to [S32_MIN..-1] */ \ 2014 + r10 = 0; /* thus predicting the jump. */ \ 2015 + 1: exit; \ 2016 + " : 2017 + : __imm(bpf_get_prandom_u32) 2018 + : __clobber_all); 2019 + } 2020 + 2021 + SEC("socket") 2022 + __success 2023 + __flag(BPF_F_TEST_REG_INVARIANTS) 2024 + __naked void signed_unsigned_intersection32_case2(void *ctx) 2025 + { 2026 + asm volatile(" \ 2027 + call %[bpf_get_prandom_u32]; \ 2028 + w0 &= 0xffffffff; \ 2029 + if w0 > 0x80000003 goto 1f; /* on fall-through u32 range [0..S32_MIN+3] */ \ 2030 + if w0 s< -3 goto 1f; /* on fall-through s32 range [-3..S32_MAX] */ \ 2031 + if w0 s> 5 goto 1f; /* on fall-through s32 range [-3..5] */ \ 2032 + if w0 <= 5 goto 1f; /* range can be narrowed to [0..5] */ \ 2033 + r10 = 0; /* thus predicting the jump */ \ 2034 + 1: exit; \ 1998 2035 " : 1999 2036 : __imm(bpf_get_prandom_u32) 2000 2037 : __clobber_all);
+64
tools/testing/selftests/bpf/progs/verifier_linked_scalars.c
··· 363 363 __sink(path[0]); 364 364 } 365 365 366 + void dummy_calls(void) 367 + { 368 + bpf_iter_num_new(0, 0, 0); 369 + bpf_iter_num_next(0); 370 + bpf_iter_num_destroy(0); 371 + } 372 + 373 + SEC("socket") 374 + __success 375 + __flag(BPF_F_TEST_STATE_FREQ) 376 + int spurious_precision_marks(void *ctx) 377 + { 378 + struct bpf_iter_num iter; 379 + 380 + asm volatile( 381 + "r1 = %[iter];" 382 + "r2 = 0;" 383 + "r3 = 10;" 384 + "call %[bpf_iter_num_new];" 385 + "1:" 386 + "r1 = %[iter];" 387 + "call %[bpf_iter_num_next];" 388 + "if r0 == 0 goto 4f;" 389 + "r7 = *(u32 *)(r0 + 0);" 390 + "r8 = *(u32 *)(r0 + 0);" 391 + /* This jump can't be predicted and does not change r7 or r8 state. */ 392 + "if r7 > r8 goto 2f;" 393 + /* Branch explored first ties r2 and r7 as having the same id. */ 394 + "r2 = r7;" 395 + "goto 3f;" 396 + "2:" 397 + /* Branch explored second does not tie r2 and r7 but has a function call. */ 398 + "call %[bpf_get_prandom_u32];" 399 + "3:" 400 + /* 401 + * A checkpoint. 402 + * When first branch is explored, this would inject linked registers 403 + * r2 and r7 into the jump history. 404 + * When second branch is explored, this would be a cache hit point, 405 + * triggering propagate_precision(). 406 + */ 407 + "if r7 <= 42 goto +0;" 408 + /* 409 + * Mark r7 as precise using an if condition that is always true. 410 + * When reached via the second branch, this triggered a bug in the backtrack_insn() 411 + * because r2 (tied to r7) was propagated as precise to a call. 412 + */ 413 + "if r7 <= 0xffffFFFF goto +0;" 414 + "goto 1b;" 415 + "4:" 416 + "r1 = %[iter];" 417 + "call %[bpf_iter_num_destroy];" 418 + : 419 + : __imm_ptr(iter), 420 + __imm(bpf_iter_num_new), 421 + __imm(bpf_iter_num_next), 422 + __imm(bpf_iter_num_destroy), 423 + __imm(bpf_get_prandom_u32) 424 + : __clobber_common, "r7", "r8" 425 + ); 426 + 427 + return 0; 428 + } 429 + 366 430 char _license[] SEC("license") = "GPL";
+42 -14
tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
··· 40 40 */ 41 41 "r3 = r10;" 42 42 "r3 += r0;" 43 + /* Mark r1 and r2 as alive. */ 44 + "r1 = r1;" 45 + "r2 = r2;" 43 46 "r0 = 0;" 44 47 "exit;" 45 48 : ··· 76 73 */ 77 74 "r4 = r10;" 78 75 "r4 += r0;" 76 + /* Mark r1 and r2 as alive. */ 77 + "r1 = r1;" 78 + "r2 = r2;" 79 79 "r0 = 0;" 80 80 "exit;" 81 81 : ··· 112 106 */ 113 107 "r4 = r10;" 114 108 "r4 += r3;" 109 + /* Mark r1 and r2 as alive. */ 110 + "r0 = r0;" 111 + "r1 = r1;" 112 + "r2 = r2;" 115 113 "r0 = 0;" 116 114 "exit;" 117 115 : ··· 153 143 */ 154 144 "r3 = r10;" 155 145 "r3 += r0;" 146 + /* Mark r1 and r2 as alive. */ 147 + "r1 = r1;" 148 + "r2 = r2;" 156 149 "r0 = 0;" 157 150 "exit;" 158 151 : ··· 169 156 */ 170 157 SEC("socket") 171 158 __success __log_level(2) 172 - __msg("12: (0f) r2 += r1") 159 + __msg("17: (0f) r2 += r1") 173 160 /* Current state */ 174 - __msg("frame2: last_idx 12 first_idx 11 subseq_idx -1 ") 175 - __msg("frame2: regs=r1 stack= before 11: (bf) r2 = r10") 161 + __msg("frame2: last_idx 17 first_idx 14 subseq_idx -1 ") 162 + __msg("frame2: regs=r1 stack= before 16: (bf) r2 = r10") 176 163 __msg("frame2: parent state regs=r1 stack=") 177 164 __msg("frame1: parent state regs= stack=") 178 165 __msg("frame0: parent state regs= stack=") 179 166 /* Parent state */ 180 - __msg("frame2: last_idx 10 first_idx 10 subseq_idx 11 ") 181 - __msg("frame2: regs=r1 stack= before 10: (25) if r1 > 0x7 goto pc+0") 167 + __msg("frame2: last_idx 13 first_idx 13 subseq_idx 14 ") 168 + __msg("frame2: regs=r1 stack= before 13: (25) if r1 > 0x7 goto pc+0") 182 169 __msg("frame2: parent state regs=r1 stack=") 183 170 /* frame1.r{6,7} are marked because mark_precise_scalar_ids() 184 171 * looks for all registers with frame2.r1.id in the current state ··· 186 173 __msg("frame1: parent state regs=r6,r7 stack=") 187 174 __msg("frame0: parent state regs=r6 stack=") 188 175 /* Parent state */ 189 - __msg("frame2: last_idx 8 first_idx 8 subseq_idx 10") 190 - __msg("frame2: regs=r1 stack= before 8: (85) call 
pc+1") 176 + __msg("frame2: last_idx 9 first_idx 9 subseq_idx 13") 177 + __msg("frame2: regs=r1 stack= before 9: (85) call pc+3") 191 178 /* frame1.r1 is marked because of backtracking of call instruction */ 192 179 __msg("frame1: parent state regs=r1,r6,r7 stack=") 193 180 __msg("frame0: parent state regs=r6 stack=") 194 181 /* Parent state */ 195 - __msg("frame1: last_idx 7 first_idx 6 subseq_idx 8") 196 - __msg("frame1: regs=r1,r6,r7 stack= before 7: (bf) r7 = r1") 197 - __msg("frame1: regs=r1,r6 stack= before 6: (bf) r6 = r1") 182 + __msg("frame1: last_idx 8 first_idx 7 subseq_idx 9") 183 + __msg("frame1: regs=r1,r6,r7 stack= before 8: (bf) r7 = r1") 184 + __msg("frame1: regs=r1,r6 stack= before 7: (bf) r6 = r1") 198 185 __msg("frame1: parent state regs=r1 stack=") 199 186 __msg("frame0: parent state regs=r6 stack=") 200 187 /* Parent state */ 201 - __msg("frame1: last_idx 4 first_idx 4 subseq_idx 6") 202 - __msg("frame1: regs=r1 stack= before 4: (85) call pc+1") 188 + __msg("frame1: last_idx 4 first_idx 4 subseq_idx 7") 189 + __msg("frame1: regs=r1 stack= before 4: (85) call pc+2") 203 190 __msg("frame0: parent state regs=r1,r6 stack=") 204 191 /* Parent state */ 205 192 __msg("frame0: last_idx 3 first_idx 1 subseq_idx 4") ··· 217 204 "r1 = r0;" 218 205 "r6 = r0;" 219 206 "call precision_many_frames__foo;" 207 + "r6 = r6;" /* mark r6 as live */ 220 208 "exit;" 221 209 : 222 210 : __imm(bpf_ktime_get_ns) ··· 234 220 "r6 = r1;" 235 221 "r7 = r1;" 236 222 "call precision_many_frames__bar;" 223 + "r6 = r6;" /* mark r6 as live */ 224 + "r7 = r7;" /* mark r7 as live */ 237 225 "exit" 238 226 ::: __clobber_all); 239 227 } ··· 245 229 { 246 230 asm volatile ( 247 231 "if r1 > 7 goto +0;" 232 + "r6 = 0;" /* mark r6 as live */ 233 + "r7 = 0;" /* mark r7 as live */ 248 234 /* force r1 to be precise, this eventually marks: 249 235 * - bar frame r1 250 236 * - foo frame r{1,6,7} ··· 358 340 "r3 += r7;" 359 341 /* force r9 to be precise, this also marks r8 */ 360 342 "r3 += 
r9;" 343 + "r6 = r6;" /* mark r6 as live */ 344 + "r8 = r8;" /* mark r8 as live */ 361 345 "exit;" 362 346 : 363 347 : __imm(bpf_ktime_get_ns) ··· 373 353 * collect_linked_regs() can't tie more than 6 registers for a single insn. 374 354 */ 375 355 __msg("8: (25) if r0 > 0x7 goto pc+0 ; R0=scalar(id=1") 376 - __msg("9: (bf) r6 = r6 ; R6=scalar(id=2") 356 + __msg("14: (bf) r6 = r6 ; R6=scalar(id=2") 377 357 /* check that r{0-5} are marked precise after 'if' */ 378 358 __msg("frame0: regs=r0 stack= before 8: (25) if r0 > 0x7 goto pc+0") 379 359 __msg("frame0: parent state regs=r0,r1,r2,r3,r4,r5 stack=:") ··· 392 372 "r6 = r0;" 393 373 /* propagate range for r{0-6} */ 394 374 "if r0 > 7 goto +0;" 375 + /* keep r{1-5} live */ 376 + "r1 = r1;" 377 + "r2 = r2;" 378 + "r3 = r3;" 379 + "r4 = r4;" 380 + "r5 = r5;" 395 381 /* make r6 appear in the log */ 396 382 "r6 = r6;" 397 383 /* force r0 to be precise, ··· 543 517 "*(u64*)(r10 - 8) = r1;" 544 518 /* r9 = pointer to stack */ 545 519 "r9 = r10;" 546 - "r9 += -8;" 520 + "r9 += -16;" 547 521 /* r8 = ktime_get_ns() */ 548 522 "call %[bpf_ktime_get_ns];" 549 523 "r8 = r0;" ··· 564 538 "if r7 > 4 goto l2_%=;" 565 539 /* Access memory at r9[r6] */ 566 540 "r9 += r6;" 541 + "r9 += r7;" 542 + "r9 += r8;" 567 543 "r0 = *(u8*)(r9 + 0);" 568 544 "l2_%=:" 569 545 "r0 = 0;"
+4 -4
tools/testing/selftests/bpf/verifier/precise.c
··· 44 44 mark_precise: frame0: regs=r2 stack= before 23\ 45 45 mark_precise: frame0: regs=r2 stack= before 22\ 46 46 mark_precise: frame0: regs=r2 stack= before 20\ 47 - mark_precise: frame0: parent state regs=r2,r9 stack=:\ 47 + mark_precise: frame0: parent state regs=r2 stack=:\ 48 48 mark_precise: frame0: last_idx 19 first_idx 10\ 49 - mark_precise: frame0: regs=r2,r9 stack= before 19\ 49 + mark_precise: frame0: regs=r2 stack= before 19\ 50 50 mark_precise: frame0: regs=r9 stack= before 18\ 51 51 mark_precise: frame0: regs=r8,r9 stack= before 17\ 52 52 mark_precise: frame0: regs=r0,r9 stack= before 15\ ··· 107 107 mark_precise: frame0: parent state regs=r2 stack=:\ 108 108 mark_precise: frame0: last_idx 20 first_idx 20\ 109 109 mark_precise: frame0: regs=r2 stack= before 20\ 110 - mark_precise: frame0: parent state regs=r2,r9 stack=:\ 110 + mark_precise: frame0: parent state regs=r2 stack=:\ 111 111 mark_precise: frame0: last_idx 19 first_idx 17\ 112 - mark_precise: frame0: regs=r2,r9 stack= before 19\ 112 + mark_precise: frame0: regs=r2 stack= before 19\ 113 113 mark_precise: frame0: regs=r9 stack= before 18\ 114 114 mark_precise: frame0: regs=r8,r9 stack= before 17\ 115 115 mark_precise: frame0: parent state regs= stack=:",
+21 -13
tools/testing/selftests/hid/tests/test_wacom_generic.py
··· 598 598 if unit_set: 599 599 assert required[usage].contains(field) 600 600 601 - def test_prop_direct(self): 602 - """ 603 - Todo: Verify that INPUT_PROP_DIRECT is set on display devices. 604 - """ 605 - pass 606 - 607 - def test_prop_pointer(self): 608 - """ 609 - Todo: Verify that INPUT_PROP_POINTER is set on opaque devices. 610 - """ 611 - pass 612 - 613 601 614 602 class PenTabletTest(BaseTest.TestTablet): 615 603 def assertName(self, uhdev): ··· 664 676 self.sync_and_assert_events( 665 677 uhdev.event(130, 240, pressure=0), [], auto_syn=False, strict=True 666 678 ) 679 + 680 + def test_prop_pointer(self): 681 + """ 682 + Verify that INPUT_PROP_POINTER is set and INPUT_PROP_DIRECT 683 + is not set on opaque devices. 684 + """ 685 + evdev = self.uhdev.get_evdev() 686 + assert libevdev.INPUT_PROP_POINTER in evdev.properties 687 + assert libevdev.INPUT_PROP_DIRECT not in evdev.properties 667 688 668 689 669 690 class TestOpaqueCTLTablet(TestOpaqueTablet): ··· 859 862 ) 860 863 861 864 862 - class TestDTH2452Tablet(test_multitouch.BaseTest.TestMultitouch, TouchTabletTest): 865 + class DirectTabletTest(): 866 + def test_prop_direct(self): 867 + """ 868 + Verify that INPUT_PROP_DIRECT is set and INPUT_PROP_POINTER 869 + is not set on display devices. 870 + """ 871 + evdev = self.uhdev.get_evdev() 872 + assert libevdev.INPUT_PROP_DIRECT in evdev.properties 873 + assert libevdev.INPUT_PROP_POINTER not in evdev.properties 874 + 875 + 876 + class TestDTH2452Tablet(test_multitouch.BaseTest.TestMultitouch, TouchTabletTest, DirectTabletTest): 863 877 ContactIds = namedtuple("ContactIds", "contact_id, tracking_id, slot_num") 864 878 865 879 def create_device(self):
+55
tools/testing/selftests/net/rtnetlink.sh
··· 28 28 kci_test_fdb_get 29 29 kci_test_fdb_del 30 30 kci_test_neigh_get 31 + kci_test_neigh_update 31 32 kci_test_bridge_parent_id 32 33 kci_test_address_proto 33 34 kci_test_enslave_bonding ··· 1159 1158 fi 1160 1159 1161 1160 end_test "PASS: neigh get" 1161 + } 1162 + 1163 + kci_test_neigh_update() 1164 + { 1165 + dstip=10.0.2.4 1166 + dstmac=de:ad:be:ef:13:37 1167 + local ret=0 1168 + 1169 + for proxy in "" "proxy" ; do 1170 + # add a neighbour entry without any flags 1171 + run_cmd ip neigh add $proxy $dstip dev "$devdummy" lladdr $dstmac nud permanent 1172 + run_cmd_grep $dstip ip neigh show $proxy 1173 + run_cmd_grep_fail "$dstip dev $devdummy .*\(managed\|use\|router\|extern\)" ip neigh show $proxy 1174 + 1175 + # set the extern_learn flag, but no other 1176 + run_cmd ip neigh change $proxy $dstip dev "$devdummy" extern_learn 1177 + run_cmd_grep "$dstip dev $devdummy .* extern_learn" ip neigh show $proxy 1178 + run_cmd_grep_fail "$dstip dev $devdummy .* \(managed\|use\|router\)" ip neigh show $proxy 1179 + 1180 + # flags are reset when not provided 1181 + run_cmd ip neigh change $proxy $dstip dev "$devdummy" 1182 + run_cmd_grep $dstip ip neigh show $proxy 1183 + run_cmd_grep_fail "$dstip dev $devdummy .* extern_learn" ip neigh show $proxy 1184 + 1185 + # add a protocol 1186 + run_cmd ip neigh change $proxy $dstip dev "$devdummy" protocol boot 1187 + run_cmd_grep "$dstip dev $devdummy .* proto boot" ip neigh show $proxy 1188 + 1189 + # protocol is retained when not provided 1190 + run_cmd ip neigh change $proxy $dstip dev "$devdummy" 1191 + run_cmd_grep "$dstip dev $devdummy .* proto boot" ip neigh show $proxy 1192 + 1193 + # change protocol 1194 + run_cmd ip neigh change $proxy $dstip dev "$devdummy" protocol static 1195 + run_cmd_grep "$dstip dev $devdummy .* proto static" ip neigh show $proxy 1196 + 1197 + # also check an extended flag for non-proxy neighs 1198 + if [ "$proxy" = "" ]; then 1199 + run_cmd ip neigh change $proxy $dstip dev "$devdummy" 
managed 1200 + run_cmd_grep "$dstip dev $devdummy managed" ip neigh show $proxy 1201 + 1202 + run_cmd ip neigh change $proxy $dstip dev "$devdummy" lladdr $dstmac 1203 + run_cmd_grep_fail "$dstip dev $devdummy managed" ip neigh show $proxy 1204 + fi 1205 + 1206 + run_cmd ip neigh del $proxy $dstip dev "$devdummy" 1207 + done 1208 + 1209 + if [ $ret -ne 0 ];then 1210 + end_test "FAIL: neigh update" 1211 + return 1 1212 + fi 1213 + 1214 + end_test "PASS: neigh update" 1162 1215 } 1163 1216 1164 1217 kci_test_bridge_parent_id()
+3 -1
tools/testing/selftests/rcutorture/configs/rcu/SRCU-N
··· 2 2 CONFIG_SMP=y 3 3 CONFIG_NR_CPUS=4 4 4 CONFIG_HOTPLUG_CPU=y 5 - CONFIG_PREEMPT_NONE=y 5 + CONFIG_PREEMPT_DYNAMIC=n 6 + CONFIG_PREEMPT_LAZY=y 7 + CONFIG_PREEMPT_NONE=n 6 8 CONFIG_PREEMPT_VOLUNTARY=n 7 9 CONFIG_PREEMPT=n 8 10 #CHECK#CONFIG_RCU_EXPERT=n
+2 -1
tools/testing/selftests/rcutorture/configs/rcu/SRCU-T
··· 1 1 CONFIG_SMP=n 2 - CONFIG_PREEMPT_NONE=y 2 + CONFIG_PREEMPT_LAZY=y 3 + CONFIG_PREEMPT_NONE=n 3 4 CONFIG_PREEMPT_VOLUNTARY=n 4 5 CONFIG_PREEMPT=n 5 6 CONFIG_PREEMPT_DYNAMIC=n
+2 -2
tools/testing/selftests/rcutorture/configs/rcu/SRCU-U
··· 1 1 CONFIG_SMP=n 2 - CONFIG_PREEMPT_NONE=y 2 + CONFIG_PREEMPT_LAZY=y 3 + CONFIG_PREEMPT_NONE=n 3 4 CONFIG_PREEMPT_VOLUNTARY=n 4 5 CONFIG_PREEMPT=n 5 6 CONFIG_PREEMPT_DYNAMIC=n ··· 8 7 CONFIG_RCU_TRACE=n 9 8 CONFIG_DEBUG_LOCK_ALLOC=n 10 9 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n 11 - CONFIG_PREEMPT_COUNT=n
+2 -1
tools/testing/selftests/rcutorture/configs/rcu/TASKS02
··· 1 1 CONFIG_SMP=n 2 - CONFIG_PREEMPT_NONE=y 2 + CONFIG_PREEMPT_LAZY=y 3 + CONFIG_PREEMPT_NONE=n 3 4 CONFIG_PREEMPT_VOLUNTARY=n 4 5 CONFIG_PREEMPT=n 5 6 CONFIG_PREEMPT_DYNAMIC=n
+2 -2
tools/testing/selftests/rcutorture/configs/rcu/TINY01
··· 1 1 CONFIG_SMP=n 2 - CONFIG_PREEMPT_NONE=y 2 + CONFIG_PREEMPT_LAZY=y 3 + CONFIG_PREEMPT_NONE=n 3 4 CONFIG_PREEMPT_VOLUNTARY=n 4 5 CONFIG_PREEMPT=n 5 6 CONFIG_PREEMPT_DYNAMIC=n ··· 12 11 #CHECK#CONFIG_RCU_STALL_COMMON=n 13 12 CONFIG_DEBUG_LOCK_ALLOC=n 14 13 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n 15 - CONFIG_PREEMPT_COUNT=n
+2 -1
tools/testing/selftests/rcutorture/configs/rcu/TINY02
··· 1 1 CONFIG_SMP=n 2 - CONFIG_PREEMPT_NONE=y 2 + CONFIG_PREEMPT_LAZY=y 3 + CONFIG_PREEMPT_NONE=n 3 4 CONFIG_PREEMPT_VOLUNTARY=n 4 5 CONFIG_PREEMPT=n 5 6 CONFIG_PREEMPT_DYNAMIC=n
+2 -1
tools/testing/selftests/rcutorture/configs/rcu/TRACE01
···
1   1   CONFIG_SMP=y
2   2   CONFIG_NR_CPUS=5
3   3   CONFIG_HOTPLUG_CPU=y
4     - CONFIG_PREEMPT_NONE=y
    4 + CONFIG_PREEMPT_LAZY=y
    5 + CONFIG_PREEMPT_NONE=n
5   6   CONFIG_PREEMPT_VOLUNTARY=n
6   7   CONFIG_PREEMPT=n
7   8   CONFIG_PREEMPT_DYNAMIC=n
+3 -1
tools/testing/selftests/rcutorture/configs/rcu/TREE04
···
1   1   CONFIG_SMP=y
2   2   CONFIG_NR_CPUS=8
    3 + CONFIG_PREEMPT_LAZY=y
3   4   CONFIG_PREEMPT_NONE=n
4     - CONFIG_PREEMPT_VOLUNTARY=y
    5 + CONFIG_PREEMPT_VOLUNTARY=n
5   6   CONFIG_PREEMPT=n
6   7   CONFIG_PREEMPT_DYNAMIC=n
7   8   #CHECK#CONFIG_TREE_RCU=y
    9 + #CHECK#CONFIG_PREEMPT_RCU=n
8   10  CONFIG_HZ_PERIODIC=n
9   11  CONFIG_NO_HZ_IDLE=n
10  12  CONFIG_NO_HZ_FULL=y
+3 -1
tools/testing/selftests/rcutorture/configs/rcu/TREE05
···
1   1   CONFIG_SMP=y
2   2   CONFIG_NR_CPUS=8
3     - CONFIG_PREEMPT_NONE=y
    3 + CONFIG_PREEMPT_DYNAMIC=n
    4 + CONFIG_PREEMPT_LAZY=y
    5 + CONFIG_PREEMPT_NONE=n
4   6   CONFIG_PREEMPT_VOLUNTARY=n
5   7   CONFIG_PREEMPT=n
6   8   #CHECK#CONFIG_TREE_RCU=y
+4 -1
tools/testing/selftests/rcutorture/configs/rcu/TREE06
···
1   1   CONFIG_SMP=y
2   2   CONFIG_NR_CPUS=8
3     - CONFIG_PREEMPT_NONE=y
    3 + CONFIG_PREEMPT_DYNAMIC=n
    4 + CONFIG_PREEMPT_LAZY=y
    5 + CONFIG_PREEMPT_NONE=n
4   6   CONFIG_PREEMPT_VOLUNTARY=n
5   7   CONFIG_PREEMPT=n
6   8   #CHECK#CONFIG_TREE_RCU=y
    9 + #CHECK#CONFIG_PREEMPT_RCU=n
7   10  CONFIG_HZ_PERIODIC=n
8   11  CONFIG_NO_HZ_IDLE=y
9   12  CONFIG_NO_HZ_FULL=n
+1
tools/testing/selftests/rcutorture/configs/rcu/TREE10
···
6   6   CONFIG_PREEMPT=n
7   7   CONFIG_PREEMPT_DYNAMIC=n
8   8   #CHECK#CONFIG_TREE_RCU=y
    9 + CONFIG_PREEMPT_RCU=n
9   10  CONFIG_HZ_PERIODIC=n
10  11  CONFIG_NO_HZ_IDLE=y
11  12  CONFIG_NO_HZ_FULL=n
+3 -1
tools/testing/selftests/rcutorture/configs/rcu/TRIVIAL
···
1   1   CONFIG_SMP=y
2   2   CONFIG_NR_CPUS=8
3     - CONFIG_PREEMPT_NONE=y
    3 + CONFIG_PREEMPT_DYNAMIC=n
    4 + CONFIG_PREEMPT_LAZY=y
    5 + CONFIG_PREEMPT_NONE=n
4   6   CONFIG_PREEMPT_VOLUNTARY=n
5   7   CONFIG_PREEMPT=n
6   8   CONFIG_HZ_PERIODIC=n
+2 -1
tools/testing/selftests/rcutorture/configs/rcuscale/TINY
···
1   1   CONFIG_SMP=n
2     - CONFIG_PREEMPT_NONE=y
    2 + CONFIG_PREEMPT_LAZY=y
    3 + CONFIG_PREEMPT_NONE=n
3   4   CONFIG_PREEMPT_VOLUNTARY=n
4   5   CONFIG_PREEMPT=n
5   6   CONFIG_PREEMPT_DYNAMIC=n
+2 -1
tools/testing/selftests/rcutorture/configs/rcuscale/TRACE01
···
1   1   CONFIG_SMP=y
2     - CONFIG_PREEMPT_NONE=y
    2 + CONFIG_PREEMPT_LAZY=y
    3 + CONFIG_PREEMPT_NONE=n
3   4   CONFIG_PREEMPT_VOLUNTARY=n
4   5   CONFIG_PREEMPT=n
5   6   CONFIG_PREEMPT_DYNAMIC=n
+2 -1
tools/testing/selftests/rcutorture/configs/refscale/NOPREEMPT
···
1   1   CONFIG_SMP=y
2     - CONFIG_PREEMPT_NONE=y
    2 + CONFIG_PREEMPT_LAZY=y
    3 + CONFIG_PREEMPT_NONE=n
3   4   CONFIG_PREEMPT_VOLUNTARY=n
4   5   CONFIG_PREEMPT=n
5   6   CONFIG_PREEMPT_DYNAMIC=n
+2 -1
tools/testing/selftests/rcutorture/configs/refscale/TINY
···
1   1   CONFIG_SMP=n
2     - CONFIG_PREEMPT_NONE=y
    2 + CONFIG_PREEMPT_LAZY=y
    3 + CONFIG_PREEMPT_NONE=n
3   4   CONFIG_PREEMPT_VOLUNTARY=n
4   5   CONFIG_PREEMPT=n
5   6   CONFIG_PREEMPT_DYNAMIC=n
+2 -1
tools/testing/selftests/rcutorture/configs/scf/NOPREEMPT
···
1   1   CONFIG_SMP=y
2     - CONFIG_PREEMPT_NONE=y
    2 + CONFIG_PREEMPT_LAZY=y
    3 + CONFIG_PREEMPT_NONE=n
3   4   CONFIG_PREEMPT_VOLUNTARY=n
4   5   CONFIG_PREEMPT=n
5   6   CONFIG_PREEMPT_DYNAMIC=n
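Every rcutorture fragment above gets the same treatment: CONFIG_PREEMPT_LAZY=y comes in, and the other preemption-model symbols are pinned to =n so Kconfig cannot silently fall back to a different model. A minimal shell sketch of that invariant follows; the embedded fragment mirrors refscale/TINY after this merge, and the check itself is illustrative, not part of the kernel tree:

```shell
# Hypothetical sanity check (not from the kernel tree): assert that a
# rcutorture config fragment follows the lazy-preemption pattern applied
# throughout this series -- CONFIG_PREEMPT_LAZY=y selected, and no other
# preemption model left set to y.
fragment='CONFIG_SMP=n
CONFIG_PREEMPT_LAZY=y
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
CONFIG_PREEMPT_DYNAMIC=n'

if printf '%s\n' "$fragment" | grep -q '^CONFIG_PREEMPT_LAZY=y$' &&
   ! printf '%s\n' "$fragment" | grep -Eq '^CONFIG_PREEMPT(_NONE|_VOLUNTARY|_DYNAMIC)?=y$'
then
    verdict="fragment OK"
else
    verdict="fragment violates lazy-preempt pattern"
fi
echo "$verdict"
```

The second grep deliberately excludes CONFIG_PREEMPT_LAZY from its alternation, so only a stray =y on one of the competing models trips the check.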